Welcome to the CAP2024 Indico site. This site is being used for abstract submission and congress scheduling. Abstracts are still being accepted for post-deadline poster submissions, until May 6, 2024. Questions can be directed to programs@cap.ca. The Congress program is available by selecting "Timetable" in the left menu. Congress registration is now available, with early registration closing at 23h59 ET on Monday, May 6. You can access the fees and link to register by selecting the "registration" button in the left menu.
What is the purpose of an introductory physics lab? Often instructional labs are structured such that students perform experiments to observe or discover classic physics phenomena. In this talk, I’ll present data that questions this goal and argues for transforming labs to focus instead on the skills and understandings of experimental physics. I’ll provide several examples of experimentation-focused labs and research on their efficacy for students’ skill development.
SNOLAB is a world-class underground science facility, operated entirely as a cleanroom, located 2 km underground in Vale's active Creighton Mine in Sudbury, Ontario. The program focuses on neutrino science and dark matter searches, but also includes life science projects and new initiatives around quantum technology. In addition, SNOLAB has a number of analytical capabilities, such as ICP-MS and low-background technologies, including germanium counters and radon mitigation. This presentation will give an overview of the SNOLAB science program and point out some new initiatives.
The NEWS-G experiment uses spherical proportional counters (SPC) to probe for low mass dark matter. An SPC is a metallic sphere filled with gas with a high-voltage anode at its centre producing a radial electric field. The interaction between a dark matter particle and a nucleus can cause ionization of the gas, which leads to an electron avalanche near the anode and a detectable signal.
The latest NEWS-G detector, S-140, is a copper sphere 140 cm in diameter, which took 10 days of data with methane at the LSM and is now taking data with various gases at SNOLAB. The LSM campaign brought forward some interesting new techniques to build upon and a few issues to mitigate for the future of the detector and data analysis at SNOLAB.
This talk will describe the NEWS-G experiment, present the latest results from the LSM data and discuss the progress on data taking and analysis at SNOLAB.
The NEWS-G experiment at SNOLAB uses spherical proportional counters (SPCs) to detect weakly interacting massive particles (WIMPs), a prime candidate for dark matter. Interactions within the gas-filled sphere create a primary ionization. The signal from the resulting electrons is passed through a digitizer, generating raw pulses observed as time-series data. However, these signals contain electronic noise, and some of them are non-physics pulses. I will discuss the use of machine learning techniques for removing noise from different pulse shape types, as well as for rejecting non-physics pulses in the data. A large amount of data is available to train and test neural networks; once trained, the models can be applied to real data. These models can potentially denoise and clean data more efficiently and with less error than traditional pulse processing, making them an important tool for the NEWS-G experiment.
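As a rough illustration of this kind of approach (not the collaboration's actual pipeline), the sketch below trains a small 1D convolutional denoising autoencoder on noisy/clean pulse pairs; the pulse length, network layout, and the toy exponential pulses are assumptions made purely for the example.

```python
# Minimal 1D convolutional denoising autoencoder for detector pulses (illustrative only).
# Assumes noisy/clean pulse pairs, e.g. from simulation, shaped (n_pulses, n_samples, 1).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_SAMPLES = 1024  # samples per digitized pulse (assumed)

def build_denoiser():
    inp = layers.Input(shape=(N_SAMPLES, 1))
    x = layers.Conv1D(16, 9, padding="same", activation="relu")(inp)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(32, 9, padding="same", activation="relu")(x)
    x = layers.UpSampling1D(2)(x)
    out = layers.Conv1D(1, 9, padding="same")(x)  # denoised pulse
    return models.Model(inp, out)

model = build_denoiser()
model.compile(optimizer="adam", loss="mse")

# Toy data: clean exponential-tail pulses plus white noise, standing in for real training data.
t = np.arange(N_SAMPLES)
clean = np.exp(-np.maximum(t - 200, 0) / 150.0) * (t >= 200)
clean = np.tile(clean, (256, 1))[..., None].astype("float32")
noisy = clean + 0.1 * np.random.randn(*clean.shape).astype("float32")

model.fit(noisy, clean, epochs=5, batch_size=32, verbose=0)
denoised = model.predict(noisy[:10])
```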
The Scintillating Bubble Chamber (SBC) collaboration is combining the well-established technologies of bubble chambers and liquid noble scintillators to develop a detector sensitive to low-energy nuclear recoils, with the goal of a GeV-scale dark matter search. Liquid noble bubble chambers benefit from the excellent electronic recoil suppression intrinsic to bubble chambers, with the addition of energy reconstruction provided by scintillation signals. The detector to be operated at SNOLAB is currently in development, featuring 10 kg of xenon-doped liquid argon superheated to 130 K at 1.4 bar. Surrounding the active volume are 32 FBK VUV-HD3 silicon photomultipliers to detect the emitted scintillation light. Deploying at SNOLAB allows for excellent cosmogenic suppression thanks to 6010 m.w.e. of overburden; however, radiocontaminants embedded in the rock become a major source of background. Monte Carlo simulations in GEANT4 were performed to study the background event rate imposed by both the high-energy gamma rays and the fast neutrons in the cavern environment. This talk discusses the development of external shielding around SBC to suppress the background flux, with the goal of a quasi-background-free low-mass (< 10 GeV/c$^2$) WIMP dark matter search.
The highest energy range (∼MeV) of the solar neutrino spectrum is dominated by $^8$B neutrinos produced in the pp chain in the Sun and by hep neutrinos. Previous work by R.S. Raghavan, K. Bhattacharya, and others predicted that neutrinos above 3.9 MeV can be absorbed by $^{40}$Ar, producing an excited state of $^{40}$K. These neutrinos can be identified by detecting the gamma rays emitted as the excited $^{40}$K state de-excites. A search for this process relies on a detailed understanding of the backgrounds, namely the radiogenic background from neutron capture and the cosmogenic background from muons interacting with material surrounding the detector. Above around 10 MeV, just past the end of the neutron capture spectrum, the expected neutrino signal dominates the background, so the search for this process relies on a highly accurate background model to identify excess events that can be attributed to neutrino absorption.
We propose to search for this process using 3 years of data from DEAP-3600, a liquid argon (LAr) direct dark matter detection experiment designed to detect WIMP-nucleon scattering in argon. DEAP-3600's ultra-low background and high sensitivity could make it possible to make the first observation of this neutrino absorption process in LAr.
Our universe is expected to emerge from an era dominated by quantum effects, for which a theory of quantum gravity is necessary. Loop Quantum Gravity, in its covariant formulation, provides a tentative yet viable framework for performing reliable computations about the physics of the early universe. In this talk I will review the strategy to be followed in applying the spinfoam formalism to cosmology. I will review in particular the most recent results concerning the definition of the primordial vacuum state from the full theory and the computation of primordial quantum fluctuations. I will consider the singularity resolution mechanism in this framework and the modelling of a quantum bounce. Finally, I will discuss the effective equations that are obtained in the semiclassical regime of this theory.
I will describe recent work on gravitational collapse of dust using effective equations.
Solutions of these equations exhibit formation of horizons, with a shock wave emerging as the horizons evaporate. The lifetime of a black hole turns out to be proportional to the square of its mass.
Although black holes have recently been detected through gravitational wave observations and have been studied intensively over the past decades, we are far from a complete understanding of their life cycle. In this presentation I'll show a loop quantum gravity-based model of stellar collapse in which the classical central singularity is replaced by a quantum bounce that occurs when the star's energy density becomes Planckian. Immediately after the bounce, a shockwave of matter carrying all the initial stellar mass arises and then slowly moves outward. The shockwave requires a time proportional to the square of the original stellar mass to reach the black hole horizon; when this happens, the horizon disappears. This signals the end of the black hole, while the outgoing shockwave becomes visible to external observers. This picture is robust, as it holds for a wide range of initial data, in particular including non-marginally trapped configurations.
Arguments from general relativity and quantum field theory suggest that black holes evaporate through Hawking radiation, but without a full quantum treatment of gravity the endpoint of the process is not yet understood. Two dimensional, semi-classical theories of gravity can be useful as toy models for studying black hole dynamics and testing predictions of quantum gravity. Of particular interest are non-singular black holes, since quantum gravity is expected to resolve the singularities that are pervasive in general relativity. This talk will present a general model of evaporating black holes in 2D dilaton gravity, with a focus on a Bardeen-like regularized black hole model. I will discuss results from numerical simulations including the dynamics of the apparent horizons and additional trapped anti-trapped regions formed by backreaction.
Non-perturbative quantum geometric effects in loop quantum cosmology (LQC) result in a natural bouncing scenario without any violation of energy conditions or fine tuning. In this work we study numerically an early universe scenario combining a matter-bounce with an ekpyrotic field in an LQC background setting.
We explore this unified phenomenological model for a spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) universe in LQC filled with one scalar field mimicking dust and another scalar field with a negative exponential, ekpyrotic-like potential.
The dynamics of the homogeneous background and the power spectrum of the comoving curvature perturbations are numerically analyzed with various initial conditions. By varying the initial conditions, we consider different cases of dust and ekpyrotic field domination in the contracting phase. We use the dressed metric approach in LQC to numerically compute the primordial power spectra of the comoving scalar and tensor perturbations.
This presentation will delve into the latest advancements in X-ray imaging techniques and technologies, with a focus on cutting-edge hardware developments. Key topics will include X-ray computed tomography (CT), X-ray tomosynthesis, multi-energy X-ray imaging, cone-beam computed tomography (CBCT) and real-time X-ray imaging for interventional procedures. The discussion will then shift to explore emerging techniques and technologies in the field, such as photon-counting computed tomography, phase-contrast imaging, cold-cathode X-ray tubes, and multi-layer energy-selective X-ray detectors. Attendees will gain a comprehensive understanding of both the current capabilities and future directions of X-ray imaging technology.
Angioplasty is an interventional procedure for blood vessel stenosis in which a catheter is navigated to the obstruction under fluoroscopy to place a permanent wire stent that forces the blockage open. Clear stent visualization is critical to ensure a stent has not collapsed or fractured, which could lead to re-stenosis and even more severe complications. Overlapping anatomic structures make stents and vessels difficult to visualize non-invasively. Work by Yamamato et al. used the maximum pixel value across a set of fluoroscopy frames to create a synthetic mask, but soft tissue motion was too severe and the method did not succeed. We plan to use dual-energy subtraction x-ray imaging (DES) to eliminate soft tissue, in conjunction with processing techniques similar to those of Yamamato et al., to enhance visualization of wire stents without catheterization. We created a MATLAB simulation to calculate the nickel signal-to-noise ratio (SNR) for a range of x-ray parameters to determine the optimal settings for DES. We then did a proof-of-concept experiment using an anthropomorphic chest phantom with an overlaid nitinol stent, using x-ray settings optimized in the simulation. The stent was shifted to simulate cardiac motion, and a set of DES images was acquired to create the synthetic mask. A prototype ultra-low-noise CMOS detector and kV-switching generator were installed in our facilities for the first-ever testing and experimentation of this novel technique. Quantification of this equipment was performed using in-house software to generate the detector MTF, DQE, and waveforms of the kV-switching techniques. Simulation results revealed parameters that optimize the nickel SNR per unit dose, and material suppression using weighted DES calculations removed soft tissue. Waveform measurements showed that step kV switching could be achieved within 1 millisecond, enabling consecutive DES images at a rate of 30 frames per second. DES imaging allowed successful mask creation so that all background structures were suppressed and only the stent was visible. Using DES imaging for this technique eliminates soft tissue motion and allows a digitally subtracted image of the stent alone. With the use of advanced prototype equipment, this technique may improve confidence in the diagnosis of collapsed and fractured stents in real time, non-invasively.
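The weighted dual-energy subtraction used here can be sketched as a simple log-domain combination of the low- and high-kV frames; the weight and image arrays below are placeholders, not the values obtained from the SNR-optimization simulation described above.

```python
# Weighted dual-energy subtraction (illustrative sketch, not the authors' MATLAB code).
# The soft-tissue cancellation weight w would in practice be tuned; the value here is a placeholder.
import numpy as np

def dual_energy_subtract(low_kv, high_kv, w=0.6, eps=1e-6):
    """Return a weighted log-subtraction image that suppresses soft tissue.

    low_kv, high_kv : 2D arrays of detector signal (same geometry, assumed registered).
    w               : weighting factor chosen to cancel the soft-tissue signal.
    """
    log_low = np.log(np.clip(low_kv, eps, None))
    log_high = np.log(np.clip(high_kv, eps, None))
    return log_high - w * log_low

# Example with synthetic frames (random placeholders for real low/high-kV acquisitions):
rng = np.random.default_rng(0)
low = rng.uniform(0.5, 1.0, (128, 128))
high = rng.uniform(0.5, 1.0, (128, 128))
des_image = dual_energy_subtract(low, high)
```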
Zinc and selenium are essential elements necessary for human health. Researchers have found that deficiencies in these elements can significantly affect the human body. In this study, I analyzed nail clippings from mothers and their infants from New Zealand to observe zinc and selenium concentration levels over the first year postpartum. Several biomarkers, including nails, were collected at three, six, and twelve months postpartum. Every mother had two sample cups prepared with nail clippings (one with big-toenail clippings and one with other toenail clippings), each containing four samples, labeled MB and MO, respectively. Each mother had a corresponding infant with one sample cup prepared with four nail-clipping samples, labeled I. This study used portable X-ray fluorescence to examine the nail samples from the 12-month visit. These results were then compared to the 3-month and 6-month visits. The average zinc XOS concentration in this study (3rd visit) for MB, MO, and I was 115 ppm, 96 ppm, and 84 ppm, respectively. The average zinc total area ratio (TAR) for the 3rd visit for MB, MO, and I was 1.51%, 1.35%, and 1.45%, respectively. Selenium TAR results for the 3rd visit for MB, MO, and I were 0.022%, 0.021%, and 0.023%, respectively. Several significant differences were found when comparing the three visits. Between the first and third visits, infant zinc concentrations significantly decreased for XOS (p=0.035) and TAR (p=0.014). Several significant differences were also found in selenium concentration between visits for MB, MO, and I. Selenium is often below the detection limit for XOS concentration reporting and would benefit from additional measurement time, such as 3 minutes. A correlation was found between the concentrations of mothers' big toenails and their other toenails.
A magnetometer that has a high temporal (≤1 ns) and a high spatial (≤1 mm) resolution, a large magnetic field range (0–0.5 T), and that does not perturb the magnetic field requires an innovative and unprecedented design. Such a magnetometer is key, for example, to measure magnetic fields produced by transcranial magnetic stimulation (TMS) coils used to neuromodulate the brain in the treatment of various psychological and neurological disorders, such as major depressive disorder, Parkinson's disease, etc. TMS coils placed against the head of a patient produce rapid and intense magnetic field pulses that induce electric fields in the brain, stimulating or inhibiting neural activity for therapeutic applications. With time-resolved magnetic field measurements, time-resolved electric fields can be calculated. Various TMS studies investigate the therapeutic impact of varying the frequency, intensity, and burst count of the pulse, but are limited in studying the time-resolved pulse shape and its ability to neuromodulate. To date, only the peak electric fields generated by the coils have been measured. The electric field or magnetic field pulse shape can be inferred from the applied current but has not been verified. Since neuron action potentials have temporal pulse shapes unique to their neural task, an important but unanswered question in TMS research to date is how the TMS temporal pulse shape impacts the efficacy of the therapy.
In this work, we present the design and construction of a fiber-based magnetometer (ENOMIS) based on the magneto-optic Kerr effect and Fabry-Perot interferometry. Our solution is based on a nickel and dielectric material multilayer deposited onto the tip of an optical fiber. Kerr rotation of 0.4° typical of air-nickel interfaces does not provide a significant SNR for resolving the typical 1-µs-wide TMS pulses in a single acquisition. Our results show that the Fabry-Perot nanoscale multilayer cavity theoretically can increase the Kerr rotation by over 1000 times. Other studies achieve good SNR at fast and ultrafast time scales, but are limited to small magnetic field ranges, unlike the 0 – 0.5 T range presented in this work. Temporal resolution of ~1 ns is limited by instrumentation used here, whereas the theoretical limit of the sensor is ~100 ps. This work compares modeled enhancement results to the experimental prototype results.
The tin isotopic chain, with its magic 50-proton closed shell, is a benchmark for models of nuclear structure. While the neutron-rich tin nuclei around the magic 82-neutron shell play an important role in the rapid neutron-capture process, the mid-shell region of the tin isotopes can display collective phenomena known as shape coexistence [1]; for example, in $^{116}$Sn$_{66}$ deformed bands based on 2-particle 2-hole excitations across the proton 50 shell gap exist [2,3]. Furthermore, at energies below the particle threshold, a new phenomenon called the Pygmy Quadrupole Resonance (PQR) has recently been observed in $^{124}$Sn below 5 MeV [4]. Coupled with theoretical calculations, the new excitation mode was interpreted as a quadrupole-type oscillation of the neutron skin. This study prompted investigations of corresponding states in the neighboring $^{118,120}$Sn nuclei populated using the $^{117,119}$Sn(n,$\gamma$) thermal neutron capture reactions.
Thermal neutron capture of $^{117,119}$Sn populates states in $^{118,120}$Sn at the neutron separation energy of about 9 MeV. The capture states in these experiments consist of 0$^+$ and 1$^+$ spins, ideal for populating subsequent 2$^+$ states which could be attributed to the PQR predicted to exist in the 3-5 MeV range.
In the experiments performed at the Institut Laue-Langevin in Grenoble, France, a continuous high flux of thermal neutrons (10$^8$ s$^{-1}$ cm$^{-2}$) from the 57 MW research reactor was used for capture reactions on enriched odd-A Sn targets. Gamma-ray transitions from excited states in the nuclei of interest were detected by the Fission Product Prompt gamma-ray Spectrometer (FIPPS) [5], consisting of eight large n-type high-purity germanium (HPGe) clover detectors and augmented with eight additional Compton-suppressed HPGe clovers from the Horia Hulubei National Institute (IFIN-HH) in Bucharest, Romania, for enhanced gamma-ray efficiency and additional angular coverage used to produce angular correlations for spin assignments. In addition, 15 fast-response LaBr$_3$(Ce) detectors were used to allow fast-timing measurements of nuclear states using the centroid-shift method, as described in [3].
Preliminary results from the $^{117,119}$Sn(n,$\gamma$)$^{118,120}$Sn experiments will be presented, highlighting the newly observed levels within the 3-5 MeV energy range of interest for the PQR and lifetimes of excited states in $^{120}$Sn.
[1] K. Heyde and J. L. Wood, Rev. Mod. Phys. 83 (2011).
[2] J. L. Pore et al., Eur. Phys. J. A 53, 27 (2017).
[3] C. M. Petrache et al., Phys. Rev. C 99, 024303 (2019).
[4] M. Spieker et al., Phys. Lett. B 752, 102 (2016).
[5] C. Michelagnoli et al., EPJ Web Conf. 193, 04009 (2018).
Motivated by fundamental symmetry tests, a measurement of a large electric dipole moment (EDM) would represent a clear signal of CP violation, which is connected to the imbalance between matter and antimatter observed in our Universe. Since the Standard Model (SM) of particle physics predicts EDMs (of order $10^{-30}$) below the experimental reach, it is necessary to explore physics beyond the SM, together with nucleus-level models, such as the Schiff moment model, that provide more accurate EDM predictions. The E2 and E3 strengths that connect the ground state of $^{199}$Hg to its excited states are useful for obtaining its EDM, which, in comparison to other species previously measured, provides one of the most precise upper limits on an atomic EDM (of order $10^{-28}$). Performing an experiment on $^{199}$Hg is very challenging. As such, several experiments on $^{198}$Hg and $^{200}$Hg have been conducted at the Maier-Leibnitz Laboratorium of the Ludwig-Maximilians-Universität München. To extract the E2 and E3 matrix elements for $^{198}$Hg from the collected data, a deuteron beam bombarded a $^{198}$Hg$^{32}$S compound target, producing scattered particles that were separated and detected using the quadrupole three-dipole (Q3D) magnetic spectrograph. Very high-statistics data sets were collected from this reaction, yielding many new states and angular distributions, and hence spin and parity assignments for the new states, as well as cross sections. We also provide additional insight into the distribution of the matrix elements of $^{199}$Hg.
Details of the analysis of the $^{198}$Hg(d,d') reaction to date will be given.
Nuclei away from the line of stability have been found to demonstrate behavior that is inconsistent with the traditional magic numbers of the spherical shell model. This has led to the concept of the evolution of nuclear shell structure in exotic nuclei, and the neutron-rich calcium isotopes are a key testing ground of these theories; there have been conflicting results from various experiments as to the true nature of a sub-shell closure for neutron-rich nuclei around $^{52}$Ca. An experiment was performed at the ISAC facility of TRIUMF; $^{52}$K, $^{53}$K, and $^{54}$K were delivered to the GRIFFIN gamma-ray spectrometer paired with the SCEPTAR and the ZDS ancillary detectors for beta-tagging, as well as DESCANT for neutron-tagging. Combining the results from this powerful suite of detectors, we construct level schemes for the isotopes populated in the subsequent beta decay. Preliminary results from the analysis of the gamma, beta, and neutron spectra will be presented and discussed in the context of shell model calculations in neutron-rich nuclei.
Many outstanding fundamental topics in nuclear physics are addressed in the NSERC Subatomic Physics Long Range Plan. For several of these critical research drivers, such as "How does nuclear structure emerge from nuclear forces and ultimately from quarks and gluons?", gamma-ray spectroscopy is the investigative technique of choice. However, analysis of data from large-scale gamma-ray spectrometers is often a bottleneck for progress due to the extremely complex nature of the decays of excited nuclear states. In some cases, thousands of individual gamma rays must be analyzed in order to construct excited state decay schemes. To date, this is largely done laboriously by hand with the final result depending on the skill of the individual performing the analysis.
This project aims to develop an efficient machine-learning algorithm to perform the analysis of large spectroscopic data sets, initially concentrating on the analysis of gamma-gamma coincidence matrices. The essence of this research lies in its multi-pronged approach, enabling a rigorous comparison of two dominant machine learning paradigms: supervised and unsupervised techniques. The ultimate goal is to determine the most effective framework for solving problems of this nature and, if applicable, to subsequently enhance the chosen framework by integrating quantum computing, harnessing the power of qubits and quantum operations to overcome the computational restrictions inherent in classical computing.
Research on the learning and teaching of physics has been done in university physics departments for more than 50 years. Unfortunately, much of this work has been done in the United States and there are structural and cultural differences between the US and Canadian higher education systems. In this talk I will present an overview of PER work recently done at the University of Waterloo including our revision of undergraduate laboratory courses to refocus them on experimental process skills, as well as our efforts to bring EDI related principles to collaborative groupwork in our first-year physics courses. I will make a case for why PER work like this should be supported in Canada, what that support could look like and how you can get involved.
Examining the motivations and influences impacting undergraduate student program choice not only assists physics departments in recruitment efforts but also enables the development of curricula tailored to meet the needs and interests of students. In our 2003 first-year physics courses at the University of Guelph, science majors participated in a survey exploring the diverse motivations and influences shaping their choice of undergraduate program. Two decades later, we have conducted the same survey to assess whether the perspectives of undergraduate students have evolved. We incorporated additional questions to delve into the development of student physics identity at various points in their educational journey. We will discuss comparisons between students from 2003 and 2023, with attention given to gender and to majors in the physical and biological sciences.
We will discuss the two most recent iterations of a Physical Science in Contemporary Society course, a senior-level physics course at the University of Toronto that encourages physics students to explore how physics and society influence each other. A different instructor taught each iteration of the course while an education PhD student acting as a “critical friend” assisted in the handover of principles between the iterations. These principles included ungrading, student-led instruction, and student-defined final projects.
In the course, student groups are encouraged to select topics for in-class facilitations and for final projects that may take on different formats. Some topics explored include "gender bias in physics careers," "physics funding and politics," and "invention's effects on society." The students were asked to prepare an in-class facilitation in which they should avoid lecturing and instead use active learning techniques to engage their classmates on the topic. Each facilitation week finished with a 500-word reflective writing assignment (six in total) in which the students had to discuss the topics presented that week, link them to another example outside the classroom, and reflect on their learning from the facilitation.
This course used ungrading as the assessment practice for the students' facilitations and the reflective essays. Ungrading involves giving students feedback without numerical grades on their assignments, facilitating learning and inclusion. Students are then included in the discussions to determine final grading decisions based on demonstrated growth. The students' writing abilities in both course iterations also showed dramatic improvement through the use of ungrading and feedback-focused assessments by the teaching assistants. Students, despite some initial reluctance about the purpose and design of the course, praised the course's usefulness and were surprised by how much it changed their understanding of physics.
There has been noted concern regarding the retention, academic success, and motivation of students in STEM courses, especially physics. Additionally, problem solving is a highly valued 21st Century workforce skill in Canada (Hutchison, 2022) that recent graduates seem to lack (Cavanagh, Kay, Klein, & Meisinger, 2006; Deloitte & The Manufacturing Institute, 2011; Binkley et al., 2012; Finegold & Notabartolo, 2010). The aim of our project is to address these concerns by implementing novel cognitive strategies – retrieval practice – in physics instruction and assess its impact on students’ academic performance and attitudes of physics learning. Our objectives are: 1) Develop problem solving materials based on retrieval practice. 2) Implement these materials in a first year physics course and prepare teaching assistants to facilitate learning using these materials. 3) Assess the impact of these interventions on success in the course as well as attitudes and approaches to problem solving. Here, we will describe the development of course materials promoting retrieval practice, our implementation strategies, and present student success findings from a first year physics course.
We show theoretically that a modulated longitudinal cavity-qubit coupling can be used to control the path taken by a multiphoton coherent-state wavepacket conditioned on the state of a qubit, resulting in a qubit-which-path (QWP) entangled state [1]. We further show that QWP states have a better potential sensitivity for quantum-enhanced phase measurements (characterized by the quantum Fisher information), than either NOON states or entangled coherent states having the same average number of photons. QWP states can generate long-range multipartite entanglement using strategies for interfacing discrete- and continuous-variable degrees-of-freedom. Entanglement can therefore be distributed in a quantum network via QWP states without the need for single-photon sources or detectors.
[1] Z. M. McIntyre and W. A. Coish, arXiv:2306.13573 (to appear in Phys. Rev. Lett.)
We investigate and compare a number of different strategies for rapidly estimating the values of unknown Hamiltonian parameters of a quantum system. Rapid and accurate Hamiltonian parameter estimation has applications in quantum sensing, quantum control, and quantum computing. We show that an adaptive Bayesian method based on minimizing the Shannon entropy in each shot of a measurement sequence can successfully predict multiple unknown parameters more efficiently than a simple non-adaptive protocol. The adaptive protocol can be directly applied to ongoing experiments on spin qubits in double quantum dots, where multiple parameters (e.g.: exchange and magnetic fields) must be continuously estimated for good performance.
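A single-parameter toy version of such an adaptive protocol is sketched below: a grid posterior over one frequency is updated after each simulated shot, and the next evolution time is chosen to minimize the expected Shannon entropy of the posterior. The Ramsey-like measurement model and all parameter values are assumptions made for illustration only, not the protocol of the work described above.

```python
# Minimal sketch of adaptive Bayesian frequency estimation by expected-entropy minimization.
# Measurement model P(1 | omega, t) = (1 + cos(omega * t)) / 2 is an assumed Ramsey-like form.
import numpy as np

rng = np.random.default_rng(1)
omegas = np.linspace(0.0, 2 * np.pi, 500)      # candidate frequencies (prior grid)
prior = np.full_like(omegas, 1.0 / omegas.size)
true_omega = 2.5                               # "unknown" parameter for the simulation
times = np.linspace(0.1, 10.0, 60)             # candidate evolution times per shot

def p1(omega, t):
    return 0.5 * (1.0 + np.cos(omega * t))

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

for shot in range(30):
    # Choose the evolution time minimizing the expected posterior entropy over both outcomes.
    best_t, best_h = times[0], np.inf
    for t in times:
        like1 = p1(omegas, t)
        m1 = np.sum(prior * like1)             # marginal probability of outcome 1
        post1 = prior * like1 / m1 if m1 > 0 else prior
        post0 = prior * (1 - like1) / (1 - m1) if m1 < 1 else prior
        h = m1 * entropy(post1) + (1 - m1) * entropy(post0)
        if h < best_h:
            best_t, best_h = t, h
    # Simulate one measurement at the chosen time and update the posterior (Bayes' rule).
    outcome = rng.random() < p1(true_omega, best_t)
    like = p1(omegas, best_t) if outcome else 1 - p1(omegas, best_t)
    prior = prior * like
    prior /= prior.sum()

estimate = omegas[np.argmax(prior)]
```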
Non-Gaussian operations are essential for most bosonic quantum technologies. Yet, realizable non-Gaussian operations are rather limited in type and generally suffer from accuracy-duration tradeoffs. In this work, we propose to use quantum signal processing to engineer non-Gaussian operations. For systems dispersively coupled to an auxiliary qubit, our scheme can generate a new type of non-linear phase gate. Such a gate is an extension of the selective number-dependent arbitrary phase (SNAP) gate, but an extremely high accuracy can be achieved within a reduced, fixed, excitation-independent interaction time. Our versatile formalism can also engineer operations for a variety of tasks, e.g. processing rotation-symmetric codes, entangling qudits, deterministically generating multi-component cat states, and converting entanglement from continuous- to discrete-variable.
Atomic and solid-state spin ensembles are promising quantum technological platforms, but practical architectures are incapable of resolving individual spins. The state of an unresolvable spin ensemble must obey the condition of permutational invariance, yet no method of generating general permutationally-invariant (PI) states is known. In this work, we develop a systematic strategy to generate arbitrary PI states. Our protocol involves first populating specific effective angular momentum states with engineered dissipation, then creating superposition through a modified Law-Eberly scheme. We illustrate how the required dissipation can be engineered with realistic level structure and interaction. We also discuss possible situations that may limit the practical state generation efficiency, and propose pulsed-dissipation strategies to resolve the issues. Our protocol unlocks previously inaccessible spin ensemble states that can be advantageous in quantum technologies, e.g. more robust quantum memory.
Antimicrobial peptides (AMPs) are of growing interest as potential candidates that may offer more resilience against antimicrobial resistance than traditional antibiotic agents. In this article, we perform the first in silico study of the synthetic $\beta$ sheet-forming AMP GL13K. Through atomistic simulations of single and multi-peptide systems under different conditions, we are able to shine a light on the short timescales of early aggregation. We find that isolated peptide conformations are primarily dictated by sequence rather than charge, whereas changing charge has a significant impact on the conformational free energy landscape of multi-peptide systems. We demonstrate that the loss of charge-charge repulsion is a sufficient minimal model for experimentally observed aggregation. Overall, our work explores the molecular biophysical underpinnings of the first stages of aggregation of a unique AMP, laying necessary groundwork for its further development as an antibiotic candidate.
Soft colloids are microscopic particles that, when dispersed in a solvent, can adjust their size and shape in response to changes in local environment. Typical examples are microgels, made of loosely crosslinked networks of polymer chains, that respond to changes in concentration by deswelling and faceting. Practical applications of microgels include drug delivery, chemical and biosensors, and photonic crystals. Within a coarse-grained model of elastic particles that interact via a Hertzian pair potential and swell according to the Flory-Rehner theory of polymer networks, we explore the response of microgels to two fundamental types of crowding. First, we investigate the influence of nanoparticle crowding on microgel swelling by extending the Flory-Rehner theory from binary to ternary mixtures and adapting polymer field theory to model the entropic cost of nanoparticle penetration. Second, we examine the impact of particle compressibility on liquid-solid phase transitions in microgel suspensions. In both studies, we perform Monte Carlo simulations to model equilibrium properties of single particles and bulk suspensions [1]. Novel trial moves include random changes in microgel size and shape and in nanoparticle concentration. Our results demonstrate that particle softness and penetrability can profoundly affect single-particle and bulk properties of soft colloids in crowded environments. In particular, we find that addition of nanoparticles can significantly modify microgel swelling and pair structure and that particle compressibility tends to suppress crystallization. Our conclusions have broad relevance for interpreting experiments on soft matter and guiding the design of smart, responsive materials.
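For illustration, a bare-bones Metropolis displacement move for particles interacting through the Hertzian pair potential used in coarse-grained microgel models like this one is sketched below; the particle number, box size, and energy scales are placeholders, and the size/shape and nanoparticle-concentration trial moves described above are not included.

```python
# Sketch of a Metropolis Monte Carlo step for particles interacting via the Hertzian pair
# potential u(r) = eps * (1 - r/sigma)^(5/2) for r < sigma (zero otherwise).
# All parameter values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(2)
N, L, eps, sigma, kT = 64, 10.0, 5.0, 1.0, 1.0
pos = rng.uniform(0, L, (N, 3))

def pair_energy(r):
    overlap = np.clip(1.0 - r / sigma, 0.0, None)   # zero beyond contact
    return eps * overlap ** 2.5

def particle_energy(i, positions):
    d = positions - positions[i]
    d -= L * np.round(d / L)                         # minimum-image periodic boundaries
    r = np.linalg.norm(d, axis=1)
    r[i] = np.inf                                    # exclude self-interaction
    return pair_energy(r).sum()

def metropolis_step(positions, max_disp=0.1):
    i = rng.integers(N)
    old_e = particle_energy(i, positions)
    trial = positions.copy()
    trial[i] = (trial[i] + rng.uniform(-max_disp, max_disp, 3)) % L
    delta = particle_energy(i, trial) - old_e
    if delta <= 0 or rng.random() < np.exp(-delta / kT):
        return trial
    return positions

for _ in range(1000):
    pos = metropolis_step(pos)
```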
[1] M. Urich and A. R. Denton, Soft Matter 12, 9086 (2016).
Supported by National Science Foundation (DMR-1928073).
Soft solids play an important role in stretchable electronics, cellular membranes and water collection. Upon introduction of a liquid contact line, soft solids can deform substantially causing changes to geometry and dynamics. On the nanoscale, the deformation at the liquid/solid contact line is a capillary ridge. We study these capillary ridges for a system which consists of a thin polymer film in the melt state atop an elastomeric poly(dimethylsiloxane) (PDMS) film. We use a thorough washing procedure to create our PDMS films which creates a true elastomer composed of only a crosslinked network. Our bilayer polymer films sit atop a solid silicon substrate. The liquid polymer layer dewets on the soft elastomer PDMS base. We vary the thickness of the underlying elastomer film, which changes the effective stiffness, therefore changing the size of the capillary ridge. We use atomic force microscopy to directly measure the shape of the capillary ridge in our system.
The phase behavior of binary blends of AB diblock copolymers of compositions f and 1-f is examined using field theoretic simulations (FTSs). Highly asymmetric compositions (i.e., f ≈ 0) behave like homopolymer blends macrophase separating into coexisting A- and B-rich phases as the segregation is increased, whereas more symmetric diblocks (f ≈ 0.5) microphase separate into an ordered lamellar phase. In self-consistent field theory, these behaviors are separated by a Lifshitz critical point at f= 0.2113. However, its lower critical dimension is believed to be four, which implies that the Lifshitz critical point should be destroyed by fluctuations. Consistent with this, the FTSs find that it transforms into a tricritical point with a lower critical dimension of three. Furthermore, the highly swollen lamellar phase near the mean-field Lifshitz critical point is transformed into a bicontinuous microemulsion (BμE), consisting of large interpenetrating A- and B-rich microdomains. The BμE has been previously reported in ternary blends of AB diblock copolymer with its parent A- and B-type homopolymers, but in that system the homopolymers have a tendency to macrophase separate from the microemulsion. Our alternative system for creating BμE should be less prone to this macrophase separation.
Phase change materials (PCMs) are materials that can change their optical properties by switching between different phases in response to external stimuli, such as temperature, light, or electric field. This makes PCMs promising for tunability and reconfigurability of nanophotonic devices, including switches, modulators, and sensors. PCMs can be classified into two categories. The first category includes chalcogenide materials like Ge2Sb2Se4Te1 (GSST) and Ge2Sb2Te5 (GST), which change phase without altering their physical state but exhibit variations in their optical characteristics. The second category comprises materials such as gallium-based liquid metals (Ga-based LMs) and their alloys, such as Ga-In, Ga-Ag, and Ga-In-Sn, where both the physical state and optical properties undergo changes during phase transitions. The Ga-based LMs are particularly noteworthy due to their low melting points, allowing for solid-liquid phase transitions at room temperature. In this talk, we show how hybridizing PCMs with plasmonic materials like gold (Au) or silver (Ag) enhances their functionality and performance in applications requiring precise control over optical properties. We also show how the phase transition of the PCMs can be actively controlled by the light absorption of the hybrid nanostructure, and how this phase transition affects the optical responses of the nanostructure, such as absorption, scattering, and extinction cross-sections. We also investigate the induced photothermal processes, heat transfer mechanisms, and electric field enhancement of the hybrid nanostructure as functions of the laser wavelength and intensity. We employ a self-consistent approach that couples electromagnetism with thermodynamics, using numerical simulations to study the interactions between light and material properties. The findings demonstrate that the hybrid nanostructure can achieve remarkable tunability and reconfigurability of its optical properties.
The development of coherent XUV radiation sources is leading to significant advancements in imaging and ultrafast studies. High harmonic generation (HHG) is one technique used to generate laser-based coherent ultrashort XUV pulses, but it is relatively inefficient. This process is normally carried out in the beam waist of a focused laser and, because of the limited intensity range for efficient HHG, can only generate a small amount of energy per pulse. One strategy to increase the XUV pulse energy is to use a high-power laser and have the HHG process occur upstream of focus. This focal cone HHG (FCHHG) process also has the advantage of creating a focusing XUV radiation beam, which can be useful in many applications.
We present modeling results and initial experimental results for the development of such an FCHHG beamline at the University of Alberta. A 15 TW Ti:Sapphire laser is used to generate harmonics in a gas target positioned upstream of focus, allowing a high-energy XUV beam to be created in the optimum intensity regime. The fundamental laser is focused with a long-focal-length lens, with the gas target placed at varying positions from focus. The resulting XUV spectra and energy yield are examined, as well as other diagnostics such as interferometry of the gas target. Based on previous studies of this FCHHG technique, the wavefront of the driving laser will significantly impact the quality of the resulting harmonics. Thus, the wavefront quality is examined and its impact on the XUV generation is studied.
Identifying a means of efficiently separating the XUV from the pump laser is important for applying such high-energy XUV beams. One technique to achieve such separation is non-collinear HHG, which we are starting to explore. Results of the modeling and experimental investigations will be presented.
The terahertz (THz) frequency band, lying between the microwave and infrared regions of the electromagnetic spectrum, has enabled significant developments in a variety of fields such as wireless communications, product quality control, and condensed matter research. To improve the photonics systems used for these applications, intense efforts are being made to develop faster and more sensitive THz detectors. Conventional detection schemes relying on semiconductor devices fail at frequencies above 1 THz due to limited electronic response time and thermal fluctuations at room temperature. The highest sensitivity THz detection schemes presently available, such as superconducting tunnel junctions and single-quantum dot detectors, require cryogenic operation, making them expensive and cumbersome to use. Here, we demonstrate a high-sensitivity room-temperature detection scheme for THz radiation based on parametric frequency upconversion of the THz radiation to higher frequencies (in the near-infrared (NIR)), preserving the spatial, temporal, and spectral information of the THz wave. The upconverted photons, generated by the mixing of a THz pulse with a NIR pulse in a nonlinear optical crystal, are spectrally resolved using a monochromator and a commercial single-photon detector in the NIR. With this technique, we can detect THz pulses with energy as low as 1.4 zJ (1 zJ = 10$^{-21}$ J) at a frequency of 2 THz (or a wavelength of 150 µm) when averaged over only 50k pulses. This corresponds to the detection of about 1.5 photons per pulse and a noise-equivalent power of 1.3 × 10$^{-16}$ W/Hz$^{1/2}$. To demonstrate potential applications of our system, we perform spectroscopy of water vapor between 1 and 3.7 THz with a spectral resolution of 0.2 THz. Our technique offers a fast and sensitive alternative to current THz spectroscopy techniques and could notably be used in future wireless communication technologies.
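As a quick consistency check of the single-photon-level sensitivity quoted above, the energy of one photon at 2 THz is
$$E = h\nu \approx (6.63\times10^{-34}\ \mathrm{J\,s})\times(2\times10^{12}\ \mathrm{Hz}) \approx 1.3\times10^{-21}\ \mathrm{J} \approx 1.3\ \mathrm{zJ},$$
so pulse energies of order 1 zJ indeed correspond to roughly one photon per pulse at this frequency.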
With many regions of the electromagnetic spectrum already allocated for wireless communications in the mobile, satellite, and military sectors, there is a growing need to exploit new frequency regions. The terahertz (THz) band, which lies between the microwave and infrared regions, is a possible route to achieving terabit-per-second (Tbps) data transfer rates. For transmission in atmospheric conditions, water vapour molecules attenuate the THz signal in certain frequency regions, primarily due to rotational resonances. There are a few spectral windows with negligible absorption, with some allowing signal propagation over several meters and others over several hundreds of meters. The short-distance propagation windows can be used for secure communications in a small area with limited possibilities of eavesdropping. The latter can be used for transferring data over relatively long distances in turbulent atmospheres. We study the propagation distance of different spectral bands and investigate their potential for one of the above-mentioned applications. Our study relies on a nonlinear optical technique to achieve sensitive detection of THz signals. We demonstrate a parametric upconversion process allowing all information contained within a THz signal to be retrieved with a commercial optical detector sensitive to near-infrared light. Our optical configuration combines a monochromator and a single-photon avalanche diode to achieve spectral coverage up to 3 THz with a resolution better than 0.2 THz and an unprecedented detection sensitivity. These results pave the way towards the development of 6G wireless communication relying on new spectral bands above 1 THz, enabling higher data transfer rates and increasing the security of local networks.
Join the Canadian Journal of Physics team (including Editors-in-Chief Robert Mann (UWaterloo) and Marco Merkli (MUN), and Journal Development Specialist Jocelyn Sinclair) to discuss current trends and horizons in academic publishing, including peer review, open science and open access, research integrity, and ethical publishing standards. Open discussion to follow; please pre-purchase lunch (through Congress registration) or bring your own!
At CAP 2023, the Canadian Journal of Physics hosted a discussion around Open Access and its current and future impacts on publishing in physics. You can read a summary of the information presented and the following discussion in the attached document.
Silicon photomultipliers (SiPMs) are single-photon-sensitive light sensors. The excellent radio-purity and high gain of SiPMs, along with a high VUV detection efficiency, make them ideal for low-background photon counting applications, such as in neutrinoless double beta decay and dark matter experiments employing noble liquid targets. The Light only Liquid Xenon (LoLX) experiment is an R&D liquid xenon (LXe) detector located at McGill University. LoLX aims to perform detailed characterization of SiPM performance and to characterize the light emission and transport from LXe to inform future detectors. During Phase-1 of operations, LoLX employed 96 Hamamatsu VUV4 SiPMs in a cylindrical geometry submerged in LXe. Photons detected by a SiPM trigger an avalanche process in the individual photodiodes within the SiPM. The avalanche emits near-infrared photons that can travel across the detector to other SiPMs, where they may produce correlated pulses on other channels, a process known as SiPM external crosstalk (eXT). With the Phase-1 LoLX detector, we performed measurements of SiPM external crosstalk in LXe with geometric acceptance similar to that of future planned experiments. In this presentation, we will show the measurement of SiPM eXT detection within LoLX, with comparisons to GEANT4 eXT simulations informed by ex-situ measurements of SiPM photon emission characteristics.
Searches for neutrinoless double beta decay conducted with Xe-136 can be improved by detecting the decay's daughter, the Ba-136 ion. This technique offers complete rejection of the residual radioactive background, but its practical implementation remains challenging. At Carleton University, Ba ion tagging R&D is being conducted using a cryogenic liquid xenon setup. As a proof-of-concept, untargeted ion extraction tests are being carried out in argon gas using radioactive ions captured and extracted using a thin capillary probe into an analysis chamber and then detected using a passivated implanted planar silicon detector. To better understand the experimental results, a Monte Carlo simulation of this process has been developed. This talk will present the design considerations, apparatus and procedures used, as well as discuss and compare the experimental results and simulations.
The Light-only Liquid Xenon (LoLX) experiment at McGill University, in collaboration with TRIUMF, examines liquid xenon (LXe) for its potential in detecting rare physical events using silicon photomultipliers (SiPMs). This research seeks to evaluate the long-term stability of Vacuum Ultraviolet (VUV)-sensitive SiPMs in LXe, understand LXe's optical properties, and develop new methods to separate Cherenkov and scintillation light. Outcomes will set benchmarks for SiPMs in LXe environments and enhance particle identification, aiding future rare event search experiments, such as nEXO, in achieving higher sensitivity.
LoLX2 is a 4 cm cube composed of two types of SiPMs, Hamamatsu VUV4 and FBK HD3, as well as a VUV-sensitive photomultiplier tube (PMT). In this phase of the study, we compare the performance of these two types of SiPMs to the PMT. The initial data acquisition is currently under analysis and will be discussed in this presentation.
The Milky Way's (MW) most massive satellite, the Large Magellanic Cloud (LMC), has just passed its first pericenter approach. The presence of the LMC has a considerable impact on the position and velocity distributions of dark matter (DM) particles in the MW. This directly affects the expected DM annihilation rate, especially for velocity-dependent annihilation models, since the LMC may boost the relative DM velocity distributions. I will discuss the impact of the LMC using MW-LMC analogues in the Auriga magneto-hydrodynamical simulations.
We aim to describe the effect of accelerated frames in cosmology and to identify the origins of thermalization in the evolution of the universe. We begin by discussing general relativity and cosmology, as well as their successes and failures, which leads to the need for quantum cosmology. We then discuss the canonical formulation of general relativity, which is the basis of quantum cosmology, and its issues. We construct a wavefunction for the universe whose dynamics are governed by the Wheeler-DeWitt equation.
Semiclassical approximations introduce simplifying assumptions that bring the equation closer to a form that can be more easily analyzed. The WKB method is used to approximate the wave function.
We constructed a transformation that is similar to the Rindler transformation motivated by the Klein-Gordon equation in Minkowski spacetime. We performed the Bogoliubov transformation and obtained a result which suggested thermalization. However, we were not using creation and annihilation operators. To interpret this result, we calculated the density matrix and the square of the density matrix to see if the WKB state is a pure or mixed state. The result from the density matrix calculation suggested that the WKB state is a mixed state, which suggested that the result we obtained from the Bogoliubov transformation can be interpreted as thermalization.
We study the classical-quantum (CQ) hybrid dynamics of homogeneous cosmology from a Hamiltonian perspective where the classical gravitational phase space variables and matter state evolve self-consistently with full backreaction. We compare numerically the classical and CQ dynamics for isotropic and anisotropic models, including quantum scalar-field induced corrections to the Kasner exponents. Our results indicate that full backreaction effects leave traces at late times in cosmological evolution; in particular, the scalar energy density at late times provides a potential contribution to dark energy. We also show that the CQ equations admit exact static solutions for the isotropic, and the anisotropic Bianchi IX universes with the scalar field in a stationary state.
In quantum gravity it is expected that the Big Bang singularity is resolved and the universe undergoes a bounce. We argue that matter-gravity entanglement entropy rises rapidly during the bounce, declines, and then approaches a steady-state value higher than before the bounce. These observations suggest that matter-gravity entanglement is a feature of the macroscopic universe and that there is no second law of entanglement entropy.
Using quantum field theory, we calculate the total effect on the photon flux in the microwave background due to some photons being gravitationally scattered toward us and others being gravitationally scattered away from us. The scattering is produced by density fluctuations, which act like point masses in an FLRW background and can be of either sign. The net effect of having masses of either sign is to give a Debye screening of the graviton.
Loop Quantum Cosmology offers a successful quantization of cosmological models using techniques adapted from Loop Quantum Gravity (LQG). However, the connection with LQG remains unclear, primarily due to the absence of the $SU(2)$ gauge symmetry, which is a fundamental aspect of LQG. We aim to address this issue by demonstrating that the Gauss constraint can always be reformulated into abelian constraints within the cosmological framework, indicating the inherent abelian nature of the model in the minisuperspace.
To overcome this challenge, we propose employing a symmetry reduction approach inspired by Yang-Mills theory. This approach compels us to leave the minisuperspace, but, on the other hand, it allows us to construct a classical cosmological sector for the theory within the LQG framework and provide an analogous quantization.
Since the derivation of a well-defined D→4 limit for 4D Einstein-Gauss-Bonnet (4DEGB) gravity coupled to a scalar field, there has been interest in testing it as an alternative to Einstein's general theory of relativity. Using the Tolman-Oppenheimer-Volkoff equations modified for 4DEGB gravity, we model the stellar structure of quark stars using a novel interacting quark matter equation of state. We find that increasing the Gauss-Bonnet coupling constant α or the interaction parameter λ both tend to increase the mass-radius profiles of quark stars described by this theory, allowing a given central pressure to support larger quark stars in general. These results logically extend to cases where λ<0, in which increasing the magnitude of the interaction effects instead diminishes masses and radii. We also analytically identify a critical central pressure in both regimes, below which no quark star solutions exist due to the pressure function having no roots. Most interestingly, we find that quark stars can exist below the general relativistic Buchdahl bound and Schwarzschild radius R=2M, due to the lack of a mass gap between black holes and compact stars in the 4DEGB theory. Even for small α well within current observational constraints, we find that quark star solutions in this theory can describe extreme compact objects, objects whose radii are smaller than what is allowed by general relativity.
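For context only, mass-radius points of this kind are typically obtained by integrating the TOV equations outward from a chosen central pressure until the pressure vanishes; the sketch below uses the standard GR TOV equations with a simple bag-model-like equation of state, and the 4DEGB α-corrections and interacting EOS used in the work above are not included. All numerical values are placeholders.

```python
# Sketch of a mass-radius calculation by integrating the standard GR TOV equations
# (geometric units G = c = 1) with a bag-model-like quark-matter EOS, eps = 3p + 4B.
# Illustration of the general procedure only; B and p_c are placeholder values.
import numpy as np
from scipy.integrate import solve_ivp

B = 1.0e-4  # bag constant (placeholder, geometric units)

def eos_energy_density(p):
    return 3.0 * p + 4.0 * B

def tov_rhs(r, y):
    p, m = y
    eps = eos_energy_density(p)
    dpdr = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return [dpdr, dmdr]

def surface(r, y):          # stop when the pressure drops to (nearly) zero
    return y[0] - 1e-12
surface.terminal = True

def mass_radius(p_c, r_max=100.0):
    r0 = 1e-6
    m0 = 4.0 / 3.0 * np.pi * r0**3 * eos_energy_density(p_c)
    sol = solve_ivp(tov_rhs, (r0, r_max), [p_c, m0], events=surface,
                    max_step=0.01, rtol=1e-8)
    return sol.t[-1], sol.y[1][-1]   # stellar radius and total mass

R, M = mass_radius(p_c=5.0e-4)
```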
Introduction: We have recently demonstrated$^1$ a compressed-sensing (CS)-based undersampling method capable of improving signal-to-noise ratio and image quality of low field images. An optimal choice of pulse sequence would reduce undersampling artefacts and improve image quality; in this work, different sampling patterns in k-space for the X-Centric$^2$ and Sectoral$^{3,4}$ sequences are investigated at high acceleration factors (AF = 7, 10, 14).
Method: The X-Centric sequence acquires each half of k-space separately, in the readout direction, reducing signal loss from diffusion and relaxation. Both halves normally acquire the same phase-encode lines in k-space (non-alternating), but they can also sample a unique set of lines (alternating). The Sectoral sequence splits a circular area of k-space into sectors (here, 64), and acquires each sector from the centre-out, oversampling the contrast-rich centre. The proposed sampling pattern consists of stopping each sector prematurely, ensuring the undersampling is confined to the edges of k-space.
In-vitro $^1$H MRI was performed at 73.5 mT. Seven sets of 9 images each were acquired with X-Centric: one set per AF for each sampling pattern, and one fully-sampled set to be retrospectively undersampled using the proposed Sectoral sampling. The Fourier-transformed (FT) images were compared to the CS-based reconstructions using the structural similarity index (SSI); all images were 128 px$^2$ with a FOV of 8 cm$^2$.
Results: The FT images acquired using X-Centric had SSI scores around 35%; however, the FT Sectoral images had a SSI score of 96% and virtually no artefacting, with only slight blurring. The CS reconstructions of all 3 sampling patterns had SSI scores around 87%, with Sectoral exhibiting fewer artefacts.
Conclusion: Although the CS reconstructions of all 3 proposed sampling patterns had similar SSI scores and artefacting, in line with our previous work, the direct FT images of Sectoral were free of artefacts, comparable to the fully-sampled images, even at AF=14 (only 7% of k-space): the artefacts in the CS image are likely due to over-fitting the reconstruction parameters. These results suggest that the proposed Sectoral sampling pattern is well suited for accelerated low field MRI.
References:
1. Perron, S. et al. ISMRM (2022); 2. Ouriadov, A.V. et al. MRM (2017); 3. Khrapitchev, A.A. et al. JMR (2006); 4. Perron, S. et al. JMR (2022).
In this work we present the first low-field TRASE technique capable of encoding 2D axial slices without switching gradients of the main magnetic field (B0). TRASE is an MR imaging technique that uses phase gradients within the radiofrequency (RF) fields to achieve k-space encoding. In doing so, TRASE eliminates much of the main-field gradient hardware, significantly reducing the cost and size of the overall system. The TRASE encoding principle ideally requires two and four different RF phase-gradient fields for 1D encoding and 2D imaging, respectively. Preventing interactions between these RF transmit coils has been the primary challenge, especially for 2D imaging. To address this problem, we constructed a head-sized TRASE coil pair capable of 1D encoding along any transverse axis. By rotating the coil pair, the encoding axis can be changed, allowing a full 2D k-space acquisition in a radial-spoke fashion. This radial TRASE technique requires half the RF transmit coils and accompanying RF electronics of typical Cartesian TRASE imaging. As a first demonstration of this technique, a head-sized coil pair was constructed and experimentally verified on a uniform 8.65 MHz bi-planar permanent magnet with a constant B0 gradient used for slice selection. Decoupling of the two transmit coils is performed geometrically, and a parallel-transmit (PTx) system is presented as a method to reduce any residual coupling. This work demonstrates that 2D slice-selective imaging is feasible without the use of any B0 switched gradients.
Introduction: Ventilation defects in the lungs (1), characterized by impaired airflow and reduced gas exchange, can arise from various factors such as small airway obstructions, mucus accumulation, and tissue damage (2). Hyperpolarized 129Xe/3He lung MRI is an efficient technique used to investigate and assess pulmonary diseases and ventilation defects (3). Current methods for quantifying these defects rely on semi-automated techniques (4), involving hierarchical K-means clustering (5) for 3He MR images and seeded region-growing algorithms (6) for 1H MR images. Despite their effectiveness, these methods are time-consuming. Deep Learning (DL) has revolutionized medical imaging, particularly in image segmentation (7). While Convolutional Neural Networks (CNNs) like UNet (8) are currently the standard, Vision Transformers (ViTs) (9, 10) have emerged as a compelling alternative. ViTs have excelled in various computer vision tasks (11), owing to their multi-head self-attention mechanisms that capture long-range dependencies with less inductive bias. SegFormer (12), a specific ViT architecture, addresses some of the limitations of CNNs, such as low-resolution output and inability to capture long-range dependencies. It also features a positional-encoding-free design, making it more robust.
The purpose of this study is to explore the efficacy of SegFormer in the automatic segmentation and quantification of ventilation defects in hyperpolarized gas MRI. We aim to demonstrate that SegFormer not only outperforms semi-automated techniques and CNN-based methods in accuracy but also significantly reduces the training time.
Methods: We collected data from 56 study participants, comprising 9 healthy individuals, 28 with COPD, 9 with asthma, and 10 with COVID-19. This resulted in 1456 2D slices segmented using MATLAB R2021b and the hierarchical K-means clustering method. The dataset was balanced, with an even distribution of data from each participant group across the training (80%), validation (10%), and testing (10%) sets. The code was implemented in PyTorch and executed on two NVIDIA GeForce RTX 3090 (GA102) GPUs in parallel. Proton and hyperpolarized slices were registered using a landmark-based affine image registration approach.
In our research, we utilized the SegFormer architecture, which incorporates hierarchical decoding for enriched feature representation, employs overlapping patches to enhance boundary recognition, uses an MLP head for pixel-wise segmentation mask creation, and integrates a Bottleneck Transformer for reduced computational demands. SegFormer utilizes both coarse and fine-grained features in lung MRI: while coarse features distinguish lung from non-lung tissue, fine-grained ones enable precise boundary identification and early disease-feature detection, enhancing overall MRI interpretation.
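Segmentation quality below is reported as the Dice similarity coefficient (DSC). For reference, a minimal sketch of one common way to compute it for binary masks is shown here; the smoothing constant and the exact evaluation code used in the study are assumptions.

```python
# Illustrative sketch: Dice similarity coefficient between two binary masks.
import torch

def dice_coefficient(pred_mask: torch.Tensor, true_mask: torch.Tensor,
                     eps: float = 1e-7) -> torch.Tensor:
    """DSC between two binary masks of identical shape."""
    pred = pred_mask.float().flatten()
    true = true_mask.float().flatten()
    intersection = (pred * true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Toy example: two overlapping 4x4 masks
a = torch.zeros(4, 4); a[1:3, 1:3] = 1
b = torch.zeros(4, 4); b[1:3, 1:4] = 1
print(dice_coefficient(a, b))  # ~0.8
```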
Results: In this study, the efficacy of SegFormer was assessed through various Mix Transformer encoders (MiT), with MiT-B0 offering rapid inference and MiT-B2 targeting peak performance. Without pretraining, SegFormer registered a Dice Similarity Coefficient (DSC) of 0.96 overall, and 0.94 for hyperpolarized gas MRI, within the training dataset. Remarkably, with ImageNet (13) pretraining, SegFormer surpassed its CNN-based counterparts while requiring fewer computational resources.
After ImageNet pretraining, MiT-B2 achieved a training DSC of 0.980 for Proton MRI and 0.974 for hyperpolarized gas MRI, with testing scores of 0.975 and 0.965, respectively, utilizing 24 million parameters. MiT-B0 recorded training DSCs of 0.973 (proton MRI) and 0.969 (hyperpolarized gas MRI), with test scores of 0.969 and 0.951. In contrast, the pretrained Unet++ with VGG 16 (14) backbone reported training DSCs of 0.964 and 0.953, and testing DSCs of 0.955 and 0.942, using 14 million parameters. The pretrained UNet with a ResNet 50 (15) backbone yielded training DSCs of 0.971 and 0.962, and test DSCs of 0.960 and 0.951, utilizing 23 million parameters.
These findings underscore SegFormer's excellence, especially in the MiT-B2 configuration, in segmenting and quantifying ventilation defects in hyperpolarized gas MRI. SegFormer's smaller number of learnable parameters also led to reduced training time compared to the CNN-based models, without compromising performance. DSC results are tabulated in Table 1. Case studies for proton MRI segmentation can be seen in Figure 1, with hyperpolarized gas MRI cases in Figure 2, and a VDP value comparison across select cases presented in Figure 3.
Discussion and Conclusion: Our study underscores the effectiveness of SegFormer in hyperpolarized gas MRI for segmenting and quantifying ventilation defects. SegFormer not only outperformed UNet and UNet++ with various backbones in DSC but also excelled in training-time efficiency. SegFormer's implicit understanding of spatial context, without traditional positional encodings, is particularly promising for medical imaging. However, our study is limited to a specific patient cohort, warranting further validation for broader applicability. In conclusion, SegFormer presents a transformative approach for efficient and precise quantification of ventilation defects in hyperpolarized gas MRI. Its superior performance in both accuracy and computational efficiency positions it as a promising tool for broader clinical applications in hyperpolarized gas MRI.
References:
1. Altes TA, Powers PL, Knight-Scott J, et al.: Hyperpolarized 3He MR lung ventilation imaging in asthmatics: preliminary findings. J Magn Reson Imaging 2001; 13:378–384.
2. Harris RS, Fujii-Rios H, Winkler T, Musch G, Melo MFV, Venegas JG: Ventilation Defect Formation in Healthy and Asthma Subjects Is Determined by Lung Inflation. PLOS ONE 2012; 7:e53216.
3. Perron S, Ouriadov A: Hyperpolarized 129Xe MRI at low field: Current status and future directions. Journal of Magnetic Resonance 2023; 348:107387.
4. Kirby M, Heydarian M, Svenningsen S, et al.: Hyperpolarized 3He Magnetic Resonance Functional Imaging Semiautomated Segmentation. Academic Radiology 2012; 19:141–152.
5. MacQueen J: Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics. Volume 5.1. University of California Press; 1967:281–298.
6. Adams R, Bischof L: Seeded region growing. IEEE Transactions on Pattern Analysis and Machine Intelligence 1994; 16:641–647.
7. Malhotra P, Gupta S, Koundal D, Zaguia A, Enbeyle W: Deep Neural Networks for Medical Image Segmentation. Journal of Healthcare Engineering 2022; 2022:1–15.
8. Ronneberger O, Fischer P, Brox T: U-Net: Convolutional Networks for Biomedical Image Segmentation. 2015.
9. Dosovitskiy A, Beyer L, Kolesnikov A, et al.: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. 2020.
10. Al-hammuri K, Gebali F, Kanan A, Chelvan IT: Vision transformer architecture and applications in digital health: a tutorial and survey. Vis Comput Ind Biomed Art 2023; 6:14.
11. Thisanke H, Deshan C, Chamith K, Seneviratne S, Vidanaarachchi R, Herath D: Semantic segmentation using Vision Transformers: A survey. Engineering Applications of Artificial Intelligence 2023; 126:106669.
12. Xie E, Wang W, Yu Z, Anandkumar A, Alvarez JM, Luo P: SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. 2021.
13. Deng J, Dong W, Socher R, Li L-J, Kai Li, Li Fei-Fei: ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami, FL: IEEE; 2009:248–255.
14. Simonyan K, Zisserman A: Very Deep Convolutional Networks for Large-Scale Image Recognition. 2015.
15. He K, Zhang X, Ren S, Sun J: Deep Residual Learning for Image Recognition. 2015.
Diffusion magnetic resonance imaging (dMRI) is a method that sensitizes the MR signal to water molecule diffusion, probing tissue on a microstructural level not attainable with traditional MRI techniques. While conventional dMRI has proven useful in many research areas, more advanced techniques are necessary to further characterize tissue microstructure at spatial scales not available with conventional dMRI. Encoding diffusion using an oscillating gradient spin echo (OGSE) sequence increases sensitivity to smaller spatial scales (<10 µm), and diffusional kurtosis imaging (DKI) provides a comprehensive representation of the dMRI signal, increasing sensitivity to microstructure. While combining these techniques may allow for probing cellular length scales with high sensitivity, generating the large b-values (strength of diffusion weighting) required for DKI is challenging when using OGSE, and DKI maps are often confounded by noise. In this work, we present a method that combines an efficient diffusion encoding scheme and a fitting algorithm utilizing spatial regularization to address these challenges and provide robust estimates of DKI parameters. DKI data was acquired in 8 mice on a 9.4 Tesla scanner using an OGSE sequence with b-value shells of 1,000 and 2,500 s/mm2 (each with a 10-direction scheme which maximizes b-value), TE/TR=35.5/15,000 ms, 4 averages. For comparison, in one mouse we acquired the same dataset but using a commonly used 40-direction scheme, TE/TR=52/15,000 ms, no averaging. We compared our implementation of spatial regularization with a commonly used denoising technique in dMRI, Gaussian smoothing on diffusion-weighted images (DWIs) prior to fitting. We show that using the efficient 10-direction scheme results in much higher signal-to-noise ratio in non-DWIs (30.6 vs 11.4) and improved DKI map quality compared to the 40-direction protocol. Spatial regularization was shown to outperform Gaussian smoothing in terms of contrast preservation both qualitatively and quantitatively. The presented method allows for DKI fitting when using OGSE sequences by addressing key challenges when combining their use, and we showed the advantages of the various elements over conventionally used methods. This pipeline will allow for investigation of normal and pathological brain microstructure at cellular and sub-cellular spatial scales with high sensitivity.
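For context, DKI conventionally refers to fitting the signal along a given diffusion-encoding direction with the kurtosis expansion below, where $D$ is the apparent diffusivity and $K$ the apparent kurtosis; the regularized fitting described above concerns how these parameters are estimated, not the representation itself:
$$\ln S(b) = \ln S_0 - bD + \tfrac{1}{6}\,b^{2}D^{2}K.$$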
Magnetic particle imaging (MPI) is an emerging tracer-based imaging modality that employs the use of magnetic excitation to detect superparamagnetic iron oxide (SPIO) particles. MPI signal is only generated from SPIO and thus there is no signal from tissue. As well, the signal is linearly quantitative with SPIO concentration so the number of SPIO-labeled cells can be calculated from images. The sensitivity and resolution of MPI depend heavily on the type of SPIO used and the imaging parameters. Lower gradient field strength, higher drive (excitation) field amplitude and signal averaging are known to increase the MPI signal, however, the degree to which these changes improve SPIO and cellular sensitivity has not been tested experimentally. Our goal was to test the effects of changing various MPI imaging parameters on the MPI signal strength and cellular detection limits.
Experiments were performed on a Momentum™ MPI scanner (Magnetic Insight Inc.). SPIO (ProMag) samples were imaged using an advanced user interface which allows editing of pulse sequences to change the parameters. 2D images were acquired to compare 2 gradient field strengths, 2 drive field amplitudes, and signal averaging. Stem cells were labeled by overnight incubation with ProMag and collected to create samples of 100K to 1K cells. 2D images were acquired to compare the 2 gradient field strengths and the 2 drive field amplitudes. An in vivo pilot experiment was performed where cell pellets of 50K, 25K, 10K, and 5K cells were injected subcutaneously into the back of nude mice. MPI was performed using the optimal parameters as determined from the in vitro cell sample experiments.
The mean MPI signal of the SPIO samples was 1.7 times higher using the low gradient field strength compared to the high strength, and 4.2 times higher for the high drive field amplitude compared to the low, showing improved sensitivity but also lower resolution. A low gradient field strength and a high drive field amplitude also produced higher signal from SPIO-labeled cells. The highest cellular sensitivity (1K cells) was achieved using a low gradient field strength and a high drive field amplitude. Signal averaging increased the signal-to-noise ratio by approximately the square root of the number of averages. When using a 12 cm FOV to image the whole mouse, the 50K and 25K cell injections could be clearly visualized but the lower cell numbers were faint; this is the result of the known dynamic-range limitation in MPI. With a 3D acquisition (35 projections), the 10K and 5K cell injections could also be detected.
To conclude, in this study we showed that MPI imaging parameters can be adjusted to improve cell detection limits in vitro and in vivo. Further improvements to our in vivo detection limit are expected as MPI-tailored SPIOs are developed.
Molecular imaging techniques can be used to track tumour cell proliferation, metastasis, and viability. Tumour cells labelled with superparamagnetic iron oxide (SPIO) and transfected with a luciferase reporter gene can be dually tracked using magnetic particle imaging (MPI) and bioluminescence imaging (BLI). MPI is highly sensitive as signal is generated directly from SPIO. This allows for direct quantification of iron mass and cell number. BLI specifically detects live cells. In this study, we directly compared the cellular detection limits of BLI and MPI in vitro and in vivo for the first time. Murine 4T1 cancer cells were labelled with SPIO and transfected with luciferase. For the in vitro study, cells were serially diluted at a 1:2 ratio from 51,200 to 100 cells. BLI images were acquired until each sample reached peak radiance (20 min scan). MPI images were then acquired using a 2D high sensitivity scan (5.7 T/m gradient strength, 20 mT drive field amplitude, 2 min scan). For samples that could not be detected with 2D MPI, 3D images were acquired (30 min scan). For the in vivo study, 6400 cells were injected subcutaneously on the back of three nude mice. Each mouse was imaged with BLI until peak radiance was reached (30 min scan). Then, each mouse underwent 2D and 3D MPI using the high sensitivity scan mode. In vitro, we detected as few as 100 cells with BLI and as few as 3200 cells with 2D MPI. 3D imaging improved the in vitro MPI detection limit to 800 cells. In vivo, 6400 cells were detected using both modalities. However, tissue attenuation prevented the detection of 6400 cells with BLI when mice were imaged in the supine position. Although BLI detected fewer cells in vitro, MPI sensitivity is expected to improve over time with the development of MPI-tailored SPIO. Future work will aim to further assess the in vivo cellular detection limits of BLI and MPI by using lower cell numbers.
Explosive stellar events, such as X-ray bursts, novae, and supernovae, play a pivotal role in synthesizing the chemical elements observed in our galaxy and on Earth. The field of nuclear astrophysics seeks to unravel the mysteries behind the origin of the chemical elements and understand the underlying nuclear processes governing the evolution of stars. Particularly, the investigation of radiative capture reactions, involving the fusion of hydrogen or helium and subsequent emission of gamma rays, is crucial for the understanding of nucleosynthesis pathways in stellar environments.
Continuous advancements in accelerated rare isotope beam production offer a unique opportunity to replicate and study reactions occurring inside stars in the laboratory. However, many astrophysically significant reactions involve radioactive isotopes, thus presenting challenges for beam production and background reduction. Furthermore, direct measurements of radiative capture cross sections are extremely challenging due to the vanishingly small cross sections in the astrophysically relevant energy regime.
To address these challenges, dedicated facilities such as the DRAGON (Detector of Recoils And Gammas Of Nuclear reactions) recoil separator, TUDA (the TRIUMF-UK Detector Array) for charged-particle detection, and the EMMA (ElectroMagnetic Mass Analyser) recoil mass spectrometer, all situated at the TRIUMF-ISAC Radioactive Ion Beam Facility, have been designed to experimentally determine nuclear reaction rates of interest for nuclear astrophysics using inverse kinematics methods.
In this contribution I will outline the achievements and latest advances of the nuclear astrophysics program at TRIUMF, and present recent highlights from studies utilizing radioactive and high-intensity stable ion beams. Our findings contribute to a deeper understanding of astrophysical processes and pave the way for future breakthroughs in nuclear astrophysics research.
Neutron star mergers are an ideal environment for rapid (r-process) neutron captures to take place, leading to the production of neutron-rich nuclei far from the valley of stability. Mergers are thus an encouraging site to investigate as a source of the heaviest elements observed in our Solar System and beyond. We explored the r-process regime in mergers by testing various mass models, fission yields, and astrophysical conditions, covering three distinct hydrodynamic simulations, some of which make use of more than 1000 tracer particles. We considered elemental abundance ratios involving the key indicators barium, lanthanum, and europium, ultimately aiming to investigate the spread in these ratios that the r-process can accommodate, with current conclusions discussed here. Further, we compared to stellar data for metal-poor stars, drawn from literature results compiled by JINAbase. This work has allowed us to better understand the production of elemental abundances in the universe and to further test the expected bounds of known nucleosynthesis process regimes.
The equation of state of ultra-dense matter, which relates microscopic and macroscopic quantities of ultra-dense objects and describes the core of the most energetic events in the universe, remains incompletely understood, particularly under extreme conditions such as high temperatures (on the order of ~10 MeV). In order to compute a hydrodynamic simulation of a binary neutron star merger, a choice of equation of state is required, and this choice influences the evolution of the system. For instance, the spectrum of neutrinos emitted during such an event, which we can detect on Earth, will differ for a different equation of state. Therefore, neutrinos from binary neutron star mergers carry information about the equation of state of ultra-dense matter; their number, as well as the shape of their predicted spectrum, can be compared to detections in neutrino observatories. However, binary neutron star mergers are rare, and neutrinos are hard to detect. Rather than focusing on the neutrinos coming from a single event, this study examines the contribution of binary neutron star mergers to the diffuse neutrino background. This comparative analysis between theoretical predictions and observed data will allow us to constrain the equation of state of ultra-dense matter for use in simulations.
Nuclear pairing, i.e., the tendency of nucleons to form pairs, has important consequences for the physics of heavy nuclei and compact stars. While the pairing found in nuclei typically occurs between identical nucleons and in spin-singlet states, exotic spin-triplet and mixed-spin pairing phases have also been hypothesized. In this talk, I will present new investigations confirming the existence of these novel superfluids, even in the face of the antagonizing nuclear deformation, in regions that can be experimentally accessible. These results also provide general conclusions on superfluidity in deformed nuclei. These exotic superfluid phases can modify proposed manifestations of pairing in nuclear collisions and have clear experimental signatures in spectroscopic quantities and two-particle-transfer direct reaction cross sections.
Measurement and uncertainty are important concepts that show up across a standard physics curriculum, from laboratory instruction to quantum mechanics courses. Little work, however, has examined how students reason about uncertainty beyond the introductory level and has generally focused on a single perspective: students' procedural reasoning about measurements. Our team has developed new ways of looking at students' reasoning about measurement and uncertainty that span these contexts, and also explore students' ideas about sources of uncertainty, predictive reasoning about measurements, and ideas about the existence of "true values". I will present our work exploring the interesting variability in student reasoning across these perspectives, classical and quantum mechanics contexts, and introductory and upper-division students.
Laboratory courses are a fundamental part of physics education, with proficiency in scientific writing being one of their key learning outcomes. While research has been conducted into how to teach this skill in various STEM fields, no such effort has been reported for physics. We attempt to address this by measuring the impact of the Writing-Integrated Teaching (WIT) program on students' self-reported confidence in a variety of skills that characterize scientific writing. This program, pioneered at the University of Toronto, has been successfully implemented in several departments within the Faculty of Arts and Science, most recently including the junior laboratory courses (Practical Physics I & II) at the Department of Physics. The course structures have been adjusted to allow for review and resubmission of the laboratory report, giving students space to practice and improve, alongside the development and compilation of writing resources, teaching assistant training, and a focus on feedback. Initial results of the study show improvement but lead to the conclusion that further work and refinement are needed, especially when it comes to providing feedback and curating the repository of resources.
Schöllkopf and Toennies first demonstrated the existence of the helium dimer by making use of matter-wave interference (Journal of Chemical Physics, 104, 1155 (1996)). The concept of a molecule comprised of two helium atoms is perhaps a surprise to students, based on their secondary-level chemistry knowledge. The process used by Schöllkopf and Toennies to demonstrate the existence of the helium dimer made use of several physics concepts that are already appreciated by beginner physics learners: specifically, diffraction phenomena and the de Broglie matter-wave relationship. The Heisenberg uncertainty principle can also be used to reason about the controversy regarding the existence of the helium dimer. Our work aims to bridge the experiment carried out by Schöllkopf and Toennies with the physics knowledge already available to students. We also introduce an analogy between the helium atoms and molecules using frequency-doubled light: second-harmonic light has half the wavelength of its fundamental counterpart, much like the helium dimer has twice the mass of the atom, and thus half the de Broglie wavelength, if the atoms and molecules are travelling at the same speed. The van der Waals bond itself is the conduit to presenting the application of the concepts already appreciated by physics learners. Our presentation introduces to other physics educators the video lessons and instructional materials that we have created to strengthen the link between the pedagogy of physics and a specific example from the research literature. Ultimately, this presentation will take listeners on a learning journey similar to that of our target audience of formally educated physics students, and potentially general enthusiasts of physics learning. We hope that this will result in further conversation about “declassifying” interesting physics experiments in a way that can extend physics pedagogy to lifelong learning outside the lecture hall or laboratory classroom.
Multiple choice questions are a common and valuable teaching and evaluation tool in large-enrolment introductory physics classes across North American universities. However, they do not provide students with the opportunity to construct and formulate their own ideas. It is desirable to enrich the student experience with activities that reduce the reliance on multiple choice questions, while providing students with additional opportunities to collaborate on analyzing more open-ended scenarios, preferably with some real-life content. Case studies were developed for use in introductory physics courses for science students. The case-study scenarios target important concepts of the introductory physics curriculum and focus on common student misconceptions. Case studies based on real-life scenarios can captivate students' imagination and increase engagement with the material. The talk will focus on a case study that explores a real-life example of air resistance: a record-setting jump from the stratosphere completed by the Austrian skydiver Felix Baumgartner on October 14, 2012. Baumgartner fell to Earth from an altitude of 39,045 metres, after reaching that elevation in a helium balloon. He broke the existing world records for the highest “freefall” as well as the highest manned balloon flight. He also became the first person to break the sound barrier in “freefall”, reaching a maximum speed of 1,357.6 km/h while moving through the stratosphere. The video recording and the data from the fall (elevation and speed versus time) are available as open-source information. Guided by a series of questions, the students analyze the data set from the event.
Reservoir Computing (RC) is a simple and efficient machine learning (ML) framework for learning and forecasting the dynamics of nonlinear systems. Despite RC's remarkable successes—for example, learning chaotic dynamics—much remains unknown about its ability to learn the behavior of complex systems from data (and data alone). In particular, real physical systems typically possess multiple stable states—some desirable, others undesirable. Distinguishing which initial conditions go to "good" vs. "bad" states is a fundamental challenge with applications as diverse as power grid resilience, ecosystem management, and cell reprogramming. As such, this problem of basin prediction is a key test RC and other ML models must pass before they can be trusted as proxies of large, unknown nonlinear systems.
Here, we show that there exist even simple physical systems which leading RC frameworks utterly fail to learn unless key information about the underlying dynamics is already known. First, we show that predicting the fate of a given initial condition using traditional RC models relies critically on sufficient model initialization. Specifically, one must first "warm-up" the model with almost the entire transient trajectory from the real system, by which point forecasts are moot. Accordingly, we turn to Next-Generation Reservoir Computing (NGRC), a recently-introduced variant of RC that mitigates this requirement. We show that when NGRC models possess the exact nonlinear terms in the original dynamical laws, they can reliably reconstruct intricate and high-dimensional basins of attraction, even with minimal training data (e.g., a single transient trajectory). Yet with any features short of the exact nonlinearities, their predictions can be no better than chance. Our results highlight the challenges faced by data-driven methods in learning the dynamics of multistable physical systems and suggest potential avenues to make these approaches more robust.
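As a point of reference for the NGRC approach discussed above, the sketch below shows the basic construction under illustrative assumptions: features built from the current and one delayed state plus all quadratic products, with a ridge-regressed linear readout predicting the one-step state increment. The feature set, delays, and hyperparameters here are not those of the study.

```python
# Minimal NGRC-style sketch: delayed-state polynomial features + ridge readout.
import numpy as np

def ngrc_features(x_t, x_tm1):
    lin = np.concatenate(([1.0], x_t, x_tm1))              # constant + linear terms
    quad = np.outer(lin[1:], lin[1:])[np.triu_indices(len(lin) - 1)]
    return np.concatenate((lin, quad))                      # nonlinear feature vector

def fit_readout(trajectory, ridge=1e-6):
    """Fit W so that feature(x_t, x_{t-1}) @ W ~ x_{t+1} - x_t."""
    X = np.array([ngrc_features(trajectory[t], trajectory[t - 1])
                  for t in range(1, len(trajectory) - 1)])
    Y = trajectory[2:] - trajectory[1:-1]                    # one-step increments
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)

def predict_next(x_t, x_tm1, W):
    return x_t + ngrc_features(x_t, x_tm1) @ W

# Toy usage: training data from a damped 2D linear system
traj = [np.array([1.0, 0.5])]
for _ in range(500):
    x = traj[-1]
    traj.append(x + 0.01 * np.array([x[1], -x[0] - 0.1 * x[1]]))
W = fit_readout(np.array(traj))
print(predict_next(traj[-1], traj[-2], W))
```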
A three-component description of nonlinear body waves in porous media is presented. The processes observed and described here have been patented and applied commercially to oil production and groundwater remediation. It is shown here that even if the correct nonlinear equations are used, three-component wave descriptions of porous media cannot be constructed solely from the equations of motion for the components. This is because of the introduction of the complexity of multiple scales into this nonlinear field theory. Information about the coupling between the components is required to obtain a physical description. It is observed that the fields must be coupled in phase and out of phase, and this result is consistent with the description of three- and n-body gravitational fields in Newtonian gravity and general relativity.
The Korteweg-de Vries (KdV) equation is a useful partial differential equation (PDE) that models the evolution of waves in shallow water with weak dispersion and weak nonlinearity. The Kadomtsev-Petviashvili (KP) equation can be thought of as an extension of KdV to two spatial dimensions: in addition to weak nonlinearity and weak dispersion, it is also weakly two-dimensional. Despite the elegance of these integrable models, finding solutions analytically and numerically, although possible, is still challenging. More recent advances in machine learning, specifically physics-informed neural networks (PINNs), allow us to find solutions in a novel way by including the PDE in the network’s loss function to regularize the network parameters. We show how to use PINNs to find soliton solutions to the KdV and KP equations, compare the results to the analytical solutions, and present the hyperparameters used.
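As an illustration of the PINN idea described above, the sketch below evaluates the PDE residual of the KdV equation, taken here in the common normalization $u_t + 6uu_x + u_{xxx} = 0$, using automatic differentiation; the network architecture, collocation points, and coefficients are assumptions and may differ from those used in this work.

```python
# Hedged PINN sketch: KdV residual u_t + 6 u u_x + u_xxx at collocation points.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def kdv_residual(x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """PDE residual of the network solution u(x, t) at collocation points."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = net(torch.stack((x, t), dim=-1)).squeeze(-1)
    grad = lambda y, v: torch.autograd.grad(y, v, torch.ones_like(y),
                                            create_graph=True)[0]
    u_t = grad(u, t)
    u_x = grad(u, x)
    u_xxx = grad(grad(u_x, x), x)
    return u_t + 6.0 * u * u_x + u_xxx

# The PDE loss term that is added to the data/boundary losses during training
x_c = torch.linspace(-10.0, 10.0, 256)
t_c = torch.rand(256)
pde_loss = kdv_residual(x_c, t_c).pow(2).mean()
```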
In a prediction market, traders buy and sell contracts linked to the outcome of real-world events such as “Will Donald Trump be Re-Elected President on November 5, 2024?”. Each contract (share) pays the bearer 1 dollar if the event happens by the given date, and expires worthless (0 dollars) otherwise. Because contracts trade between 0 and 1 dollar, the price at any given time represents investors’ aggregate perceived likelihood of a given event’s outcome (e.g. 0.63 dollars = a 63% probability). In addition, these prices fluctuate quickly in response to new information – such as revealed scandals, political successes or failures, and economic changes – thereby representing a change in investor opinion. Due to this probability analog, most prediction market literature focuses on how accurate these “crowdsourced” assessments are in predicting final outcomes. Yet little attention has been paid to how investor interactions and the flow of information can push the price of a contract toward (or away from) an accurate price.
Here, we use an approach rooted in statistical physics and information theory to analyze statistical trends linked to investor behaviors within prediction markets. We analyze over 4,800 unique contracts from a popular online prediction market – PredictIt – covering a wide range of events; including election outcomes, legislative votes, and career milestones of politicians. Our novel technique uncovers striking universal patterns not only in contract price and trade volume fluctuations, but also where these fluctuations occur in time. Moreover, we find that these universal patterns persist regardless of the heterogeneous nature of our dataset. Our findings suggest that the interactions between investors that give rise to price dynamics in prediction markets can be embedded in a relatively low-dimensional space of variables. This work opens the door to mechanistic modeling of apparently high-dimensional socio-financial systems, and offers a new way of analyzing economic data.
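Purely as an illustration of the kind of fluctuation statistics referred to above (not the study's actual analysis), the sketch below pools price changes across many contracts after rescaling each contract by its own volatility; the data source and conventions are assumptions.

```python
# Illustrative sketch: pool volatility-rescaled price changes across contracts
# to look for a common fluctuation distribution.
import numpy as np

def rescaled_fluctuations(price_series_list):
    """Pool z-scored price changes across many contracts."""
    pooled = []
    for prices in price_series_list:
        changes = np.diff(np.asarray(prices, dtype=float))
        sigma = changes.std()
        if sigma > 0:
            pooled.append(changes / sigma)
    return np.concatenate(pooled)

# Toy usage with synthetic random-walk "contracts" bounded between 0 and 1 dollar
rng = np.random.default_rng(7)
contracts = [np.clip(0.5 + np.cumsum(0.02 * rng.standard_normal(300)), 0.0, 1.0)
             for _ in range(100)]
z = rescaled_fluctuations(contracts)
hist, edges = np.histogram(z, bins=50, density=True)
```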
Real networked systems are fundamentally vulnerable to attacks that fragment the system via removal of a handful of key nodes, akin to percolation transitions in lattice systems. Though the problem of optimally attacking a network is NP-hard [1], deep reinforcement learning is often able to learn near-optimal solutions to similar problems on disordered topologies (graphs) [2,3]. This raises the question: "Does there exist a strategy to mitigate such an attack?" Here, we address this problem by casting network attack/defense as a two-player, zero-sum game. Specifically, we consider an attacker, who aims to fragment the network---reducing its largest connected component below a specified threshold---with a minimum number of node removals, and a defender, who obfuscates the network by strategically hiding links before the attacker makes its decisions [Figure 1]. In this game, concealed links---which are invisible to the attacker---introduce a novel layer of strategic complexity, potentially providing a strategy to defend networks against attacks.
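To make the attacker's objective concrete, the sketch below tracks the largest connected component while nodes are removed, stopping once it drops below a threshold fraction; the node-selection rule here is a simple degree heuristic, not the learned attacker or the link-concealment mechanics studied in this work.

```python
# Illustrative attack bookkeeping: remove nodes until the largest connected
# component (LCC) falls below a threshold fraction of the original network.
import networkx as nx

def attack_until_fragmented(G: nx.Graph, lcc_threshold: float):
    G = G.copy()
    n0 = G.number_of_nodes()
    removed = []
    while max(len(c) for c in nx.connected_components(G)) > lcc_threshold * n0:
        target = max(G.degree, key=lambda kv: kv[1])[0]   # highest-degree node
        G.remove_node(target)
        removed.append(target)
    return removed

# Toy usage on a scale-free test graph
G = nx.barabasi_albert_graph(200, 2, seed=1)
print(len(attack_until_fragmented(G, lcc_threshold=0.1)), "removals needed")
```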
In our findings, the defender's strategic concealment consistently increases the complexity and uncertainty of the attacker's task. The more links the defender is allowed to conceal, the more challenging it becomes for the attacker to effectively fragment the network [Figure 1]. At low concealment percentages, the defender's actions can successfully confound the attacker relative to heuristics like random concealment. However, the diminution in attacker performance is sublinear; only when essentially all network structure is hidden does the attacker perform no better than random. Our results suggest that network weaknesses are inferrable even with only partial topological information available. These results shed light on defense mechanisms that are (in)effective at maintaining network robustness. In conclusion, our study underscores the vital role of strategic planning in network defense, providing a new perspective on enhancing network resilience to malicious AI-equipped agents.
The dynamics of a polymer in solution are affected by hydrodynamics. It has often been assumed that these effects are mostly long-range and therefore should be less significant in confined environments. However, there are a growing number of experiments on polymers in micro- and nano-fluidic devices where the hydrodynamic flow field is an essential part of the nonequilibrium dynamics of the system and cannot be ignored. My group has created, and maintains, a package for the open-source molecular dynamics package LAMMPS for simulations of particles in a fluid that includes full hydrodynamics, which we use to study these systems. We demonstrate how the interaction between a polymer and fluid flow in a nano-fluidic device can be used to unfold and stretch out a polymer’s configuration. This, in turn, can be exploited to maximize the probability of single-file translocation. In contrast, in a different configuration, we show how the flow around a pushed polymer can result in a compacted configuration and coexistence between a jammed and unjammed state for a long polymer.
Phytoglycogen (PG) is a glucose-based polymer with a dendritic architecture that is extracted from sweet corn as a soft, compact, monodisperse, 22 nm radius nanoparticle. Our recent model for a PG particle in solvent (water), based on dynamical self-consistent field theory (dSCFT), was successful in producing a dendrimer with a core-chain morphology, radius, and hydration, in close agreement with observations [1]. However, this model assumed, for simplicity, that the solvent distribution around the particle was spherically symmetric. This prevented us from studying heterogeneous structures on the particle surface. In this talk, we extend our dSCFT model, and consider a fully three-dimensional solvent distribution. We compare the new predictions for the morphology, radius, and hydration of PG to our earlier results. Motivated by experimental investigations of chemically modified versions of PG, we discuss preliminary results for the surface structures produced by the association of small, hydrophobic molecules with PG.
[1]: Morling, B.; Luyben, S.; Dutcher, J. R.; Wickham, R. A. Efficient modeling of high-generation dendrimers in solution using dynamical self-consistent field theory (submitted).
Computer simulations are used to characterize the entropic force of one or more polymers tethered to the tip of a hard conical object that interact with a nearby hard flat surface. Pruned-enriched-Rosenbluth-method (PERM) Monte Carlo simulations are used to calculate the variation of the conformational free energy, $F$, of a hard-sphere polymer with respect to cone-tip-to-surface distance, $h$, from which the variation of the entropic force, $f\equiv |dF/dh|$, with $h$ is determined. We consider the following cases: (1) a single freely-jointed tethered chain, (2) a single semiflexible tethered chain, and (3) several freely-jointed chains of equal length each tethered to the cone tip. The simulation results are used to test the validity of a prediction by Maghrebi {\it et al.} (EPL, {\bf 96}, 66002(2011); Phys. Rev. E {\bf 86}, 061801 (2012)) that $f\propto (\gamma_\infty-\gamma_0) h^{-1}$, where $\gamma_0$ and $\gamma_\infty$ are universal scaling exponents for the partition function of the tethered polymer for $h=0$ and $h=\infty$, respectively. The measured functions $f(h)$ are generally consistent with the predictions, with small quantitative discrepancies arising from the approximations employed in the theory. In the case of multiple tethered polymers, the entropic force per polymer is roughly constant, which is qualitatively inconsistent with the predictions.
The study of organic solar cells is intriguing from a fundamental point of view because of the very short lifetime of excitons (strongly correlated electron-hole pairs) in these devices. While the origin of the short exciton lifetime is still an open scientific problem, it is now apparent that it is somewhat linked to strong electron-phonon coupling, which also depends on the dielectric function of the excitonic environment in these devices. The photoactive layers of organic solar cells are made of polymers, small organic molecules, or their combination. To date, photoconversion efficiencies (PCEs) approaching 20% have been reported for binary organic photovoltaics by modulating the exciton recombination processes, which allows for enhanced electron-hole separation. Here we show that tunable electron transfer is possible between poly[2-(3-thienyl)ethyloxy-4-butylsulfonate]-sodium (PTEBS, a water-soluble organic polymer) and bathocuproine (BCP), a small organic molecule. We demonstrate PTEBS:BCP electron transfer through quenching of the photoluminescence of PTEBS in the presence of BCP in aqueous acidic solutions, and in thin films fabricated from these solutions. As UV-visible spectroscopy shows only moderate changes of the optical band gap of PTEBS depending on the pH of the starting solution, the dramatic change in PTEBS:BCP electron transfer when the pH of the solutions changes from basic to acidic is assigned to the increase of the exciton Bohr radius at lower pH (of 4 or more). We also corroborated this effect by direct measurements of the dielectric constant of PTEBS, which is shown to decrease at increasing pH, while electron spin resonance (ESR) measurements on PTEBS show increasing free-carrier concentrations in the polymer chain. All of these data have been used to design organic solar cells with PTEBS:BCP as the active layer, C60 fullerene as the electron transport layer, and nickel oxide as the hole-blocking layer, with energy levels matching, respectively, the conduction and valence bands of PTEBS. The photoconversion efficiency is about 2.8% for PTEBS:BCP active layers prepared from acidic water solutions, while dropping to significantly lower values (PCE < 0.5%) for layers prepared from basic solutions. Therefore, our study presents among the best organic photovoltaics obtained to date from water-based polymer solutions, and highlights the importance of the dielectric environment and exciton dissociation at the donor-acceptor interface in designing high-quality organic solar cells.
Hyperspectral infrared (IR) images contain a large amount of spatially resolved information about the chemical composition of a sample. However, the analysis of hyperspectral IR imaging data for complex heterogeneous systems can be challenging because of the spectroscopic and spatial complexity of the data. We implement a deep generative modeling approach using a β-variational autoencoder to learn disentangled representations of the generative factors of variance in our large data set of IR spectra collected on crosslinked polyethylene (PEX-a) pipe. We identify three distinct factors of aging and degradation learned by the model and apply the trained model to high-resolution hyperspectral IR images of cross-sectional slices of unused virgin, used in-service, and cracked PEX-a pipe. By mapping the learned representations of aging and degradation to the IR images, we extract detailed information on the physical and chemical changes that occur during aging, degradation, and cracking in PEX-a pipe. This study shows how representation learning by deep generative modeling can significantly enhance the analysis of high-resolution IR images of complex heterogeneous samples.
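For readers unfamiliar with the objective, a minimal sketch of a β-variational autoencoder loss (a reconstruction term plus a β-weighted KL divergence) is given below; the encoder/decoder sizes, latent dimensionality, spectrum length, and β value are illustrative assumptions rather than the model used in this study.

```python
# Hedged beta-VAE sketch for 1D spectra: reconstruction + beta * KL divergence.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    def __init__(self, n_points: int = 512, latent_dim: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_points, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, n_points))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.decoder(z), mu, logvar

def beta_vae_loss(x, x_hat, mu, logvar, beta: float = 4.0):
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# Toy usage with a stand-in batch of spectra
model = BetaVAE()
spectra = torch.randn(8, 512)
x_hat, mu, logvar = model(spectra)
loss = beta_vae_loss(spectra, x_hat, mu, logvar)
```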
We report an improved variational upper bound for the ground state energy of H$^-$ using Hylleraas-like wave functions in the form of a triple basis set having three distinct distance scales. The extended-precision DQFUN package of Bailey, allowing for 70-decimal-digit arithmetic, is implemented to retain sufficient precision. Our result exceeds the previous record [1], indicating that the Hylleraas triple basis set exhibits convergence comparable to the widely used pseudorandom all-exponential basis sets, with much better numerical stability against round-off error. It is argued that the three distance scales have a clear physical interpretation. The new variational bound for infinite nuclear mass is -0.527 751 016 544 377 196 590 814 478 a.u. [2]. New variational bounds are also presented for the finite-mass cases of the hydrogen, deuterium and tritium negative ions H$^-$, D$^-$ and T$^-$, including an interpolation formula for the mass polarization term.
[1] A. M. Frolov, Eur. Phys. J. D 69, 132 (2015).
[2] E. M. R. Petrimoulx, A. T. Bondy, E. A. Ene, Lamies A. Sati, and G. W. F. Drake, Can. J. Phys., in press (2024).
In this work, Bragg scattering is studied for a metallic nanohybrid made of an ensemble of metallic nanorods doped in a substrate; such a substrate can be any suitable gas, liquid, or solid. A theory was developed to describe the relation between the external incident laser intensity and the Bragg-scattered light intensity. When the external laser is applied to the metallic nanohybrid, the photons from the laser interact with the surface polaritons in the nanorods and produce surface plasmon polaritons (SPPs). At the same time, the incident photons induce dipoles in the ensemble of nanorods, so the nanorods can interact with each other via dipole-dipole interactions (DDI). The theory uses the coupled-mode formalism based on Maxwell's equations in the presence of the SPP and DDI fields, and analytical expressions for the SPP/DDI coupling constants were obtained in a manner similar to [1]. It is found that the intensity of Bragg scattering depends on the susceptibility induced by the SPP and DDI fields. The susceptibility was calculated by the quantum mechanical density matrix method [2]. Combining these methods, an analytical expression was obtained for the Bragg scattering intensity as a function of incident laser intensity. Next, the theory was compared with experimental data for a nanohybrid made by doping Au nanorods into water, and a decent agreement between the theoretical model and the experimental data was observed. Several numerical simulations were then performed to investigate the effects of SPP/DDI coupling, laser detuning and the phase factor, and the theoretical model was used to predict the Bragg intensity for different parameters. The Bragg scattering intensity was found to be enhanced by a higher SPP/DDI coupling constant; this enhancement can be interpreted as arising from the extra coupling of the SPP and DDI polaritons with acoustic phonons. On the other hand, the peaks of the Bragg scattering intensity can split into many peaks due to the SPP/DDI coupling and the phase constant; this splitting of the peaks can be explained by the Bragg factor in the theory. In conclusion, the enhancement effect can be used to fabricate new nano-sensors, and the splitting effect can be used to design new nano-switches, where the peaks can be interpreted as the ON position.
Reference:
[1] Singh, M.R. and Black, K., J. Phys. Chem. C. 122, 26584-26591 (2018).
[2] Singh, M. R., Electronic, Photonic, Polaritonic and Plasmonic Materials, Wiley Custom, Toronto, 2014.
One of the major discoveries resulting from the invention of the laser was the existence of nonlinear optical processes: phenomena only described by nonlinear dependencies of a material’s electric polarization on the electric field of incident light. Two of these processes are second harmonic generation (SHG) and third harmonic generation (THG), which are frequency-doubling and frequency-tripling processes respectively. Metallic nanoparticles (MNPs) are a promising host for these effects as they exhibit surface plasmon resonance which can enhance the harmonic generation signals. In this project, we develop a theory for SHG and THG in nanohybrids of gold, aluminum, and copper sulfide MNPs. We utilize a semi-classical theory in which the coupled-mode formalism of Maxwell’s equations is used to describe the input and output light and the quantum mechanical density matrix formulation is used to calculate the nonlinear susceptibilities of the material. This theory agrees with recent experiments. Furthermore, a hybrid system including quantum dots is considered, where the harmonic generation signals are further enhanced by the dipole-dipole interaction between the MNPs and quantum dots. The enhanced harmonic generation in MNPs allows for a wide array of potential applications spanning several areas of science and technology including photothermal cancer treatments in nanomedicine.
Stimulated Raman spectroscopy in the femtosecond (1 fs = 1$\times 10^{-15}$~s) regime provides a versatile route to measuring the dynamics of molecules on the timescale at which they occur. A tunable and broadband probe pulse allows for detecting molecular signatures across a wide range of energies (frequencies). We develop a novel method for generating the probe pulse that results in the broadest and most tunable probe pulse reported to date.
Four-wave mixing (FWM) occurs when two pump photons ($\omega_p$) amplify a signal photon ($\omega_s$) to create an idler ($\omega_i$): $\omega_p+\omega_p=\omega_s+\omega_i$. We show that at high intensities, FWM can be extended to include the nonlinear response of the gain medium. We exploit the $\chi^{(3)}$ (Kerr) nonlinearity of materials to amplify broad spectra. We use the resulting amplified spectrum as the probe pulse for stimulated Raman scattering. The benefits of this approach are twofold. First, there is an inherent tunability of the amplified spectrum, defined by the phase-matching condition. Second, we generate Raman frequencies that span the terahertz, fingerprint, and OH-stretching regimes in a single shot.
We prove the usefulness of our method by measuring the methyl stretching mode of 1-decanol, shown in Fig. 1.
Nipun Vats, ADM of the Science and Research Sector at ISED, will discuss the various ways in which policy intersects with science and the role different factors play in helping to shape science policy discourse within the federal government.
Current MPP for Kingston and the Islands, former MP, and former party leadership candidate, Ted Hsu, answers your questions about what to do, and what not to do, in order to get the attention of elected officials.
Ted Hsu, actuel député de Kingston et des Îles, ancien député et ancien candidat à la direction du parti, répond à vos questions sur ce qu'il faut faire et ne pas faire pour attirer l'attention des élus.
Liquid scintillators are a commonly used detection medium for particle and rare-event search detectors. The vessels containing the liquid scintillator are often made of transparent acrylic. In the case of a UV-emitting scintillator, to make the scintillation light observable, the acrylic can be coated with a wavelength shifter like 1,1,4,4-tetraphenyl-1,3-butadiene (TPB). Another coating of particular interest is Clevios, a conductive material that, in thin films, is optically transparent. The highly conductive properties of Clevios make it a useful material for transparent electrodes in Time Projection Chambers (TPCs). Additionally, the optical transparency of the material allows scintillation light to pass through, making Clevios a good candidate for dual-phase detectors.
Materials used in the construction of the detector can emit fluorescent or scintillation light that can produce higher background signals and modify the pulse shape of events. The fluorescent properties of Clevios have been studied as a function of temperature and compared to the known fluorescence of acrylic and TPB. I will present the experimental methodology and the results of this study.
Radon is one of the most troublesome backgrounds in dark matter and neutrino detectors. Nitrogen is commonly used in cover gas systems at SNOLAB, such as in the SNO+ detector. To determine the concentration of radon in these systems, a method of extraction and counting has been developed using radon traps at cryogenic temperatures. I present our methodology and the progress made on understanding the efficiency of an activated charcoal trap at high gas flow rates and with varying extraction parameters.
High-purity germanium detectors are used in the search for rare events such as neutrinoless double-beta decay, dark matter and other beyond Standard Model physics. Due to the infrequent occurrence of signal events, extraordinary measures are taken to reduce background interactions and extract the most information from data. An efficient signal denoising algorithm can improve measurements of pulse shape characteristics, resulting in better energy resolution, background rejection and event classification. It can also help identify low-energy events where the signal-to-noise ratio is small.
In this work, we demonstrate the application of Cycle Generative Adversarial Network (CycleGAN) with deep convolutional autoencoders to remove electronic noise from high-purity germanium p-type point contact detector signals. Built on the success of denoising using a convolutional autoencoder, we show that CycleGAN applied on autoencoders allows for more realistic model training conditions. This includes training with unpaired simulated and real data, as well as training with only real detector data without the need of simulation.
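As a point of reference, the sketch below shows the kind of convolutional-autoencoder denoising baseline mentioned above, trained on paired noisy/clean pulses; the CycleGAN extension that removes the need for such pairing is not reproduced here, and all shapes and layer sizes are illustrative assumptions.

```python
# Hedged sketch of a 1D convolutional denoising autoencoder for detector pulses.
import torch
import torch.nn as nn

class PulseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4), nn.ReLU())
        self.decode = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=9, stride=2, padding=4,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=9, stride=2, padding=4,
                               output_padding=1))

    def forward(self, x):
        return self.decode(self.encode(x))

model = PulseAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on stand-in simulated pulses (batch, channel, samples)
clean = torch.randn(32, 1, 1024)
noisy = clean + 0.1 * torch.randn_like(clean)
loss = loss_fn(model(noisy), clean)   # reconstruct the noiseless pulse
loss.backward()
optimizer.step()
```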
Aerogel threshold Cherenkov counters are being developed to identify pions and muons in the range of 240-980 MeV/c for the T9 beam test facility at CERN PS East Hall. These counters are part of the Water Cherenkov Test Experiment (WCTE) particle identification system. The WCTE is a test-beam experiment to test the design and capabilities of the photosensor system under development for the Hyper-Kamiokande Intermediate Water Cherenkov Detector. In this talk, I will cover the WCTE goals, the T9 beam monitor system and particle identification, with a focus on aerogel threshold Cherenkov counters. Results obtained from a beam test using prototypes of the T9 beam monitor system in the summer of 2023 will also be presented.
The High Energy Light Isotope eXperiment (HELIX), a multistage balloon-borne detector, aims to measure the composition of light cosmic-ray isotopes up to 10 GeV/n. One of the primary scientific objectives of HELIX is to study the propagation of cosmic rays in our galaxy by measuring the ratio of $^{10}$Be and $^{9}$Be fluxes. The detector's first stage, which will measure cosmic rays with energies up to 3 GeV/n, is scheduled to launch in the summer of 2024 from Kiruna, Sweden. To obtain information about the isotopic composition, the detector must measure particle properties, such as mass, energy, charge, and velocity, with high precision.
For particles that exceed 1 GeV/n, HELIX will utilize a Ring Imaging Cherenkov (RICH) detector to measure the velocity of incident particles. The RICH detector employs 10 cm × 10 cm × 1 cm aerogel tiles with a refractive index of 1.15 as a radiator. To distinguish between the beryllium isotopes, a 2.5% mass resolution is required. This requirement mandates a comprehensive understanding of the refractive index as a function of position on the aerogel tile.
This presentation proposes a novel method to measure the refractive index of aerogel tiles based on Optical Coherence Tomography (OCT). The OCT method uses an interferometer to obtain micrometer-level depth resolution. In this talk, I will present the results of measuring the refractive index of aerogel with the OCT method.
The TRIUMF Ultracold Advanced Neutron (TUCAN) collaboration is building a surface coating facility at the University of Winnipeg. The primary purpose of this facility is to prepare ultracold-neutron (UCN) guides to transport UCNs from the TUCAN source to the TUCAN Electric Dipole Moment (EDM) experiment. UCN losses during transport can be minimized by the application of special coatings. The facility specializes in providing diamond-like carbon (DLC) coatings onto the inside of tubes using a high-power excimer laser and a custom vacuum-deposition chamber. This facility provided DLC-coated UCN guides for the LANL UCNA experiment in the 2000s and was moved from Virginia Tech to Winnipeg in June 2023. The first DLC guide samples are expected to be made in the spring of 2024, when coating properties will be assessed using various surface-science tools. This talk will discuss the progress of the facility setup and the surface-science results of the coated samples.
We adapt a machine-learning approach to study the many-body localization transition in interacting fermionic systems on disordered 1D and 2D lattices. We perform supervised training of convolutional neural networks (CNNs) using labelled many-body wavefunctions at weak and strong disorder. In these limits, the average validation accuracy of the trained CNNs exceeds 99.95%. We use the disorder-averaged predictions of the CNNs to generate energy-resolved phase diagrams, which exhibit many-body mobility edges. We provide finite-size estimates of the critical disorder strengths at $W_c\sim2.8$ and $9.8$ for 1D and 2D systems of 16 sites respectively. Our results agree with the analysis of energy-level statistics and inverse participation ratio. By examining the convolutional layer, we unveil its feature extraction mechanism which highlights the pronounced peaks in localized many-body wavefunctions while rendering delocalized wavefunctions nearly featureless.
The one-body density matrix (ODM) for a d-dimensional non-interacting Fermi gas can be approximately obtained in the semiclassical regime through different $\hbar$-expansion techniques. One would expect that any method of approximating the ODM should yield equivalent density matrices which are both Hermitian and idempotent to any order in $\hbar$. The method of Grammaticos and Voros does ensure these properties for any order of $\hbar$. Meanwhile, the Kirzhnits and Wigner-Kirkwood methods do not yield these properties when truncated, which would suggest that these methods provide non-physical ODMs. Here we show explicitly, for arbitrary $d\geq1$ dimensions and through an appropriate change into symmetric coordinates, that the methods are not only identical but also Hermitian and idempotent. This change of variables resolves the inconsistencies between the various methods discussed in previous literature. We show that the non-Hermitian and non-idempotent behaviour of the Kirzhnits and Wigner-Kirkwood methods is an artifact of performing a non-symmetric truncation of the semiclassical $\hbar$-expansions.
The Triamond lattice is the only maximally isotropic lattice in which three links meet at each vertex, a feature that provides an elegant bookkeeping method for quantum field theories on a lattice. Given that, until now, most researchers have not attempted to simulate Hamiltonians in three spatial dimensions, this work is an important step toward large-scale simulation on quantum computers. Specifically, we studied the geometry of the Triamond lattice, derived its Hamiltonian, and calculated the ground state of the unit cell of this lattice by imposing periodic boundary conditions on each face of the unit cell.
Analyzing the long-term behaviour of solutions to a model gives insight into the physical relevance and numerical stability of the solutions. In our work, we consider the formulation presented by Blyth and Părău (2019), in which they derive the water-wave problem exclusively in terms of the free boundary of a cylindrical geometry, and use it to solve for periodic travelling waves on the surface of a ferrofluid jet. We use this formulation to compute travelling waves in various parameter regimes and analyze their stability using the Fourier-Floquet-Hill method, presenting both our methodology and the numerical stability results for the solutions. This stability analysis technique is generalizable to a wide range of physically motivated problems, making it a useful method for analyzing the viability of models.
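The core of the Fourier-Floquet-Hill construction can be sketched on a generic scalar operator with periodic coefficients (here a Hill/Schrödinger-type operator with a cosine coefficient). This is an illustrative toy problem, not the ferrofluid-jet linearization itself, and all parameters are placeholders.

```python
import numpy as np

def hill_spectrum(a=0.5, N=32, n_mu=101):
    """Spectrum of L = d^2/dx^2 + 2a*cos(x), with 2*pi-periodic coefficients,
    via the Fourier-Floquet-Hill method: eigenfunctions e^{i mu x} * sum_n c_n e^{i n x}."""
    eigs = []
    modes = np.arange(-N, N + 1)
    for mu in np.linspace(-0.5, 0.5, n_mu):          # Floquet exponents in one period
        M = np.diag(-(mu + modes) ** 2).astype(complex)
        # Fourier coefficients of 2a*cos(x): amplitude a at wavenumbers +1 and -1
        M += a * (np.diag(np.ones(2 * N), 1) + np.diag(np.ones(2 * N), -1))
        eigs.append(np.linalg.eigvalsh(M))           # truncated Hill matrix is Hermitian
    return np.array(eigs)

spec = hill_spectrum()
print("largest eigenvalue over all Floquet exponents:", spec.max())
```

In a stability study, the same construction is applied to the operator obtained by linearizing about the computed travelling wave, and eigenvalues with positive real part signal instability.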
Dirac crystals are zero-bandgap semiconductors in which the valence and conduction bands are linear in the crystal momentum (and therefore of constant group velocity) in the proximity of the Fermi level at the Brillouin zone boundary. They are therefore the quantum-material analogue of the light cone of special relativity. Understanding a number of different properties of 2D Dirac crystals (including their electron-related lattice thermal conductivity) demands models that consider the interaction between valence electrons and acoustic phonons beyond perturbation theory in these strongly correlated quantum systems. It is commonly assumed that the exceptionally high thermal conductivity of two-dimensional (2D) Dirac crystals is due to nearly ideal phonon gases. Therefore, electron-phonon collisions, when present, may control the thermal transport. Nonetheless, their accurate description beyond first-order collisions has seldom been undertaken. The Fermi level, and therefore the concentration of conduction electrons in 2D Dirac crystals, can be tuned by many forms of doping, which also controls the rate at which acoustic phonons are scattered by electrons.
Here, we use a modified formulation of the Lindhard model for electron screening by phonons in strongly correlated systems to demonstrate that a proportional relationship exists between the electron-lattice thermal conductivity and the phonon scattering rate, for electron and phonon bands that are linear in the crystal momentum. Furthermore, although the phonon scattering rate is usually calculated in the literature only at first order (i.e., with EP-E and E-EP processes involving two electrons and one phonon), we present an accurate expression for the phonon scattering rate and the electron-phonon interaction calculated at higher order, where electron-in/phonon-in, electron-out/phonon-out (EP-EP) processes are also considered. We show that, even at temperatures as low as 300 K, the EP-EP processes become critical to the accurate determination of the phonon scattering rates and, therefore, the electron-lattice thermal transport. Collectively, our work points to the necessity of an accurate description of the electron-phonon interaction to comprehensively understand the electron-related lattice properties of strongly correlated 2D Dirac crystals.
Rigorous derivations of the approach of individual elements of large isolated systems to a state of thermal equilibrium, starting from arbitrary initial states, are exceedingly rare. We demonstrate how, through a mechanism of repeated scattering, an approach to equilibrium of this type actually occurs in a specific quantum system.
In particular, we consider an optical mode passing through a reservoir composed of a large number of sequentially-encountered modes of the same frequency, each of which it interacts with through a beam splitter. We analyze the dependence of the asymptotic state of this mode on the assumed stationary common initial state of the reservoir modes and on the transmittance τ = cos λ of the beam splitters. These results allow us to establish that at small λ such a mode will, starting from an arbitrary initial system state, approach a state of thermal equilibrium even when the reservoir modes are not themselves initially thermalized.
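For zero-mean Gaussian states, the mechanism can be sketched with a one-line recursion: each beam-splitter collision mixes the system mode's mean photon number with that of a fresh reservoir mode, so repeated collisions drive the system toward the reservoir occupation. The snippet below is an illustrative recursion under that simplifying assumption, not the full density-matrix analysis of the talk.

```python
import numpy as np

def collide(n_sys, n_res, lam, n_collisions):
    """Mean photon number of the system mode after repeated beam-splitter collisions
    with fresh reservoir modes (zero-mean Gaussian states assumed; tau = cos(lam))."""
    t2, r2 = np.cos(lam) ** 2, np.sin(lam) ** 2
    history = [n_sys]
    for _ in range(n_collisions):
        n_sys = t2 * n_sys + r2 * n_res   # transmitted part + admixed reservoir part
        history.append(n_sys)
    return np.array(history)

# Starting far from the reservoir occupation, the mode relaxes toward n_res.
print(collide(n_sys=10.0, n_res=0.5, lam=0.05, n_collisions=2000)[::500])
```

The geometric approach of this mean occupation to the reservoir value mirrors, at the level of second moments, the approach to equilibrium established more generally in the talk.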
Magnetotactic bacteria are ubiquitous motile single-cell organisms that biomineralize magnetic nanoparticles, allowing them to align with the Earth’s magnetic field and navigate their aquatic habitats. We are interested in the swimming mechanism of one particular type of magnetotactic bacterium, Magnetospirillum magneticum, which has a helical body and uses two helical flagella to move up and down magnetic field lines. We take advantage of both the helical shape of the cell and the possibility of aligning the cells with magnetic fields to measure their translational and rotational motions precisely from phase microscopy images. This allows us to determine the translational and rotational friction coefficients of these micron-size chiral particles, and from them calculate the propulsion forces exerted by the body and the flagella of the cell. Our results suggest that for this bacterial species, cell body rotation contributes significantly to cellular propulsion.
We examine the kinetic process of an anionic A-block from an ABA triblock copolymer hopping between the solvophilic, cationic A-domains of an ABA triblock copolymer membrane. One motivation is to use this toy model to provide insight into the nature of a rapid, charge-mediated reconstitution mechanism observed for anionic membrane proteins reconstituted into cationic ABA triblock copolymer membranes. We use dynamic self-consistent field theory (dSCFT) to efficiently simulate this interacting, many-chain system, and we introduce screened electrostatics by coupling the Poisson-Boltzmann equation into the dSCFT equations. We equilibrate membranes by imposing the condition of isotropic stress and find that, under this condition, the area per A-block is an increasing function of the charge on the A-block. dSCFT enables us to track the position of each polymer bead, and to observe rapid hopping events as the anionic A-block traverses the solvophobic membrane mid-block. By measuring many such events, we create a probability distribution for the time interval between hops. We will present results for the behaviour of this distribution as we change the charge per A-block, and the charge asymmetry between A-blocks. Our results could suggest whether it is direct charge interactions, or indirect effects like softening of the membrane, that are mainly responsible for modifications to the free-energy barrier to A-blocks hopping across the membrane.
Solid-state nanopore sensors continue to hold great potential in addressing the increasing worldwide need for genome sequencing. However, the formation and translocation of folded conformations known as hairpins pose readability and accuracy challenges. In this work, we investigate the impact of applying a pressure-driven fluid flow and an opposing electrostatic force as an approach to increase the single-file capture probability. By optimizing the balance between forces, we show that the single-file capture probability can be increased to almost 95%. We find two mechanisms responsible for this increase in the single-file capture probability.
Introduction: Endothelial cells (ECs) form the innermost lining of blood vessels and can sense and respond, via mechanotransduction, to local changes in wall shear stress (WSS) imposed by blood flow. Blood flow through a vessel can become disturbed when passing through bifurcations or plaque-burdened regions, which disrupts the direction and magnitude of WSS experienced by cells. ECs in these regions show activation of pro-inflammatory phenotypes, manifesting in the development and progression of atherosclerosis. The earliest cell responses to these flow disturbances – particularly the mechanisms by which ECs sense and respond to variations in direction and magnitude of WSS – are not well understood. Excessive increases in reactive oxygen species (ROS) generation within endothelial cells are an early indicator of a disruption of homeostasis and are thought to accelerate the progression of vascular diseases such as atherosclerosis and diabetes. It is hypothesized that ECs will exhibit indications of oxidative stress and damage within minutes of being exposed to WSS disturbances.
Methods: A novel microfluidic device has been designed and fabricated (from polydimethylsiloxane Sylgard-184) for recapitulating the various forms of WSS observed in regions of disturbed flow within the vasculature. It consists of a small channel for fluid to pass over cultured ECs with two opposing jets to create varying levels of bi-directional and multi-directional WSS scrubbing. ECs cultured in this device are grown to confluence and loaded with a ROS dye (5 μM CM-H2DCFDA). Cells are imaged with a confocal inverted microscope (Nikon Ti2-E) while applying disturbed-flow WSS.
Results: Within 30 minutes of being exposed to disturbed flow, ECs exhibited 65% signal increases in ROS, with detectable changes beginning at just 10 minutes. Notably, a differential response was seen for different types of WSS scrubbing, where regions with higher magnitude mean stress and more multidirectional WSS patterns correlated with larger increases in ROS generation.
Conclusion: The results of this experiment will contribute to the understanding of the differential response of endothelial cells to different forms of WSS. The characterization of EC responses to varying flow patterns is essential in strengthening the link between blood flow dynamics and atherosclerotic development.
The microcirculation serves to deliver oxygen (O2) to tissue as red blood cells (RBCs) pass through the body’s smallest blood vessels, capillaries. Imaging techniques can quantify the O2 present in capillaries but lack effective modalities for quantifying the O2 entering tissue from capillaries. Thus, mathematical simulation has been used to investigate how O2 is distributed locally over a range of metabolic demands, and to investigate mechanisms regulating capillary blood flow to meet such metabolic tissue O2 demands. Being present throughout the microcirculation, RBCs have been hypothesized as potential candidates for initiating signals at the capillary level that are transmitted upstream to arterioles, thereby altering capillary blood flow. It has been found that RBC deformation, as well as oxyhemoglobin desaturation, can cause release of adenosine triphosphate (ATP). It has been theorized that as RBCs deform with local blood flow, the released ATP modulates upstream vessel diameter, but a model is required to investigate this systematically. At baseline, RBCs possess unique shapes formed from a balance between the phospholipid bilayer membrane’s surface tension, the surrounding fluid osmolarity, and the curvature-dependent Canham-Helfrich-Evans (CHE) energy. To investigate how RBCs deform under blood flow stresses, a novel algorithm for red blood cell (RBC) equilibrium geometry was developed as the first step of a quantitative model for RBC ATP release. This condensed-matter model relies on the coordinate-invariant computational framework of discrete exterior calculus (DEC). Using this algorithm, several RBC geometries were obtained under different surface tension (area) and osmolarity (volume) constraints. For the first time in the literature, the algorithm was expressed as an implicit system and utilized a Lie-derivative-based vertex-drift method to ensure the RBC meshes remained well behaved throughout deformation. The algorithm was shown to be highly stable, as quantified by tracking the RBC membrane energy. Equilibrium geometries were shown to agree with in vivo observations reported in the literature, and qualitatively reproduced phenomena seen in in vivo experiments where RBCs are subjected to solutions of varying osmolarity. Future work will allow investigation of how RBCs behave under flow stresses to simulate combined shear- and O2-dependent ATP release.
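For context, the curvature energy referred to above is commonly written (in one standard convention; the symbols below are generic rather than taken from the authors' DEC implementation) as
$$
E \;=\; \frac{\kappa_b}{2}\oint_{\mathcal S}\left(2H - C_0\right)^2\mathrm{d}A \;+\; \kappa_G\oint_{\mathcal S} K\,\mathrm{d}A \;+\; \sigma\oint_{\mathcal S}\mathrm{d}A \;+\; \Delta p\int\mathrm{d}V,
$$
where $H$ and $K$ are the mean and Gaussian curvatures, $C_0$ is a spontaneous curvature, and the last two terms impose the area (surface tension) and volume (osmolarity) constraints through Lagrange multipliers; the equilibrium shapes discussed above are minimizers of such an energy.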
When force is applied to tissue in a healthcare setting, tissue perfusion is reduced in response to the applied force; it is perfusion that is important in assessing tissue health and potential injury from the force [1,2]. Traditional means of measuring force involve quantifying the mechanical strain or electrical responses of a sensor; these techniques do not necessarily correspond to the physiological responses to the applied force.
It is also known that contact force is a confounding issue in reflectance-type optical measurements of tissue, such as Near Infrared Spectroscopy (NIRS) and Photoplethysmography (PPG) [3-6]. We propose that the signal from reflectance-type optical measurements can be used to predict sensor contact force, due to the physiological response of the underlying perfused tissue.
There is a complex relationship between the reflected optical signals and the underlying physiological response; there is no simple biophysical model to apply. Because of this, we used machine learning to explore this relationship. We used a PPG sensor to collect reflected optical data from the index finger of a participant (n=1). The applied force was measured simultaneously with a load cell. We collected 240,000 data points spanning a range of 0 to 10 N of applied force.
While many models worked well to estimate the applied force, we settled on the random forest model. The machine-learning predictions agreed with the measured ground truth with a median absolute error of 0.05 N and an R2 score of 0.97. From this, we have determined that it is possible to predict the amount of applied force on a vascularized tissue from reflected optical signals. This has potential applications in neurosurgery or robotic surgeries, where careful sensing of the amount of applied force on delicate tissues may reduce injuries.
[1] Roca, E., and Ramorino, G., “Brain retraction injury: systematic literature review,” Neurosurg Rev, 46(1), 257 (2023).
[2] Zagzoog, N., and Reddy, K. K., “Modern Brain Retractors and Surgical Brain Injury: A Review,” World Neurosurg, 142, 93-103 (2020).
[3] Chen, W., Liu, R., Xu, K. et al., “Influence of contact state on NIR diffuse reflectance spectroscopy in vivo,” Journal of Physics D: Applied Physics, 38(15), 2691 (2005).
[4] May, J. M., Mejia-Mejia, E., Nomoni, M. et al., “Effects of Contact Pressure in Reflectance Photoplethysmography in an In Vitro Tissue-Vessel Phantom,” Sensors (Basel), 21(24), (2021).
[5] Reif, R., Amorosino, M. S., Calabro, K. W. et al., “Analysis of changes in reflectance measurements on biological tissues subjected to different probe pressures,” J Biomed Opt, 13(1), 010502 (2008).
[6] Teng, X. F., and Zhang, Y. T., “The effect of contacting force on photoplethysmographic signals,” Physiol Meas, 25(5), 1323-35 (2004).
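A minimal sketch of the regression pipeline described above is given below. The feature columns, sample count, and synthetic relationship are placeholders standing in for the synchronized PPG and load-cell data, not the actual dataset or preprocessing.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import median_absolute_error, r2_score

# Placeholder arrays standing in for PPG-derived features and load-cell force (N).
rng = np.random.default_rng(0)
X = rng.normal(size=(20_000, 4))                 # e.g. PPG intensity + derived features
force = 5 + X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=len(X))

X_train, X_test, y_train, y_test = train_test_split(X, force, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("median abs. error (N):", median_absolute_error(y_test, pred))
print("R^2:", r2_score(y_test, pred))
```

The reported figures of merit (median absolute error and R^2) are the same metrics quoted in the abstract, here evaluated on a held-out test split.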
The goal of the ALPHA experiment at CERN is to perform high-precision comparisons between antihydrogen and hydrogen to test the fundamental symmetries that underpin the Standard Model and General Relativity. For decades, there has been much speculation about the gravitational behaviour of antimatter. The ALPHA collaboration has developed the ALPHA-g apparatus to measure the gravitational acceleration of antihydrogen. We have recently shown, directly for the first time, that the antihydrogen gravitational acceleration is compatible with the corresponding value for hydrogen [Nature 621, 716 (2023)]. To push antihydrogen research into an entirely new regime, new techniques, such as anti-atomic fountains and anti-atom interferometers, must be developed. The HAICU experiment at TRIUMF in Vancouver aims to use laser-cooled hydrogen atoms [Nature 592, 35 (2021)] to do just that. In this talk, we will report our first measurement of antihydrogen gravity with ALPHA-g and discuss the status of development towards an atomic hydrogen fountain and atomic hydrogen interferometer with HAICU.
The ALPHA-g experiment at CERN aims to perform the first-ever direct measurement of the effect of gravity on antimatter, determining its weight to within 1% precision. At TRIUMF, we are working on a new deep learning method based on the PointNet architecture to predict the height at which the antihydrogen atoms annihilate in the detector. This approach aims to improve upon the accuracy, efficiency, and speed of the existing annihilation position reconstruction. In this presentation, I will report on the promising preliminary performance of the model and discuss future development.
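A minimal PointNet-style regression head of the kind described (shared per-point MLP, symmetric max-pooling, dense layers down to a single vertical coordinate) is sketched below. The layer sizes and the per-hit input format are assumptions for illustration, not the TRIUMF model.

```python
import torch
import torch.nn as nn

class PointNetRegressor(nn.Module):
    """Toy PointNet-style network: per-point shared MLP -> max pool -> scalar output."""
    def __init__(self, in_dim=4):           # e.g. (x, y, z, charge) per detector hit
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Conv1d(in_dim, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),                # predicted annihilation height
        )

    def forward(self, pts):                  # pts: (batch, in_dim, n_points)
        feat = self.point_mlp(pts)           # (batch, 256, n_points)
        pooled = feat.max(dim=2).values      # symmetric function -> permutation invariant
        return self.head(pooled).squeeze(-1)

# Illustrative forward pass on random "events" of 200 hits each.
model = PointNetRegressor()
events = torch.randn(8, 4, 200)
print(model(events).shape)                   # torch.Size([8])
```

The max-pooling step makes the prediction independent of the ordering of detector hits, which is the key property of the PointNet architecture for event-level regression of this kind.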
The development of new techniques to trap and cool antimatter is of interest for fundamental studies that use antimatter as a testbed for new physics. The HAICU experiment, which is in its initial phase at TRIUMF, ultimately aims to cool and trap antihydrogen in such a way that quantum effects used in the precision measurements of normal atoms could also be exploited for measurements on antihydrogen. One such precision measurement technique is the “atomic fountain”, which is the focus of HAICU. Following a brief overview of the HAICU experimental setup, this talk will focus on the technical challenges and procedures associated with the construction and testing of a “Bitter” style electromagnet that will be used to confine neutral hydrogen in the first stage of the experiment.
The proposed nEXO experiment is a tonne-scale liquid xenon (LXe) time projection chamber that aims to uncover properties of neutrinos via the neutrinoless double beta decays ($0\nu\beta\beta$) in the isotope Xe-136. The observation of $0\nu\beta\beta$ would point to new physics beyond the Standard Model and imply lepton number violation, indicating that neutrinos are their own antiparticle. The nEXO detector is expected to be constructed at SNOLAB in Sudbury, Canada, with a projected half-life sensitivity of $1.35\times10^{28}$ years. The collaboration has been pursuing the development of new technologies to further improve upon the detection sensitivity of nEXO, such as Barium (Ba)-tagging. This extremely challenging technique aims to extract single Ba ions from a LXe volume. Ba-tagging would allow for an unambiguous identification of true $\beta\beta$-decay events, and if successful would result in an impactful improvement to the detection sensitivity. Groups at McGill University, Carleton University, and TRIUMF are developing an accelerator-driven ion source to implant radioactive ions inside a volume of LXe. Additional extraction and detection methods are under development by other groups within the nEXO collaboration. In the first phase of this development, ions will be extracted using an electrostatic probe for subsequent identification using $\gamma$-spectroscopy. In this contribution, I will provide a status update on the commissioning of the Ba-tagging setup at TRIUMF and present results on ion extraction efficiency simulations using an electrostatic probe.
Potassium-40 (40K) is one of the largest sources of natural radioactivity we are exposed to in daily life. It is the only isotope that decays by electron capture, beta-, and beta+. The KDK collaboration has carried out the first measurement of the electron capture to the ground state of 40Ar and found a branching ratio of IEC0 = (0.098 ± 0.025)% [1,2]. In order to confirm theoretical predictions for the EC/beta+ ratio, the KDK+ collaboration will remeasure the even smaller beta+ decay branch, which has not been studied since the 1960s [3]. This will be done by dissolving potassium in a liquid scintillator vessel surrounded by a sodium iodide detector. Triple coincidences between the scintillation caused by the positron and the two back-to-back 511 keV gammas from its annihilation will be used to distinguish the signal from the background. We will present work on optimizing the compatibility of potassium with a liquid scintillator, as well as the design of the experimental setup to carry out the measurement.
[1] M. Stukel et al. (KDK Collaboration), “Rare 40K decay with implications for fundamental physics and geochronology”, Phys. Rev. Lett. 131, 052503 (2023).
[2] L. Hariasz et al. (KDK Collaboration), “Evidence for ground-state electron capture of 40K”, Phys. Rev. C 108, 014327 (2023).
[3] D. W. Engelkemeir et al., “Positron emission in the decay of K40”, Phys. Rev. 126, 1818 (1962).
A number of recent $\beta$-decay studies of neutron-rich rubidium isotopes utilising Total Absorption Spectroscopy (TAS) revealed significant discrepancies in $\beta$-feeding probabilities relative to High Resolution Spectroscopy (HRS) studies performed over 40 years ago. These discrepancies can be attributed to the $pandemonium$ effect, which was a significant challenge in spectroscopy studies performed with early-generation Ge(Li) detectors. Given the large cumulative yields of these isotopes from nuclear fission and their large $Q_{\beta}$ values, incorrect $\beta$-feeding patterns have a significant impact on reactor physics.
While TAS studies are free of the $pandemonium$ effect and the measured $\beta$-feeding probabilities are confidently considered robust, the method is a largely insensitive probe into the nature of these levels, and much key spectroscopic information is missed.
We report results of a new $\beta$-decay study of $^{92}$Rb with the GRIFFIN spectrometer at TRIUMF, providing complementary data to recent TAS studies. These results significantly expand the known level scheme of $^{92}$Sr, with over 180 levels and 850 $\gamma$-ray transitions identified, providing one of the most complex decay schemes across the nuclear chart. As $^{92}$Rb has a $0^-$ ground state and a large $Q_{\beta}$ value, the decay populates numerous high-lying $1^-$ levels associated with the Pygmy Dipole Resonance (PDR), which is responsible for an enhancement of $E1$ strength below the neutron separation energy at the low-energy tail of the Giant Dipole Resonance. The PDR is interpreted as an out-of-phase oscillation between the neutron skin and an isospin-saturated core. From this, the PDR can be connected to the symmetry term of the nuclear binding energy and the nuclear equation of state. This interpretation, however, is a matter of debate.
As the underlying nature of the PDR remains uncertain, $\beta$-decay offers an alternative probe to the often-employed Nuclear Resonance Fluorescence method and provides further complementary data.
Perimeter Institute’s Education and Outreach team has developed a suite of innovative, world-class resources that have been used by millions of students around the world. These resources take standard science curriculum topics and connect them to the open questions in physics, from quantum mechanics to cosmology, using a hands-on, collaborative approach. A core piece of our success has been engaging teachers in every step of resource development and offering extensive training through a network of teachers who have attended Perimeter workshops. This presentation will touch on key aspects of our teacher training process and share some insights gathered over 20 years of introducing novel topics into high school physics classrooms.
Project-based courses have a huge pedagogical potential. They provide an opportunity for students to integrate knowledge acquired in previous courses, and in various disciplines. By working on a project over a few months, students can inquire, formulate plans, hypothesize, develop and evaluate solutions. They have to make some decisions on the information that should be acquired and how to apply it. The process leads them toward a deeper understanding of concepts and principles necessary to realize the project.
This talk will look at the case study of a course I developed over the last five years, aimed at introducing cegep students to the world of multidisciplinary research. It is given in the last semester before entering university and intertwines physics, chemistry, biology, math and psychology in projects studying brain behaviour. The students went through the complete process of an experiment: literature search, choice of research question and hypothesis, writing of a letter of informed consent, and design, execution, analysis, and dissemination of the results of the experiment. To hold their interest over a full term, they were given some control over the choice of project and over the way to proceed. However, as they had never done anything this extensive before, they needed a fair amount of guidance. A structured framework was designed to lead them through the several steps of the process. In teams of three or four, they investigated a hypothesis of their choice involving a cognitive process by using a behavioural task and a simple, portable system recording electroencephalograms. In so doing, they learned about the production and transmission of electric fields in the brain and how these relate to the cognitive process studied. They tested their hypothesis on some thirty to sixty participants, usually other students from the cegep.
This presentation will focus on the learning of physical concepts, their relation to the chemistry and physiology of the brain and their application in a realistic situation. It will describe some best practices that were developed to “teach” this course. The course has been given five times so far and has been refined each time. Hopefully, these best practices will inspire other professors interested in a holistic approach to physics education.
In this presentation, I discuss the efforts at the University of Waterloo in developing upper year inquiry lab materials based on the SQILabs as designed by Dr. Natasha Holmes and Dr. Carl Wieman (Physics Today 71 (1), 38–45 (2018)).
In our work, we have proposed a set of experiments that make use of ultrafast lasers to situate students in an environment in which they can test their agency as it relates to learning in the physics lab. We have begun preliminary analysis on the impact of these labs on undergraduate students through the use of qualitative methods. We hope to use these methods to develop quantitative assessments to evaluate the impact these upper year inquiry labs have on student learning and engagement with experimental physics.
These experiments were designed based on the results of the replication studies of the Sense of Agency Survey and the Physics Lab Inventory of Critical Thinking, both completed at the University of Waterloo. These surveys were originally developed and validated by Dr. Natasha Holmes et al.
This work will have an accompanying set of manuscripts that will be available upon request, and hopefully published in the near future.
Purpose: For many students, a visual aid for the material presented in a physics curriculum is essential for a good understanding. In many cases, a simple diagram is sufficient and the student is able to intuit the impact of the various parameters.
In more complex topics, the role of each parameter can be difficult to perceive.
Method: This interface, completely built in Python, aims to present graphical representations of physical phenomena. Whether used by a student or an instructor, it is possible to modulate the parameters and see the impacts on the whole process.
For instance, topics currently available include refraction through multiple parallel interfaces, wavefunctions of basic quantum mechanical potentials, Riemann integrals, operations on complex numbers, attenuation law for photons in medical physics, 1D and 2D convolutions and their Fourier representations, and more.
The tool is freely available, in both English and French.
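To give a flavour of the kind of module listed above, the sketch below draws refraction at a single interface with an interactive slider for the incidence angle. It is a simplified, stand-alone illustration written for this abstract, not code taken from the actual tool.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

n1, n2 = 1.0, 1.5                                   # refractive indices (illustrative)
fig, ax = plt.subplots()
plt.subplots_adjust(bottom=0.2)
ax.axhline(0, color="k")
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
inc, = ax.plot([], [], "b-", label="incident")
ref, = ax.plot([], [], "r-", label="refracted")
ax.legend()

def draw(theta1_deg):
    t1 = np.radians(theta1_deg)
    inc.set_data([-np.sin(t1), 0], [np.cos(t1), 0])   # ray arriving from the upper medium
    s2 = n1 / n2 * np.sin(t1)
    if abs(s2) <= 1:                                  # Snell's law; otherwise total internal reflection
        t2 = np.arcsin(s2)
        ref.set_data([0, np.sin(t2)], [0, -np.cos(t2)])
    else:
        ref.set_data([], [])
    fig.canvas.draw_idle()

slider = Slider(fig.add_axes([0.15, 0.05, 0.7, 0.04]), "theta1 (deg)", 0, 89, valinit=30)
slider.on_changed(draw)
draw(30)
plt.show()
```

Moving the slider updates the refracted ray in real time, which is the qualitative, parameter-driven exploration the interface aims to provide.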
Results: This tool has been used in various class contexts, by the author both as a tutor and as an instructor. Students were able to use it by themselves and develop a better qualitative understanding of the physical processes. Instructors also have the possibility of creating precise diagrams quickly, which can alleviate the workload when preparing material.
The tool allows users to view complex phenomena without needing to resort to programming skills, which would otherwise be necessary for specific topics such as Fourier transforms, convolutions, and filtering. This permits the presentation of the material to groups of students who are not yet able to work everything out in detail, but might be interested in the qualitative aspects. Although developed with the physics curriculum in mind, it should be of interest to students in neighbouring disciplines, such as engineering, computer science, and mathematics.
It also made it easy to create new material quickly.
Future Work: The whole project is still under active development. In the short term, modules for classical mechanics and electromagnetism will be included.
This presentation will describe the pilot offering of a program that aims to develop both cognitive and psychomotor skills in students between the ages of 10 and 14 by exploring physics through the use of hand tools and general woodworking techniques. Weekly activity sessions taking place over a six-month period allowed students to work their way through the concepts of force, pressure, friction, torque, mechanical advantage and other topics in elementary-level physics, and culminated in each student creating a useful object of their own design. The development of lessons and activities was guided not only by student interest and the skills necessary to complete a project, but also by shortcomings in student knowledge, understanding and capability that presented themselves in each session. In this way, each new activity attempted to overcome an obstacle discovered in previous activities. This presentation will briefly describe how concerns about student safety and behaviour are addressed, as well as some of the traditional methods of manual training (or educational handwork) that help to continually inform the development of this project.
Glass-formers represent an important family of natural and manufactured materials ubiquitous in nature, technology, and our daily lives. On approaching their glass transition temperature ($T_g$), they come to resemble solids while lacking long-range structural order, as in liquids. Careful detection of the glass transition and accurate measurement of the $T_g$-value constitute fundamental steps both in fully resolving the enigma of this phenomenon and in making application-oriented choices and advancements for glass-formers. Given the complexities of experimental synthesis and characterization, modern computer simulation methods based on chemically realistic models can play a pivotal role in tackling the glass transition. Building on our previous studies of polymeric systems [1,2], here we will cover common approaches to evaluating the $T_g$-value from simulations and discuss their pros and cons. We will then introduce promising machine learning (ML) methods that may permit exploration of molecular patterns of the glass transition, fully utilizing the microscopic details available within complex high-dimensional datasets from simulations. Finally, we will overview our progress in the development of a novel framework that fuses atomistic computer simulations and several ML methods to compute $T_g$ and study the glass transition in a unified way from various molecular descriptors for glass-formers.
[1] A.D. Glova, S.G. Falkovich, D.I. Dmitrienko, A.V. Lyulin, S.V. Larin, V.M. Nazarychev, M. Karttunen, S.V. Lyulin, Scale-dependent miscibility of polylactide and polyhydroxybutyrate: molecular dynamics simulations, Macromolecules, 51, 552 (2018)
[2] A.D. Glova, S.V. Larin, V.M. Nazarychev, M. Karttunen, S.V. Lyulin, Grafted dipolar chains: Dipoles and restricted freedom lead to unexpected hairpins, Macromolecules, 53, 29 (2020)
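One of the common approaches referred to above is the "dilatometric" determination of Tg from simulated cooling curves: fit straight lines to the specific volume (or density) in the melt and glass regimes and take their intersection. The sketch below applies that recipe to synthetic data; the temperatures, slopes, and noise level are placeholders rather than values from any particular simulation.

```python
import numpy as np

# Synthetic specific-volume-vs-temperature cooling curve with a change of slope near 370 K.
rng = np.random.default_rng(1)
T = np.linspace(250, 500, 60)
Tg_true = 370.0
v = np.where(T > Tg_true, 0.95 + 8e-4 * (T - Tg_true), 0.95 + 3e-4 * (T - Tg_true))
v += rng.normal(scale=2e-4, size=T.size)

# Fit the glassy (low-T) and melt (high-T) branches separately, away from the kink.
glass = np.polyfit(T[T < 330], v[T < 330], 1)
melt = np.polyfit(T[T > 410], v[T > 410], 1)

# Tg estimate = intersection of the two fitted lines.
Tg_est = (melt[1] - glass[1]) / (glass[0] - melt[0])
print(f"estimated Tg ~ {Tg_est:.1f} K (true value {Tg_true} K)")
```

Ambiguities in this simple recipe (choice of fit windows, cooling-rate dependence) are among the pros and cons of the standard approaches that the talk discusses, and part of the motivation for the ML-based alternatives.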
We are studying stable polystyrene (PS) glasses prepared by PVD (physical vapour deposition) with N up to ~12. These glasses have fictive temperatures as low as Tg - 20 K with respect to the supercooled liquid line, and retain kinetic stability for deposition temperatures down to ~0.84 Tg. By exploiting enhanced surface dynamics, vapour deposition can yield an efficiently packed amorphous material built up in a layer-by-layer fashion. In our lab, we have recently started determining the elastic modulus of PS films via atomic force microscopy (AFM). We examined the elastic modulus of PS, with a film thickness of ~100 nm, as a function of Mn (11,200, 60,000 and 214,000 g/mol) to determine whether molecular size impacts the mechanical properties of the PS films. We observed a decrease in the elastic modulus of PS with decreasing Mn. We also studied a PS film with Mn = 214,000 g/mol as a function of annealing time, annealed at Tg + 20 K. The non-destructive nature of AFM allows us to determine the moduli of the as-deposited glass, the supercooled liquid, and the ordinary glass from a single sample. We will explore the mechanical properties of stable vapour-deposited PS glasses as a function of stability (down to Tg - 20 K) and film thickness (50 nm - 200 nm). We expect to observe an increase in the elastic modulus (i.e., 20 - 30%) of the stable vapour-deposited PS glasses compared to the ordinary PS glass with the same N.
We measure the isothermal rejuvenation of stable glass films of poly(styrene) and poly(methylmethacrylate). We demonstrate that the propagation of the front responsible for the transformation to a supercooled-liquid state can serve as a highly localized probe of the local supercooled dynamics. We use this connection to probe the depth-dependent relaxation rate with nanometric precision for a series of polystyrene films over a range of temperatures near the bulk glass transition temperature.
The analysis shows the spatial extent of enhanced surface mobility and reveals the existence of an unexpectedly large dynamical length scale in the system.
The results are compared with the cooperative-string model for glassy dynamics. The data reveals that the film-thickness dependence of whole film properties arises only from the volume fraction of the near-surface region. While the dynamics at the middle of the samples shows the expected bulk-like temperature dependence, the near-surface region shows very little dependence on temperature.
When continuum materials with cohesive forces are perturbed from an equilibrium configuration, they relax over time, tending toward the lowest-energy shape. We are interested in studying the physics of a similar ageing process in a two-dimensional granular system in which individual particle rearrangements can be directly observed. We present an experiment in which a two-dimensional raft of microscopic cohesive oil droplets is elongated and then allowed to relax back to a preferred shape. As the droplet raft is gently confined by a curved meniscus, we can study the relaxation toward equilibrium for hours to days. Over sufficiently long times, coalescence plays a crucial role, introducing disorder into the system through local defects and promoting particle rearrangements. Varying the size of droplets and the strength of cohesive forces, we investigate the geometry and dynamics of short- and long-term structural ageing due to large-scale relaxation and local coalescence events.
Granular systems can serve as useful analogues of the molecular structures of materials, and introducing an intruder into the system can provide novel insight into their dynamics. Here, we study the response of a disordered, bi-disperse, two-dimensional aggregate of oil droplets to a moving ferrofluid droplet which acts as a controlled intruder. The frictionless and cohesive oil droplets form a compact two-dimensional disordered aggregate. The mobile ferrofluid droplet is controlled with a localised magnetic field, and as the intruder is moved through the aggregate, it forces rearrangements within the aggregate. The speed of the intruder, the disorder of the 2D aggregate, and the adhesion between the oil droplets are controlled, and we probe the extent of the rearrangements caused by the intruder as it moves through the aggregate.
Collective properties of granular materials are determined by both interparticle forces and packing fraction. The conical shape of piles of granular material, such as a pile of sand, depends on the interparticle friction and is characterized by the angle of repose of the pile. Surprisingly, we observe the formation of conical piles for aggregates of frictionless particles. Our model system is composed of monodisperse oil droplets that are frictionless but cohesive. Previous studies on this system have shown that aggregation of the droplets against an unbounded barrier resembles a liquid puddle rather than a sand pile: rather than growing taller as more droplets are added to the aggregate, a characteristic height is reached after which the aggregate simply spreads. In contrast, when the barrier is bounded, we see that the aggregate exhibits a conical growth pattern reminiscent of sand piles. We systematically measure the angle of repose across varying cohesion strengths and droplet sizes and present a theory that explains our findings.
γ and β radiation emitted from fission and activation products in the UO$_2$ fuel matrix decays to insignificant levels after 1000 years, leaving α particles as the primary source of radiation. α radiation induces α radiolysis of water, a well-known key contributor to the oxidative dissolution of the UO$_2$ fuel matrix. Extensive studies have been conducted to investigate the effect of water radiolysis on fuel dissolution in the unlikely event of used-fuel container failure. In contrast, this study explores the direct impact of residual α radiation on the solubility of uranium fuel in the solid state. Controlled doses of α radiation (at 40 keV and 3000 keV) are applied to uranium fuel to investigate nuclear and electronic interactions near the surface and in the bulk for varying irradiation damage (in DPA). The goal is to replicate the hypothetical tailed radiation dose rate expected for uranium fuel in deep geological repositories (DGR), simulated for 1000 years, and to investigate possible effects of the irradiation damage on the uranium fuel in the solid state. X-ray photoelectron spectroscopy (XPS) analysis was used to track changes in UO$_{2+x}$ oxidation states before and after irradiation. The results reveal a reduction of UO$_{2+x}$, with an increased percentage of U(IV) states alongside reduced percentages of U(V) and U(VI) states. Our findings suggest that prolonged exposure of uranium fuel to α radiation in simulated DGR conditions, without container failure, decreases the availability of U(VI), the soluble form of uranium (as UO$_2^{2+}$). This outcome does not raise additional safety concerns regarding nuclear waste containment. Changes in the oxidation states after irradiation in vacuum will be compared to the changes induced by irradiation in an aqueous environment in the next steps.
Zinc and cadmium compounds are indispensable to critical sectors such as corrosion control, energy, and manufacturing. In applications ranging from coatings to battery electrodes and photovoltaic devices, the ability to precisely characterize different zinc and cadmium compounds is essential. This ability aids our understanding of changes in surface chemistry, surface mechanics, and material properties. X-ray photoelectron spectroscopy (XPS) has been repeatedly demonstrated as a powerful analytical tool to achieve such speciation, provided there is sufficient quality reference data available. Typically, speciation is achieved by analyzing shifts in photoelectron binding energies, and occasionally, Auger electron kinetic energies. Due to overlapping main photoelectron binding energies in many zinc and cadmium compounds, Auger electrons and the modified Auger parameter are also crucial for reliably detecting changes in chemical state. Despite zinc and cadmium's prevalence in surface applications, there is a notable scarcity of high-quality XPS reference data for these compounds beyond the metals and oxides. The available data often lacks the breadth and reliability required for precise chemical state analyses, with inconsistencies, uncertainties, and issues of reproducibility. Existing literature also frequently overlooks Auger signals and Auger parameters, despite their proven utility.
In this presentation, recent work to extend upon previously published XPS data and curve-fitting procedures will be detailed for a wide range of high-purity zinc- and cadmium-containing compounds. This will include a summary of current literature data, with careful exclusion of any sources that contain issues related to reliability. A summary of novel XPS data collected for forty unique zinc and cadmium materials including photoelectron binding energies, Auger kinetic energies, Auger parameters, and counterion binding energies will also be highlighted. Lastly, the applicability of curve-fitting Auger signals to analyze unknown mixed-species systems that contain zinc or cadmium will also be showcased.
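As a reminder of the quantity invoked above, the modified Auger parameter combines an Auger kinetic energy with a core-level photoelectron binding energy, which makes it insensitive to static-charging shifts (a standard definition; the specific zinc lines quoted here are only an example):
$$
\alpha' \;=\; E_{\mathrm K}(\text{Auger}) \;+\; E_{\mathrm B}(\text{photoelectron}),
$$
e.g. for zinc, the Zn L$_3$M$_{45}$M$_{45}$ kinetic energy plus the Zn 2p$_{3/2}$ binding energy. Shifts in $\alpha'$ between chemical states are often more reliable discriminators than binding-energy shifts alone, which is why the reference data described here emphasize Auger signals.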
The growth of nanomaterials in a biphasic system is an intriguing physical diffusion process in which two immiscible, or partially miscible, phases are used to disperse two distinct precursors that merge at the interface, leading to the directional growth of crystals. In our method for the synthesis of spirocyclic nanorods, an aqueous phase (containing hydroxylated molybdenum disulfide nanosheets and thioglycolic acid) is interfaced with a butanol phase containing ninhydrin. The diffusion of these two phases into one another creates a system in which the synthesis of spirocyclic nanorods occurs. Using advanced imaging techniques such as electron and atomic force microscopy, we show that this process allows for the controlled synthesis of nanorods with specific length and diameter depending on the concentration of precursors and diffusion-promoting additives, making it a promising approach for nanomaterial growth applications. Surface chemical features were examined using FTIR, UV-visible spectroscopy, Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), and atomic force microscopy (AFM). Our method for growing spirocyclic organic nanorods was applied to fabricate nanorod sensors capable of detecting a variety of proteinogenic amino acids, pointing to the unique physico-chemical properties of our system.
Understanding the effect of active-layer morphology on the operation of photovoltaics is crucial to the development of higher-efficiency devices. A particular parameter with a complex dependence on the local environment is the mobility of photogenerated charge carriers, on which carrier extraction, and therefore overall device performance, strongly depends. Bulk device photo-carrier mobility is available through several single-point measurements, and cross-sectional mobility mapping with sub-micron resolution is achievable on moderately thin-film devices. However, nanoscale lateral imaging of intrinsic optoelectronic properties has only extended as far as surface-photovoltage-based measurements, which garner recombination information and remain speculative on carrier dynamics. Here, we present a novel integration of scanning near-field optical microscopy (SNOM) with charge extraction by linearly increasing voltage (CELIV) for direct mobility mapping, acquired in conjunction with atomic force microscopy (AFM) topography scans. By utilizing near-field illumination and nano-probe charge extraction via a conducting cantilever, our technique is both photonically and electronically localized, offering improved resolution and eliminating incidental measurement of delocalized material properties. This technique allows for measurements on a range of photoactive samples: measurements on exposed active-layer surfaces of PN homojunctions allow for investigation of morphological influence on free charge extraction, and measurements on bulk heterojunction samples allow for correlation of charge extraction with phase-interface morphology. Freedom to change the extraction voltage polarity and DC offset allows for variability in the probed carrier type and device operation mode. This makes for a versatile method for direct measurement of photogenerated charge dynamics in photovoltaic devices with nanoscale resolution.
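For orientation, a commonly used approximation in CELIV analysis (not necessarily the exact expression used in this work; symbols generic) extracts the mobility from the time of the extraction-current maximum under a linearly increasing voltage:
$$
\mu \;\approx\; \frac{2d^{2}}{3A\,t_{\max}^{2}\left[1 + 0.36\,\Delta j/j(0)\right]},
$$
where $d$ is the active-layer thickness, $A$ the voltage ramp rate, $t_{\max}$ the time of the extraction-current peak, and $\Delta j$ the peak height above the displacement-current step $j(0)$.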
Chaotic classical systems exhibit extreme sensitivity to small changes in their initial conditions. In a spin chain, chaos can be tracked not only in time, but in space. The propagation of small changes in the initial conditions results in a “light cone” bounding the spatial region and time interval over which the trajectories have diverged. For nearest-neighbour interactions, the light cone produced is linear, defining a “butterfly velocity” that characterizes the speed at which chaos propagates. Realistic systems are more complicated, and can include interactions beyond immediate neighbours. We examine how more realistic, longer-range interactions affect the spread of chaos in spin chains, and how the light cone is modified by their presence. Using a classical analogue of the out-of-time-ordered correlator (OTOC), we measure the decorrelation of the two spin chains in time and space, modifying the equations of motion to incorporate further-neighbour interactions. We explore two cases: exchange interactions with exponential and power-law decays. For the exponentially decaying case, we find the slope of the front at long times is modified even for very small interactions, but there is a critical decay constant below which we recover the nearest-neighbour result. For the power-law case, the front becomes logarithmic at long times, independent of the power-law exponent. We demonstrate that this behaviour emerges from the superposition of nearest-neighbour linear cones with the initial disturbances, giving rise to an envelope defining the front of the modified light cone. Finally, we discuss potential future directions in understanding chaotic behaviour in higher-dimensional classical systems and with realistic interaction terms, such as anisotropy.
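A stripped-down version of this numerical experiment can be sketched as follows: evolve two copies of a classical Heisenberg chain that differ by a tiny rotation of one central spin, and track the site-resolved decorrelation that maps out the light-cone front. The integration scheme, couplings, and system size below are illustrative only, not the parameters of the study.

```python
import numpy as np

def precession(S, J):
    """dS_i/dt = S_i x B_i with B_i = sum_j J_ij S_j (classical Heisenberg chain)."""
    B = J @ S
    return np.cross(S, B)

def evolve(S, J, dt, steps):
    for _ in range(steps):
        S = S + dt * precession(S, J)                  # forward Euler (illustrative only)
        S /= np.linalg.norm(S, axis=1, keepdims=True)  # keep unit spins
    return S

rng = np.random.default_rng(0)
L, dt, steps = 64, 0.01, 400
sites = np.arange(L)
# Power-law decaying exchange beyond nearest neighbours (illustrative exponent).
dist = np.abs(sites[:, None] - sites[None, :])
J = np.zeros((L, L))
mask = dist > 0
J[mask] = 1.0 / dist[mask] ** 3

S_a = rng.normal(size=(L, 3))
S_a /= np.linalg.norm(S_a, axis=1, keepdims=True)
S_b = S_a.copy()
S_b[L // 2] += 1e-6 * np.array([1.0, 0.0, 0.0])       # tiny perturbation at the centre
S_b[L // 2] /= np.linalg.norm(S_b[L // 2])

S_a, S_b = evolve(S_a, J, dt, steps), evolve(S_b, J, dt, steps)
D = 1.0 - np.sum(S_a * S_b, axis=1)                   # classical OTOC-like decorrelator
print("decorrelation at centre / edge:", D[L // 2], D[0])
```

Recording D for every site at every time step, rather than only at the end, produces the space-time decorrelation map whose boundary defines the (modified) light cone discussed above.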
We theoretically investigate Weyl superconductivity in quasicrystals. Weyl superconductivity is a topological phase in three-dimensional crystals with topologically protected point nodes in the Brillouin zone called Weyl nodes, at which the Chern number changes its value [1]. Quasicrystals (QCs) are materials whose structure is aperiodic with a long-range order. As they lack translational symmetry and hence the Brillouin zone, the Chern number cannot be defined for their topological characterization. Accordingly, a theory of Weyl superconductivity has not been established for QCs in spite of recent extensive studies on quasicrystalline topological phases.
We extend the concept of Weyl superconductors to periodically stacked, two-dimensional quasicrystalline topological superconductors. To visualize this new concept, we examine quasicrystalline Weyl superconductivity realized in layered Ammann-Beenker and Penrose quasicrystals with spin-orbit coupling under an external magnetic field. We calculate the Bott index in real space as a reliable topological invariant [2] to characterize quasicrystalline Weyl nodes [3]. In the presence of surface boundaries, zero-energy Majorana surface modes emerge between two Weyl nodes in momentum space corresponding to the stacking direction. We find that the Majorana zero modes are decomposed into an infinite number of components resolved in momentum in the direction along surfaces within each layer. The distribution forms quasiperiodic arcs, which we call aperiodic Majorana arcs. We show that, in layered Ammann-Beenker (Penrose) quasicrystals, the position of the aperiodic Majorana arcs is characterized by the silver (golden) ratio associated with the quasicrystalline structure.
[1] T. Meng and L. Balents, Phys. Rev. B 86, 054504 (2012).
[2] R. Ghadimi, T. Sugimoto, K. Tanaka, and T. Tohyama, Phys. Rev. B 104, 144511 (2021).
[3] A. G. e Fonseca et al., Phys. Rev. B 108, L121109 (2023).
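For readers unfamiliar with the real-space invariant used here, the Bott index can be computed directly from the occupied eigenvectors and the site coordinates. The sketch below follows the standard projected-unitary construction used for aperiodic systems (cf. Ref. [2]); the array shapes and any orbital bookkeeping are assumptions for illustration rather than the authors' code.

```python
import numpy as np

def bott_index(evecs_occ, x, y, Lx, Ly):
    """Bott index from occupied eigenvectors (as columns) and per-state coordinates,
    following the standard projected-unitary construction."""
    P = evecs_occ @ evecs_occ.conj().T                 # projector onto occupied states
    Q = np.eye(P.shape[0]) - P
    Ux = np.diag(np.exp(2j * np.pi * x / Lx))          # exponentiated position operators
    Uy = np.diag(np.exp(2j * np.pi * y / Ly))
    U = P @ Ux @ P + Q                                 # projected unitaries
    V = P @ Uy @ P + Q
    M = V @ U @ V.conj().T @ U.conj().T
    return float(np.sum(np.angle(np.linalg.eigvals(M))) / (2 * np.pi))
```

Evaluating this index layer by layer as a function of the stacking momentum is what allows the quasicrystalline Weyl nodes described above to be located as the points where the index changes.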
Majorana-based quantum computing harnesses the non-Abelian exchange statistics of Majorana zero modes (MZMs) in order to perform gate operations via braiding. It is paramount that braiding protocols keep a given system within its ground state subspace, as transitions to excited states lead to decoherence and constitute a “diabatic error.” Typical braiding protocols are envisioned on networks of superconducting wires where MZMs are shuttled by using electric gates to tune sections of a wire (“piano keys”) between topologically trivial and non-trivial phases. The focus of our work is to further study the diabatic error, defined as the transition probability to excited states, as MZMs are shuttled using piano keys through a single wire. Previous work has established that the behavior of the error can be adequately captured by Landau-Zener physics [1] and that the use of multiple piano keys may be optimal in reducing the error in certain situations [2]. We extend upon these works and consider MZM transport through superconducting wires which are disordered and subjected to external noise. We numerically calculate the diabatic error for these cases and, in particular, we demonstrate how disorder and noise change the optimal piano key picture presented in Ref. [2].
[1] B. Bauer, T. Karzig, R. V. Mishmash, A. E. Antipov, and J. Alicea, SciPost Phys. 5, 004 (2018)
[2] B. P. Truong, K. Agarwal, T. Pereg-Barnea, Phys. Rev. B 107, 104516 (2023)
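For reference, in the Landau-Zener picture invoked in Ref. [1], a two-level crossing $H(t) = (vt/2)\sigma_z + \Delta\sigma_x$ swept at rate $v$ through a minimum gap $2\Delta$ has a diabatic-transition (error) probability
$$
P_{\mathrm{LZ}} \;=\; \exp\!\left(-\frac{2\pi\Delta^{2}}{\hbar v}\right),
$$
so slower sweeps or larger gaps suppress the error exponentially, which is the regime the piano-key protocols aim to exploit; disorder and noise modify the effective gap and sweep rate, and hence the optimal key configuration studied here.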
Many topologically non-trivial systems have local topological invariants which cancel over the full Brillouin zone. Yet such systems could be platforms for non-abelian physics, for example nodal superconductors potentially hosting Majorana modes. Experimentally distinguishing signatures of local non-trivial topology from similar trivial features is not a clear-cut process. Our work extends the method developed by Dutreix et al., which detects the local Berry phase of the Dirac cones in graphene. Extended here to a general Hamiltonian with chiral symmetry, the method is applicable to nodal superconductors. We have found that for two Dirac cones with a difference in topological winding, there exists a theoretical ideal impurity and STM tip for which Friedel oscillations capture that winding difference. This information is accessible directly in the complex phase of the Fourier-transformed local density of states. We have further derived conditions for when a physical impurity can capture the winding difference. As a proof of concept, we applied the conditions to the topological nodal superconductor predicted in monolayer NbSe$_2$ under an in-plane field. Furthermore, we have proposed an experiment in which STM can detect the winding of each of the 12 nodes. We conclude that this method of designing impurity scattering can be a powerful tool to determine local topological invariants and superconducting symmetries in 2D systems.
Stabilizer codes are the most widely studied class of quantum error-correcting codes and form the basis of most proposals for a fault-tolerant quantum computer. A stabilizer code is defined by a set of parity-check operators, which are measured in order to infer information about errors that may have occurred. In typical settings, measuring these operators is itself a noisy process and the noise strength scales with the number of qubits involved in a given parity check, or its weight. Hastings proposed a method for reducing the weights of the parity checks of a stabilizer code, though it has previously only been studied in the asymptotic regime. Here, we instead focus on the regime of small-to-medium size codes suitable for quantum computing hardware. We provide both a fully explicit description of Hastings's method and propose a substantially simplified weight reduction method that is applicable to the class of quantum product codes. Our simplified method allows us to reduce the check weights of hypergraph and lifted product codes to at most six, while preserving the number of logical qubits and at least retaining (in fact often increasing) the code distance. The price we pay is an increase in the number of physical qubits by a constant factor, but we find that our method is much more efficient than Hastings's method in this regard. We benchmark the performance of our codes in a photonic quantum computing architecture based on GKP qubits and passive linear optics, finding that our weight reduction method substantially improves code performance.
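To make the check-weight issue concrete, the following sketch builds the parity checks of a hypergraph product code from two classical parity-check matrices and reports the resulting check weights, which are sums of the weights of the classical checks. The seed codes are small repetition codes chosen purely for illustration, and no weight reduction is applied here; it is not the construction or benchmark used in the work.

```python
import numpy as np

def hypergraph_product(H1, H2):
    """X- and Z-type parity checks of the hypergraph product of two classical codes."""
    r1, n1 = H1.shape
    r2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(r1, dtype=int), H2.T)])
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(r2, dtype=int))])
    assert not np.any((HX @ HZ.T) % 2), "CSS condition violated"
    return HX % 2, HZ % 2

def repetition_checks(n):
    """Parity-check matrix of the length-n repetition code (n-1 weight-2 checks)."""
    H = np.zeros((n - 1, n), dtype=int)
    for i in range(n - 1):
        H[i, i] = H[i, i + 1] = 1
    return H

HX, HZ = hypergraph_product(repetition_checks(5), repetition_checks(5))
print("physical qubits:", HX.shape[1])
print("max X/Z check weight:", HX.sum(axis=1).max(), HZ.sum(axis=1).max())
```

For less sparse seed codes the product checks quickly exceed weight six, which is the regime where the weight-reduction methods described above become relevant.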
Multi-qubit parity checks are a crucial requirement for many quantum error-correcting codes. Long-range parity checks compatible with a modular architecture would help alleviate qubit connectivity requirements as quantum devices scale to larger sizes. In this work, we consider an architecture where physical (code) qubits are encoded in stationary degrees of freedom and parity checks are performed using state-selective phase shifts on propagating light pulses, described by coherent states of the electromagnetic field. We optimize the tradeoff between measurement errors, which decrease with measurement strength (set by the average number of photons in the coherent state), and the errors on code qubits arising due to photon loss during the parity check, which increase with measurement strength. We also discuss the use of these parity checks for the measurement-based preparation of entangled states of distant qubits. In particular, we show how a six-qubit entangled state can be prepared using three-qubit parity checks. This state can be used as a channel for controlled quantum teleportation of a two-qubit state, or as a source of shared randomness with potential applications in three-party quantum key distribution.
Atomic and solid-state spin ensembles are promising platforms for implementing quantum technologies, but the unavoidable presence of noise imposes the need for error correction. Typical quantum error correction requires addressing specific qubits, but this requirement is practically challenging in most ensemble platforms. In this work, we propose a quantum error correction scheme that does not require individual spin resolution. Our scheme encodes quantum information in superpositions of excitation states, even though they are fundamentally mixed. We show that our code can protect against both individual and collective errors of dephasing, decay, and thermalization. Furthermore, we illustrate how our scheme can be implemented with realistic interactions and control. We also exemplify the application of our formalism in robust quantum memory and loss-tolerant sensing.
Motivation: The significant progress that quantum theory has made in recent years has occurred despite the conspicuous absence of any consensus interpretation of quantum mechanics, in particular on the measurement problem, which is essentially Wheeler’s question: Why the quantum? The resolution of the debate surrounding this issue would likely pay dividends in experimental quantum science. For example, a better understanding of the measurement process may allow the design of longer-lasting coherences.
Fundamental Basis of Superposition: From spacetime considerations (see references), the fundamental basis for quantum superposition is proposed to be spacetime superposition of spaces related by the Lorentz boost. In many scenarios this is equivalent to momentum superposition. Although quantum systems can be represented in many different forms (momentum basis, position basis, energy basis, etc.), the definition of a fundamental basis renders these alternatives no longer equivalent. For example, although an electron in an atomic orbital may be in an energy eigenstate, it is seen fundamentally as being in a persistent state of momentum superposition.
Measurement Criterion: Measurement (operation of the probabilistic Born rule) is interpreted as any process which asks a quantum system an unanswerable momentum question, i.e., a question demanding a more specific momentum answer than the momentum superposition can deterministically provide. Measurement is an attempt to extract non-existent momentum information. If no deterministic answer is available, but some answer is demanded, then an indeterministic symmetry-breaking process must occur. An example is any diffraction experiment in which the final screen interrogates the lateral momentum of the diffracted particle. Conversely, entanglement occurs when quantum systems interact in a manner not making such demands upon each other.
Experimental Implications: The definition of a fundamental basis dictates the types of quantum system that may exist (superselection). A specific measurement criterion distinguishes probabilistic vs. entangling interactions. Both have experimental implications.
References: For further details: https://orcid.org/0000-0002-9736-7487
In the past 30 years, telescopes in space and on the ground have discovered thousands of extrasolar planets, providing us with a representative sample of the worlds that orbit other stars in our galaxy for the first time. However, our knowledge of these planets is limited to no more than a few data points for each one by the vast distances that separate us. Yet, though these places live mainly in our mind's eye, we can construct remarkably accurate pictures of the processes which dominate their environments. We can do this because of our understanding of planetary processes that we have gained through 62 years of robotic solar system exploration. This hard-won experience, like a celestial Rosetta Stone, allows us to translate our sparse information about the exoplanetary realm into the language of our familiar solar family of planets. However, unlike the famous artifact, we can still write new chapters to the translation. Exoplanets tell us about the full diversity of worlds and their circumstances, while robotic space exploration missions consider a single representative world from that set up close. Thus, exoplanetary astronomy and solar system exploration are disciplines in dialogue. By deeply interrogating our nearest neighbors we can expand our understanding of planets everywhere.
Those who lead industry and educational institutions and particularly those who teach need to acknowledge that their own STEM education is characterized by (1) the exclusion of non-Whites from positions of power, which almost completely erases Indigenous theories and contributions to STEM; (2) the development of a White frame that organizes STEM ideologies and normalizes White racial superiority; (3) the historical construction of a curricular model based on the thinking of White elites, thus disregarding minoritized cultures that contributed to STEM globally; and (4) the assertion that knowledge and knowledge production are neutral, objective, and unconnected to power relations. STEM education and occupations were designed to attract White men who are heterosexual, able-bodied, middle class, and upper class, and, more recently, some East Asian groups designated as acceptable. Therefore, the curriculum and products of this culture contribute to an inhospitable environment for students, faculty, and employees who do not fit these criteria.
The subsequent segment of the presentation aims to delineate an innovative STEM curriculum that explicitly acknowledges and validates the racial identities and firsthand experiences of students who have been historically relegated to the periphery of mainstream education. The centrality of this curriculum lies in its unabashed focus on pressing social matters, utilizing these as the pivotal catalyst around which STEM education is designed and delivered. This curricular approach guides the shift away from a traditional, monocultural lens of teaching STEM, which often inadvertently buttresses systemic barriers, towards a more culturally responsive and socially conscious pedagogical design. By locating the lived experiences and racial identities of marginalized students at the curriculum's core, it serves to affirm their voices and perspectives, thereby fostering a more inclusive and equitable educational environment.
Further, by intertwining STEM learning with real-world social issues, the curriculum fosters the development of critical thinking and problem-solving skills, crucial competencies for the 21st-century workforce. It empowers learners to understand, engage with, and propose solutions to real-world challenges using STEM principles. Intrinsically, it instigates a more holistic understanding of STEM, one that transcends the conventional boundaries of textbook learning and plants the seeds for nurturing socially conscious, scientifically literate individuals. Therefore, this innovative, context-driven approach to STEM instruction not only serves as a powerful tool to counter educational exclusion and disparity, but it also equips students with the aptitude and motivation to apply learned concepts in addressing socially relevant issues, thereby redefining the landscape of meaningful and impactful education.
Nature appears to respect certain laws to exquisite accuracy; for example, information never travels faster than light. These laws, codified in quantum field theory, underwrite the Standard Model of particle physics. Recently it has been appreciated that this structure is so rigid that there is often a unique quantum field theory compatible with a few additional assumptions. This gives theorists an important new tool: internal consistency enables precise calculations. I will describe my contributions to this vast effort, and what it teaches us about strongly interacting field theories that appear in two surprisingly related situations: critical phenomena and quantum gravity.
The PIENU experiment at TRIUMF has provided, to date, the most precise experimental determination of $R^\pi_{e/\mu}=\frac{\Gamma(\pi^+\rightarrow e^+\nu_e(\gamma))}{\Gamma(\pi^+\rightarrow \mu^+\nu_\mu(\gamma))}$, the ratio of pions decaying to positrons relative to muons. While the measured $R^\pi_{e/\mu}$ is more than an order of magnitude less precise than the Standard Model (SM) calculation, the PIENU result is a precise test of charged-lepton universality, a key principle of the SM; it constrains a large range of new physics scenarios and allows dedicated searches for exotics such as sterile neutrinos. I will give a short overview of $R^\pi_{e/\mu}$ measurements and introduce the next-generation precision pion decay experiment in the making: PIONEER!
This newly proposed experiment aims at pushing the boundaries of precision on $R^\pi_{e/\mu}$ and at expanding the physics reach by improving on the measurement of the very rare pion beta decay $\pi^+\rightarrow \pi^0 e^+ \nu$. This will provide a new and competitive input to the determination of $|V_{ud}|$, an element of the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix.
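For orientation (standard textbook values, not new results from this work), the helicity-suppressed tree-level SM expectation is
$$ R^{\pi,\mathrm{tree}}_{e/\mu} \;=\; \frac{m_e^2}{m_\mu^2}\left(\frac{m_\pi^2 - m_e^2}{m_\pi^2 - m_\mu^2}\right)^2 \;\approx\; 1.28\times 10^{-4}, $$
which radiative corrections reduce to approximately $1.23\times 10^{-4}$.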
Located at SuperKEKB, an asymmetric $e^{+} e^{-}$ collider and the world's first super B-Factory, the Belle II experiment is searching for evidence of new physics at the precision frontier. Since recording of physics data commenced in 2019, SuperKEKB has claimed the record as the world's highest-luminosity particle collider while steadily approaching its target integrated luminosity of 50 ab$^{-1}$, a factor of 40 larger than the combined datasets of the previous B-Factory experiments! The unique, experimentally clean environment, coupled with enhanced detector performance and specialised dark sector triggers, allows Belle II to pursue a vast physics program. This talk will present highlights of recent Belle II physics results and also report on the ongoing activities of Canadian groups contributing to Belle II.
The strange quark is the lightest sea quark in the proton after the up and down quarks, and its production at the LHC is crucial for the understanding of proton internal structure and fragmentation processes. In this work, strange particles are reconstructed using minimum-bias data from $pp$ collisions at 13 TeV taken by the ATLAS detector. Their kinematic distributions and production cross-sections are studied. In particular, the $K_s$ and $\Lambda$ ($\overline{\Lambda}$) give clean signatures and high yield in the detector, while the $\Xi^{-}$ ($\overline{\Xi}^{+}$), despite its lower yield, could be a strong indicator of strangeness content as it contains two strange quarks. The reconstructed data samples are then compared with Monte Carlo samples to calculate particle detector acceptance and efficiency, to estimate the sensitivity of the data and to better understand strangeness production processes.
This study investigates the impact of vector-like quarks on rare B decays, focusing on recent experimental searches. Vector-like quarks, an intriguing feature of many extensions of the Standard Model (SM), offer a unique avenue for probing physics beyond the SM. We consider extending the SM by adding a vector-like isosinglet down-type quark. Experiments at LHCb and Belle II are actively studying rare B transitions such as the exclusive semileptonic decays $B \rightarrow K \nu\bar{\nu}$. Therefore, by analyzing the underlying $b \rightarrow s$ semileptonic quark transitions, we investigate deviations from the Standard Model due to vector-like quarks, utilizing the latest experimental constraints on the model parameters.
Integral transform methods are a cornerstone of applied physics in optics, control, and signal processing. These areas of application benefit from physics techniques not just because the techniques are quantitative, but because the quantitative knowledge that physics generates provides concrete insight. Here, we introduce an integral transform framework for optimization that puts it on a physical footing analogous to problems in optics, control, and signals. We illustrate the broad applicability of this framework on example problems arising in additive manufacturing and land-use planning. We argue that this framework both enlarges the interface between physics and new areas of application and enlarges what we consider to be physical systems.
Land-use decision-making processes have a long history of producing globally pervasive systemic equity and sustainability concerns. Quantitative, optimization-based planning approaches, e.g., Multi-Objective Land Allocation (MOLA), seemingly open the possibility of improving objectivity and transparency by explicitly evaluating planning priorities by land use type, amount, and location. Here, we primarily show that optimization-based planning approaches with generic planning criteria generate a series of unstable "flashpoints" whereby tiny changes in planning priorities produce large-scale changes in the amount of land allocated to each use type. We give quantitative arguments that the flashpoints we uncover in MOLA models are examples of a more general family of instabilities that occur whenever planning accounts for factors that coordinate use on and between sites, regardless of whether these planning factors are formulated explicitly or implicitly. Building on this, our current research extends into the realm of environmental change, revealing that common features across non-convex optimization problems, like MOLA, drive hypersensitivity to climate-induced degradation, resulting in catastrophic losses in human systems well before catastrophic climate collapse. This punctuated insensitive/hypersensitive degradation-loss response, traced to the contrasting effects of environmental degradation on subleading local versus global optima (SLO/GO), suggests substantial social and economic risks across a broad range of human systems reliant on optimization, even in the absence of extreme environmental changes.
The advent of additive manufacturing techniques offers the ability and potential to (literally) reshape our manufactured and built environment. However, key issues, including questions about robustness, impede the use of additive manufacturing at scale. In this talk, we present a high-performance code that extends topology optimization, the leading paradigm for additive manufacturing design, via a novel Pareto-Laplace filter. This filter has the key property that it couples the physical behaviour of actual, physical products to analogues of physical processes that occur in the space of possible design solutions. We show that this solution-space "physics" gives insight into key questions about robust design.
In this talk, we explore solutions to models describing waves under ice generated by moving disturbances such as trucks moving on ice that is frozen on top of large bodies of water. We start by showing how the problem can be reformulated in surface variables, reducing the number of unknowns and resulting in a nonlinear integro-differential system of equations. To solve these equations, we use an iterative solver whose convergence is sped up by a novel hybrid preconditioner. Finally, we examine different regimes such as varying pressure distributions, heterogeneities in ice as well as a bottom topography, and present how these influence the types of solutions we obtain.
A new approach for operationally studying the effects of spacetime in quantum superpositions of semiclassical states has recently been proposed by some of the authors. This approach was applied to the case of a (2+1)-dimensional Bañados-Teitelboim-Zanelli (BTZ) black hole in a superposition of masses, where it was shown that a two-level system interacting with a quantum field residing in the spacetime exhibits resonant peaks in its response at certain values of the superposed masses. Here, we extend this analysis to a mass-superposed rotating BTZ black hole, considering the case where the two-level system co-rotates with the black hole in a superposition of trajectories. We find similar resonances in the detector response function at rational ratios of the superposed outer horizon radii, specifically in the case where the ratio of the inner and outer horizons is fixed. This suggests a connection with Bekenstein's seminal conjecture concerning the discrete horizon spectra of black holes in quantum gravity, generalized to the case of rotating black holes. Our results suggest that deeper insights into quantum-gravitational phenomena may be accessible via tools in relativistic quantum information and curved spacetime quantum field theory.
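For reference (standard BTZ relations, quoted here in units where $8G=1$ with $\ell$ the AdS length; conventions vary between references), the mass and angular momentum are tied to the horizon radii by
$$ M=\frac{r_+^2+r_-^2}{\ell^2},\qquad J=\frac{2\,r_+ r_-}{\ell}, $$
so fixing the ratio of inner to outer horizon radii fixes $J/(M\ell)$, and a resonance condition on ratios of the superposed outer horizon radii translates directly into a condition on the superposed masses.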
The behaviour of apparent horizons throughout a black hole merger process is an unresolved problem. Numerical simulations have provided insight into the fate of the two horizons. By considering marginally outer-trapped surfaces (MOTSs) as apparent horizon candidates, self-intersecting MOTSs were found in the merger process and were shown to play a key role in the merger evolution [arXiv:1903.05626]. A similar class of self-intersecting MOTSs has since been investigated in explicitly known black hole solutions, including the Schwarzschild solution [arXiv:2005.05350; 2111.09373; 2210.15685]. We present findings from our investigations of MOTSs in the maximally extended Kruskal black hole spacetime [arXiv:2312.00769]. The spacetime contains an Einstein-Rosen bridge that connects two asymptotic regions. This allows for novel MOTSs that span both asymptotic regions with non-spherical topology, such as that of a torus. These MOTSs are comparable to those found in numerical simulations and exhibit unexpected behaviour with regard to their stability spectrum.
One of the most important results in mathematical general relativity in the last half century is the inequality, conjectured by Penrose in 1973, that the mass inside a black hole has a lower bound determined by the area of the black hole's event horizon, and that the minimal case is realized by the Schwarzschild black hole. While a fully general proof of the conjecture does not yet exist, it has been proved in the case of extrinsically flat spatial slices (the Riemannian Penrose inequality) and in the general case under the assumption of spherical symmetry. We seek to extend the spherically symmetric proofs of the conjecture to include electric charge (Einstein-Maxwell theory in $(n+1)$ dimensions) in an anti-de Sitter background, where the rigidity case of the inequality is now Reissner-Nordström AdS. In the future, our goal is to extend our proof to Gauss-Bonnet gravity. This is ongoing work which is the subject of the author's PhD thesis.
Treating the horizon radius as an order parameter in a thermal fluctuation, the free energy landscape model sheds light on the dynamic behaviour of black hole phase transitions. Here we carry out the first investigation of the dynamics of the recently discovered multicriticality in black holes. We specifically consider black hole quadruple points in D = 4 Einstein gravity coupled to non-linear electrodynamics. We observe thermodynamic phase transitions between the four stable phases at a quadruple point as well as weak and strong oscillatory phenomena by numerically solving the Smoluchowski equation describing the evolution of the probability distribution function. We analyze the dynamic evolution of the different phases at various ensemble temperatures and find that the probability distribution of a final stationary state is closely tied to the structure of its off-shell Gibbs free energy.
We study the free evolution of dilute Bose-Einstein condensate (BEC) gases which have been initially trapped and released from various differently shaped confining potentials. By numerically solving the Gross-Pitaevskii equation and analytically solving the hydrodynamic Thomas-Fermi theory for each case, we find the presence of acoustic horizons within rarefaction waves which form in the outer edges of the BECs. We comment on the horizon dynamics, the formation of oscillations near the horizon, and connections to acoustic Hawking radiation.
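For completeness (standard expressions, with $g$ the contact-interaction strength and $n=|\psi|^2$ the condensate density; not new results of this work), the dynamics referred to above are governed by
$$ i\hbar\,\partial_t\psi=\Big(-\frac{\hbar^2}{2m}\nabla^2+V(\mathbf r,t)+g|\psi|^2\Big)\psi, \qquad c=\sqrt{\frac{g\,n}{m}}, $$
and an acoustic horizon forms where the local flow speed of the expanding condensate crosses the local sound speed $c$.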
We constructed a family of static, vacuum five-dimensional solutions with two commuting spatial isometries describing a black hole with an $S^3$ horizon and a 2-cycle `bubble' in the domain of outer communications. The solutions have been obtained by adding dipole and quadrupole distortions to a seed asymptotically flat solution. We showed that the conical singularities in the undistorted geometry can be removed by an appropriate choice of the distortion.
Phytoglycogen (PG) is a naturally occurring polysaccharide produced as compact, highly branched nanoparticles in the kernels of sweet corn. Because PG is biocompatible, non-toxic and digestible, it is attractive for applications involving the delivery of bioactive compounds. In the present study, we evaluate the association of PG with the hydrophobic bioactive astaxanthin (AXT), which is a naturally occurring xanthophyll carotenoid with reported health benefits, e.g., acting as an antioxidant and anti-inflammatory agent. However, the extremely poor solubility of AXT in water presents challenges in realizing its full potential for improving human and animal health. Here, we describe a method to improve the effective solubility of AXT in water through its physical association with PG, i.e., without the use of added chemicals such as surfactants. We combine PG dispersed in water with AXT dissolved in acetone, evaporate the acetone, and lyophilize to remove the water. The result is a stable AXT-PG complex that can be readily redispersed in water, with aqueous dispersions of AXT-PG stable for long periods of time (several months at 4℃). Using UV-Vis spectroscopy, we characterize the absorbance due to different aggregation states of the AXT molecules in the AXT-PG complex, and this has allowed us to determine the maximum loading of AXT onto PG to be ~ 10% by mass, with a corresponding maximum effective concentration of AXT in water of ~ 0.9 mg/mL. Our results demonstrate the promise of using PG as an effective solubilizing and stabilizing agent for hydrophobic compounds in water.
Purpose: With advancements in high dose rate radiotherapy techniques such as FLASH therapy, radiochromic films have been proposed as key dosimeters due to their relative dose rate independence when used with standard read-out methods. Our group is interested in understanding the real-time behaviour of these materials in order to develop radiochromic optical probes for real-time dosimetry, with utility across a broad range of beam qualities and applications.
Methods: Three radiochromic formulations were made with 10,12-pentacosadiynoic acid (PCDA) and its lithium salt (LiPCDA), with varying Li+ ratios (PCDA, 635LiPCDA, and 674LiPCDA). The formulations, coated onto polyethylene, were irradiated within a custom real-time jig equipped with optical fibres for continuous data collection before, during and after irradiation. The light source was a tungsten halogen lamp, and the light transmitted through the film was collected by a CCD camera. The three radiochromic formulations, and commercial EBT-3 for benchmarking, were irradiated to 0-25 Gy with a 74 MeV proton beam (TRIUMF), a 6 MV photon beam (clinical linear accelerator (LINAC), University Health Network), and an electron FLASH beam (decommissioned LINAC). The transmitted light was processed to calculate the optical density around the main absorbance peak for each formulation.
Results: All in-house films and commercial EBT-3 showed an instant sharp increase in optical density with absorbed dose, including under FLASH conditions. For all three beam modalities, 635LiPCDA (comparable to current commercial products) exhibited the highest sensitivity, followed by 674LiPCDA and then PCDA (comparable to older products). As previously observed for commercial radiochromic films, all formulations demonstrated a lower response per dose when irradiated with protons due to quenching effects.
Conclusions: We demonstrate that LiPCDA crystals can be selectively grown to exhibit tailored dose responses. For the first time, we show that the real-time response in standard proton beams and under electron FLASH conditions is characterized by an immediate sharp increase in optical density with absorbed dose, followed by an expected asymptotic shoulder due to post-exposure polymerization.
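For clarity (a standard definition, with $I_0$ the transmitted intensity before irradiation; the exact processing used in this work may differ in detail), the real-time optical density referred to above is obtained from the transmitted intensity as
$$ \Delta\mathrm{OD}(t)=\log_{10}\!\left[\frac{I_0}{I(t)}\right]. $$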
Hemoglobin (Hb), the cornerstone of oxygen transport in the body, holds crucial diagnostic significance for disorders like β-Thalassemia and sickle cell anemia. Conventional blood assays often grapple with issues of delays, cost, and accessibility. In this study, we unveil an innovative nano-biosensor leveraging surface-enhanced Raman spectroscopy (SERS), offering swift and real-time detection of iron-containing molecules, with a primary focus on Hb, the predominant iron-containing compound in blood. This detection can be performed with minimal sample volumes and high sensitivity.
Our sensor's foundation involves gold and silver thin film substrates, crafted through pulsed laser ablation and electrochemical deposition techniques, precisely tuned to resonate with 633 and 532 nm Raman lasers. Functionalization with a novel heteroaromatic ligand L, a derivative of alpha-lipoic acid and 2-(2-pyridine)imidazo[4,5-f]-1,10-phenanthroline, enables the creation of a highly selective Hb sensor. The sensing mechanism hinges on the coordination bonds formed between the phenanthroline unit of L and the iron center in the heme unit of the Hb protein.
Our sensor chip exhibits stability over a week, maintaining high sensitivity to Hb. Leveraging the characteristic SERS band of L observed at 1390 cm-1, associated with the porphyrin methine bridge, we discern fluctuations in intensity corresponding to varying concentrations of normal Hb. This dynamic information is harnessed to assess iron content, facilitating the diagnosis of iron excess or deficiency indicative of various diseases. Furthermore, the SERS spectra distinguish Fe2+/Fe3+ redox species, providing insights into the oxygen-carrying capacity of Hb. Validation through electrochemical SERS, utilizing silver nanofilm on ITO, scrutinizes changes in Fe2+/Fe3+, potentially enabling early diagnosis of health conditions manifesting alterations in the oxidative states of iron in Hb.
Distinctive SERS bands in the "fingerprint region" allow discrimination between normal Hb and abnormal Hb variants. Density Functional Theory-Molecular Dynamics (DFT-MD) calculations correlate with the experimental vibrational peaks, enhancing the robustness of our findings. This study lays a pioneering foundation for extending our approach towards developing a lateral flow assay, promising a rapid and accurate diagnosis of Hb disorders. Our nano-biosensor holds transformative potential, heralding a new era in hemoglobin analysis and associated disorder diagnostics.
Introduction: Sepsis is a life-threatening host response to an infection that disproportionately affects vulnerable and low-resource populations. Since early intervention increases survival rate, there is a global need for accessible technology to aid with early sepsis identification. Peripheral microvascular dysfunction (MVD) is an early indicator of sepsis that manifests as impaired vasomotion in the skeletal muscle, that is, low-frequency oscillations in microvascular tone independent of cardiac and respiratory events. Previous studies have used oscillations in hemoglobin content (HbT), oxygenation (StO2), and perfusion (rBF) as sensitive markers for vasomotion. These physiological parameters can be monitored non-invasively with near-infrared spectroscopy (NIRS) and diffuse correlation spectroscopy (DCS). The objective of this study was to use a hybrid NIRS/DCS system to continuously monitor peripheral and cerebral vasomotion in a rat model of early sepsis.
Methods: 14 Sprague-Dawley rats were used for this study. Control animals (n=4) received an intraperitoneal (IP) injection of saline, while the experimental group (n=10) received an IP injection of fecal slurry to induce sepsis. Optical probes were secured on the scalp and hind limb of animals for simultaneous NIRS and DCS measurements. Peripheral and cerebral HbT, StO2, and rBF were quantified from NIRS/DCS measurements using algorithms developed in MATLAB. A continuous wavelet transform was used to dynamically isolate low-frequency oscillations in the three parameters. Two-way ANOVAs were used to investigate the power of vasomotion in all three hemodynamic parameters for differences across condition (control, septic) and time (period 1 = 0.5 - 2 h, period 2 = 2 - 4 h, period 3 = 4 - 6 h).
Results: Power of peripheral vasomotion was significantly higher in septic animals as reflected in all three parameters during periods 2 and 3. Power of cerebral vasomotion was significantly higher in septic animals only in the HbT signal.
Conclusions: Optical spectroscopy can be used as a non-invasive tool to detect peripheral MVD. Importantly, our results suggest that while the brain is partly protected, the skeletal muscle is a consistent early diagnostic target for sepsis. Limitations include the use of a homogeneous animal model. Future work will seek to validate these techniques in ICU patients.
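To make the band-power analysis concrete, the following is a minimal Python sketch of extracting low-frequency oscillation power with a continuous wavelet transform (the original analysis used custom MATLAB code; the sampling rate, band edges, wavelet, and variable names here are illustrative assumptions only):

```python
# Illustrative sketch: isolate the power of low-frequency vasomotion
# oscillations from a hemodynamic time series with a continuous wavelet
# transform. `signal` is e.g. StO2 sampled at `fs` Hz.
import numpy as np
import pywt

def band_power(signal, fs, f_lo=0.05, f_hi=0.15, wavelet="morl"):
    """Mean wavelet power of `signal` within the band [f_lo, f_hi] Hz."""
    dt = 1.0 / fs
    # choose scales whose pseudo-frequencies span the band of interest
    freqs_target = np.linspace(f_lo, f_hi, 30)
    scales = pywt.central_frequency(wavelet) / (freqs_target * dt)
    coeffs, _ = pywt.cwt(signal, scales, wavelet, sampling_period=dt)
    power = np.abs(coeffs) ** 2          # shape (n_scales, n_samples)
    return power.mean()                  # average over time and band

# usage sketch: compare control vs. septic epochs
# p_ctrl = band_power(stO2_control, fs=10.0)
# p_sep  = band_power(stO2_septic,  fs=10.0)
```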
Introduction: A promising approach for detecting early-stage mild cognitive impairment (MCI) is identifying changes in cerebrovascular regulation prior to overt changes in cognition. Low-frequency oscillations (LFO) in cerebral perfusion and oxygenation, originating from neurogenic and myogenic regulation of hemodynamics, may be altered in patients with MCI. Previous work has shown increased LFO in the oxygenation of Alzheimer's and MCI patients compared to healthy, older adults. For this study, we hypothesized that MCI patients would exhibit increased power of LFO in cerebral (1) perfusion, (2) oxygenation, and (3) metabolic rate of oxygen consumption (CMRO2).
Methods: 12 MCI (74 ± 6 years) and 8 cognitively intact control (CTL) participants (69 ± 7 years) were recruited. An in-house built diffuse correlation spectroscopy (DCS) and time-resolved near-infrared spectroscopy (trNIRS) system was used to record microvascular perfusion and oxygenation, respectively. Data were acquired from the forehead for 480 seconds during seated rest. DCS and trNIRS measurements were analyzed with custom scripts (MATLAB) to calculate relative changes in cerebral blood flow (rCBF), tissue oxygen saturation (StO2), and relative CMRO2 (rCMRO2). A continuous wavelet transform was used to decompose time courses into time-varying frequency components. The power of neurogenic (0.02-0.06 Hz) and myogenic (0.06-0.16 Hz) oscillations was isolated. Mann-Whitney tests were used to compare MCI and CTL. Effect sizes are reported as Cohen's d.
Results: MCI patients had lower neurogenic power in rCBF (p = 0.03, d = 0.89) but greater myogenic power in StO2 (p = 0.03, d=1.00). Although not significant, this pattern remained for myogenic power in microvascular perfusion (p = 0.09, d = 0.52) and neurogenic power in StO2 (p = 0.08, d=0.86). There were no differences in neurogenic or myogenic LFO power for rCMRO2 (both p ≥ 0.3, d = 0.16).
Discussion: Participants with MCI have lower oscillatory power in cerebral microvascular perfusion but greater power in cerebral oxygenation. Interestingly, these opposing responses counteract, resulting in similar metabolic oscillations which demonstrates potential adaptations that occur to support neural metabolism in people with MCI. Immediate future work will be to analyze macrovascular perfusion and blood pressure oscillations to understand systemic differences.
Cell migration is a fundamental process in various physiological scenarios such as cancer metastasis, wound healing, immune responses, and embryonic development. Among environmental cues, physical factors, especially the electric field (EF), have been widely demonstrated to guide the migration of various cell types. EF-guided cell migration, termed ‘electrotaxis’, has traditionally been studied in vitro using contact-based direct current (DC) or alternating current (AC) EFs, with electrodes placed directly in the media. More recently, non-contact AC EF-guided electrotaxis has also been explored. Since DC EF is closer to physiological conditions, the availability of non-contact, wireless DC EF-guided electrotaxis would be highly valuable. In this study, we developed a customizable parallel-plate-capacitor-based experimental platform that facilitates the use of non-contact DC EF to guide cell migration. COMSOL Multiphysics modeling shows that our platform can generate a relatively uniform EF in the central region of the cell chamber. This uniformity is important as it allows for more consistency and reproducibility of the experimental results. The design of the parallel plate capacitor apparatus allows for complete customization during use, including the flexibility to adjust the distance between electrode plates, removable petri-dish holders, and seamless integration with an optical microscope for live cell imaging. The developed platform was validated with several cell types, including human metastatic breast cancer cells and human peripheral blood immune cells. With the developed platform, interesting cell migratory behaviors were observed through various quantitative analyses of time-lapse cell migration image data. We have started to further explore the mechanism behind non-contact DC EF-guided electrotaxis.
The Electron-Ion Collider (EIC) is envisioned as an experimental facility to investigate gluons in nucleons and nuclei, offering insights into their structure and interactions. The Electron-Proton/Ion Collider Experiment (ePIC) Collaboration was formed to design, build, and operate the EIC project detector, which will be the first experiment at the collider. The unique physics goals at the EIC necessitate specific design considerations for the electromagnetic calorimeter in the barrel region of ePIC. Precise measurements of electron energy and shower profiles are crucial for effectively distinguishing electrons from background pions in Deep Inelastic Scattering processes at high $Q^2$ within the barrel region. Furthermore, the calorimeter must accurately gauge the energy and coordinates of photons from processes such as Deeply Virtual Compton Scattering, while identifying photon pairs from $\pi^0$ decays.
In this presentation, I will discuss the design of the Barrel Imaging Calorimeter of ePIC. Our hybrid approach combines scintillating fibers embedded in lead with imaging calorimetry based on AstroPix sensors, a low-power monolithic active pixel sensor. Through comprehensive simulations, we have tested the calorimeter design against the key requirements outlined in the EIC Yellow Report. I will focus on the anticipated performance of the calorimeter, detailing progress in design and prototyping. Additionally, I will provide insights into the development timeline and collaborative efforts involved in this endeavor.
The Electron-Ion Collider (EIC) is a new US$2.5B particle collider facility to be built at Brookhaven National Laboratory (BNL), on Long Island, New York, by the US Department of Energy (US-DOE). The EIC is the next discovery machine offering high science impact but with significant technical challenges. In the 2022–2026 Canadian Subatomic Physics Long Range Plan, the community named the EIC as a “flagship program with broad outcomes.” Similar to Canadian involvement in other large international science projects of global scale like the High Luminosity upgrade at CERN, we anticipate delivering key enabling components, expanding on existing Canadian strengths in particle accelerator technology. Canada, through expertise at TRIUMF, has significant relevant experience in superconducting radio-frequency (SRF) technology. Through discussions with EIC, we have identified an in-kind contribution with high technical complexity that would provide a significant and challenging deliverable to the EIC project. The scope consists of the design and production of 394-MHz crab cavities and cryomodules that will increase the probability of collision of the circulating beams and are essential for reaching the scientific aims of the EIC. The present layout of the EIC foresees two 394MHz cavities per interaction point per side for the Hadron Storage Ring (HSR), and one 394MHz cavity per IP per side for the Electron Storage Ring (ESR). TRIUMF’s experience in SRF technology is already being exploited to supply similar cryomodules to the high luminosity upgrade project at CERN. The EIC deliverables will expand Canada’s core competencies in accelerator technology benefitting fundamental research and industry. TRIUMF is presently engaged in design studies on the 394MHz cavities. The presentation will briefly summarize the existing TRIUMF SRF program in supporting international accelerator projects and present the proposed contribution to the EIC.
In order to search for physics beyond the Standard Model at the precision frontier, it is sometimes essential to account for next-to-next-to-leading order (NNLO) theoretical corrections. Using the covariant approach, we calculated the full electroweak leptonic tensor up to quadratic (one-loop squared) and reducible two-loop NNLO ($\alpha^3$) order, which can be used for processes like $e^-p$ and $\mu^-p$ scattering relevant to the EIC, MOLLER (background studies) and MUSE experiments, respectively. In the covariant approach, we apply a unitary cut of the Feynman diagrams and separate them into leptonic and hadronic currents; hence, after squaring the matrix element, we can obtain the differential cross section up to NNLO.
In this presentation, I will briefly review the covariant approach and provide our latest results for the quadratic and reducible two-loop QED and electroweak corrections for the $e^-p$ scattering process.
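Schematically (a standard decomposition rather than a result specific to this work), the unitary-cut separation amounts to writing the spin-averaged squared matrix element as a contraction of leptonic and hadronic tensors,
$$ \overline{|\mathcal{M}|^2}\;\propto\; L_{\mu\nu}\,W^{\mu\nu}, $$
with the NNLO electroweak corrections discussed here entering through $L_{\mu\nu}$.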
One of the unique aspects of the Electron-Ion Collider (EIC) detectors is the extensive integration of the far-forward and far-backward detectors with the EIC ring components. This is based primarily on experience from the only prior electron-proton collider, HERA, where far-forward detector infrastructure was only partially installed initially, and it was difficult to install highly efficient and hermetic detector coverage as the needs of the physics program evolved. In contrast, the ePIC detector is envisaged to have a highly sophisticated Zero Degree Calorimeter (ZDC) far downstream of the interaction region, supplemented with tracking and calorimetry $inside$ the first downstream dipole, the B0 detector, and Roman pots. The talk will present a summary of feasibility studies utilizing the $\pi^+$ and $K^+$ deep exclusive meson production reactions. These provide well-defined but challenging final states that test the far forward event reconstruction, and shed vital information on the detector requirements needed to deliver the physics program. The $p(e,e'\pi^+n)$ reaction reconstruction is relatively straightforward, but the $K^+$ reactions are particularly challenging, as they involve the reconstruction of both 4 and 5 final particle states, $p(e,e'K^+)\Lambda/\Sigma^0$, where the hyperon decays into the far forward detectors via $\Lambda(\Sigma^0)\rightarrow p\pi^-(p\pi^-\gamma)$ or $\Lambda(\Sigma^0)\rightarrow n\pi^0(n\pi^0\gamma)$.
Quantum information processing, at its very core, is effected through unitary transformations applied to states on the Bloch sphere, the standard geometric realization of a two-level, single-qubit system. That said, to a geometer, it may be natural to replace the original Hilbert space of the problem, which is a finite-dimensional vector space, with a finite-rank Hermitian vector bundle, through which unitary transformations are replaced very naturally with parallel transport along a connection. This imparts new degrees of freedom into the generation of quantum gates. A new approach to quantum matter — relying upon exotic hyperbolic geometries — that has emerged in my work over the past half decade with mathematicians, theoretical physicists, and experimentalists suggests that this setup may be achievable as an actual computing platform. I'll describe these developments, and there will be lots of pictures.
The resource theories of separable entanglement, non-positive partial transpose entanglement, magic, and imaginarity share an interesting property: an operation is free if and only if its renormalized Choi matrix is a free state. We refer to resource theories exhibiting this property as Choi-defined resource theories. We demonstrate how and under what conditions one can construct a Choi-defined resource theory, and we prove that when such a construction is possible, the free operations are all and only the completely resource non-generating operations.
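For concreteness (the standard definition, with $d$ the input dimension), the renormalized Choi matrix of a linear map $\Phi$ is
$$ J(\Phi)=\frac{1}{d}\sum_{i,j=0}^{d-1}|i\rangle\langle j|\otimes\Phi\big(|i\rangle\langle j|\big), $$
which is positive semidefinite exactly when $\Phi$ is completely positive, and has maximally mixed reduced state on the first factor exactly when $\Phi$ is trace preserving; "Choi-defined" then means that $J(\Phi)$ being a free state characterizes the free operations.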
The time-dependent Schrödinger equation in one-dimension has a remarkable class of shape-preserving solutions that are not widely appreciated. Important examples are the 1954 Senitzky coherent states, harmonic oscillator solutions that offset the stationary states by classical harmonic motion. Another solution is the Airy beam, found by Berry and Balazs in 1979. It has accelerating features in the absence of an external force. Although these solutions are very different, we show that they share many important properties. Furthermore, we show that these belong to a more general class of form preserving (solitonish) wave functions. We conclude with an analysis of their dynamics in phase space with their Wigner functions.
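Schematically (quoting the 1979 Berry-Balazs form in one dimension, with $B$ a constant setting the Airy scale; conventions vary slightly between references), the force-free accelerating solution reads
$$ \psi(x,t)=\mathrm{Ai}\!\left[\frac{B}{\hbar^{2/3}}\Big(x-\frac{B^{3}t^{2}}{4m^{2}}\Big)\right]\exp\!\left[\frac{iB^{3}t}{2m\hbar}\Big(x-\frac{B^{3}t^{2}}{6m^{2}}\Big)\right], $$
whose probability density translates rigidly with constant acceleration $B^{3}/(2m^{2})$, which is the sense in which the form is preserved.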
Finding the ground-state energy of many-body lattice systems is exponentially costly due to the size of the Hilbert space, making exact diagonalization impractical. Ground-state wave functions satisfying the area law of entanglement entropy can be efficiently expressed as matrix product states (MPS) for local, gapped Hamiltonians. The extension to a bundled matrix product state describes excitations, but a formal proof is lacking despite excellent performance in practical computation. We provide a formal proof of this claim. We define a bundled density matrix as a set of independent density matrices which are all written in a common (truncated) basis. We demonstrate that the truncation error is a practical metric that determines how well an excitation is described in a given basis common to all density matrices. We go on to demonstrate that states with volume-law entanglement are not necessarily more costly to include in the bundle. The same is true for gapless systems if sufficient lower-energy solutions are already present. This result implies that bundled MPSs can describe low-energy excitations without significantly increasing the bond dimension over the cost of the ground-state calculation, subject to some conditions that we explain.
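As a toy illustration of the truncation-error metric discussed above (pure numpy on explicit state vectors rather than the MPS/DMRG machinery used in this work; the splitting `dims` and cutoff `chi` below are placeholders):

```python
# How much of a (ground or excited) state survives projection onto the chi
# leading Schmidt vectors of a reference state across a left/right cut.
import numpy as np

def truncation_error(psi_ref, psi, dims, chi):
    """Discarded weight of `psi` in the rank-`chi` left Schmidt basis of `psi_ref`.

    dims = (dL, dR) splits the chain into left/right blocks; both states are
    flat vectors of length dL*dR.
    """
    dL, dR = dims
    # Schmidt (SVD) decomposition of the reference state across the cut
    U, _, _ = np.linalg.svd(psi_ref.reshape(dL, dR), full_matrices=False)
    P = U[:, :chi] @ U[:, :chi].conj().T          # projector onto kept left basis
    M = psi.reshape(dL, dR)
    kept = np.linalg.norm(P @ M) ** 2
    return 1.0 - kept / np.linalg.norm(M) ** 2    # in [0, 1]; 0 means fully captured

# usage sketch: a small chi captures the ground state by construction; whether
# the same basis captures an excitation is what this metric quantifies.
# err_gs  = truncation_error(psi_gs, psi_gs,  (dL, dR), chi)
# err_exc = truncation_error(psi_gs, psi_exc, (dL, dR), chi)
```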
Fast ignition in inertial confinement fusion (ICF) is an important technique to enhance the coupling efficiency of the laser to the core [1]. One of the primary challenges faced in fast ignition is electron divergence, leading to reduced laser-core coupling [2]. A key solution to this problem is the generation of intense megagauss magnetic fields to guide the ignition electrons, which results in an improvement in the energy coupling efficiency of the laser with the compressed fuel. Capacitor coils present themselves as excellent candidates for producing magnetic pulses of approximately 0.1-0.5 kT and a duration of around 5 ns, driven by high-energy, high-intensity (on the order of a few $10^{15}$ W/cm$^2$) nanosecond laser pulses [3-4]. At the University of Alberta, we have characterized gas jet nozzle targets to investigate the instantaneous magnetic fields produced by capacitor coils, based on measurements of high-resolution Zeeman splitting. For optimum Zeeman splitting, plasma conditions such as plasma temperature and density should be controlled to minimize broadening and maximize brightness of the spectral lines. We explore the response of the UV spectral line C III 229.78 nm (1s$^2$2s2p-1s$^2$2p$^2$) via modelling and experiments under various spatiotemporal plasma conditions. The aim is to identify optimum plasma conditions that avoid large line broadening due to high plasma density and temperature, which can exceed the Zeeman splitting.
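For scale (the textbook normal Zeeman relations; the detailed C III line pattern is more involved), the splitting of the $\sigma$ components grows linearly with the field,
$$ \Delta E=\pm\mu_B B, \qquad \Delta\lambda\simeq\frac{\lambda^{2}\mu_B B}{hc}, $$
whereas Stark and Doppler broadening grow with density and temperature, which is why the plasma conditions must be tuned so that the splitting is not washed out.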
Recently, orbital angular momentum (OAM) beams have been demonstrated at relativistic intensities at several high-power laser facilities around the world using off-axis spiral phase mirrors. The additional angular momentum carried by OAM beams, even when linearly polarized, introduces a new control parameter in laser-plasma interactions and has shown promise to introduce new and exciting phenomena not possible with a standard Gaussian beam.
Of particular interest is the relativistic inverse Faraday effect, in which laser angular momentum is absorbed by a plasma, generating large axial magnetic fields collinear with the laser k vector. Our recent work has demonstrated that magnetic fields on the order of hundreds of tesla, extending hundreds of microns and lasting on the order of 10 picoseconds, can be generated with laser powers of less than 5 terawatts. In this work we explore this phenomenon through theory and simulations, and present results from a recent campaign at the COMET laser at Lawrence Livermore National Laboratory in which we used a linearly polarized Laguerre-Gaussian laser to drive such magnetic fields for the first time in the laboratory. Experimental results will be compared and validated against theory and simulations.
Betatron x-rays from a laser wakefield accelerator provide a new avenue for high-resolution, high-throughput radiography of dense materials. Here, we demonstrate the optimization of betatron x-rays for high-throughput x-ray imaging of metal alloys at the laser repetition rate of 2.5 Hz. Using the Advanced Laser Light Source in Varennes, QC, we characterized the x-ray energy spectrum, spatial resolution, beam stability, and emission length from helium, nitrogen, and mixed gas (99.5% He, 0.5% N) targets to determine the conditions for optimized imaging quality with minimized acquisition time. The optimized betatron x-ray source at 2.5 Hz was used for high-resolution imaging of micrometer-scale defects in additively manufactured metal alloys, demonstrating the potential of these sources for high-throughput data collection, accelerating the characterization of complex mechanical processes in these materials.
Cold plasma technology finds diverse applications spanning microfabrication, medicine, agriculture, and surface decontamination. The precision required in these applications usually necessitates tight control over the electric field of plasma sources, allowing specific chemical pathways to be targeted. To determine the electric field, high-resolution detection techniques are essential for time- and spatially resolved diagnostics. We propose to use electric field-induced second harmonic (E-FISH), a well-established nonperturbative technique, for measuring the amplitude and orientation of cold atmospheric plasma electric fields. Although E-FISH allows good and tunable time resolution, it has been shown to present some issues with spatial resolution and sensitivity. While spatial resolution can be improved by overlapping two non-collinear optical beams, the interaction region is then much smaller, leading to a significant signal reduction. To overcome this signal reduction, coherent Amplification of Cross-beam E-FISH (ACE-FISH) is introduced, in which the weak E-FISH signal is mixed with a phase-locked bright local oscillator. The enhancement of the signal is demonstrated by introducing the local oscillator, and the polarity of the electric field is determined through the phase of the homodyne signal. In a groundbreaking application, we employ ACE-FISH to measure, for the first time, the magnitude and direction of the electric field in a cold atmospheric-pressure plasma jet. This jet dynamically follows the profile of the applied bias current. The ACE-FISH method not only overcomes spatial resolution challenges but also enhances sensitivity, thus presenting a promising avenue for improved diagnostics and applications across various domains of cold plasma technology [1-2].
[1] J.-B. Billeau, P. Cusson, A. Dogariu, A. Morozov, D. V. Seletskiy, and S. Reuter, “Coherent homodyne detection for amplified crossed-beam electric-field induced second harmonic (ACE-FISH),” Applied Optics, (Unpublished), 2023.
[2] J. Hogue, P. Cusson, M. Meunier, D. V. Seletskiy, and S. Reuter, “Sensitive detection of electric field-induced second harmonic signals,” Optics Letters, vol. 48, no. 17, p. 4601, aug 2023.
The nontrivial topological features in non-Hermitian systems provide promising pathways to achieve robust physical behaviors in classical or quantum open systems. Recent theoretical work discovered that the braid group characterizes the topology of non-Hermitian periodic systems.
In this talk, I will show our experimental demonstrations of the topological braiding of non-Hermitian band energies, achieved by implementing non-Hermitian lattice Hamiltonians along a frequency synthetic dimension formed in coupled ring resonators undergoing simultaneous phase and amplitude modulations. With two or more non-Hermitian bands, the system can be topologically classified by nontrivial braid groups. We demonstrated such braid-group topology with two energy bands braiding around each other, forming nontrivial knots or links. I will also show how such braid-group topology can be theoretically generalized to two and three dimensions. Furthermore, I will also show how such non-Hermitian topology can manifest in the dynamical matrices describing bosonic quadratic systems associated with the squeezing of light, where our latest results reveal a highly intricate non-Hermitian degeneracy structure that can be classified as the Swallowtail catastrophe.
The enhancement of the light-matter interaction through localized surface plasmon resonances (LSPRs) in heterostructures of noble metal and copper sulfide nanoparticles has attracted wide interest. Higher-order nonlinear processes have also gained considerable interest for the efficient enhancement of harmonic generation in harmonically resonant heterostructures. In this work, a theory of fourth-harmonic generation (4HG) and fifth-harmonic generation (5HG) is developed for metallic nanohybrids. Theoretical calculations were performed for a triple-layer nanohybrid in an ensemble of Au, Al and CuS metallic nanoparticles. When a probe field is applied to the nanohybrids, the photons couple to the surface charges, forming surface plasmon polaritons (SPPs). The applied field also induces dipoles, and these dipoles interact with each other, giving rise to the dipole-dipole interaction (DDI). With the resulting SPP and DDI fields, the intensities of the output 4HG and 5HG fields are calculated using a coupled-mode formalism based on Maxwell's equations. The susceptibilities of the different metallic nanoparticles are determined by the density matrix method at their localized SPP resonance frequencies. It is found that the 4HG and 5HG intensities depend on the fourth- and fifth-order susceptibilities. In the presence of SPP and DDI, the light-matter interaction is significantly enhanced by the coupling of the LSPRs. The output 4HG and 5HG intensities of the Al/Au/CuS triple-layer nanohybrids formed by the coupled LSPRs are calculated and compared with experimental data, which show consistency with the theoretical model. The findings illustrate the effectiveness of producing higher harmonic generation within resonant plasmonic structures. This hybrid system can also be applied to the manufacture of optical nano-switching devices.
We report the observation of frequency nonlinearity during amplitude stabilization of a gain-embedded resonator, which was previously interpreted as a van der Pol oscillator. Our investigation reveals that this specific nonlinear oscillation is more accurately described by the van der Pol-Duffing oscillator model. We initially observed this phenomenon in a gain-embedded circuit oscillator and noted bistable behaviour upon coupling with a damped resonance. Then, in a gain-embedded cavity, we experimentally verified this nonlinear phenomenon. The bistable behaviour of the cavity-magnonic polariton is well fitted by the van der Pol-Duffing model.
The SGM (200 - 2000 eV) and SXRMB (1.7 - 10 keV) spectroscopy beamlines at the Canadian Light Source allow for a variety of novel in-situ and operando measurement techniques. This talk will cover the experimental data acquisition modes available at both beamlines and highlight the ways in which their unique capabilities allow for answering specific scientific questions. The emphasis will be on showcasing how spectroscopy can be used in a myriad of different ways to answer important topics in environmental and materials science.
Berry curvature manifests as current responses to applied electric fields. When time reversal is broken, a Berry curvature ''monopole'' gives rise to a Hall current that is proportional to the applied field. When time reversal is preserved, a Berry curvature ''dipole'' may result in a Hall current that is second order in the applied field. In this work, we examine a current response arising from a Berry curvature ''quadrupole''. This arises at third order in the applied field. However, it is the leading response when the following symmetry conditions are met. The material must not be symmetric under time-reversal ($ \mathcal{K} $) and four-fold rotations ($C_{4n}$); however, it must be invariant under the combination of these two operations ($C_{4n} \mathcal{K}$). This condition is realized in altermagnets and in certain magnetically ordered materials. We argue that shining light is a particularly suitable approach to see this effect. In the presence of a static electric field, light gives rise to a dc electric current that can be easily measured.
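Schematically (one common convention, with $f$ the equilibrium distribution and $\Omega$ the Berry curvature of the occupied bands; prefactors and relaxation times are omitted), the successive Hall responses are controlled by momentum-space moments of the curvature,
$$ \sigma_H\propto\int_{\mathbf k} f\,\Omega,\qquad D_{a}=\int_{\mathbf k} f\,\partial_{k_a}\Omega,\qquad Q_{ab}=\int_{\mathbf k} f\,\partial_{k_a}\partial_{k_b}\Omega, $$
with the quadrupole $Q_{ab}$ entering the current at third order in the applied field.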
An electric dipole moving in a magnetic field acquires a geometric phase known as the He-McKellar-Wilkens (HMW) phase, which is the electromagnetic dual of the Aharonov-Casher phase. The HMW phase was first measured in 2012 using an atom interferometer [1]. In that experiment the electric and magnetic fields were static. We propose a modification where these fields are generated by laser beams.
[1] Lepoutre et al., Phys. Rev. Lett. 109, 120404 (2012)
We compare the results of Electromagnetically Induced Transparency (EIT) and Four-Wave Mixing (4WM) in both thermal rubidium vapor and cold-atom-based systems. Our aim is to balance simplicity and fidelity in systems that aim to produce atom-resonant quantum states of light. We discuss the construction of a Magneto-Optical Trap (MOT) on an extremely low budget and discuss strategies for implementing a cold atom system with limited resources. In our next steps, we plan to employ a cavity-enhanced 4WM system with minimal optical power to generate squeezed quantum states. In order to achieve the required phase stability between the involved fields, we have tested both electronic phase-lock systems and a sideband approach using an electro-optic modulator. In the proposed work, a cavity is locked to a laser which in turn is locked to an atomic ensemble, enabling strong photon-atom interactions.
We consider a dilute gas of bosons in a slowly rotating toroidal trap, focusing on the two-mode regime consisting of a non-rotating mode and a rotating mode corresponding to a single vortex. This system undergoes a symmetry breaking transition as the ratio of interactions to `disorder potential' is varied and chooses one of the two modes spontaneously, an example of macroscopic quantum self-trapping. Analyzing elementary excitations around the BEC using Bogoliubov theory, we find regions of energetic instabilities with negative excitation frequencies, as well as dynamical instabilities, where excitations have complex frequencies. For the latter, amplitudes grow or decay exponentially. Instabilities can occur at bifurcations where the classical field theory provided by the Gross-Pitaevskii equation predicts that two or more solutions appear or disappear. Those complex eigenvalues confirm that the Bogoliubov Hamiltonian is non-Hermitian as picking a phase for the BEC breaks U(1) symmetry. In non-Hermitian quantum theory, the requirement of self-adjointness is replaced by a less stringent condition of PT-symmetry, which still ensures that Hamiltonians exhibit real and positive spectra if PT-symmetry is unbroken. We are investigating how the occurrence of the dynamical instability is connected to a PT-symmetry breaking phase transition.
Coherent anti-Stokes Raman scattering (CARS) is a nonlinear optical process that is used for spectroscopy and imaging. The stimulated CARS signal is orders of magnitude stronger than in spontaneous Raman scattering, enabling CARS to achieve substantially faster acquisition speeds. This has positioned CARS as a desirable alternative to spontaneous Raman scattering as a contrast mechanism for chemical imaging. However, CARS suffers from the presence of a so-called non-resonant background (NRB) that distorts peak shapes and intensities, thus hindering the broader adoption of this powerful technique. The NRB makes quantitative analysis of CARS spectra nontrivial and reduces image contrast. NRB removal techniques that retrieve Raman-like signals from CARS spectra have thus become a central focus of the CARS literature. We present an original and accessible approach to NRB removal based on gradient boosting decision trees.
Gradient boosting decision trees are increasingly being used to win machine learning competitions, demonstrating their potential to compete with neural networks. Here, we apply the open-source gradient boosting framework XGBoost to NRB removal. A dataset of 100,000 stochastically generated CARS (input) and Raman-like (label) spectra was used for the training of the decision trees with a train-validation split of 80/20, while a dataset of 1000 independently generated pairs of spectra was used for testing. After hyperparameter tuning, the best decision tree yielded a Pearson correlation coefficient of r=.97 (p<.001) between retrieved and ground-truth Raman-like spectra, corresponding to a mean squared error (MSE) of 0.00047. When the trained model is applied to experimental CARS spectra obtained from samples with well-known Raman peaks, the model reproduces all of the expected Raman peaks for each of the samples that were tested. Our results establish gradient boosting decision trees as an effective tool for CARS NRB removal in lieu of neural networks.
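As a rough sketch of this training setup (file names, hyperparameters, and the per-channel wrapper below are illustrative assumptions, not the tuned configuration reported above):

```python
# X holds simulated CARS spectra, Y the corresponding Raman-like (NRB-free)
# ground-truth spectra; one boosted-tree regressor is fit per output channel.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor
from sklearn.metrics import mean_squared_error
from scipy.stats import pearsonr
from xgboost import XGBRegressor

X = np.load("cars_spectra.npy")        # shape (100000, n_wavenumbers), illustrative
Y = np.load("raman_like_spectra.npy")  # shape (100000, n_wavenumbers), illustrative

X_tr, X_val, Y_tr, Y_val = train_test_split(X, Y, test_size=0.2, random_state=0)

model = MultiOutputRegressor(
    XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.1,
                 tree_method="hist", n_jobs=-1)
)
model.fit(X_tr, Y_tr)

Y_pred = model.predict(X_val)
print("MSE:", mean_squared_error(Y_val, Y_pred))
print("Pearson r:", pearsonr(Y_val.ravel(), Y_pred.ravel())[0])
```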
The Abraham-Minkowski controversy refers to the ambiguity in defining the momentum of light within a dielectric medium. The choice one has in the partitioning of the total stress energy tensor of a system into a “material” portion and an “electromagnetic” portion has historically led to vigorous debate. The difference between Abraham’s formulation of the momentum density of light in a medium and Minkowski’s version of the same quantity leads to either the presence or absence, respectively, of the so-called “Abraham force” at the level of the equations of motion. We propose an atom-interference experiment for measuring the quantum geometric phase which ultimately gives rise to the Abraham force.
NSERC Chairs for Inclusion in Science and Engineering brings together three leaders working to change the face of STEM in the Atlantic region: Dr. Svetlana Barkanova (Physics, Memorial University of Newfoundland), Dr. Kevin Hewitt (Physics, Dalhousie University), and Dr. Stephanie MacQuarrie (Chemistry, Cape Breton University). CISE-Atlantic will present an overview of our initiatives and focus on two important directions: (1) incorporating and accounting for outreach and EDI in the Tenure and Promotion (T&P) process, including leading a call to action to recognize service to the community in T&P; and (2) “Physics in Rural Classrooms”, a key initiative the team is leading. In Atlantic Canada, some rural communities have no physics teachers at all, or the teachers assigned to science or physics classes may struggle with some of the physics topics. By directly connecting presenters from all over Canada with remote schools, we are providing a welcome resource to teachers and exciting role models to students. Where relevant, the curriculum may refer to regional priorities and include Indigenous knowledge. The program offers four online guest talks per year to address specific curriculum for students in Grades 7 to 11. The talk will outline the motivation, logistics, and possible ways for our physics community to engage in the program.
The Tokai-to-Kamioka (T2K) experiment consists of an accelerator complex that collides protons on a graphite target, generating mesons which decay to neutrinos, together with a near detector and a far detector, Super-Kamiokande (SK), 295 km away, that detect these neutrinos. SK is a water Cherenkov detector: Cherenkov radiation from charged particles is detected by roughly 11000 photomultiplier tubes, whose output is reconstructed to infer particle type and kinematics.
The current reconstruction algorithm in SK, fiTQun, uses classical likelihood maximization to estimate particle type and kinematics from the Cherenkov rings produced when a neutrino interaction produces a charged lepton or hadron. This reconstruction algorithm performs excellently on the most important T2K metrics - for example, separating electron neutrino events from muon neutrino events - but improvements in distinguishing charged pions from muons, in vertex and momentum reconstruction, and in computation time would greatly benefit many T2K and SK analyses.
The Water Cherenkov Machine Learning (WatChMaL) collaboration seeks to update classical reconstruction processes with machine learning. For SK data, investigations have centered on using either ResNet or PointNet architectures for particle identification as well as vertex and momentum reconstruction. This talk will outline the data processing which the SK data must undergo to ensure adequate training, the challenges in adapting state-of-the-art machine learning algorithms to our target problem, and the current performance and comparisons with the classical algorithm. Future steps, including the potential of adversarial networks to mitigate detector systematics in Super-Kamiokande, will be discussed. Finally, other efforts in the WatChMaL collaboration will be described, including those on upcoming neutrino detectors.
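For orientation, the following is a minimal, illustrative ResNet-style classifier in PyTorch acting on two-channel (charge and time) hit maps; the input shape and class labels are assumptions made for the sketch, and this is not the WatChMaL or fiTQun code:

# Toy ResNet-style classifier for PMT-hit "images" (channels = charge, time).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1, self.bn2 = nn.BatchNorm2d(channels), nn.BatchNorm2d(channels)
        self.act = nn.ReLU()

    def forward(self, x):
        h = self.act(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        return self.act(x + h)              # skip connection

class HitMapClassifier(nn.Module):
    def __init__(self, n_classes=3):        # e.g. e-like / mu-like / pi-like (assumed labels)
        super().__init__()
        self.stem = nn.Conv2d(2, 32, 3, padding=1)   # 2 input channels: charge and time
        self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

# Forward pass on a fake batch of 8 events on a 64x64 unrolled detector grid
logits = HitMapClassifier()(torch.randn(8, 2, 64, 64))
print(logits.shape)   # torch.Size([8, 3])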
T2K (Tokai to Kamioka) is a long-baseline neutrino experiment designed to investigate neutrino oscillations. The experiment employs a neutrino beam generated by colliding a proton beam with a graphite target. This target area is enclosed within a helium vessel containing the Optical Transition Radiation (OTR) monitor. The OTR monitor plays a crucial role in measuring the profile and position of the proton beam, essential for characterizing neutrino production and ensuring target protection. However, we observe a discrepancy between the beam width measured by the upstream beam monitors and by the OTR, which could be caused by a broad background present in OTR images. We hypothesize that this background light originates from scintillation induced by the proton beam. In order to understand the background in OTR images, we have built a Geant4 simulation to test two scintillation mechanisms. We model primary scintillation from excitation of the helium gas by the proton beam, as well as secondary scintillation from the proton beam interacting with the upstream collimator and target. By confirming the Geant4 simulation results through comparison with ray-tracing studies and experimental data, we have developed an accurate model of the background light, essential for improving OTR measurements. Minimizing uncertainty in OTR light production mechanisms is critical for fine-tuning the proton beam orbit at the onset of the T2K experiment, while also providing significant insights for physics analysis.
The DEAP-3600 dark matter experiment is at the forefront of our efforts to uncover the mysteries of the universe’s dark matter abundance. In this presentation, we explore significant developments in energy calibration techniques used within the DEAP-3600 experiment, showcasing an innovative approach that uses high-energy gamma rays from both the background spectrum and the AmBe calibration spectrum. This new method not only improves the precision of energy calibration but also strengthens the experiment’s ability to search for dark matter particles.
We demonstrate the effectiveness of using high-energy gamma rays from the background spectrum to refine our understanding of the detector’s response across a wider energy range, thus enhancing the DEAP-3600 experiment’s capacity to identify potential dark matter interactions. Furthermore, it enables us to extend the utility of the detector to other rare event searches, including searches for 5.5 MeV solar axions and boron-8 solar neutrinos, broadening the scientific impact of our work.
This presentation will examine these alternative energy calibration techniques, providing insights into the recent results achieved by the DEAP-3600 experiment. Furthermore, we will explore the promising horizons offered by our detector upgrade. In doing so, we aim to emphasize the significance of these developments in advancing our understanding of dark matter.
Cryogenic (O(mK)) technologies are used for a variety of applications in astroparticle, nuclear, and quantum physics. The Cryogenic Underground TEst facility (CUTE) at SNOLAB provides a low-background and vibrationally isolated environment for testing and operating these future devices. The experimental stage of CUTE can reach a base temperature of ~12 mK and can hold a payload of up to 20 kg. The facility has been used to test detectors for SuperCDMS and is transitioning to become a SNOLAB user facility. This talk will discuss the main design features and operating parameters of CUTE, as well as the current and future status and availability of the facility.
In this presentation, we introduce an innovative method for achieving comprehensive renormalization of observables, such as theoretical predictions for cross sections and decay rates in particle physics. Despite previous efforts to address infinities through renormalization techniques, theoretical expressions for observables still exhibit dependencies on arbitrary subtraction schemes and scales, preventing full renormalization. We propose a solution to this challenge by introducing the Principle of Observable Effective Matching (POEM), enabling us to attain both scale and scheme independence simultaneously. To demonstrate the effectiveness of this approach, we apply it to the total cross section of $e^+e^- \to$ hadrons, utilizing 3- and 4-loop $\overline{\text{MS}}$-scheme expressions within perturbative Quantum Chromodynamics (pQCD). Through POEM and a process termed Effective Dynamical Renormalization, we achieve full renormalization of these expressions. Our resulting prediction, $1.052431^{+0.0006}_{-0.0006}$ at $Q = 31.6$ GeV, closely aligns with the experimental value of $R^{\text{exp}}_{e^+e^-} = 1.0527^{+0.005}_{-0.005}$, showcasing the efficacy of our method.
The nature of dark matter is one of the most important open questions in particle physics, and dark matter direct detection holds exciting promise of new physics. By operating state-of-the-art kilogram-scale detectors at millikelvin temperatures in one of the world’s deepest laboratories, SuperCDMS SNOLAB will be sensitive to a large range of dark matter masses. From October 2023 to March 2024, one SuperCDMS tower, consisting of six High Voltage detectors, was deployed at the Cryogenic Underground TEst facility (CUTE). This marks the first time that the new-generation SuperCDMS detectors have been operated in an underground, low-background environment, allowing for a comprehensive detector performance study and possibly early science results. In this talk, I will detail the detector testing efforts and present our first findings about these detectors.
The incorporation of foreign atoms into low-dimensional materials such as graphene is of interest for many applications, including biosensing, supercapacitors, and electronic device fabrication. In such processes, controlling the nature of the foreign-atom incorporation is a key challenge, as different moieties can contribute differently to doping and present different reactivities. With plasma processing increasingly requiring atomic-level precision, a detailed understanding of the mechanisms by which ions, electrons, reactive neutrals, excited species, and photons interact simultaneously with materials such as graphene has become more important than ever.
In recent years, we studied the interaction of low-pressure argon plasmas with polycrystalline graphene films grown by chemical vapor deposition. Spatially-resolved Raman spectroscopy conducted before and after each plasma treatment showed defect generation following a 0D defect curve, while the domain boundaries developed as 1D defects. Surprisingly and contrary to common expectations of plasma-surface interactions, damage generation was slower at the grain boundaries than within the graphene grains, a behavior ascribed to a new preferential self-healing mechanism. Through a judicious control of the properties of the flowing afterglow of a microwave N2 plasma obtained by space-resolved optical emission spectroscopy, we further demonstrated an aromatic incorporation of nitrogen groups in graphene with minimal ion-induced damage. The use of both reactive neutral atoms and N2 excited states (mostly metastable states) was a radical departure from what was the state of the art in atomic manipulation, mainly because excited species can provide sufficient energy for the activation of adatom covalent incorporation while leaving the translational energy of both the impinging species and the low-dimensional materials undisturbed. A selective nitrogen doping due to preferential healing of plasma-generated defects near grain boundaries was also highlighted.
Very recently, a new setup was specifically designed to examine plasma-graphene interactions. In-plasma Raman spectrometry is used to monitor the evolution of selected Raman peaks over nine points of the graphene surface. On one hand, for high-energy ions, defect generation progressively rises with the ion dose, with no significant variations after ion irradiation. On the other hand, for very-low-energy ions, defect generation increases at a lower rate and then decreases over a very long time scale after ion irradiation. Such self-healing dynamics cannot be explained by a simple carbon adatom-vacancy annihilation. Using a 0D model, it is demonstrated that various mechanisms are at play, including carbon adatom trapping by Stone-Wales defects and dimerization. These mechanisms compete with the self-healing of graphene at room temperature, and they slow down the healing process. Such features are not observed at higher energies, for which carbon atoms are sputtered from the graphene surface, with no significant populations of carbon adatoms. We believe that these experiments can be used as building blocks to examine the formation of chemically doped graphene films in reactive plasmas using, for example, argon mixed with traces of either N- or B-bearing gases.
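A minimal sketch of what a 0D rate-equation description of post-irradiation healing can look like is given below; the reaction channels and rate constants are assumed purely for illustration and are not the fitted model discussed in the talk:

# Illustrative 0D rate equations: vacancies (v) annihilate with carbon adatoms (a),
# while adatoms can also be trapped or dimerize, slowing the healing.
import numpy as np
from scipy.integrate import solve_ivp

k_rec, k_trap, k_dim = 1e-2, 2e-3, 5e-4   # assumed rate constants (arbitrary units)

def rates(t, y):
    v, a, trapped = y
    heal = k_rec * v * a                  # adatom-vacancy annihilation
    trap = k_trap * a                     # adatom capture by Stone-Wales-like sites
    dim = k_dim * a**2                    # adatom dimerization
    return [-heal, -heal - trap - 2 * dim, trap]

sol = solve_ivp(rates, (0, 5000), y0=[1.0, 1.0, 0.0], dense_output=True)
t = np.linspace(0, 5000, 6)
print(np.round(sol.sol(t)[0], 3))         # vacancy density vs. time: slow, incomplete healing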
Tungsten-based materials are the currently favoured choice for the first-wall/Plasma Facing Components (PFC) in plasma fusion devices such as the ITER tokamak. The behaviour of tungsten-based materials under high-fluence ion bombardment is therefore highly relevant for fusion device engineering problems. The USask Plasma Immersion Ion Implantation (PIII) system has been optimized for high-fluence ion bombardment of candidate PFC materials. PIII can be used to simulate the high-fluence ion bombardment encountered in plasma fusion devices, and therefore provides a useful tool for PFC testing. This talk will discuss a recent study of tungsten-based materials (pure tungsten, W-Ni-Cu heavy alloy, and W-Ta) PIII-implanted with helium and deuterium. The post-implant analysis of these materials was carried out using synchrotron-based Grazing-Incidence X-ray Diffraction (GIXRD) and Grazing-Incidence X-ray Reflection (GIXRR) at the Canadian Light Source. These data reveal important aspects of the effect of helium and deuterium ion bombardment on tungsten-based PFC materials, and shed light on their suitability for fusion devices.
The magnetic-field-dependent fluorescence properties of NV$^{-}$ center defects embedded within a diamond matrix have made them a candidate for solid-state qubits for quantum computing as well as for magnetic field sensing. Microwave plasma assisted chemical vapor deposition (MPCVD) of diamond with \emph{in situ} nitrogen doping has provided reproducibility and uniformity in the production of NV$^{-}$ centers on multiple substrates [1]. What has yet to be understood is how the nitrogen doping time affects the MPCVD process and the creation of NV$^{-}$ centers.
Analysis of the NV$^{-}$-containing diamond films has been carried out using Scanning Electron Microscopy (SEM), X-ray Diffraction (XRD), Raman spectroscopy, photoluminescence spectroscopy, and optical microscopy. In addition, calculated plasma parameters and models have been used to quantify the properties of the MPCVD process. This study investigates the effect of nitrogen doping time on the spectral lines associated with the 1333 cm$^{-1}$ diamond Raman peak, the 637 nm NV$^{-}$ photoluminescence peak, and the <111> and <220> diamond XRD peaks. This investigation aims to quantify the relationship between these spectral peaks, the NV$^{-}$ density, and the nitrogen doping time in terms of MPCVD process parameters.
[1] H. A. Ejalonibu, G. E. Sarty, and M. P. Bradley, ``Optimal parameter(s) for the synthesis of nitrogen-vacancy (NV) centres in polycrystalline diamonds at low pressure", \emph{Journal of Materials Science: Materials in Electronics} (2019). https://doi.org/10.1007/s10854-019-01376-z
The vast majority of attempts at synthesizing novel two-dimensional (2D) materials have relied on growth methods that work under thermodynamic equilibrium conditions, such as chemical vapor deposition, because these techniques have proven successful in yielding a plethora of technologically attractive, albeit thermodynamically stable, 2D materials. Out-of-equilibrium synthesis techniques are used much more rarely for 2D materials, and this reliance on equilibrium growth limits the variety of 2D systems that can be obtained. For example, 2D tungsten semi-carbide (W2C) is a metallic quantum material that has been theoretically predicted but had yet to be experimentally demonstrated, because the corresponding full carbide (WC) is energetically favored under thermodynamic equilibrium conditions. Here, we report a novel dual-zone remote plasma deposition reactor specially conceived to grow 2D carbides out of thermodynamic equilibrium. For tungsten carbide, this has led to deposits with well-tuned ratios of W and C precursors, as demonstrated by optical emission spectroscopy (OES) of the plasma precursors, which has ultimately allowed us to obtain few-layer 2D W2C. In the second part of our talk, we will discuss the behavior of remote-plasma-grown W2C 2D crystals under strain, and their investigation with scanning tunneling microscopy (STM) and spectroscopy (STS). We show that, in agreement with theoretical predictions, plasma-grown W2C offers a tunable density of electronic states at the Fermi level, a property that may be uniquely suited for obtaining fractional quantum Hall effects, superconductivity, and quantum thermal transport. Collectively, our study points to the critical relevance of out-of-equilibrium remote-plasma techniques for the growth of unprecedented 2D materials.
I will present our recent progress in designing algorithms that depend on quantum-mechanical resources – superposition, interference, and entanglement – for the solution of computational problems. Combined, these algorithms cover a large variety of challenging computational tasks spanning combinatorial optimization, machine learning, and model counting. First, I will discuss an algorithm for combinatorial optimization based on stabilizer states and Clifford quantum circuits. The algorithm iteratively builds a quantum circuit that maps an initial easy-to-prepare state to approximate solutions of optimization problems. Since Clifford circuits can be efficiently simulated classically, the result is a classical quantum-inspired algorithm. We benchmark this algorithm on synthetic instances of two NP-hard problems, namely MAXCUT and the Sherrington-Kirkpatrick model, and observe performance competitive with established algorithms for the solution of these problems. Next, I will present a quantum machine learning (QML) model based on matchgate quantum circuits. This restricted class of quantum circuits is efficiently simulable classically through a mapping to free Majorana fermions. We apply our matchgate QML model to commonly studied datasets, including MNIST and UCI ML Breast Cancer Wisconsin (Diagnostic), and obtain better classification accuracy than corresponding unrestricted QML models. Finally, I will outline ongoing work on algorithms for hard problems in #P, the computational complexity class encompassing counting problems. These examples demonstrate that (a) using restricted quantum resources as an algorithmic design principle of classical algorithms may lead to significant advantages even without a quantum computer, and (b) the frontier of near-term quantum advantage may lie further in the future than anticipated by some.
Biological systems need to react to stimuli over a broad spectrum of timescales. If and how this ability can emerge without external fine-tuning is a puzzle. This problem has been considered in discrete Markovian systems, where results from random matrix theory could be leveraged. Here, we study a generic model for Markovian dynamics with parameters controlling the dynamic range of matrix elements via the uniformity and correlation of state transitions. Analytic predictions were obtained for the critical parameter values at which transitions between random and non-random dynamics occur, before the model was applied to real data. The model was applied to electrocorticography data from monkeys at wakeful rest undergoing an anesthetic injection to induce sleep, followed by an antagonist injection administered to bring the monkey back to wakefulness. These data were processed into discrete Markov models at regular time intervals throughout the task. The Markov models were then analyzed with respect to the uniformity and correlation of their transition rates, as well as the resulting entropy and entropy-rate measurements. The results were quantitatively understood in terms of the random model, and the brain activity was found to cross over a predicted critical regime. Moreover, the interplay between the uniformity and correlation parameters coincided with predictions for maintaining criticality across a task. The results are robust enough that the monkey's states of consciousness were identifiable through the parameter values, with sudden changes correlating with transitions between wakefulness and rest.
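As a generic illustration of this kind of analysis (not the authors' pipeline; the number of states and the symbol sequence below are placeholders), a transition matrix can be estimated from a discretized time series and its entropy and entropy rate computed as follows:

# Estimate a discrete Markov transition matrix from a symbol sequence and
# compute the entropy of its stationary distribution and its entropy rate.
import numpy as np

def transition_matrix(seq, n_states):
    counts = np.zeros((n_states, n_states))
    for i, j in zip(seq[:-1], seq[1:]):
        counts[i, j] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def entropy_and_rate(P):
    # Stationary distribution: left eigenvector of P with eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi /= pi.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log2(P), 0.0)
    H_state = -np.sum(pi * np.where(pi > 0, np.log2(pi), 0.0))   # entropy (bits)
    H_rate = -np.sum(pi[:, None] * P * logP)                     # entropy rate (bits/step)
    return H_state, H_rate

rng = np.random.default_rng(1)
seq = rng.integers(0, 4, size=10_000)            # stand-in for discretized ECoG states
print(entropy_and_rate(transition_matrix(seq, 4)))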
We show how thin wall magnetic monopoles can exist in a false vacuum, hence the name false monopoles, and how they can trigger the decay of the false vacuum.
In physics, spacetime is always assumed to be a smooth $4$-manifold with a fixed (standard) differential structure. Two smooth $n$-manifolds are said to be exotic if they have the same topology but different differential structures. S. Donaldson showed that there exist exotic differential structures on $\mathbb{R}^4$. In the compact case, J. Milnor and M. Kervaire classified exotic differential structures on $n$-spheres $\mathbb{S}^n$. A fundamental question remains to be answered: do exotic differential structures on spacetime play any role in physics? The possibility of applications of exotic structures in physics was first suggested by E. Witten in his article "Global gravitational anomalies". Trying to give a physical meaning to exotic spheres, Witten conjectured that exotic $n$-spheres should be thought of as gravitational instantons in $n$-dimensional gravity and should give rise to gravitational solitons in $(n+1)$ dimensions. In this talk, we address these questions in two steps. First, we construct Kaluza-Klein $SO(4)$ monopoles on Milnor's exotic $7$-spheres (solutions to the 7-dimensional Einstein equations with cosmological constant). Second, taking exotic $7$-spheres as models of spacetime, we address the physical effects of exotic smooth structures on the energy spectra of elementary particles. Finally, we discuss other possible applications of exotic $7$-spheres in other areas of physics.
A generally covariant gauge theory is presented which leads to the Gauss constraint but lacks both the Hamiltonian and spatial diffeomorphism constraints, and possesses local degrees of freedom. The canonical theory therefore resembles Yang-Mills theory without the Hamiltonian. We describe its observables, canonical quantization, and some generalizations.
Active matter is a term used to describe matter that is composed of a large number of self-propelled active ‘particles’ that individually convert stored or ambient energy into systematic motion. Examples include a flock of birds, a school of fish, or at smaller scales a suspension of bacteria or even the collective motion within a human cell. When viewed collectively, active matter is an out-of-equilibrium material. This talk focuses on active matter systems where the active particles are very small, for example bacteria or chemically active colloidal particles. The motion of small active particles in homogeneous Newtonian fluids has received considerable attention, with interest ranging from phoretic propulsion to biological locomotion, whereas studies on active bodies immersed in inhomogeneous fluids are comparatively scarce. In this talk I will show how the dynamics of active particles can be dramatically altered by the introduction of fluid inhomogeneity and discuss the effects of spatial variations of fluid density, viscosity, and other fluid complexity.
Enzymes are valuable because they can catalyze reactions by binding transiently, greatly enhancing the probability that "substrate" molecules convert to "product" molecules. But do they receive a physical kick while this reaction is proceeding? This would make them substrate-driven nanomotors, or nanoscale active matter. Numerous fluorescence-based measurements (and a few others) say yes; several other measurements now say no!
We examine the diffusion of enzymes attached to nanoparticles (NPs) by multiple techniques. We also measure the enzyme activity of these enzyme-functional NPs. I will talk about the interesting behaviour of the enzyme activity of enzyme-functional NPs. And I might even answer the question in the title!
In recent years, there has been a surge of interest in minimally invasive medical techniques, with magnetic microrobots emerging as a promising avenue. These microrobots possess the remarkable ability to navigate through various media, including viscoelastic and non-Newtonian fluids, thereby facilitating targeted drug delivery and medical interventions. However, while many existing designs draw inspiration from micro-swimmers found in biological systems like bacteria and sperm, they often rely on a contact-based approach for payload transportation, which can complicate release at the intended site. Our project aimed to explore the potential of helical microrobots for non-contact delivery of drugs or cargo. We conducted a comprehensive analysis of the shape and geometric parameters of the helical microrobot, with a specific focus on its capacity to transport passive filaments. Through our examination, we propose a novel design comprising three sections with alternating handedness, including two pulling and one pushing microhelices, to enhance the capture and transportation of passive filaments in Newtonian fluids using a non-contact method. Furthermore, we simulated the process of capturing and transporting the passive filament and evaluated the functionality of the newly designed microrobot. Our findings offer valuable insights into the physics of helical microrobots and their potential applications in medical procedures and drug delivery. Additionally, the proposed non-contact approach for delivering filamentous cargo holds promise for the development of more efficient and effective microrobots in medical applications.
Molecular motors are nanoscale machines capable of transducing chemical energy into mechanical work. Inspired by biology, our transnational team has conceived different designs of artificial motors comprised of protein building blocks – proteins, because these are Nature's choice of such functional units. We have recently characterized the motility of one of these designs – the Lawnmower – and found that its dynamics demonstrate motor-like properties. I’ll describe the burnt-bridge ratchet principle of Lawnmower motility and our simulations and experiments that explore its motion.
Work in my group on this project was led by PhD graduate Chapin Korosec, with funding from NSERC.
Publication: Korosec et al., Nature Communications 15, 1511 (2024)
Muon capture is a nuclear-weak process in which a negatively charged muon, initially in an atomic bound state, is captured by the atomic nucleus, resulting in atomic number reduction by one and emission of a muon neutrino. Thanks to the high momentum transfer involved in the process, it is one of the most promising probes for the yet unobserved neutrinoless double-beta decay. To help the planned muon-capture experiments, reliable theory predictions are of paramount importance.
To this end, I will discuss recent progress in ab initio studies on muon capture in light nuclei, focusing in particular on the ab initio no-core shell model. These systematically improvable calculations are based on nuclear interactions derived from chiral effective field theory. The computed rates are found to be in good agreement with available experimental counterparts, motivating future experimental and theoretical explorations in light nuclei.
Recent analysis of Fermi decays by C.Y. Seng and M. Gorshteyn and the corresponding $V_{ud}$ determination have revealed a degree of tension with Cabibbo-Kobayashi-Maskawa (CKM) matrix unitarity, confirmation of which would indicate several potential deficiencies within the Standard Model (SM) weak sector. Extraction of $V_{ud}$ requires electroweak radiative corrections (EWRC) from theory to be applied to experimentally obtained $ft$-values. Novel calculations of corrections sensitive to hadronic structure, i.e., the $\gamma W$-box, are at the heart of the recent tension. Moreover, to further improve on the extraction of $V_{ud}$, a modern and consistent treatment of the two nuclear-structure-dependent corrections is critical. These corrections are (i) $\delta_C$, the isospin symmetry breaking correction, and (ii) $\delta_{NS}$, the EWRC representing evaluation of the $\gamma W$-box on a nucleus. Preliminary estimations of $\delta_{NS}$ have been made in the aforementioned analysis; however, that approach cannot include effects from low-lying nuclear states, which require a true many-body treatment. Via collaboration with C.Y. Seng and M. Gorshteyn and use of the Lanczos subspace method, these corrections can be computed in ab initio nuclear theory for the first time. We apply the no-core shell model (NCSM), a nonrelativistic quantum many-body theory for describing low-lying bound states of $s$- and $p$-shell nuclei starting solely from nuclear interactions. We will present preliminary results for $\delta_{NS}$ and $\delta_{C}$ determined in the NCSM for the $^{10}\text{C} \rightarrow {}^{10}\text{B}$ beta transition, with the eventual goal of extending the calculations to $^{14}\text{O} \rightarrow {}^{14}\text{N}$ and $^{18}\text{Ne} \rightarrow {}^{18}\text{F}$.
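For context, the Lanczos subspace method referenced above reduces a large Hermitian matrix to a small tridiagonal one whose extreme eigenvalues converge quickly. A generic textbook-style sketch (not the NCSM implementation, with a random symmetric matrix standing in for a many-body Hamiltonian) is:

# Lanczos iteration: build a Krylov-space tridiagonal matrix and take its lowest
# eigenvalue as an approximation to the ground-state energy of a large matrix.
import numpy as np

def lanczos_ground_state(H, v0, m=60):
    v_prev = np.zeros_like(v0, dtype=float)
    v = v0 / np.linalg.norm(v0)
    alphas, betas, beta = [], [], 0.0
    for k in range(m):
        w = H @ v - beta * v_prev
        alpha = v @ w
        alphas.append(alpha)
        w = w - alpha * v
        beta = np.linalg.norm(w)
        if beta < 1e-12 or k == m - 1:
            break
        betas.append(beta)
        v_prev, v = v, w / beta
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    return np.linalg.eigvalsh(T)[0]

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 500))
H = (A + A.T) / 2                                   # stand-in for a sparse Hamiltonian
print(lanczos_ground_state(H, rng.normal(size=500)))
print(np.linalg.eigvalsh(H)[0])                     # exact value for comparison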
In recent years, there has been a dramatic improvement in our ability to probe the nuclear many-body problem, due to the availability of several different powerful many-body techniques and sophisticated nuclear interactions derived from chiral effective field theory (EFT). In a recent paper [1], we investigated the perturbativeness of these chiral EFT interactions in a many-body context, using quantum Monte Carlo (QMC). QMC techniques have been used to probe a variety of nuclear many-body systems, ranging from light nuclei to neutron matter [2]. There are a variety of ways in which the Monte Carlo method can be applied to the many-body problem. The diffusion Monte Carlo method, which involves propagating a many-body system through imaginary time, can be used in the continuum, where it is often improved with the application of auxiliary fields to handle complicated nuclear correlations, as well as in a lattice formalism, where particles are allowed to hop between lattice sites and interact with each other when they occupy the same site. In a recent publication, we began investigating how this lattice formulation, which is typically used to study condensed matter systems, can be applied to systems of interest to nuclear physics [3]. This presentation will discuss recent work involving the application of QMC approaches to the nuclear many-body problem, as well as a further discussion on how these methods can be improved to help expand our understanding of nuclear physics.
[1] R. Curry, J. E. Lynn, K. E. Schmidt, and A. Gezerlis, Second-Order Perturbation Theory in Continuum Quantum Monte Carlo Calculations, Phys. Rev. Res. 5, L042021 (2023).
[2] J. Carlson et al., Quantum Monte Carlo Methods for Nuclear Physics, Rev. Mod. Phys. 87, 1067 (2015).
[3] R. Curry, J. Dissanayake, S. Gandolfi, and A. Gezerlis, Auxiliary Field Quantum Monte Carlo for Nuclear Physics on the Lattice, arXiv:2310.01504.
Anomalies in the systematics of nuclear properties challenge our understanding of the underlying nuclear structure. One such anomaly emerges in the Zr isotopic chain as a dramatic ground-state shape change, abruptly shifting from a spherical to a deformed shape at N=60. Only a few state-of-the-art theoretical models have successfully reproduced this deformation onset in $^{100}$Zr and helped to establish the shape coexistence in lighter Zr isotopes [1, 2]. Of particular interest is $^{98}$Zr, a transitional nucleus lying on the interface between spherical and deformed phases. Extensive experimental and theoretical research efforts have been made to study the shape coexistence phenomena in this isotope [3,4,5,6]. Although they provide an overall understanding of $^{98}$Zr's nuclear structure, uncertainties remain in interpreting its higher-lying bands. Specifically, two recent studies utilizing Monte Carlo Shell Model (MCSM) [3] and Interacting Boson Model with configuration mixing (IBM-CM) [4] calculations have presented conflicting interpretations. The MCSM predicts multiple shape coexistence with deformed band structures, whereas the IBM-CM favours multiphonon-like structures with configuration mixing.
To address these uncertainties, a $\beta$-decay experiment was conducted at the TRIUMF-ISAC facility utilizing the 8$\pi$ spectrometer with $\beta$-particle detectors. The high-quality and high-statistics data obtained enabled the determination of branching ratios for weak transitions, which are crucial for assigning band structures. In particular, the key 155-keV $2_{2}^{+} \rightarrow 0_{3}^{+}$ transition was observed, and its branching ratio measured, permitting the $B$(E2) value to be determined. Additionally, $\gamma$-$\gamma$ angular correlation measurements enabled the determination of both spin assignments and mixing ratios. As a result, the $0^+$, $2^+$, and $I=1$ natures of multiple newly observed and previously known (but not firmly assigned) states have been established. The new results revealed the collective character of certain key transitions, supporting the multiple shape coexistence interpretation provided by the MCSM framework. These results will be presented and discussed in relation to both MCSM and IBM-CM calculations.
References
[1] T. Togashi, Y. Tsunoda, T. Otsuka, and N. Shimizu, Phys. Rev. Lett. 117, 172502 (2016).
[2] N. Gavrielov, A. Leviatan and F. Iachello, Phys. Rev. C 105, 014305 (2022).
[3] P. Singh, W. Korten et al., Phys. Rev. Lett. 121, 192501 (2018).
[4] V. Karayonchev, J. Jolie et al., Phys. Rev. C 102, 064314 (2020).
[5] J. E. Garcia-Ramos, K. Heyde, Phys. Rev. C 100, 044315 (2019).
[6] P. Kumar, V. Thakur et al., Eur. Phys. J. A 57, 36 (2021).
Classical chaos arises from the inherent non-linearity of dynamical systems. However, quantum maps are linear; therefore, the definition of chaos is not straightforward. To address this, we study a quantum system that exhibits chaotic behavior in its classical limit. One such system of interest is the kicked top model [Haake, Kuś, and Scharf, Z. Phys. B 65, 381 (1987)], where classical dynamics are governed by Hamilton’s equations on phase space, while quantum dynamics are described by the Schrödinger equation in Hilbert space. In the kicked top model, non-linearity is introduced through the exponent of the angular momentum term, denoted as J^p. Notably, when p = 1, the system remains integrable. Extensive research has focused on the case where p = 2. In this study, we investigate the critical degree of non-linearity necessary for a system to exhibit chaotic behavior. This is done by modifying the original Hamiltonian such that a non-integer value of p is allowed. We categorize the modified kicked top into two regimes: 1 ≤ p ≤ 2 and p > 2, and analyze their distinct behaviors. Our findings reveal that the system loses integrability for any p > 1, leading to the emergence of chaos. Moreover, we observe that the intensity of chaos amplifies with increasing non-linearity. However, as we further increase p (> 2), we observe unexpected behavior, where chaos is suppressed and regions of chaotic sea are confined to a small region of phase space. This study sheds light on the complex interplay between non-linearity and chaos, offering valuable insights into their dynamic behavior.
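A minimal numerical sketch of the quantum kicked top is shown below: one Floquet period is a linear rotation about y followed by a nonlinear J_z^p kick. The kick normalization and parameter values are assumptions made for illustration, and an integer p is used so that the matrix power is unambiguous (the non-integer p of the study requires a suitably regularized J_z^p):

# Quantum kicked top Floquet operator and stroboscopic evolution of <J_z>.
import numpy as np
from scipy.linalg import expm

def kicked_top_floquet(jspin, kappa, alpha, p=2):
    m = np.arange(-jspin, jspin + 1, dtype=float)
    Jz = np.diag(m)
    # Raising operator in the |j, m> basis (ascending m)
    Jplus = np.diag(np.sqrt(jspin * (jspin + 1) - m[:-1] * (m[:-1] + 1)), -1)
    Jy = (Jplus - Jplus.T) / 2j
    kick = expm(-1j * kappa / (p * jspin ** (p - 1)) * np.linalg.matrix_power(Jz, p))
    rotation = expm(-1j * alpha * Jy)
    return kick @ rotation

jspin = 40
U = kicked_top_floquet(jspin, kappa=6.0, alpha=np.pi / 2, p=2)   # strongly kicked regime
psi = np.zeros(2 * jspin + 1, dtype=complex)
psi[-1] = 1.0                                                    # start in the m = +j state
Jz = np.diag(np.arange(-jspin, jspin + 1, dtype=float))
for step in range(5):
    print(step, np.real(psi.conj() @ Jz @ psi) / jspin)          # <J_z>/j scrambles quickly
    psi = U @ psi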
Bell's inequalities provide a practical method for testing whether correlations observed between spatially separated parts of a system are compatible with any local hidden variable description. For $2$-qubit pure states, entanglement and nonlocality as measured by Bell inequality violations are directly related. However, for multiqubit pure states, the much more complex relation between N-qubit entanglement and nonlocality has not yet been explored in much detail. In this work, we analyze the violation of the Svetlichny-Bell inequality by N-qubit generalized GHZ (GGHZ) states, and identify members of this family of states that do not violate the inequality. GGHZ states are a generalization of the well known GHZ state, which is a useful entanglement resource. GGHZ states are hence natural candidates to explore for extending various quantum information protocols, like controlled quantum teleportation, to more than three parties. Our results raise interesting questions regarding characterization of genuine multipartite correlations using Bell-type inequalities.
Among the different approaches to studying the structure of atomic nuclei comprising protons and neutrons, the nuclear shell model formalism is widely successful across different regions of the nuclear chart. However, applying the shell model formalism becomes difficult for heavier mass regions, as the Hilbert space needed to define such a problem scales exponentially with increasing number of nucleons. Quantum computing is a promising way to deal with such a scenario; however, for systems of practical relevance, the amount of quantum resources required is beyond the capabilities of today’s hardware. Quantum entanglement provides a distinctive viewpoint into the fundamental structure of strongly correlated systems, including atomic nuclei. There is a growing interest in understanding the entanglement structure of nuclear systems, and leveraging this knowledge to simulate many-nucleon systems more efficiently.
In this work, we apply entanglement measures to reduce the quantum resources required to simulate a nuclear many-body system. We calculated the single-orbital entropies as more neutrons were added for selected p-shell (Z = 2, 3, and 4) nuclei within the nuclear shell model formalism. In the case of the Li (Z = 3) isotopic chain, the proton single-orbital entanglement of the 0p1/2 orbital in $^6$Li (1+) is 1.7 times larger than in $^7$Li (3/2-) and $^8$Li (2+). Also, the single-orbital entanglement of the proton 0p1/2 orbital in $^9$Li (3/2-) is five times smaller than that of $^6$Li (1+). Hence, if the less entangled orbitals are treated differently, more efficient simulation circuits with fewer qubits and fewer quantum gates are possible for nuclei like $^9$Li (3/2-). Moreover, other entanglement metrics like mutual information can provide valuable insight into the underlying structure of a few-nucleon system. This method of reducing quantum resources could be useful for other neutron-rich nuclei of different isotopic chains.
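As a toy illustration of the single-orbital entropy discussed above (with made-up occupation-basis states rather than the shell-model wavefunctions of the Li isotopes), the entropy of one orbital follows from its occupation probability when the particle number is fixed:

# Single-orbital entanglement entropy for a toy Fock-space wavefunction.
import numpy as np

def single_orbital_entropy(amplitudes, occupations, orbital):
    """amplitudes: complex amplitudes; occupations: list of 0/1 tuples per basis state."""
    p = sum(abs(a) ** 2 for a, occ in zip(amplitudes, occupations) if occ[orbital] == 1)
    probs = np.array([1 - p, p])
    probs = probs[probs > 0]
    return float(-(probs * np.log(probs)).sum())       # in nats

# Toy 2-particle state over 4 orbitals: equal superposition of |1100> and |1010>
amps = np.array([1, 1]) / np.sqrt(2)
occs = [(1, 1, 0, 0), (1, 0, 1, 0)]
for orb in range(4):
    print(orb, round(single_orbital_entropy(amps, occs, orb), 3))
# Orbital 0 is always occupied (entropy 0); orbitals 1 and 2 are maximally mixed (ln 2).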
Many-body entanglement is essential for most quantum technologies, but generating it on a qubit platform is generally experimentally challenging. On the other hand, continuous-variable (CV) cluster states have recently been realized among over a million bosonic modes. In our work, we present a hybrid CV-qubit approach to generate entanglement between many qubits by downloading it from efficiently generated CV cluster states. Our protocol is based on hybrid CV-qubit quantum teleportation in the displaced Gottesman-Kitaev-Preskill (GKP) basis. We develop an equivalent circuit model to characterize the dominant CV errors: finite squeezing and loss. Our results show that only 6dB squeezing is sufficient for robust qubit memory, and 12dB squeezing is sufficient for fault-tolerant quantum computation. We also show the correspondence between loss and qubit dephasing. Our protocol can be implemented with operations that can be commonly found in many bosonic platforms and does not require strong hybrid coupling.
Studying emergent phenomena in classical statistical physics remains one of the most computationally difficult problems. With an appropriate algorithm to renormalize the system, tensor networks are one of the most effective methods for studying these problems. In the context of research areas like condensed matter, the result is a coarse-grained and truncated system where only the most relevant states, ranked by entropy, have been retained. An explosion of numerical algorithms is available for computing general properties of a statistical physics system, such as specific heat, magnetization, and free energies; however, an overview of which tensor algorithms are best and where they must be improved would be highly advantageous for the scientific community. With our newly coded library of open-access tensor network algorithms, we make new recommendations about which algorithms to use, speculate on improvements for future algorithms, and provide information on how to implement novel tensor networks using our framework, the DMRjulia library.
M.R.G.F. acknowledges support from the Summer Undergraduate Research Award (SURA) from the Faculty of Science at the University of Victoria and the NSERC CREATE in Quantum Computing Program, grant number 543245. This research was undertaken, in part, thanks to funding from the Canada Research Chairs Program. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). This work has been supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) under grants RGPIN-2023-05510 and DGECR-2023-00026.
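As a toy illustration of computing a thermodynamic quantity by tensor contraction (far simpler than the renormalization-group algorithms in the library, and not code from DMRjulia), the transfer matrix of the 1D Ising model can be contracted to give the exact free energy per site:

# 1D Ising model free energy per site from its 2x2 transfer matrix.
import numpy as np

def ising_1d_free_energy(beta, J=1.0, h=0.0):
    # Transfer matrix T[s, s'] = exp(beta*(J*s*s' + h*(s + s')/2)), s = +/-1
    s = np.array([1.0, -1.0])
    T = np.exp(beta * (J * np.outer(s, s) + h * (s[:, None] + s[None, :]) / 2))
    lam = np.linalg.eigvalsh(T).max()
    return -np.log(lam) / beta

beta = 1.0
# Exact zero-field result for comparison: f = -(1/beta) * ln(2*cosh(beta*J))
print(ising_1d_free_energy(beta), -np.log(2 * np.cosh(beta)) / beta)

Algorithms such as TRG or DMRG instead coarse-grain higher-rank tensors, truncating at each step by keeping the states with the largest singular values.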
Superconducting radiofrequency (SRF) cavities are an enabling technology for modern high-power accelerators supporting materials science (e.g. Canadian Light Source), nuclear physics (e.g. TRIUMF), and particle physics (e.g. LHC, Electron-Ion Collider) experiments. The behaviour of superconductors under radiofrequency fields is distinctly different from the DC case, being intrinsically dissipative at temperatures above 0 K and strongly dissipative above the lower critical field Hc1. This requires dedicated research and development for reliable operation and for advancing the technology beyond the state of the art. One particular technical challenge is the efficient recovery and mitigation of performance degradation during operation, to maximize availability for experiments. Under ideal conditions, state-of-the-art SRF cavities reach fundamental limitations in terms of accelerating gradient (energy gain per unit length) and power dissipation. Further performance increases require specialized chemical and surface treatments, tailored to specific cavity types (optimized in shape for different charged particles from electrons to heavy ions), and exploring heterostructure nanomaterials. I will highlight recent research from TRIUMF and UVic, including results from testing new surface treatments on unique multimode coaxial resonators and materials science investigations using beta-detected nuclear magnetic resonance (beta-NMR) and muon spin rotation and relaxation (muSR) combined with state-of-the-art material analysis techniques (transmission electron microscopy, secondary ion mass spectrometry). The very low dissipation of SRF technology is also of interest for applications in quantum technology. Based on SRF cavity data, we have developed a model for two-level-system losses.
The Department of Physics, together with the NEWS-G collaboration at Queen’s University, is developing Spherical Proportional Counters (SPC) aimed at dark matter detection research. The response of SPCs to nuclear recoils from interactions of hypothetical dark matter particles can best be calibrated with a high-intensity beam of low-energy neutrons (~10 keV – 100 keV). Presently, the number of facilities with such neutron sources is quite small. This project aims to design and build a low-energy neutron source at the proton accelerator facility of the Reactor Materials Testing Laboratory (RMTL).
This new neutron source consists of a proton beam of 1.89 MeV – 2 MeV energy which bombards a lithium fluoride (LiF) target. The target is made by evaporating LiF onto a tantalum substrate. This target plate is then mounted on an aluminium nitride backing plate, which together sit on a 304L stainless steel flange that seals the vacuum chamber.
According to theoretical calculations, LiF produces a good yield of neutrons, with the best yield at an angle of 45°. However, the energy spectrum of these neutrons extends from ~31 keV upward. To achieve a monoenergetic source of neutrons at 24 keV, we are also developing a collimator with an iron filter.
The collimator would consist of a combination of shielding materials, particularly borated polyethylene (B-PE), non-borated polyethylene (PE), and lead (Pb). The B-PE would thermalize the neutrons leaving the source at undesirable angles, and the Pb shielding would absorb the gamma radiation created in the B-PE, as this would induce undesirable background in the SPC.
A thin layer of PE will also be used to decrease the energy of the neutrons, originally in the energy range above ~31 keV, down to a suitable energy range before they reach the iron filter, producing a 24 keV neutron beam.
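For orientation, the neutron energies quoted above can be estimated from standard nonrelativistic two-body kinematics for the 7Li(p,n)7Be reaction. The sketch below uses mass numbers in place of exact masses and a Q-value of -1.644 MeV, and evaluates only the higher-energy neutron branch at 0°, so the numbers are illustrative only:

# Two-body kinematics estimate for 7Li(p, n)7Be near threshold (~1.88 MeV).
import numpy as np

def neutron_energy(Tp, theta_deg, Q=-1.644, m_p=1.0, m_n=1.0, m_Be=7.0):
    """Higher-energy neutron branch, energies in MeV, masses in mass-number units."""
    theta = np.radians(theta_deg)
    a = np.sqrt(m_p * m_n * Tp) * np.cos(theta)
    b = m_p * m_n * Tp * np.cos(theta) ** 2 + (m_Be + m_n) * (m_Be * Q + (m_Be - m_p) * Tp)
    if b < 0:
        return None                     # kinematically forbidden at this angle and energy
    return ((a + np.sqrt(b)) / (m_Be + m_n)) ** 2

print("threshold ~", round(1.644 * 8 / 7, 3), "MeV")
for Tp in (1.881, 1.89, 2.00):          # proton kinetic energies in MeV
    En = neutron_energy(Tp, 0.0)
    print(f"Tp = {Tp} MeV -> En(0 deg) ~ {1e3 * En:.0f} keV")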
The High Energy Light Isotope eXperiment (HELIX) is a balloon-borne payload designed to measure the isotopic abundances of light cosmic ray nuclei. Precise measurements of the 10Be isotope from 0.2 GeV/n to beyond 10 GeV/n will allow the refining of cosmic ray propagation models, critical for interpreting excesses and unexpected fluxes reported by several space-borne instruments in recent years. Beryllium isotopes will be observed by HELIX with the first in a series of long duration balloon flights this summer in the Arctic. Upon completion of its maiden voyage, the detectors that make up the payload will be upgraded for a second flight to enhance performance and increase statistics. Potential upgrades to the HELIX hodoscope, an instrument that contributes to measuring particle trajectories in the experiment, are being developed for this purpose.
The hodoscope is a position-measuring detector that uses ribbons of scintillating fibres, read out by silicon photomultipliers, to provide the location of incident particles with high resolution. A prototype for an updated optical sensor readout system is being constructed at Queen’s University without fibre weaving. In this presentation, I will discuss the design and development status of the prototype hodoscope for future HELIX payloads.
A Laser Ablation Source (LAS) can be used as an adaptable tool for ion production in mass spectrometry experiments [1]. The choice in ablation material allows for diverse production of ion species. This flexibility particularly complements online ion-trap-based mass spectrometry experiments, which require a variety of calibrant species across a wide range of masses. A LAS is currently being developed as an ion source for TRIUMF's Ion Trap for Atomic and Nuclear Science (TITAN). The LAS will couple to TITAN's Multiple-Reflection Time of Flight Mass Spectrometer (MR-TOF-MS) [2] to enhance the variety of stable and long-lived species for calibration during on-line experiments, off-line experiments, and technical developments. The LAS will additionally aid the other ion traps at TITAN in tuning prior to experiments through the production of chemical or mass analogs of targeted isotopes. Optimization of the ion optics and the overall design have been completed. Manufacturing is underway at the University of Calgary, where assembly and off-line testing will be completed before installation onto the on-line TITAN facility. The status of the LAS will be discussed, including characterizations of the assembled system such as the spatial resolution of the laser ablation spot on multi-material targets. The addition of the LAS to TITAN will not only improve the precision of online ion-trap-based mass spectrometry experiments through the introduction of isobaric mass calibrants, but also open new pathways for TITAN to engage in a variety of environmental and medical studies.
References
1. K. Murray et al., "Characterization of a Spatially Resolved Multi-Element Laser Ablation Ion Source", International Journal of Mass Spectrometry 472, 116763 (2022). doi: 10.1016/j.ijms.2021.116763
2. T. Dickel et al., "Recent upgrades of the multiple-reflection time-of-flight mass spectrometer at TITAN, TRIUMF", Hyperfine Interactions 240(1) (2019). doi: 10.1007/s10751-019-1610-y
Polycyclic hydrocarbons (PHs) are carcinogens often present in water due to contamination from oil and vehicle exhaust, and their removal is difficult due to their resistance to conventional water purification methods. Here, we present a thorough synchrotron-based characterization of carbon nanoparticles derived from different parts of the cannabis plant (hurd and bast) and of their ability to adsorb PHs in an aqueous environment, with anthracene as a case study. The synthesis of the carbon nanoparticles was carried out by pyrolysis at varying temperatures followed by strong acid (HNO3:H2SO4) treatment. The goal is to establish a structure-function relationship between the synthesis parameters and the ability of these nanoparticles to promote PH adhesion at their surface via pi-pi electron stacking. Synchrotron-based X-ray absorption spectroscopy (XAS) is used to investigate the composition of these nanoparticles as well as their electronic structure, which profoundly differs from that of graphene oxide and carbon dots and more closely resembles that of amorphous carbon. Along with dynamic light scattering, XAS also demonstrates that defect-free sp2 carbon clusters (with limited hydroxyl and carboxyl groups at their surface) are necessary for the interfacial adhesion of anthracene at their surfaces. Our XAS results are also corroborated by benchtop techniques including Fourier-transform infrared (FTIR), photoluminescence (PL), and UV-visible optical spectroscopies, as well as atomic force microscopy (AFM). We demonstrate that a unique advantage of our biomass-derived carbon nanoparticles rests in the rapidity of their anthracene capture process, which requires only a few seconds, as opposed to several hours for other systems proposed in the literature. Collectively, our study demonstrates the importance of advanced XAS techniques for the characterization of pi-pi electron stacking in carbon nanosystems.
The ability to measure small deformations or strains is useful for understanding many aspects of materials, especially in soft condensed matter systems. When analyzed, systematic shifts of the speckles arising from small-angle coherent x-ray diffraction enable the flow patterns of particles in elastomers to be inferred. This information is obtained from cross-correlations of speckle patterns. This speckle-tracking technique measures strain patterns with an accuracy similar to X-ray single-crystal measurements, but in amorphous or highly disordered materials.
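A minimal sketch of the underlying cross-correlation step (illustrative only; a real analysis works on many sub-regions with sub-pixel refinement to build strain maps) recovers a known shift between two synthetic speckle patterns:

# Integer-pixel shift between two speckle images from the peak of their
# FFT-based cross-correlation.
import numpy as np

def crosscorr_shift(img_a, img_b):
    A, B = np.fft.fft2(img_a), np.fft.fft2(img_b)
    xcorr = np.fft.ifft2(A * np.conj(B)).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Map peak position to a signed shift (circular convention)
    return [p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape)]

rng = np.random.default_rng(0)
speckle = rng.random((256, 256))
shifted = np.roll(speckle, shift=(3, -5), axis=(0, 1))
print(crosscorr_shift(shifted, speckle))   # expected [3, -5]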
Epitaxy of group IV semiconductors is a key enabler for electronics, telecommunications, and quantum devices. In the case of Sn, the growth challenges posed by lattice mismatch and the low solid solubility of Sn (<0.1%) in Si and Ge are significant. This research addresses these challenges by investigating ion implantation as a non-equilibrium growth technique combined with post-implantation annealing. A range of Sn concentrations was explored using Sn ions implanted into Si (001) at different doses (5E14 – 4E16 atoms/cm$^2$) and annealed at 600$^o$C and 800$^o$C (30 mins, dry $N_2$). The structural and optical properties of the samples were analyzed using Rutherford Backscattering Spectrometry (RBS), Scanning Electron Microscopy (SEM), X-ray Photoelectron Spectroscopy (XPS), Positron Annihilation Spectroscopy (PAS), and Spectroscopic Ellipsometry (SE). RBS and SEM results indicate a maximum Sn dose of 5E15 atoms/cm$^2$ for avoiding segregation during annealing at 600$^o$C and 800$^o$C, with Sn substitutionality reaching ~95 ±1%. SE results demonstrate an increased optical absorption coefficient ($\alpha$) of Si for all implanted Sn doses (for λ = 800 - 1700 nm), with the highest $\alpha$ values recorded for the highest Sn dose (4E16 atoms/cm$^2$). Evidence of segregated Sn contributing to changes in the optical properties of Si is observed by etching the SiSn sample implanted with the 4E16 atoms/cm$^2$ dose. The results show a reduction in the initial $\alpha$ values; however, the values obtained after etching were still higher than for pure Si. In conclusion, our study identifies Sn compositions that achieve high (~95%) substitutionality in Si without the onset of segregation at 600$^o$C and 800$^o$C annealing temperatures. We analyze the implications of these findings for the optical properties of Si.
We may expect lithium to be the simplest metal, as it has only a single $2s$ valence electron. Surprisingly, lithium's crystal structure at low temperature and ambient pressure has long been a matter of debate. In 1984, A. W. Overhauser proposed a rhombohedral $9R$ structure. Subsequent neutron experiments by Schwarz et al. in 1990 favour a disordered polytype. More recently, in 2017, Elatresh et al. argued against the $9R$ structure while Ackland et al. found fcc ordering. In this work, we seek to understand the physical principles that could lead to such conflicting findings. We describe metallic bonding in an arbitrary close-packed structure within the tight-binding approximation. Close-packed structures, also called Barlow stackings, are infinite in number. They can be codified by a stacking sequence (e.g. fcc $\leftrightarrow ABC$) or by a Hägg code (e.g. fcc $\leftrightarrow +++$). From the point of view of an atomic orbital, all close-packed structures offer similar local environments with the same number of nearest neighbours. When hoppings are short-ranged, the tight-binding description shows a surprising gauge-like symmetry. As a result, the electronic spectrum is precisely the same for every close-packed structure. This results in competition across a large class of structures that all have the same binding energy.
A preference for one ordering pattern can only emerge from (a) long-ranged (third-neighbour and further) hoppings or (b) phonon free energies at finite temperatures. Our results could explain the observed fcc structure in lithium under high pressure.
There is a critical knowledge gap in understanding the kinetics and mechanisms of mineral formation and degradation in the context of potential technologies that are targeted for carbon capture, utilization, and storage [1]. Both crystallization and dissolution of carbonate minerals figure prominently in many such climate-change-mitigation strategies that aim for carbon dioxide removal. For example, different approaches to ocean-based alkalinity enhancement involve processes that depend on mineral surface and interfacial effects in order to increase water pH with concomitant atmospheric carbon removal. In this context, I will describe my team’s work related to tracking changes in carbonate mineral phases, including surfaces and bulk structures, due to dissolution and recrystallization processes [2]. In doing so, I will emphasize the urgent need for collaborations between researchers who do foundational materials physics with those involved in developing monitoring, reporting, and verification protocols for potential carbon dioxide removal strategies.
[1] Basic Energy Sciences Roundtable, Foundational Science for Carbon Dioxide Removal Technologies, US Department of Energy (2022) DOI: 10.2172/1868525
[2] B. Gao, K. M. Poduska, S. Kababya, A. Schmidt. J. Am. Chem. Soc. (2023) 48, 25938-25941. DOI: 10.1021/jacs.3c09027
This research aims to enhance the performance of thermoelectric systems through a multifaceted approach combining computational modeling and machine learning techniques. The study focuses on analyzing quantum statistics within thermoelectric systems to uncover novel insights into alloy doping. We verified key derivations concerning the extrema of the thermal, lattice thermal, and electrical conductivities as a function of temperature. Utilizing the analytical equations proposed by Yadav et al. (2019), we numerically verified and validated these equations and discussed the theoretical predictions given in the paper. We developed a machine-learning model to predict thermoelectric figures of merit. Using the polylogarithm and Lambert W functions, the model aims to provide optimal values for doping in thermoelectric alloys and seeks to identify compositions that can significantly enhance thermoelectric performance. This study involves a comprehensive analysis of the interplay between doping concentration, material properties, and thermoelectric efficiency. Our study endeavours to provide valuable insights into materials that can advance thermoelectric technology toward more efficient and sustainable energy conversion systems.
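As a point of reference (with assumed, order-of-magnitude property values rather than results from this work), the dimensionless thermoelectric figure of merit ZT = S²σT/(κ_e + κ_L) can be evaluated as follows:

# Generic thermoelectric figure of merit; all input values are illustrative.
def figure_of_merit(S, sigma, kappa_e, kappa_L, T):
    """S in V/K, sigma in S/m, kappa_e and kappa_L in W/(m K), T in K."""
    return S**2 * sigma * T / (kappa_e + kappa_L)

# Assumed Bi2Te3-like room-temperature values for illustration
print(round(figure_of_merit(S=200e-6, sigma=1.0e5, kappa_e=0.6, kappa_L=0.8, T=300), 2))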
The dynamics of particles residing at a liquid-gas interface have been shown to be of high importance in both fundamental studies and technological applications in recent years. Interfacial particles are commonly found in artificial material manufacturing and in biological systems. A better understanding of the unique physical properties of particles at the interface requires extensive attention to the surface interaction between the particle and the fluid. In this research, a computational method is employed, first, to successfully simulate the coexistence of liquid and gas, and second, to study the wetting properties of spherical particles at a liquid-gas interface. Different wetting boundary conditions will be tested to analyze the adsorption of the particle onto the interface. The simulations will be performed using a modified version of the lb/fluid package in LAMMPS, which is an implementation of the lattice Boltzmann method for simulating fluid mechanics. These results can provide us with enough insight to study interfacial particles under more complex conditions.
To date, there are very few all-optical techniques, if any, that are suitable for the purpose of acquiring, with nanoscale lateral resolution, quantitative maps of the thermal conductivity and thermal expansivity of 2D materials and nanostructured thin films, despite huge demand for nanoscale thermal management, for example in designing integrated circuitry for power electronics. Here, we introduce ω-ω and ω-2ω near-field thermoreflectance imaging as an all-optical and contactless approach to map the thermal conductivity and thermal expansion coefficients at the nanoscale with precision. Testing of our technique is performed on nanogranular films of gold and multilayer graphene (ML-G) platelets. As a case study, our recently invented ω-ω near-field scanning thermoreflectance imaging (NeSTRI) technique is here applied to multilayer graphene thin films on glass substrates. The thermal conductivity of micrometre-size multilayer graphene platelets is determined and is consistent with previous macroscopic predictions. As far as the thermal expansion coefficient (TEC) is concerned, our method demonstrates that the TEC of ML-G is (-5.77 ± 3.79) × 10^-6 K^-1 and is assigned to in-plane vibrational bending modes. A vibrational-thermal transition from graphene to graphite is observed, where the TEC becomes positive as the ML thickness increases. Overall, our nanoscale method demonstrates results in excellent agreement with its macroscopic counterparts, as well as superior capabilities to probe 2D materials and interfaces.
The intense confinement of electromagnetic fields between metallic bispheres remains a subject of ongoing technological interest. Similarly, light can be concentrated into near-field subwavelength hotspots in dimers of high refractive index dielectric resonators. Micro-resonators made of silicon and germanium are often exploited in forming exceedingly strong axial hotspots in dimers in the visible spectral region, facilitated by the hybridization of morphology-dependent resonances (MDRs) in individual objects. With an index of refraction approaching 9 at microwave frequencies, water has a large index contrast between the dielectric and the surrounding air, making water a particularly suitable material for obtaining strong Mie resonances. As a result, cm-sized aqueous dielectric dimers such as grapes can exhibit sufficiently strong axial hotspots to ignite plasma within household microwave ovens. Since individual grapes are never observed to spark, an understanding of the hybridization of isolated MDRs in dimers (and clusters) is of interest from a fundamental and technological (nano)photonic perspective.
We employ a combination of experimental, analytical, and computational methods to investigate MDR hybridization in water, with a focus on the formation of axial hotspots in aqueous dimers. Experimentally, we use hydrogel beads and thermal imaging to explore the polarization and size dependence of the hybridization. An analytical approach based on the vectorial addition of spherical harmonics provides geometric insight into which modes interact most strongly to form an electromagnetic hotspot. Finally, we employ FEM simulations to further investigate mode concentration and hotspot formation in dimers of various sizes, orientations, and separations.
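As a minimal, self-contained illustration of the morphology-dependent (Mie) resonances underlying the hotspot physics described above, the sketch below locates the lowest electric- and magnetic-type resonances of a single water-like sphere using the standard Mie coefficients. The radius (1 cm) and the real refractive index (9) are illustrative assumptions, not the parameters of our experiments.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def psi(n, x):    # Riccati-Bessel function psi_n(x) = x j_n(x)
    return x * spherical_jn(n, x)

def psi_p(n, x):  # derivative of psi_n
    return spherical_jn(n, x) + x * spherical_jn(n, x, derivative=True)

def xi(n, x):     # Riccati-Bessel function xi_n(x) = x h_n^(1)(x)
    return x * (spherical_jn(n, x) + 1j * spherical_yn(n, x))

def xi_p(n, x):   # derivative of xi_n
    h = spherical_jn(n, x) + 1j * spherical_yn(n, x)
    hp = spherical_jn(n, x, derivative=True) + 1j * spherical_yn(n, x, derivative=True)
    return h + x * hp

def mie_ab(n, m, x):
    """Standard Mie coefficients a_n (electric-type) and b_n (magnetic-type)."""
    mx = m * x
    a = (m * psi(n, mx) * psi_p(n, x) - psi(n, x) * psi_p(n, mx)) / \
        (m * psi(n, mx) * xi_p(n, x) - xi(n, x) * psi_p(n, mx))
    b = (psi(n, mx) * psi_p(n, x) - m * psi(n, x) * psi_p(n, mx)) / \
        (psi(n, mx) * xi_p(n, x) - m * xi(n, x) * psi_p(n, mx))
    return a, b

# Illustrative parameters: 1 cm radius sphere, real relative index ~9 (water at microwave)
radius, m_rel = 0.01, 9.0
freqs = np.linspace(0.5e9, 5e9, 2000)          # Hz
x = 2 * np.pi * freqs / 3e8 * radius           # size parameter
a1, b1 = mie_ab(1, m_rel, x)
print(f"magnetic dipole resonance near {freqs[np.argmax(np.abs(b1))] / 1e9:.2f} GHz")
print(f"electric dipole resonance near {freqs[np.argmax(np.abs(a1))] / 1e9:.2f} GHz")
```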
During this talk, I aim to facilitate a critical conversation about Black women's educational experiences in post-secondary physics and astronomy education in Canada. To accomplish this, I develop a framework to understand the normative physics and astronomy curriculum, wherein 'normative curriculum' refers to the learning and performance expectations that extend beyond what texts like syllabi, course outlines, and standard educational material might inform. Drawing on the literature reviewed in my doctoral study, I describe the educational experiences of women in North America and use these experiences to shape typical curricular expectations. To begin, I explore how education in post-secondary physics and astronomy programs is understood within research. A comprehensive review of critical thinkers in science education leads to the conceptualization of how individuals encounter the curriculum. Subsequently, I operationalize the notion of curriculum as experiences, as revealed by the research on and about White women, Women of Color, and Black Women who study, research, and work in the physical and astronomical sciences. In doing so, I gather themes from literature to highlight the often-overlooked commitments and tasks that women must fulfill to be recognized as legitimate physicists and astronomers. Following this, I describe areas of the normative physics and astronomy curriculum, detailing critical perspectives on thinking and learning in science education. Throughout this talk, I will make connections to findings from my current study on Black women's educational experiences, including how they navigate predominantly White and male-dominated spaces in Canada. By the end of this discussion, I hope to deepen our overall understanding of physics and astronomy education within a national context.
Culture is a way of learning and therefore shapes students' career interests and performance. In Nigeria, females are often regarded as the weaker sex and are therefore expected to choose careers not meant for men. This results in gender inequality, especially in science and related disciplines. The purpose of this study is to determine the cultural effect of gender on the admission and performance of physics students in six selected universities from the six geopolitical zones in Nigeria. A simple statistical analysis showed that the female-to-male admission ratio overwhelmingly favours males. In terms of performance, however, female students did not show any significant difference from their male counterparts. The cultural effects on the difference in gender admission and their non-significant effects on performance are discussed.
The International Union of Pure and Applied Physics (IUPAP) is an organization that is deeply committed to promoting EDI among the worldwide community of professional physicists. As Chair of the 2022-2024 C14 (Physics Education) Commission of IUPAP, I had the privilege of participating in and co-organizing several events that took place during the mandate of the current commission and were aimed at promoting gender equality in physics education. In my talk I will report on the Education Workshops at the 8th International Conference on Women in Physics 2023 (ICWIP2023) that I was tasked to co-organize in collaboration with my colleagues from IUPAP's Women in Physics Working Group (WG5). More broadly, I will touch upon IUPAP’s EDI principles and initiatives that benefit physics education worldwide.
Diversity is lacking, by most measures, in most STEM fields, including physics. A 2021 survey of Canadian physicists, CanPhysCounts, found that the percentage of white men increases as one moves up the ranks in physics: the undergraduate level has the most diversity, while people in physics careers or faculty positions are the least diverse, with over 50% of those surveyed identifying as white men.
Increasing diversity, equity, inclusion, and accessibility (DEIA) in physics and other STEM fields is critical to producing good science. If there are more voices at the table, new and interesting questions will be asked, and if we include more diverse thinkers in our science, that science will become better. To get these voices involved, we have to prioritize DEIA within our physics communities. A diverse group of people will not stay in physics if the physics space is not welcoming, inclusive, equitable, and accessible to them. In an effort to prioritize DEIA, I have created a practical guide that will help meeting and conference organizers make their meetings more inclusive and accessible. This guide was written as a complement to the 500 Women Scientists’ Inclusive Scientific Meetings Guide. Scientific meetings and conferences are a good place to do DEIA work because they are where many early career scientists find opportunities to advance their careers, from presenting their work, to engaging with collaborators, to meeting potential future advisors and employers. The same concepts can, however, be generalized to many scientific environments. Here I will present the motivation for my guide, the work that has previously been done in this area, what my guide brings to the table, and how I hope my guide will be used.
Since the discovery of the Higgs boson by the ATLAS and CMS Collaborations in 2012, a major focus in particle physics has been the understanding of its interactions. In recent years, huge progress has been made in determining the strength of the Higgs boson's couplings to fermions and vector bosons, but its self-interaction has yet to be established. The Higgs self-interaction is closely related to the form of the Higgs potential, and thus represents an extremely important measurement for our understanding of the origin of electroweak symmetry breaking and our universe. The most natural way to probe the self-interaction and the shape of the Higgs potential is through searches for Higgs boson pairs (HH) at particle colliders. This talk aims to summarize the most recent Higgs boson pair results of the ATLAS experiment, as well as the prospects for future measurements.
The Large Hadron Collider (LHC) at CERN is the largest and most powerful particle collider in the world, and the only machine capable of producing Higgs bosons. Interactions with the Higgs field give particles mass, and a particle’s coupling to the Higgs boson is proportional to its mass. The Standard Model particles that make up matter can be grouped into different generations, and previous measurements of Higgs couplings have focused on the third generation of particles, which are the most massive. The best opportunity to measure the Higgs coupling to a second-generation particle at a lower, untested mass scale is by measuring the Higgs boson decay into two muons.
The Higgs to dimuon decay is a very rare process, and there are many other processes that can mimic this signature, making it very difficult to measure. Advanced methods are required to identify this small signal from a large continuous background in the data collected by the ATLAS detector at the LHC. An important technique to increase the signal-to-background ratio is splitting the data into distinct categories, based on the properties and kinematics of the events. The Higgs signal can then be extracted separately from several datasets with different signal-to-background ratios, resulting in a large increase in overall statistical significance of the measurement. Using the latest advancements in machine learning, I will use a deep neural network (NN) to optimize these categories. Various observables measured by the ATLAS detector will be provided to this NN, and it will determine the optimal way to separate the data into categories to maximize the statistical significance.
After the data has been split into optimal categories, the Higgs boson resonance peak can be extracted from the background. With improvements in analysis techniques and the data currently being taken during Run 3 of the LHC, we hope to measure the Higgs to dimuon decay with at least 3 sigma significance with the ATLAS detector, which would establish evidence for this process.
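A minimal sketch of the neural-network categorization step described above is given below; the input features, network size, and category boundaries are purely illustrative placeholders (the actual analysis uses ATLAS observables, per-event weights, and a significance-based optimization).

```python
import torch
import torch.nn as nn

# Hypothetical per-event input features (placeholders, not the actual ATLAS variables):
# e.g. dimuon pT, dimuon rapidity, leading-jet pT, jet multiplicity, missing ET, ...
N_FEATURES = 8

class CategoryNet(nn.Module):
    """Small feed-forward network producing a signal-like score per event."""
    def __init__(self, n_features=N_FEATURES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),   # score in (0, 1)
        )
    def forward(self, x):
        return self.net(x)

model = CategoryNet()
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training loop on random stand-in data (real inputs would be simulated
# signal and background events).
features = torch.randn(1024, N_FEATURES)
labels = torch.randint(0, 2, (1024, 1)).float()
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

# Events are then binned into categories by score thresholds; in the real
# analysis the thresholds are chosen to maximize the expected significance.
scores = model(features).detach().squeeze()
categories = torch.bucketize(scores, torch.tensor([0.5, 0.8, 0.95]))
```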
There exists a large body of indirect evidence for the existence of Dark Matter (DM) but, to date, no direct evidence has been found. Because of this, there is a wide range of open parameter space, which has given rise to many different models. One class of models proposes that dark matter is composed of particles that have their own interactions and only minimally couple to the Standard Model through one or more “portal” interactions. One category of such models includes a vector portal term that kinetically mixes dark gauge fields with Standard Model gauge fields. These models are characterized by dark matter having a milli-charged component: particles with an effective electric charge that is a fraction of the electron's electric charge. Direct detection of dark matter at accelerators is a high priority to narrow down possible models. Detecting or ruling out some possible DM models is part of the experimental program of the MoEDAL experiment located at the LHC. The MAPP extension to the MoEDAL experiment, now approved for Run 3, focuses on searching for milli-charged particles (mCPs) and long-lived particles (LLPs). The vector portal that gives rise to milli-charged dark-sector components has two possible phases: the Holdom phase, which is characterized by a massless dark vector gauge field, and the Okun phase, which has a massive dark vector gauge field. This talk will focus on a 'mixed' phase, which assumes both a massless and a massive dark vector field. We will then look at Drell-Yan production of dark mCPs and explore their phenomenology within the context of MoEDAL-MAPP.
Recent progress in understanding the algebraic structure of Feynman integrals has led to a new "tropical" numerical integration algorithm introduced by Borinsky and collaborators. For the first time, it is possible to systematically study the numerical values of very many Feynman integrals from a relatively broad class. I will present the findings of such a study that involved all subdivergence-free vertex-type Feynman graphs of phi^4 theory in 4 dimensions up to 13 loops, and partial data up to 18 loops. In total, more than 1.5 million vacuum integrals have been computed, amounting to over 20 million vertex-type integrals. The resulting data indicate that at high loop order, most Feynman integrals follow a smooth distribution, but higher moments of that distribution diverge. This has severe consequences for the accuracy of randomly sampling Feynman graphs. Moreover, this study has led to new numerical data for the subdivergence-free contribution to the beta function up to 18 loops, confirming a longstanding prediction for the leading asymptotic growth of these coefficients.
Based on JHEP 2023.160
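As a schematic illustration (with synthetic numbers, not the Feynman-integral data) of why diverging higher moments matter for random sampling, the toy comparison below contrasts the trial-to-trial spread of a sample mean for a thin-tailed and a heavy-tailed distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def spread_of_sample_mean(sampler, n_samples=200_000, n_trials=50):
    """Standard deviation of the sample mean across independent trials."""
    means = [sampler(n_samples).mean() for _ in range(n_trials)]
    return np.std(means)

# Thin-tailed: all moments finite, so the sample mean converges ~ 1/sqrt(N).
thin = lambda n: rng.exponential(scale=1.0, size=n)
# Heavy-tailed: Pareto with tail index 1.5; the mean exists but the variance
# diverges, so the sample mean fluctuates much more strongly from run to run.
heavy = lambda n: rng.pareto(1.5, size=n) + 1.0

print("spread of sample mean (thin tail): ", spread_of_sample_mean(thin))
print("spread of sample mean (heavy tail):", spread_of_sample_mean(heavy))
```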
We study Unruh phenomena for an accelerating qudit detector coupled to a quantized scalar field, comparing its response to that of a standard qubit-based Unruh-DeWitt detector. We show that there are limitations to the utility of the detailed balance condition as an indicator for Unruh thermality of higher-dimensional qudit detector models. This can be traced to the fact that a qudit has multiple possible transition channels between its energy levels, in contrast to the 2-level qubit model. We illustrate these limitations using two types of qutrit detector models based on the spin-1 representations of SU(2) and the non-Hermitian generalization of the Pauli observables (the Heisenberg-Weyl operators). https://arxiv.org/abs/2309.04598
In the 1970s, it was discovered that a uniformly accelerated detector, interacting with the vacuum state of a quantum scalar field in flat spacetime, has a thermal response with a temperature proportional to its proper acceleration. This phenomenon, known as the Unruh effect, is considered a signpost in the search for a quantum theory of gravity. Since the discovery of the effect, efforts have been dedicated to the study of quantum detectors in curved spacetime because their response encodes information about fluctuations of the vacuum state of the field and hence of the underlying spacetime. However, despite more than four decades of dedicated research, little is known about the response of quantum detectors as they freely fall into black holes. I present results detailing the response of a detector interacting with the Hartle-Hawking vacuum state of a massless scalar field in a Bañados-Teitelboim-Zanelli (BTZ) black hole as the detector freely falls toward and across the event horizon. I also discuss how this response changes for the geon counterpart of a BTZ black hole, an object identical to the BTZ black hole outside its horizon but having a different topology inside. Our results suggest that the detector can potentially serve as an ‘early warning system’ that indicates the presence of the event horizon and discerns the interior topology of the black hole.
We consider the transition rate of a static Unruh-DeWitt particle detector in a variety of spacetimes built out of quotients of $\text{AdS}_3$ spacetime. In particular, we contrast the behavior of an Unruh-DeWitt detector interacting with a quantum scalar field in the $\mathbb{R}\text{P}^{2}$ geon spacetime and in a spacetime constructed by Aminneborg et al. The Wightman functions of these spacetimes are obtained using the method of images. We find a number of features that distinguish the two spacetimes, which are identical outside of the black hole's event horizon, most notably in the response functions of gapless detectors in the sharp-switching limit. This points to a way in which the interior topology of a black hole may be discerned by an external observer.
Recent studies have shown that an Unruh-DeWitt (UDW) detector coupled to a massless scalar field in (3+1) Schwarzschild and (2+1) non-rotating BTZ spacetimes exhibits a local extremum in transition rate at the horizon. This non-monotonicity is of interest, as it suggests that the event horizon is distinguishable to a local probe when QFT is taken into consideration. In this study, we calculate the transition rate of a freely falling UDW detector in (2+1)-dimensional rotating BTZ spacetime. We explore different values of black hole mass, black hole angular momentum, and boundary conditions of the field at infinity. The results that we obtain are consistent with previous studies in the limit as black hole angular momentum vanishes; however, the presence of rotation introduces new phenomena, and our results provide a more general profile for the infalling detector problem in BTZ spacetime. There is now a growing body of evidence for detector excitation across black hole event horizons, and we anticipate that further searches will be conducted in other spacetimes to better understand its physical meaning.
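For context, the central object in the analyses described above is the detector response function; to leading order in the coupling, for a detector with energy gap $\Omega$, switching function $\chi(\tau)$, and field Wightman function $W$, it takes the standard form $\mathcal{F}(\Omega) = \int d\tau \int d\tau'\, \chi(\tau)\chi(\tau')\, e^{-i\Omega(\tau-\tau')}\, W\big(\mathsf{x}(\tau),\mathsf{x}(\tau')\big)$, and the transition rate referred to in these abstracts is its derivative with respect to the total detection time.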
In appropriate semiclassical limits, the so-called Island Formula computes the entropy of non-gravitational quantum systems entangled with a gravitational theory. This is a special case in which the quantum-corrected Ryu-Takayanagi formula has been shown to compute a von Neumann entropy using only properties of the gravitational path integral and, in particular, without relying on the existence of a holographic dual field theory. It is thus natural to claim that a similar conclusion should hold more broadly, and that any asymptotically-AdS gravitational theory will define an algebra for any boundary region such that, in appropriate limits, the entropy of any state on that algebra is computed by the quantum-corrected Ryu-Takayanagi formula. Recent works by Chandrasekaran, Pennington and Witten have used the theory of von Neumann algebras to derive results of this form in various special contexts. We argue here that the above claim holds more generally, whenever the Euclidean path integral of the gravitational theory satisfies a set of standard axioms. We thus allow finite values of all coupling constants and do not require taking any special limits. Since our axioms do not restrict ultra-violet bulk structures, they may be expected to hold equally well for successful formulations of string field theory, spin-foam models, or any other approach to constructing a UV-complete theory.
Dynamins are an essential superfamily of mechanoenzymes that remodel membranes and often contain a “variable domain” important for regulation. For the mitochondrial fission dynamin, dynamin-related protein 1 (Drp1), a regulatory role for the variable domain (VD) is demonstrated by gain- and loss-of-function mutations, yet the basis for this is unclear. Here, the isolated VD is shown to be intrinsically disordered and to undergo liquid–liquid phase separation under in vitro crowding conditions. MD simulations suggest this liquid-liquid phase separation arises from weak, multivalent interactions, similar to other systems involving intrinsically disordered regions. These crowding conditions also enhance binding to cardiolipin, a mitochondrial lipid, which appears to also promote phase separation. Since Drp1 is found assembled into discrete punctate structures on the mitochondrial surface, the inference from the present work is that these structures might arise from a condensed state driven by interactions between VDs and between cardiolipin and the VD. These findings support a model where the variable domain mediates phase separation that enables rapid tuning of Drp1 assembly necessary for fission.
Introduction: We have previously demonstrated, using polarized light, that we can image amyloid protein deposits in the retina without a dye. Postmortem, their numbers predict the load of amyloid in the brain and severity of Alzheimer’s disease (AD). Here we differentiate retinal deposits of presumed amyloid beta, associated with AD, from presumed retinal deposits of alpha synuclein, associated with two other neurodegenerative diseases (multiple system atrophy, MSA and dementia of Lewy bodies, DLB). We also image precursors to these deposits.
Methods: Eyes and brains were obtained post-mortem in compliance with the Declaration of Helsinki from 10 donors with AD, and from 2 donors with MSA or DLB in whom alpha synuclein had been found in the brain. Individuals with multiple post-mortem brain pathologies were excluded from this study. Eyes were fixed in 10% formalin. Retinas were stained with 0.1% Thioflavin-S and counterstained with DAPI, flat mounted in quadrants and imaged using a microscope, custom fitted with a polarimeter. In each subject, deposits found in association with the neural retina as well as the surrounding retinal area were imaged. For each imaged region, 10 polarized light interactions were examined. The presence of interactions with polarized light was measured both in the deposits and surrounding tissue.
Results: Although their size distributions overlapped, deposits were significantly smaller in retinas in which amyloid beta deposits were expected than in retinas with presumed alpha synuclein deposits. After correction for repeated measures, the averages and standard deviations of four polarimetric properties differed significantly between the presumed amyloid deposits and the presumed alpha synuclein deposits. Using machine learning (random forest and convolutional neural networks), we were able to separate the two deposit types with accuracies of >85%. Interactions with circularly polarized light were also detected.
Conclusions: Interactions with polarized light can separate deposits in the retina due to Alzheimer’s disease from those due to diseases with alpha synuclein pathology (MSA and DLB), early in the disease. Polarized light also detects two circular signals which are presumed to be precursors to deposits. These findings could lead to earlier and simpler diagnosis and differentiation of multiple brain diseases.
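As an illustration of the machine-learning step mentioned in the results, a minimal random-forest sketch is shown below; the feature matrix and labels are random stand-ins, since the actual polarimetric features and deposit data are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per deposit, columns are polarimetric
# properties (placeholders for the measured quantities) plus deposit size.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))          # stand-in for measured features
y = rng.integers(0, 2, size=200)       # 0 = presumed amyloid beta, 1 = presumed alpha synuclein

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # cross-validated accuracy
print(f"mean accuracy: {scores.mean():.2f}")
```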
Molecular azobenzene photoswitches have long been attractive in the design of photoresponsive materials owing to their reversible light-triggered photoisomerization about the azo bond (N=N) between trans and cis isomeric configurations. Towards more versatile materials applications, azopyridines have been designed as a next-generation azobenzene photoswitch possessing pH sensitivity, hydrogen-bonding, and metal binding abilities. As a result, they have become a key element in the photocontrol of liquid crystals, pharmacological agents, photodriven oscillators, and molecular spin switches. Our group is also developing a new, nature-inspired optical oxygen sensor for tumours whose quantitative readout is the isomerization rate of an azopyridine photoswitch unit. However, detailed studies on the isomerization kinetics of azopyridines and their protonated forms are still lacking. Not only would such studies extend their application to biological contexts, but protonation can also serve as a tool to significantly modulate the photoisomerization process and has even been shown to abolish it entirely. Moreover, there is a conspicuous lack of literature on the photoisomerization of azopyridines in chlorinated solvents where adventitious protonation can occur.
In this work, irradiation of 4-phenylazopyridine (AzPy) in chlorinated solvent with 365 nm light produced significant bathochromic shifts in the π-π* and n-π* absorption bands rather than the expected spectral changes associated with trans-cis photoisomerization. In addition, there was a significant acceleration of the cis-trans back-isomerization rate, which was attributed to protonation of AzPy at the pyridine nitrogen due to HCl production from UV-mediated photodecomposition of the solvent. Density functional theory calculations demonstrated that, by weakening the electronic structure of the azo bond, protonation significantly reduced the activation barrier for cis-trans isomerization, corresponding to a 9-fold acceleration in the isomerization rate. Remarkably, protonation also shut down intersystem crossing between singlet and triplet potential energy surfaces along the isomerization reaction coordinate.
Proton therapy uses an external beam of protons to destroy cancerous tissue while reducing damage to healthy tissue. Of particular interest is the recent concept of proton FLASH therapy, where ultra-high dose rates (> 40 Gy/s) are delivered for under one second, with improved sparing of healthy tissue compared to conventional dose rates. The FLASH effect and the influence of beam properties and biological characteristics are not yet fully understood, hence, a sensitive dosimeter with high spatial resolution and in-situ relative dose information for FLASH is needed to bring it into the clinic. Optical fibers (OF) are gaining traction as dosimetry detectors in radiotherapy, including proton therapy, due to their superior spatial resolution, linear dose dependence, independence of dose rate, real-time response, and independence from electromagnetic fields and temperature fluctuations within the range of realistic clinical conditions.
At TRIUMF, characterizations of OF for proton FLASH dosimetry are ongoing. As beam-availability at the Proton Therapy Research Centre is limited, we are now exploring experiments at the TR13, TRIUMF’s 13 MeV cyclotron, which is used to produce medical isotopes and where the beam is more regularly available. To characterize a fiber’s light yield and radiation hardness, a fiber holder customized for the TR13 is needed. The fiber holder was designed based on Monte Carlo simulations in FLUKA as well as temperature calculations using in-house data.
Three different fiber holders were tested in simulations. Two designs were discarded because of energy deposition inhomogeneity in the fiber and other considerations. The third fiber holder showed promising results regarding beam deposition, heat transfer calculations, and radiation activation limitations.
The current fiber holder design can hold silica fibers up to a diameter of 350 µm and withstand irradiations at beam currents of up to 2 µA. This holder will allow systematic evaluation of OF for potential use with proton FLASH.
The MOLLER experiment is a >$40M USD experiment expected to run in 2026, with a large Canadian contribution to both the spectrometer and detector systems. The experiment utilizes parity violation in the weak interaction to measure the scattering asymmetry between the positive and negative helicity states of longitudinally polarized electrons. The electrons scatter from electrons in liquid hydrogen, and are collimated and bent through the spectrometer system to the main detector array, which comprises 224 integrating quartz detectors. In addition, there is a set of tracking detectors to study backgrounds and determine the acceptance. In fact, the whole accelerator is part of the experiment, with beam position and charge monitors throughout the beamline serving to study helicity-correlated backgrounds. In this talk I will describe the goals of the MOLLER experiment and its design, and provide a status update, in particular on the spectrometer and detector systems.
The conventional picture of the hadron, in which partons play the dominant role, predicts a separation of short-distance (hard) and long-distance (soft) physics, known as 'factorization'. It has been proven that for certain processes, at sufficiently high $Q^2$, the reaction amplitude factorizes into a hard part, representing the interaction of the incident virtual photon probe with the parton, and a soft part, representing the response of the nucleon to this interaction. One class of such processes is Deep Exclusive Meson Production (DEMP), which provides access to a novel class of hadron structure observables known as Generalized Parton Distributions (GPDs). Unifying the concepts of parton distributions and of hadronic form factors, GPDs correlate different parton configurations in the hadron at the quantum mechanical level, and contain a wealth of new information about how partons make up hadrons. However, access to such GPD information requires that the 'factorization regime' has been reached kinematically, and this can be tested only experimentally. I will summarize prior and planned tests of the validity of GPD factorization in DEMP reactions, such as exclusive pion and kaon production, using the Jefferson Lab Hall C apparatus.
Measurements of several rare eta and eta′ decay channels will be carried out at the Jefferson Lab Eta Factory (JEF). JEF will commence in fall 2024 using an upgraded GlueX detector in Hall D. The combination of highly-boosted eta/eta′ production, recoil proton detection, and a new fine-granularity high-resolution lead-tungstate insert in the GlueX forward calorimeter confers uniqueness to JEF, compared to other experiments worldwide. JEF will search for new sub-GeV gauge bosons in portals coupling the SM sector to the dark sector, will provide constraints on C-violating/P-conserving reactions, and will allow precision tests of low-energy QCD. Details on the hardware upgrade and simulations will be presented.
Measurements of the neutron electric dipole moment (EDM) place severe constraints on new sources of CP violation beyond the standard model.
The TRIUMF UltraCold Advanced Neutron (TUCAN) EDM experiment aims to improve the measurement of the neutron EDM by a factor of 10 compared to the world's best measurement. The experiment must be conducted in a magnetically quiet environment. A magnetically shielded room (MSR) has been prepared at TRIUMF to house the experiment. The MSR was designed to provide a quasi-static magnetic shielding factor of at least 50,000, which would be sufficient to meet the requirements of the EDM experiment. Measurements have shown that this shielding factor goal was not met. Several additional measurements were taken in order to understand the result. In communication with the MSR vendor, we have designed a new insert for the MSR, which is expected to restore its capabilities. In this presentation I will review the situation with the TUCAN MSR, how we discovered its performance issues, and our progress on fixing the problem.
The Lassonde School of Engineering at York University launched the k2i (kindergarten to industry) academy in June 2020 with a mission to create an ecosystem of diverse partners committed to dismantling systemic barriers to opportunities for underrepresented students in STEM. The k2i academy is a key component of the Lassonde School of Engineering Equity, Diversity, and Inclusion Action Plan. In this talk, Lisa Cole, Director of Programming at k2i academy, will share how Inclusive Design approaches are currently being used to create programs that question systemic barriers, innovate viable solutions, and build alongside K-12 sector partners.
There is increasing demand for measurements of atmospheric properties as the climate continues to change at an unprecedented rate. Remote sensing allows us to acquire information about our atmosphere from the ground and from space by detecting reflected or emitted radiation. I will present initial results of a comparison using simulated space-based measurements from the HAWC ALI satellite instrument with ground-based measurements from a network of micro-pulse lidars, MPLCAN.
The Aerosol Limb Imager (ALI) is a part of the High-altitude Aerosol, Water, and Clouds (HAWC) satellite, a Canadian mission which will help fill a critical gap in our understanding of the role of aerosol, water vapour, and clouds in climate forcing. ALI will retrieve aerosol extinction and particle size in the troposphere and stratosphere.
The Canadian Micro-Pulse Lidar Network (MPLCAN) is a network consisting of five micro-pulse lidars (MPLs) across eastern and northern Canada. The MPLs can detect particulates produced from wildfire smoke, volcanic ash, and anthropogenic pollutants by collecting backscattered light. They can also differentiate between water and ice in clouds by measuring the polarization state of the backscatter signal.
Coincident measurements between the MPLCAN and ALI instruments have great potential to validate the ALI measurements, and to extend their horizontal coverage. However, the ALI retrieved quantities are not directly comparable to the MPL backscatter measurements, so assumptions must be made about the constituents and optical properties of the atmosphere to compare them. The ALI retrieved quantities were converted to an MPL backscatter measurement for comparison using two methods. First, Mie scattering theory was used based on the ALI retrievals of aerosol particle size to calculate the backscatter coefficient. The second method assumed a lidar ratio, the ratio of backscatter to extinction, appropriate for background stratospheric aerosols. The ALI-derived backscatter coefficient from both methods yielded similar results. Preliminary comparisons between both simulated and actual MPL measurements and the converted ALI retrieval show promising agreement. Future work will aim to model ALI passing over multiple MPLs for realistic HAWC satellite tracks to simulate wildfire smoke events.
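A minimal sketch of the second (lidar-ratio) conversion described above, with purely illustrative numbers; a lidar ratio on the order of 50 sr is commonly assumed for background stratospheric aerosol, but the value used in the actual comparison may differ.

```python
import numpy as np

# ALI-retrieved aerosol extinction profile (illustrative values, km^-1)
altitude_km = np.arange(10, 31)                           # 10-30 km
extinction = 1e-3 * np.exp(-((altitude_km - 20.0)**2) / 8.0)   # stand-in profile

# Assumed lidar ratio (extinction-to-backscatter, sr) for background
# stratospheric aerosol.
lidar_ratio_sr = 50.0

# Converted aerosol backscatter coefficient (km^-1 sr^-1), comparable to MPL
backscatter = extinction / lidar_ratio_sr
print(backscatter.max())
```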
Resistance spot welding employs the Joule heating effect to form a localized molten pool between two or more metal sheets, which upon solidification forms a solid bond. This process is widely used in automotive and other industrial sectors due to its low cost and ease of automation. Quality assurance of such joints is primarily done using offline inspection with a multi-element ultrasonic transducer to allow for 2D measurements of the weld size to occur. Due to the high number of spot welds in automotive applications, averaging about 5000 welds per car, this inspection is performed only on critical welds, or periodically on select samples.
Currently, a novel in-process inspection method, which monitors the weld as it forms, employs a single-element ultrasound transducer built into the welding electrode. A series of pulses is used to form a time-evolution signature from which the weld size is estimated based on the penetration of the weld into the sheet. For this reason, adoption has been hindered in applications where the physical diameter of the welding zone is required by safety standards.
To overcome this, current techniques in the field such as multi-element matrix and phased-array transducers have been explored. Although both offer the possibility of diameter measurement, the increased size of the transducer requires a significantly larger welding electrode and makes integration difficult. Phased arrays also employ electronic focusing, increasing both the complexity and cost of the system by an order of magnitude.
To allow imaging to occur, a radical alternative was required. Using a series of point-like sources, we propose a novel approach that implements a built-in lens cut into the welding electrode; as a result, a 2D image of the welding process can be formed using a transducer that is a fraction of the size of even single-element solutions. After theoretical and numerical validation, a prototype was fabricated for experimental study.
The primary drawback of this technique is the drastically smaller transducer size, which results in approximately five orders of magnitude lower signal.
This talk covers the current results and state of development, future approaches to overcome implementation challenges, and the potential for new advanced solutions based around this innovative approach.
Geophysical methods and soil test analysis have been used to study soil properties at the farm of the Centre for Entrepreneurial Studies (CES), Delta State University, Abraka, Nigeria. Vertical electrical sounding (VES), borehole geophysics, electrical resistivity tomography (ERT) and geochemical methods were used for the study. Seven VES stations were occupied along five ERT measurement traverses. Soil samples were collected close to the VES stations for soil test and grain size analysis to corroborate the VES and ERT results. The topsoil results obtained from the VES are in agreement with the ERT and borehole log results, ranging from fine-grained silty topsoil to sandy clay. The low resistivity of the topsoil results from the partial decomposition of plants and animals forming organic matter; it ranges from 168-790 Ωm, with an average value of 494 Ωm and an average depth of 2.3 m. This depth covers the upper root zone of several significant crops and indicates a high amount of moisture and mineral nutrients, with a fair degree of stoniness to aid adequate rooting of the crops. The observed topsoil is also high in porosity and water retention, which are major factors favouring the yield of tuber and stem plants. The soil test results gave pH: 6.13-7.16, organic matter: 6.48-8.66 %, nitrogen: 65.72-78.21 %, phosphorus: 53.32-67.43 %, copper: 14.16-22.61 mg/kg, nickel: 1.16-3.11 mg/kg, lead: 4.00-8.84 mg/kg, arsenic: 0.08-0.1 mg/kg, iron: 96.33-151.63 mg/kg. These recorded concentrations are below the WHO standards for crop production.
The Cosmological Advanced Survey Telescope for Optical and uv Research (CASTOR) is a proposed Canadian Space Agency (CSA) mission that would image the skies at ultraviolet (UV) and blue-optical wavelengths simultaneously. Operating close to its diffraction limit, the 1-m-diameter CASTOR telescope is designed with a spatial resolution similar to the Hubble Space Telescope (HST), but with a field of view about one hundred times larger. The exciting science enabled by the CASTOR suite of instruments and the planned legacy surveys encompasses small bodies in the Solar System, exoplanet atmospheres, cosmic explosions, supermassive black holes, galaxy evolution, and cosmology. In addition, this survey mapping capability would add UV coverage to wide-field surveys planned for the Euclid and Roman telescopes and enhance the science return on these missions. With a CSA-funded phase 0 study already complete, the CASTOR science case and engineering design is on track for a launch in 2030 pending continued funding.
We use atomic force microscopy-force spectroscopy (AFM-FS) to measure the morphology and mechanical properties of cross-linked polyethylene (PEX-a) pipe. PEX-a pipe is being increasingly used to replace metal pipe for water transport and heating applications, and it is important to understand ageing, degradation and failure mechanisms to ensure long-term reliability. AFM-FS measurements on the PEX-a pipe surfaces and across the pipe wall thickness allow us to quantify changes in the morphology and mechanical properties from high resolution maps of parameters such as stiffness, modulus, and adhesion. Measurements performed on pipes subjected to different processing and accelerated ageing conditions generate a substantial amount of data. To classify and correlate these images and the associated properties, we have used machine learning techniques such as k-means clustering, decision trees, support vector machines, and neural networks, revealing distinctive changes in the morphology and mechanical properties with ageing. Our machine learning approach to the analysis of the large body of AFM-FS data complements our deep generative modeling of infrared images of the same pipes [1], providing additional insight into the complex phenomena of ageing and degradation.
[1] M. Grossutti et al., ACS Appl. Mater. Interfaces 15, 22532 (2023).
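As an illustration of the clustering step, a minimal k-means sketch assuming hypothetical co-registered AFM-FS property maps (the arrays below are random stand-ins for measured stiffness, modulus, and adhesion maps):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical co-registered property maps from one AFM-FS scan (256 x 256 px)
rng = np.random.default_rng(0)
stiffness = rng.normal(size=(256, 256))
modulus   = rng.normal(size=(256, 256))
adhesion  = rng.normal(size=(256, 256))

# Stack the per-pixel properties into a feature matrix: (n_pixels, n_features)
features = np.stack([stiffness, modulus, adhesion], axis=-1).reshape(-1, 3)

# Cluster pixels into morphological/mechanical phases
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
phase_map = kmeans.labels_.reshape(256, 256)   # per-pixel phase assignment
```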
The development of nanotechnology has brought a great opportunity to study the linear and nonlinear optical properties of plasmonic nanohybrids made of metallic nanoparticles and quantum emitters. Rayleigh scattering is a nonlinear scattering mechanism arising from the elastic scattering of electromagnetic radiation by bound electrons in atoms or molecules after they have been excited to virtual states far from resonances. A theory of stimulated Rayleigh scattering (SRS) has been developed for metallic nanohybrids composed of an ensemble of metallic nanoparticles and quantum dots (QDs). The intensity of the output stimulated Rayleigh scattered light is obtained using the coupled-mode formalism of Maxwell’s equations and evaluated by the density matrix method. An analytical expression for the SRS intensity is calculated in the presence of surface plasmon polaritons (SPPs) and dipole-dipole interactions (DDIs). We have compared this theory with experimental data for a nanohybrid doped with an ensemble of Ag nanoparticles and rhodamine 6G dye, and found good agreement between experiment and theory. We have also predicted an enhancement of the SRS intensity due to the extra scattering mechanisms of the SPP and DDI polaritons with QDs. It was also found that at low values of DDI coupling the SRS intensity spectrum contains two peaks, whereas when the DDI coupling is increased there is only one peak in the SRS spectrum. These findings can be very useful. For example, the analytical expressions can be valuable for experimental scientists and engineers, who can use them to compare with their experiments and make new types of plasmonic devices. The enhancement in the SRS intensity can also be used to fabricate SRS nanosensors. Similarly, our finding that the SRS spectrum switches from two peaks to one peak as the DDI coupling increases can be used to fabricate SRS nanoswitches, where the two peaks can be thought of as the ON position and the one peak as the OFF position.
Metal additive manufacturing emerges as a pivotal innovation in modern manufacturing technologies, characterized by its exceptional capability to fabricate complex geometries. This process depends on the critical phase change phenomenon, where metals change between solid and liquid states under the intense heat from lasers. Accurate simulations of these phase changes are essential for enhancing the precision and reliability of metal additive manufacturing processes, thereby expanding the range of producible designs. However, the challenge lies in the detailed modeling of particle responses to thermal variations. This entails an understanding of melting dynamics—how particles transition from solid to liquid upon reaching their melting points, their interactions and fusion during this transformation, and the resultant changes in properties such as viscosity and flow. In response, this study introduces an innovative Discrete Element Method (DEM) for simulating particle dynamics and phase changes in metal additive manufacturing. By modeling metal powder as a cluster of interconnected smaller particles, this approach simplifies the simulation of melting and solidification. It combines particle dynamics and phase change simulations into a single framework, offering computational efficiency and adaptability to various materials and manufacturing conditions. As a result, this presents a practical alternative to more complex methods like Computational Fluid Dynamics (CFD) and facilitates rapid prototyping and optimization in metal additive manufacturing.
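A highly simplified sketch of the kind of DEM update described above is given below; the contact law, the heating model, and all parameter values are illustrative assumptions rather than the actual model or its calibration.

```python
import numpy as np

# Minimal 2D DEM sketch: powder represented by spherical sub-particles whose
# contact stiffness softens once the local temperature exceeds the melting point.
N, R = 20, 1.0                    # number of particles, particle radius
K_SOLID, K_MELT = 1e4, 1e2        # contact stiffness below / above melting (assumed)
GAMMA, MASS, DT = 5.0, 1.0, 1e-4
T_MELT, HEAT_RATE = 1700.0, 5e4   # K, K/s under an assumed laser spot

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(N, 2))
vel = np.zeros((N, 2))
temp = np.full(N, 300.0)          # K

def step(pos, vel, temp):
    force = np.zeros_like(pos)
    for i in range(N):
        for j in range(i + 1, N):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = 2.0 * R - dist
            if overlap > 0.0:                 # particles in contact
                n = d / dist
                # softer contact if either particle has melted
                k = K_MELT if max(temp[i], temp[j]) > T_MELT else K_SOLID
                f = k * overlap * n - GAMMA * (vel[j] - vel[i])  # repulsion + damping, on j
                force[j] += f
                force[i] -= f
    vel += force / MASS * DT
    pos += vel * DT
    # crude laser heating: particles inside an assumed circular spot heat up
    in_spot = np.linalg.norm(pos - np.array([5.0, 5.0]), axis=1) < 2.0
    temp[in_spot] += HEAT_RATE * DT
    return pos, vel, temp

for _ in range(1000):
    pos, vel, temp = step(pos, vel, temp)
print(f"{np.sum(temp > T_MELT)} of {N} particles melted")
```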
The field of domain wall electronics is part of a broad effort to engineer novel electronic functionalities in complex oxides via nanoscale inhomogeneities. Conducting ferroelectric domain walls offer the possibility of writeable electronics, for which the conduction channels may be manipulated by external fields or strains. In this talk, I discuss a simple problem, namely how the shape of a conducting domain wall changes with the density of free electrons on the domain wall. I show that the competition between electrostatic forces and domain wall surface tension naturally leads to a zigzag domain wall morphology.
Silicon nitride (SiN) stands out as a promising material for the fabrication and design of integrated photonic devices applicable to precision spectroscopy, telecommunications, and quantum optical communication. Notably, SiN demonstrates low losses, high nonlinearities, and compatibility with existing CMOS technology. We will report on our lab's optimized process, guiding quantum devices from the fabrication stage to optical characterization.
Our methodology employs low-pressure chemical vapor deposition to generate stoichiometric silicon nitride. Notably, removing the backside of the nitride from the wafer significantly impacts achieving nominal values for the refractive index [1]. Understanding how the index changes with wafer and fabrication processing proves critical for predicting correct geometries and the associated group velocities required for realizing novel quantum technologies. The quantified propagation loss of our devices is measured at 1.2 dB/cm, with coupling losses at 2 dB/facet, aligning with the current state-of-the-art.
Furthermore, we've conducted device modeling and theoretical simulations to predict device performance. We employed the Lugiato-Lefever Equation, solving it using the split-step Fourier method [2]. Guided by our theoretical predictions, we initiated the fabrication of new resonators for optical frequency combs and solitons, subsequently moving these newly fabricated devices to the lab for characterization.
In conclusion, I will discuss how our progress in developing these novel devices can be applied to exciting applications [3].
[1] A. M. Tareki, et al., IEEE Photonics Journal. 15, 1-7, (2023) [2] T. Hansson, et al. Optics Communications, 312, 134-136 (2014) [3] M.A. Guidry, et al. Nat. Photon. 16, 52–58 (2022).
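As an illustration of the split-step Fourier approach mentioned above (not our simulation code), a minimal solver for a normalized Lugiato-Lefever equation might look as follows; the detuning, dispersion, and pump values are placeholders.

```python
import numpy as np

# Normalized Lugiato-Lefever equation (one common convention):
#   dA/dt = -(1 + i*alpha) A + i |A|^2 A - i (beta/2) d^2A/dtheta^2 + F
N, alpha, beta, F = 512, 3.0, -0.004, 1.6    # illustrative parameters
dt, n_steps = 1e-3, 20_000

theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=theta[1] - theta[0]) * 2 * np.pi   # azimuthal mode numbers
expL = np.exp((-(1 + 1j * alpha) + 1j * (beta / 2) * k**2) * dt)  # exact linear step

A = 0.5 + 0.01 * np.random.randn(N)          # initial intracavity field plus noise
for _ in range(n_steps):
    A = A + dt / 2 * (1j * np.abs(A)**2 * A + F)   # half step: nonlinearity + pump
    A = np.fft.ifft(expL * np.fft.fft(A))          # full linear step in Fourier space
    A = A + dt / 2 * (1j * np.abs(A)**2 * A + F)   # second half step

spectrum = np.abs(np.fft.fftshift(np.fft.fft(A)))**2   # comb-like intracavity spectrum
```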
Future quantum networks have significant implications for the secure transfer of sensitive information. A key component to enabling longer transmission distances in these networks is an efficient and reliable quantum memory (QM) device. QM devices enable the storage of quantum optical light and will be a vital component of quantum repeater nodes and precise quantum sensors. We will present signal-to-noise ratio (SNR) and bit-error rate (BER) performance metrics for a unique, dual-rail QM system housed in a deployable module.
Our setup utilizes a rubidium vapor cell operating at near room temperature under the conditions of electromagnetically induced transparency [1]. This effect allows optical light states to be coherently mapped into and out of a warm atomic ensemble. A dual-rail configuration is employed which permits the storage of arbitrary polarization qubits. We will report the capabilities of our memory as a device in visible light communication and its SNR and BER performance under various operating conditions such as memory lifetime and optical storage efficiency [2].
Furthermore, we will present the capability of this system for an on-off keying communication scheme by analyzing differential signaling between the rails. This is, to our knowledge, the first demonstration of an optical dual-rail memory utilized for this type of communication scheme.
Demonstrations utilizing these novel QM systems in established communication protocols will be key for quantum networks and the future quantum internet.
[1] Namazi, Mehdi et al., Phys. Rev. Appl. 034023 (2017) [2] J. De Bruycker, et al., 12th International Symposium on Communication Systems, Networks and Digital Signal Processing pp. 1-5 (2020).
A robust, reliable and field-deployable quantum memory device will be necessary for long-distance quantum communication and the future quantum internet [1]. An attractive implementation to meet these requirements is a warm vapour system operating under the conditions of Electromagnetically Induced Transparency. This technique is capable of storing and receiving quantum optical light states [2]. Our study investigates the temperature dependence of the storage lifetime for the D1 transition in Rb87 vapour. Rubidium is chosen for its favorable operational temperature and resonant wavelengths that are readily attainable from commercial light sources.
We employ a rack-mountable optical memory setup containing isotopically pure Rb87 vapour cells. Using spectroscopic techniques for temperature calibration [3], we explore a range of operating temperatures. Employing optical pulses of ~500 ns duration, we achieved storage decay lifetimes as long as 175 μs, which is a promising benchmark for this type of system. The measured storage lifetimes provide insight into the decoherence mechanisms that can affect optical memory performance. Lower operating temperatures can exhibit an increased coherence time due to reduced atomic motion, but also tend to lead to a decrease in memory efficiency due to lower optical depths.
These lifetimes demonstrate the potential for field-deployable systems in long-distance quantum communication schemes. Our results also underscore the importance of temperature control in quantum memory systems and offer practical insights for utilizing quantum architecture in both classical and quantum regimes in new and exciting applications.
[1] Mehdi Namazi et al., Phys. Rev. Appl. 18, 044058 (2022)
[2] Mehdi Namazi et al., Phys. Rev. Appl. 8, 034023 (2017)
[3] Li-Chung Ha et al., Phys. Rev. A 103, 022826 (2021)
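A minimal sketch of how such a storage lifetime can be extracted, using illustrative retrieval-efficiency data rather than our measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative retrieval-efficiency data vs. storage time (not measured values)
t_us = np.array([1, 5, 10, 25, 50, 100, 150, 200], dtype=float)   # storage time (us)
eff = 0.20 * np.exp(-t_us / 175.0) + 0.002 * np.random.randn(len(t_us))

def decay(t, eff0, tau):
    """Single-exponential memory decay model."""
    return eff0 * np.exp(-t / tau)

(eff0, tau), _ = curve_fit(decay, t_us, eff, p0=(0.2, 100.0))
print(f"fitted storage lifetime: {tau:.0f} us")
```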
The impact of ion dynamics in the sheath of argon DC plasma discharges is analysed. We show that, at moderate pressures where the ion mean free path is of the order of the sheath width (10-150 Pa), the spatial variations of the ion temperature have a strong impact on the sheath formation process, especially on the density profiles of plasma species and the mean velocity of the ions impacting the cathode. To demonstrate these findings, we compare simulation data of DC argon discharges obtained from a Particle-In-Cell 1D3V (one dimension in space and three dimensions in velocity) kinetic model with those of one-dimensional self-consistent fluid models. Simulations show that ion collisions with neutral atoms must be included in the fluid model to accurately simulate the discharge, especially in the sheath region, and that a self-consistent calculation of the ion temperature profile is necessary over the whole simulation domain. In particular, in the cathode sheath, where there is a large potential fall, the ion temperature can be several orders of magnitude larger than the background gas temperature despite the relatively large ion-neutral collision frequency in the considered pressure range. Kinetic simulations also show that ion-neutral collisions are responsible for a progressive spreading of the ion velocities in the directions perpendicular to the electric field in the cathode sheath.
Non-equilibrium plasmas at atmospheric pressure are often characterized by optical emission spectroscopy. Despite the simplicity of recording optical emission spectra in plasmas, the determination of spatially resolved plasma properties (e.g. electron temperature) in an efficient way is very challenging.
Hyperspectral imaging is a spectroscopic technique that combines optical emission spectroscopy with 2D optical imaging to simultaneously generate spectral and spatial mappings of optical emission. Using this technique, images are acquired over a wide range of wavelengths with narrow bandwidths, and a 2D spatial mapping of the spectral variation is generated within a reasonable time. Each pixel of the image ends up containing spectral information, and collectively, the pixels form a hyperspectral cube that comprises both spatial and spectral information.
In this presentation, we show spatially resolved optical images of a microwave argon plasma jet expanding into ambient air recorded over a wide range of wavelengths using a hyperspectral imaging system based on a tunable Bragg-grating imager coupled to a scientific Complementary Metal–Oxide–Semiconductor camera. The working principles of the system are detailed, along with the necessary post-processing steps. Further analysis of the spatial-spectral data, including Abel transform used to determine 2D radially resolved spatial mappings, is also presented.
Overall, we will show that the proposed approach provides unprecedented cartographies of key plasma parameters, such as argon and oxygen line emission intensities, argon metastable number densities, and argon excitation temperatures.
Considering that all these plasma parameters were obtained from measurements performed in a reasonable time, Bragg-grating-based hyperspectral imaging constitutes an advantageous plasma diagnostic technique for detailed analysis of microwave plasma jets used in several applications.
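As an illustration of the Abel-inversion step mentioned above (a simplified stand-in for the actual post-processing pipeline), a direct numerical inverse Abel transform can be written as follows; accuracy near the singular endpoint is limited by the simple trapezoidal treatment.

```python
import numpy as np

def inverse_abel(F, dy):
    """Numerical inverse Abel transform of a line-of-sight-integrated profile F(y)
    sampled on a uniform grid y = 0, dy, 2*dy, ...; returns the radial profile f(r)
    from f(r) = -(1/pi) * int_r^R F'(y) / sqrt(y^2 - r^2) dy (trapezoidal rule,
    starting just above the singular point y = r)."""
    n = len(F)
    y = np.arange(n) * dy
    dF = np.gradient(F, dy)
    f = np.zeros(n)
    for i in range(n - 1):
        yy = y[i + 1:]
        integrand = dF[i + 1:] / np.sqrt(yy**2 - y[i]**2)
        f[i] = -np.sum((integrand[:-1] + integrand[1:]) / 2.0) * dy / np.pi
    return f

# Quick check on a profile with a known inversion: a Gaussian emitter
# f(r) = exp(-r^2) has the projection F(y) = sqrt(pi) * exp(-y^2).
y = np.linspace(0.0, 5.0, 500)
F = np.sqrt(np.pi) * np.exp(-y**2)
f_rec = inverse_abel(F, y[1] - y[0])
i = 100
print(f"r = {y[i]:.2f}: recovered {f_rec[i]:.3f}, expected {np.exp(-y[i]**2):.3f}")
```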
A low-$\beta$ plasma is characterized by a dominance of magnetic energy over internal (kinetic) energy, where the magnetic pressure ($B^2/2\mu_0$) surpasses the kinetic pressure ($p$), confining the plasma within magnetic fields. Under specific conditions, low-$\beta$ plasmas adhere to Alfvén's theorem, wherein magnetic field lines remain 'frozen' within the plasma and move with it. These plasmas are commonly associated with magnetic confinement fusion reactors, star atmospheres, and plasma-based space propulsion technologies.
This talk aims to present findings from plasma acceleration simulations conducted using magnetohydrodynamics (MHD) and Particle-In-Cell (PIC) codes. The study will review Weber-Davis solar wind acceleration, following Parker's theoretical framework. Furthermore, various plasma acceleration modes have been studied, including the critical points responsible for accelerating solar winds from subsonic to supersonic velocities. Transitioning from solar winds to magnetic nozzle scenarios involves minor adjustments, leading to a convergent-divergent magnetic field configuration that converts plasma's thermal energy into directed kinetic energy.
Both PIC and MHD simulations are analyzed and compared to understand plasma acceleration modes, with a focus on torsional Alfvén waves, pressure-induced acceleration, and centrifugal confinement.
Ionospheric turbulence is studied extensively with satellite and rocket instrumentation and with ground-based radars. There are two distinct regimes: one concerns the E region below 130 km altitude and the other the F region above 150 km. The E region is often subjected to intense Hall currents, which lead to various instabilities dominated by a modified two-stream instability. F region instabilities are more slowly growing and cover much greater scales. In recent years we have come to understand that the dominant structures evolve in such a way that their electric field is reduced compared to the ambient electric field, matching threshold electric field conditions for which the growth rate is next to nil. These structures also heat the electrons, sometimes to the point that the heating rate exceeds the local classical Joule heating rate. Exceptions to the rules have also been found with narrow radar spectra, where the Doppler shift of the structures actually matches expectations from linear growth rate theory owing to the peculiar directions at which said structures are generated. With modern radars we can now localize decameter turbulence in relation to optical images of the aurora borealis and find that the structures are parallel to auroral arcs but not inside them, indicating stronger electric fields on the edges of the aurora. For the F region, we often observe far larger structures generated by slowly growing instabilities like the generalized Rayleigh-Taylor instability. In the equatorial region where such structures are generated, we find that structures up to 70 km in size decay at an ambipolar diffusion rate associated with much smaller 500 m structures, and conclude that the culprit is mode-coupling down to sizes for which classical diffusion is fast enough to offer a sink of wave energy. At higher latitudes we systematically observe steepening spectra, but only when and where the plasma is connected to a large E region plasma density produced either by solar illumination or energetic auroral particles.
We propose to investigate the breakdown of superfluidity in strongly correlated Li Fermi superfluids in a new trap geometry that allows for a long coherence time: a homogeneous box with one periodic boundary condition. We will achieve this by trapping atoms on the surface of a cylinder and introducing flexible barriers to superfluid flow. We report progress toward this new trap and prospects for future experiments.
We revisit and expand upon previous results (1) related to He$^{2+}$-Ne$_2$ collisions to analyse electron-removal processes resulting in dimer fragmentation. The standard independent-electron multinomial analysis of single- and multi-electron transitions is compared to a Slater-determinant-based analysis that accounts for the Pauli principle. For a projectile travelling parallel to the dimer axis, we account for electron capture by the projectile from the first atom it interacts with in the dimer. The results indicate strong agreement between the two analyses and confirm our previous prediction of a strong Interatomic Coulombic Decay (ICD) signal at low energies (~10 keV/amu).
For a He$^+$ projectile the total ICD cross-section is smaller, but there is no relevant competing process in the Ne$^+$-Ne$^+$ fragmentation channel. Measuring the kinetic energy release spectrum would therefore reveal a clear ICD signal.
(1) T. Kirchner, J. Phys. B 54, 205201 (2021)
The development of Photonic Crystal Fibers in the 1990s has led to considerable research on supercontinuum generation, in which nonlinear effects play a major role. Most simulation work modelling the nonlinear Raman effect has used the Generalized Nonlinear Schrödinger Equation (GNLSE), which is computationally efficient but lacks accuracy in broadband modelling due to its reliance on the Slowly Varying Envelope Approximation. Ultra-broadband spectra have, however, been modelled using other equations, such as the Forward Maxwell Equation (FME), which makes minimal approximations, and an equation developed by Silva, Weigand and Crespo (SWCE), another computationally efficient model used to simulate Cascaded Four-Wave Mixing.
Nonlinear media have also been employed for a recent amplification method called Kerr Instability Amplification (KIA). However, the only simulations of KIA to date have used the Forward Maxwell Equation. In this work, we simulated both of these effects using all three equations and compared them. We find that they all perform similarly in modelling the Raman effect, but the GNLSE exhibits noticeably lower amplification in KIA simulations. The SWCE shows results similar to the FME while being substantially more efficient. We expect that understanding how these equations compare in simulating these nonlinear effects will prove useful to the photonics community.
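For readers unfamiliar with the GNLSE, the sketch below illustrates the split-step Fourier scheme commonly used to integrate it, here reduced to second-order dispersion and the instantaneous Kerr term only (the delayed Raman response and self-steepening are omitted). All parameter values are illustrative placeholders, not those used in this work.

```python
import numpy as np

# Split-step Fourier integration of a simplified (G)NLSE:
#   dA/dz = -i (beta2/2) d^2A/dT^2 + i*gamma*|A|^2 A
# (Raman and self-steepening terms omitted for brevity; placeholder parameters.)

N, T_max = 2**12, 2e-12                     # grid points, half time-window (s)
T = np.linspace(-T_max, T_max, N, endpoint=False)
dT = T[1] - T[0]
omega = 2 * np.pi * np.fft.fftfreq(N, d=dT)

beta2 = -20e-27                              # group-velocity dispersion (s^2/m)
gamma = 0.1                                  # nonlinear coefficient (1/(W m))
dz, n_steps = 1e-3, 1000                     # step size (m), number of steps

A = np.sqrt(1e3) / np.cosh(T / 50e-15)       # initial 50 fs sech pulse, 1 kW peak

D = np.exp(1j * (beta2 / 2) * omega**2 * dz) # linear (dispersion) operator per step
for _ in range(n_steps):
    A = np.fft.ifft(D * np.fft.fft(A))           # linear part of the step
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)  # nonlinear (Kerr) phase

spectrum = np.abs(np.fft.fftshift(np.fft.fft(A)))**2   # output spectrum
```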
A quantum internet of connected nodes requires the ability to send single photons across vast distances, something not possible with current fiber-optic technology. A solution to this is the use of quantum repeater nodes, which rely on a trustworthy quantum memory (QM) device. We will present a deployable quantum memory system that uses Electromagnetically Induced Transparency for optical storage in a warm atomic vapour. We have characterized the storage lifetime, signal-to-noise ratio (SNR), and bit error rate (BER) of the memory, operating on the D1 transition manifold of isotopically pure $^{87}$Rb.
Using optical pulses of 500 ns duration, we obtained a storage lifetime of 175 µs. These lifetimes highlight the potential of our portable quantum memory system for long-distance quantum communication schemes. Further, our QM system has a dual-rail configuration that allows the storage of arbitrary polarization qubits. The dual-rail system allows us to quantify the SNR of two spatially distinct channels and to characterize the memory performance for on-off keying through use of a polarization differential.
Our poster will provide an overview of this novel system and highlight the capability of deployable QM systems for long-distance communication and possible future applications.
Variational calculations readily produce high-precision energies and wave functions for the ground state, but typically the accuracy deteriorates rapidly with increasing principal quantum number n. The current limit is n = 10 [1,2]. We will report the results of new variational calculations based on the use of triple basis sets in Hylleraas coordinates. The basis sets are "tripled" in that each combination of powers $i, j, k$ in basis functions of the form $r_1^i r_2^j r_{12}^k \exp(-\alpha r_1 - \beta r_2)$ is repeated three times with different nonlinear parameters $\alpha$ and $\beta$ that are separately optimized to span different distance scales. Results will be reported for the S- and P-states up to n = 24, including a comparison with high-precision measurements for n = 24 [3].
[1] G. W. F. Drake and Z.-C. Yan, Phys. Rev. A 46, 2378 (1992).
[2] D. T. Aznabaev, A. K. Bekbaev, and V. I. Korobov, Phys. Rev. A 98, 012510 (2018).
[3] G. Clausen et al., Phys. Rev. Lett. 127, 093001 (2021).
Blood disorders, such as low-iron anemia, affect almost one-third of Canadians. The symptoms range from extreme fatigue to shortness of breath. Given this, early detection is paramount. A common diagnostic test for anemia is the red blood cell count, but this test must be done in a lab and does not have an immediate turnaround time. High costs and limited availability of equipment and doctors in poorer countries make such a test a luxury. Our proposed research aims to address these challenges by creating a rapid, reliable, sensitive, specific, and affordable point-of-care device for quantifying hemoglobin (Hb) levels in real time using a photodetector. The significance of this research lies in its potential to revolutionize Hb disorder diagnosis by leveraging photodetector technology.
In this study, we use lab-built dye-sensitized solar cells as photodetectors and characterize their current response at a fixed voltage. In order to determine Hb levels in blood, we converted the photodetector's transmission response into quantifiable current readings based on Hb concentration. This involved developing a calibration curve of Hb concentration vs. current at a set voltage. From preliminary measurements, we found a linear relationship between the current and the concentration of hemoglobin present in the glass samples. Hb exhibits distinct optical absorption properties, which can be distinguished and measured using a photodetector. However, the device itself is not specific to Hb detection; further studies will therefore involve fabricating a test strip that adsorbs only Hb, enabling the quantification of Hb concentration in blood.
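As a minimal illustration of the calibration-curve step described above, the sketch below fits a straight line to hypothetical (concentration, current) pairs and inverts it to estimate an unknown concentration; the numbers are placeholders, not measured values from this study.

```python
import numpy as np

# Hypothetical calibration data: photodetector current (uA) at a fixed bias
# for known hemoglobin concentrations (g/L). Placeholder values only.
conc = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0])    # g/L
current = np.array([8.1, 7.2, 6.4, 5.5, 4.7, 3.9])         # uA

# Linear least-squares fit: current = slope * conc + intercept
slope, intercept = np.polyfit(conc, current, 1)

def concentration_from_current(i_measured):
    """Invert the calibration line to estimate an Hb concentration."""
    return (i_measured - intercept) / slope

print(concentration_from_current(5.0))   # estimated Hb level for a 5.0 uA reading
```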
Canada plans to use a deep geological repository (DGR) system consisting of corrosion-resistant used fuel containers (UFCs) and other barriers to store spent nuclear fuel safely. Concerns arise regarding potential fuel exposure to groundwater, leading to fuel oxidation and dissolution due to water radiolysis driven by residual fuel radioactivity. Radionuclides within spent fuel, primarily located in UO2 grains, can be released at rates set by fuel corrosion. Unlike the β- and γ-radiation dose rates, the α-radiation dose rate will remain high for extended periods, making the α-radiolysis of water the primary source of oxidants.
This study aims to explore α-particle interactions with fuel surfaces to determine their impact on UO2 dissolution rates. The methodology is to conduct in-situ α-irradiation-electrochemistry experiments using the Rutherford backscattering beamline on the Western Tandetron accelerator and sealed radiation sources to provide a constant flux of high-energy α-particles. As direct use of UO2 fuel pellets was impractical for this experimental setup, uranium oxide thin films were grown on metallic foil substrates via electrodeposition, using an aqueous electrolyte containing uranyl nitrate. Films were grown using current densities of 5-30 mA/cm2, pH = 7.5 to 8.5, and a temperature of 76 ± 1 °C.
To ensure the composition of the deposited films matched that of used fuel, a detailed characterization of the films was performed. Films showed a cauliflower-like morphology in SEM analysis, with uranium and oxygen presence confirmed through EDX. RBS measurements indicated film thicknesses in the 1-5 μm range. XRD showed that as-deposited films were amorphous, turning into polycrystalline UO2 films after annealing at 600 ˚C in 10^-6 Torr H2. Raman analysis detected U4O9 and U3O8 phases in the as-deposited films, while UO2 phases emerged in the annealed samples' spectra. Further characterization of the films, as well as preparation for in-situ α-irradiation-electrochemistry experiments, is currently underway.
Formation of microporous structures on a polymer surface leads to improved surface properties such as self-cleaning, anti-fogging, and antibacterial characteristics, as well as strengthened adhesion with metals. Femtosecond laser-induced microporous structures (fs-LIMS) are microscale features created using laser technology for subsequent metal deposition. However, their quality is heavily influenced by complex interactions between various laser processing parameters and material properties. Presently, the selection of appropriate laser parameters relies largely on the operator’s experience and requires laborious experimentation. To achieve a more efficient, rapid, and cognitively automated process, an integrated machine learning methodology is introduced for determining the optimal process conditions for fs-LIMS. This methodology commences with feature extraction from images captured by scanning electron microscopy (SEM) using a convolutional neural network (CNN). Subsequently, various dimensionality reduction techniques such as principal component analysis (PCA), multidimensional scaling (MDS), and t-distributed stochastic neighbor embedding (t-SNE) are employed to explore various analytical approaches. The k-means clustering method is then utilized to automatically classify the main characteristics (extracted from the various dimensionality reduction methods) of fs-LIMS into categories representing high, moderate and low quality. Among the diverse dimensionality reduction methods, PCA proves most effective, achieving a peak accuracy of 95.97% in a three-dimensional PCA model. Finally, based on the images labeled by PCA and k-means clustering, support vector machine (SVM), artificial neural network (ANN), and random forest (RF) algorithms are applied to predict the laser-processed outcomes. The results reveal that SVM attains the highest accuracy, at 92%. This study introduces a novel approach for identifying the optimal laser process conditions to create laser-induced microscale porous structures.
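A minimal sketch of the dimensionality-reduction / clustering / classification pipeline described above is given below, using scikit-learn. The CNN feature-extraction step is replaced by a placeholder feature matrix, and the array shapes and parameters are assumptions for illustration only, not the settings used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Placeholder for CNN-extracted features of N SEM images (e.g., 512-dim each).
rng = np.random.default_rng(0)
features = rng.normal(size=(300, 512))

# 1) Reduce to a 3D PCA representation (as in the three-dimensional PCA model).
reduced = PCA(n_components=3).fit_transform(features)

# 2) Cluster into three quality categories (high / moderate / low).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)

# 3) Train a classifier to predict the quality label from the reduced features.
X_train, X_test, y_train, y_test = train_test_split(
    reduced, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```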
The titanium oxide surface is responsible for many of the properties associated with the metal because it creates a hard, uniform, and thermodynamically stable protective coating over metallic titanium. Because of the characteristics of the oxide film, titanium has found uses in biomedical implants, aerospace engineering, industrial piping for corrosive environments, and other areas where high strength and low weight are required. Our project is aimed at understanding the atomistic mechanisms of TiO2 formation, including oxidation rates and the role of anodization potential on Ti oxide layer structure and morphology, using Rutherford backscattering spectrometry (RBS) for elemental depth profiling during oxide growth, complemented by other surface-sensitive techniques.
Our research involves using a specially designed in-situ cell with an ion-permeable silicon nitride window to provide a barrier between the ultra-high vacuum (UHV) required to perform RBS and the electrolyte solution required for electrochemical analysis and anodization. The thin silicon nitride window is coated with titanium and exposed to an electrolyte solution; RBS measurements are taken as the titanium metal is anodized to titanium oxide. In-situ anodization during RBS thus provides direct information about the growth mechanism of the titanium oxide. In-situ RBS results show a significant increase in the oxidation rate of titanium compared to equivalent ex-situ measurements, as well as spontaneous TiO2 film growth, without applied potential, in the presence of high-energy He+ particles interacting with the electrolyte solution. Additionally, a significant change is observed between benchtop electrochemical impedance spectrometry experiments and those performed under high-energy He+ flux. Direct and indirect alpha radiation exposure measurements are performed to determine the enhanced titanium oxide growth rate generated via radiation and radiolysis effects. The quantification of these effects allows for a reliable comparison of in-situ RBS anodization experiments with ex-situ benchtop anodization experiments.
Many fundamental scientific processes and engineering designs are affected by the presence of hydrogen, e.g. through hydrogen embrittlement. In order to understand fundamental issues in these materials and devices, the amount of hydrogen must be quantified, and a method with high sensitivity is critical to improving current hydrogen analysis techniques and understanding hydrogen-related processes. To this end, a new method called medium energy elastic recoil detection analysis (ME-ERDA) is adapted from two existing techniques – elastic recoil detection analysis (ERDA) and medium energy ion scattering (MEIS). ME-ERDA successfully detects hydrogen at surfaces and interfaces with a resolution of ~10 Å. An important aspect of the analysis is quantifying the amount of hydrogen in a material, a process which requires a calibration standard with a large, known amount of hydrogen. Hydrogen analysis methods will be improved by synthesizing calibration standards made of thin metal hydrides, particularly titanium hydride (TiHx), and quantifying the amount of hydrogen in the standards. This will be accomplished by depositing titanium on a Si (001) wafer via magnetron sputter deposition (Western Nanofab). The metal hydride will be formed using two methods: 1) annealing in a hydrogenated environment, and 2) galvanostatic polarization. Hydrogen depth profiles have been obtained using ERDA (Western Tandetron Accelerator Lab), secondary ion mass spectrometry (SIMS) (Surface Science Western), and ME-ERDA, with an emphasis on improving the resolution of ME-ERDA by adjusting the detector setup. To gain insight into hydrogen sensitivity and depth resolution for these techniques, a comparative analysis will be made between ME-ERDA, ERDA, and SIMS. This newly developed ME-ERDA technique and the establishment of hydrogen standards hold significant importance for future engineering applications requiring hydrogen depth profiling, as well as for advancing our fundamental understanding of hydrogen-related processes.
We introduce a novel concept for relational and discrete cyclic timekeeping for application in a quantum clock design. Taking inspiration from ancient timekeeping systems, we challenge the conventional use of continuous time by exploring a temporal space definable by finite Euclidean 1D geometry bounded by discrete, event-driven zero time points. In contrast to abstract continuous and infinitesimal time, our proposed quantum clock synchronizes the start/stop cycle with events in physical reality, offering a potential avenue to address challenges in quantum computing and discrete event simulations. Our approach is based on a temporal space that is bounded by a physical limit in time, where time can be precisely defined as zero [t = 0]. This temporal limit aligns with Planck’s limits and the Mohist definition of an “atomic,” representing an indivisible line. For instance, superposition phenomena occur precisely at t = Øt = 0, independent of space. In contrast to infinitesimal intervals proposed beyond our dimensional reality, our definition of temporal space is confined to our observable universe, relevant to normal matter. The concept highlights the contrast with relativistic modeling, emphasizing Rt relationalism's capability to separate space from time, offering a distinct perspective on temporal metrics within our observable reality.
Natural and artificial impulsive sources in the atmosphere can generate infrasound, or very low frequency (f<20 Hz) acoustic waves, that can travel over long distances with minimal attenuation. Traditionally confined to ground-based sensors, the domain of infrasound sensing has expanded in recent years to include airborne platforms (e.g., balloons). Unlike other sensing modalities that might have geographic (e.g., inaccessible regions), time-of-day (e.g., optical) or other limitations, infrasound can be utilized continuously (day and night) on a global scale. Volcanoes, lightning, chemical explosions, re-entry vehicles, space debris, and bolides are among the diverse sources producing infrasound phenomena. Among these, bolides present a particularly intriguing scientific challenge due to their varying velocities, entry angles, and physical properties. Theoretically, bolide infrasound signatures should carry information about the source (e.g., velocity, altitude, mass), but the dynamic changes in the atmosphere that occur on temporal scales of minutes to hours might lead to loss of that information. Therefore, to fully utilize infrasound for the characterization of bolides and similar sources, it is essential to have both detailed event ground truth and accurate atmospheric specifications. This information serves as the foundation for improving and validating models, with the ultimate goal of utilizing infrasound signatures alone to infer characteristics of the source. In this context, a succinct overview of bolide infrasound will be provided, complemented by notable examples, to elucidate its utility in atmospheric studies.
SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525
High-end microwave systems rely heavily on oscillators with minimal phase noise. The research work introduces a novel method to decrease phase noise by employing a gain-driven polariton platform. Through coherent coupling-induced mode hybridization, frequency distribution around the carrier signal is effectively suppressed.
The approach to achieve minimal phase noise performance will be shown using three prototypes. The first prototype is used to demonstrate the phase noise reduction mechanism (more than 25 dB). The second prototype, optimized to operate at a fixed frequency of 3.5 GHz, exhibits remarkable phase noise levels of -131 dBc/Hz and -133 dBc/Hz at 10 kHz and 100 kHz offset frequencies, respectively. The third prototype offers a tuning range from 2.1 to 2.7 GHz.
The research work merges gain-embedded cavity technology with YIG oscillator technology using cavity magnonics. The integration results in improved spectral purity, leveraging the synergy between the two mature technologies.
Erbium is one element in the globally recognized class of critical minerals, the rare earth elements (REEs). It is an essential component in various clean energy and modern technology applications, from nuclear control rods to infrared optics. Growing demand for these high-tech applications alongside geopolitical supply chain risks underscores the critical status of REEs. To address this, it is of interest to advance resource development through all available means, including both mining and recycling. In order to develop and maintain responsible resource management strategies, it is crucial to be able to reliably identify and quantify rare-earth-containing materials and to have a comprehensive understanding of their properties.
This work presents an effort to advance current methodologies surrounding the analysis and characterization of rare-earth-containing materials, with a focus on erbium. Using several analytical techniques such as X-ray Photoelectron Spectroscopy (XPS), Secondary Ion Mass Spectrometry (SIMS), and Rutherford Backscattering Spectroscopy (RBS), we are developing robust characterization procedures for various erbium-containing materials. By identifying subtle binding energy shifts and structural variances in the complex XPS signals of erbium compounds, we are developing novel and practical standard curve-fitting procedures. These fitting procedures will serve as reference data to allow for the future identification and Er content quantification of these compounds in unknown erbium-bearing materials. We are also exploring the fabrication of element-specific SIMS standards through ion implantation, as more representative standards will allow for more accurate quantification of those elements in materials. Using both Al-K𝛼 and high-energy Ag-L𝛼 XPS sources in conjunction with elemental mapping via Energy Dispersive X-ray spectroscopy, we have also identified several light REEs residing in interstitial grain boundaries between barite and calcite mineral grains within bastnaesite ore. Collectively, these techniques provide a strong foundation for our understanding of the composition, electronic structure, and surface chemistry of erbium-containing materials. These advancements are critical for optimizing extraction and recycling processes by increasing processing yield and efficiency and by reducing waste.
Despite its outstanding electronic properties, silicon has limited light emission capabilities due to its indirect bandgap. However, Si quantum structures (Si-QSs) exhibit light emission through quantum confinement. In this project, we investigate the co-implantation of silicon and germanium to create SiGe quantum dots (QDs). The relative concentration of Ge has a direct influence on the optical properties since the bandgap depends on it. Silicon ions at 40 keV were implanted into a 1 μm thermally grown $\mathrm{SiO}_2$ film on a Si (001) substrate to achieve a peak concentration of 17.5 at. % in relation to the matrix. The chosen energy placed the implanted peak 50 nm below the surface. Samples were subsequently implanted with 55 keV $\mathrm{Ge}^+$ to 0.5, 1.0, 2.0, 4.0, and 7.5 peak at. %, and thermally annealed to promote cluster growth and crystallization. The Ge implantation energy was calculated to put the Ge ion range at the same position as the Si ion range. For a second set of samples, $\mathrm{Ge}^+$ implantation was done after the $1100 \, ^\circ\mathrm{C}$ annealing necessary for Si QD growth. We therefore also studied the influence of annealing order on the properties of the samples. Structural properties were studied with Raman spectroscopy, and we observed a Ge-Si peak at $405 \, \mathrm{cm}^{-1}$ indicating the formation of Si-Ge bonds only for the second set of samples with 7.5 peak Ge at. %. The optical properties of these SiGe QDs were studied with photoluminescence in the visible and near-infrared, with emissions around 800 nm and 1000 nm for both sets. It was observed that PL intensity decreased in both sets of samples with increasing Ge content, and the samples with no annealing between implants exhibited more intense PL. The PL peak at 1000 nm shifts to shorter wavelengths with higher Ge at. %, which provides evidence of Ge incorporation in the Si QDs in both sets of samples. Finally, the emission was investigated using time-resolved photoluminescence (TR-PL), which showed that the lifetime decreases as the Ge concentration increases for both sets of samples.
Composition and optical properties of ion beam fabricated SiGeSn layers in Si (001)
A.W. Henry (a), C.U. Ekeruche (a), P.J. Simpson (b), L.V. Goncharova (a)
(a) Department of Physics and Astronomy, University of Western Ontario, London, Ontario, Canada, N6A 3K7
(b) Department of Computer Science, Mathematics, Physics, and Statistics, University of British Columbia, Okanagan Campus, Kelowna, British Columbia, V1V 1V7
Abstract
SiGeSn compounds, a unique class of semiconductors with the ability to engineer both the lattice parameter and band structure, have been investigated for their potential in monolithic integration of electronic and photonic devices. These materials have demonstrated potential in diverse applications, including lasing, thin-film waveguide fabrication, high electron mobility transistors, and fully-depleted MOSFETs. This study focused on the optical and electronic properties of a 200-400 nm SiGeSn layer in a Si (001) substrate. Various characterisation techniques, including Spectroscopic Ellipsometry (SE), Channeling Rutherford Backscattering Spectroscopy (c-RBS), Positron Annihilation Spectroscopy (PAS), and Scanning Electron Microscopy (SEM) with Energy Dispersive X-Ray Analysis (EDX), were employed. The RBS elemental depth distribution of SiGeSn was characterised, revealing successful implantation of Ge and Sn to their intended doses 5-80 nm below the surface, as well as different Ge and Sn distributions at various annealing temperatures and times. SE modelling, based on RBS compositional data, was conducted to investigate observed Ψ, Δ plot features. The models indicated an average implanted volume thickness of ~63 nm and increased near-IR absorption compared to crystalline Si. Growth defects were identified and quantified via c-RBS. The data showed increased substitutionality of Ge and Sn in annealed samples. This research underscores the promise of SiGeSn alloys for cost-effective and CMOS-compatible optoelectronic devices.
Transition Metal Dichalcogenides (TMDs) are layered semiconducting materials of the form MX$_2$, where M represents a transition metal atom and X represents a chalcogen. In the 1T structural phase, the chalcogens provide an octahedral environment for each metal atom. We propose a quantum loop model to explain the nature of bonding in these materials. We focus on metal atoms from group VI of the periodic table (e.g., MoS$_2$, MoSe$_2$, WS$_2$, WSe$_2$) which have two valence electrons in their $d$ orbitals. These electrons reside in t$_{2g}$ orbitals that point towards the six nearest neighbors on the underlying triangular lattice. We argue that these form covalent bonds that connect together to form loops. Loops can be formed in a large number of ways, leading to a resonating valence bond picture. We numerically enumerate all allowed loop configurations for small sizes of systems. We then construct a minimal effective Hamiltonian with local ‘potential energy’ and ‘kinetic energy’ terms. The kinetic energy term reflects processes where neighboring loops are cut and merged to form new loops, or a single loop changes shape. The potential energy term is due to the repulsion of proximate bonds. We construct a phase diagram, finding two prominent stripe-like phases. One of these closely resembles the 1T' structure, which is a well-known stripe-like distortion of the 1T phase. We discuss further tests of these ideas, e.g., in impurity-induced textures.
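Schematically (in our notation, as an illustration rather than an equation quoted from the abstract), such an effective Hamiltonian in the loop basis $\{|L\rangle\}$ takes the form $H = -t \sum_{\langle L,L' \rangle} \big(|L\rangle\langle L'| + \mathrm{h.c.}\big) + V \sum_{L} n_{\mathrm{prox}}(L)\,|L\rangle\langle L|$, where the kinetic term ($t$) connects configurations related by cutting and merging neighbouring loops or reshaping a single loop, and the potential term ($V$) counts the number of proximate bond pairs $n_{\mathrm{prox}}(L)$ in configuration $L$.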
In 2010, Sau $et~al.$ proposed that a topological superconducting phase hosting Majorana fermions can be realized in a semiconductor quantum well coupled to an $s$-wave superconductor and a ferromagnetic insulator. In the same year, Alicea proposed a simpler architecture for detecting Majorana fermions by applying an in-plane magnetic field to a (110)-grown semiconductor coupled only to an $s$-wave superconductor. Here we propose an alternative setup, wherein a superconductor is placed in proximity to a tilted Dirac material with a variable tilt parameter, in order to explore whether the system can be driven into a topological superconducting state. Success in creating topological superconductors would open these systems up as a uniquely flexible platform for topological quantum computation.
We present an open-source API and software package called SymPhas for defining and simulating phase-field models, supporting up to three dimensions and an arbitrary number of fields. SymPhas is the first of its kind to offer complete flexibility in user specification of phase-field models from the phase-field dynamical or free energy equations, allowing the study of a wide range of models with the same software platform. This is accomplished by implementing a novel symbolic algebra library with a rich feature set that supports user-defined mathematical expressions with minimal constraint on expression format or grammar. The symbolic algebra library uses C++ template meta-programming, meaning that all expressions are represented as a C++ type. Consequently, symbolic expressions are "static" and formulated at compile-time, including all rules and simplifications that are applied. This approach dramatically minimizes application runtime, particularly for complex models since branching is entirely eliminated from the symbolic evaluation step. Performance is also augmented via parallelization with OpenMP and the C++ standard library. SymPhas has been used to simulate a number of well-known phase-field models, most of which are available as examples [1], as well as generating large-scale training and test data for a machine learning algorithm [2].
[1] Silber, S. A. & Karttunen, M. SymPhas—General Purpose Software for Phase-Field, Phase-Field Crystal, and Reaction-Diffusion Simulations. Adv. Theory Simul. 5, 2100351 (2021).
[2] Kiyani, E., Silber, S., Kooshkbaghi, M. & Karttunen, M. Machine-learning-based data-driven discovery of nonlinear phase-field dynamics. Phys. Rev. E 106, 065303 (2022).
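SymPhas itself specifies models symbolically in C++ at compile time; as a language-agnostic illustration of the kind of phase-field dynamics it targets, the sketch below integrates a simple Allen-Cahn (Model A) equation with explicit finite differences. It is not SymPhas code and does not use its API.

```python
import numpy as np

# Allen-Cahn (Model A) dynamics: d(phi)/dt = eps2 * laplacian(phi) + phi - phi^3
N, dx, dt, eps2 = 128, 1.0, 0.05, 1.0
rng = np.random.default_rng(1)
phi = 0.1 * rng.standard_normal((N, N))    # small random initial condition

def laplacian(f, dx):
    """Periodic 5-point finite-difference Laplacian."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

for _ in range(2000):                       # explicit Euler time stepping
    phi += dt * (eps2 * laplacian(phi, dx) + phi - phi**3)
```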
High-harmonic generation (HHG) and attosecond-scale physics are important areas of current research, combining aspects of optical, atomic, molecular, and condensed-matter physics. In the past decade, the study of HHG has been extended from atomic gases to solids. HHG in solids does not follow the behaviour of atomic-gas HHG because of the added complexity of bulk inter-atomic interactions, which makes HHG in solids particularly suited to exploring properties such as the electronic band structure and spacing. While higher laser intensity allows for a higher-order HHG cutoff, the application of such high energies can also lead to heating or damage to the sample through processes whereby the induced electron excitations thermalize with the lattice and induce lattice disruption or structural change. This thermal damage is potentially a limiting factor in experiments, and therefore means of controlling it are of great practical interest. Here we present an initial study of the heating process following HHG in a solid-state scenario. We consider a simple two-level model exhibiting HHG via direct simulation of the time-dependent Schrödinger equation, through which we determine how the energy deposited by the high-intensity pulse heats the sample, and in turn the eventual thermalization of the excited electronic states. We explore the features of this model with varying pulse parameters (i.e. envelope, intensity, duration) to test the sensitivity of thermalization to the characteristics of the stimulation pulse. Finally, we discuss how these results may apply to more detailed models including the full electronic band structure.
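A minimal version of the kind of two-level calculation described above is sketched below: the time-dependent Schrödinger equation for a two-level system driven by a pulsed field is integrated numerically, and the emission spectrum is estimated from the Fourier transform of the dipole expectation value. All parameters (gap, field strength, pulse shape) are illustrative placeholders, not the values used in this study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-level system (atomic units): H(t) = (Delta/2) sigma_z - E(t) * d * sigma_x
Delta, d = 0.3, 1.0                      # level spacing and dipole matrix element
E0, omega_L, tau = 0.05, 0.057, 400.0    # field amplitude, laser frequency, pulse width

def field(t):
    return E0 * np.exp(-(t / tau)**2) * np.cos(omega_L * t)   # Gaussian pulse

def tdse(t, psi):
    a, b = psi[0] + 1j * psi[1], psi[2] + 1j * psi[3]          # ground / excited amplitudes
    H01 = -field(t) * d
    da = -1j * (-Delta / 2 * a + H01 * b)
    db = -1j * (H01 * a + Delta / 2 * b)
    return [da.real, da.imag, db.real, db.imag]

t = np.linspace(-1500, 1500, 20000)
sol = solve_ivp(tdse, (t[0], t[-1]), [1, 0, 0, 0], t_eval=t, max_step=0.5)
a = sol.y[0] + 1j * sol.y[1]
b = sol.y[2] + 1j * sol.y[3]
dipole = 2 * d * np.real(np.conj(a) * b)            # <psi| d*sigma_x |psi>
spectrum = np.abs(np.fft.rfft(dipole))**2           # harmonic emission spectrum
freqs = 2 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])
```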
Biomolecular self-assembly lies at the very heart of the function of living cells, where it organizes individual components into functional biological machines. The macromolecular sub-units typically correspond to proteins, whose shapes have been optimized over millions of years of evolution to ensure a proper functionality of the self-assembled structures. However, in pathological cases, proteins fail to achieve the optimal folding, which often leads to complex ill-fitting shapes. This produces geometrical incompatibility, which leads to frustrated interactions between the sub-units. Surprisingly, despite a huge variability in protein structure, such misfolded units tend to robustly self-assemble into aggregates with well-defined morphologies. Interestingly, these structures display a clear preference for slimmer topologies, such as fiber aggregates. This emergent principle of dimensionality reduction suggests that the aggregation of irregular components derives from the generic physical principles, rather than the microscopic details of the interactions.
Inspired by this idea, we model the frustrated self-assembly of ill-shaped proteins as coarse-grained anisotropic particles, whose interactions depend on their relative orientations and positions in space. This simple model successfully reproduces a hierarchy of aggregate morphologies and gives pointers to the origins of dimensionality reduction.
Despite power conversion efficiencies (PCE) above 25%, hybrid organic-inorganic metal halide perovskites (HOIPs) still face the significant challenge of poor long-term environmental stability, which hinders their further commercial application. Two-dimensional (2D) perovskites, derived from 3D perovskite structures, can be tuned at the atomic scale, giving electronic band structure tunability beyond that of 3D perovskites. In addition, they show much higher environmental stability compared to their 3D counterparts. However, the carrier photogeneration and transport mechanisms remain unclear.
Whereas the research community has mainly relied on traditional time-resolved optical spectroscopic techniques, including pump-probe and fluorescence approaches, to investigate carrier diffusion dynamics, we have used ultrafast photocurrent spectroscopy to investigate carrier drift dynamics. In this project, we have investigated 2D perovskite systems including type-1 and type-2 perovskites (e.g., (4Tm)2PbI4, (BTm)2PbI4, BA2PbI4) and elucidated the nature of the fundamental carrier photogeneration mechanism. Our work establishes a foundation for 2D perovskite applications in photovoltaics, photodetection, and LEDs.
Hybrid organic-inorganic metal halide perovskites have surpassed the power conversion efficiency (PCE) of silicon photovoltaics since they emerged about ten years ago. Because of their easy processing and fabrication under ambient conditions, they are a promising next-generation photovoltaic technology. To further improve PCE, one practical approach is to include an organic material that acts as a charge extractor from the perovskite photoactive layer. However, the cost of organic-based hole transporting materials (HTMs) and their limited long-term environmental stability could hinder further commercial application. Thus, there is an urgent need for a new approach that replaces the organic-based HTM within the PSC.
In this report, we have fabricated different types of PSC without the use of any organic HTM material. In particular, MAPbBr3-based perovskites are synthesized and used for HTM-free PSC applications. In addition, mixed halide perovskites are synthesized by varying the molar ratio of organic and inorganic precursors, which leads to perovskites with higher PCE and improved stability compared to the equimolar ratio of precursors. To further improve the PV performance, additive incorporation within the perovskites is also being explored. In the case of MAPbI3-based HOIPs, MAPbI3-20FACl shows the best PCE among the FACl compositions studied. On the other hand, for the all-inorganic perovskite CsPbIBr2, CsPbIBr2-10MACl shows a higher PCE than the other compositions.
Neutron stars are very dense objects that result from the death of a main-sequence star with an original mass between 8 and 25 solar masses. Studying the interior of these stars through events such as binary neutron star mergers can help explain the behavior of ultra-compact matter similar to that found inside an atomic nucleus. During these mergers, gravitational energy transfers to neutrinos which escape the stellar matter, carrying information about the equation of state of neutron stars with them. To test our understanding of nuclear matter in extreme conditions, we can compare neutrino yields detected in neutrino observatories on Earth to theoretical yields. Theoretical yields are calculated using binary neutron star merger simulations with different ultra-compact matter equations of state to account for the number of neutrinos produced during a merger. The three different equations of state used are SFHo, DD2, and NL3. This study sets out to determine if the equation of state of ultra-compact matter impacts the cosmic neutrino background, and, if so, if detection of this effect is possible in neutrino observatories. We found that the SFHo equation of state results in a significantly higher number of neutrinos emitted during the merger when compared to other equations of state.
Prior research into dense neutron-rich matter has revealed a remarkable phenomenon: the dynamic interplay between the nuclear strong force and the Coulomb force orchestrates the formation of complex structures referred to as nuclear pasta, exhibiting shapes such as spheres, slabs, and rods. This research employs semi-classical molecular dynamics simulations to investigate the response of nuclear pasta to the abundance of neutrinos generated in these environments. The diffusion of neutrinos through nuclear pasta structures proves pivotal in phenomena such as the explosion of stalled core-collapse supernovae (Type II SN) and the cooling of neutron stars. The coherent neutrino-nucleon transport scattering cross-section ($\sigma_t$) and the transport mean free path ($\lambda_t$) provide valuable insights into the transport properties specific to different pasta shapes. Our exploration enhances our understanding of how neutrino diffusion influences energy loss, potentially re-igniting a Type II SN, and contributes to understanding the cooling time-scales of neutron stars. This endeavour offers a unique glimpse into the behaviour of matter at densities comparable to that of an atomic nucleus, thereby enriching our understanding of the cosmos.
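For context (a standard kinetic-theory relation rather than a result from this work), the transport mean free path follows from the transport cross-section and the nucleon number density $n$ as $\lambda_t = 1/(n\,\sigma_t)$, so shape-dependent changes in $\sigma_t$ translate directly into changes in how far neutrinos diffuse between scatterings.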
The neutron electric dipole moment (EDM) is being extensively studied worldwide with the goal of improving its precision. At TRIUMF, the EDM precision goal is $10^{-27}$ e-cm, which is an order of magnitude more precise than the previous best measurement. The experiment will use a new high-intensity ultracold neutron (UCN) source and a newly developed neutron EDM spectrometer. UCN will be delivered to the EDM spectrometer through coated neutron guide tubes. The tubes will be coated with diamond-like carbon (DLC), which provides a high neutron optical potential ($\sim 250$ neV), reflecting neutrons from its surface with a minimal loss probability per bounce, so that the loss of UCN between the source and the EDM spectrometer is kept small. These factors are crucial to achieving the statistical precision required for the EDM experiment. This poster will present the UCN guide coating facility at The University of Winnipeg and the approach of pulsed-laser deposition of DLC. Surface analysis of our recent first coatings performed in Winnipeg will also be presented.
Decay spectroscopy stands as a pivotal tool in unravelling the intricate properties of atomic nuclei, offering unparalleled insights into the fundamental processes governing the decay of rare isotopes and shaping our understanding of nuclear physics. GRIFFIN (Gamma-Ray Infrastructure For Fundamental Investigations of Nuclei) is a world-leading facility for decay spectroscopy with rare-isotope beams, located at the TRIUMF-ISAC-I facility at the University of British Columbia campus in Vancouver, Canada. The GRIFFIN spectrometer is equipped with 16 high-purity germanium clover detectors coupled to a fully digital data acquisition system, enabling high gamma-ray detection efficiency and data throughput. ARIES (Ancillary detector for Rare Isotope Event Selection) is a new ancillary beta detector formed of a self-supporting array of plastic scintillator tiles which will fit inside GRIFFIN and aims to dramatically expand the beta detection capabilities within this experimental setup.
This poster summarizes the research conducted by a UBC physics student during an 8-month work term at TRIUMF. The focus of the work was the development of ARIES — a novel beta detector designed to enhance the capabilities of the GRIFFIN spectrometer. This research outlines the comprehensive characterization and testing of ARIES scintillator tiles and SiPMs, alongside the preliminary stages of construction and validation of a prototype. Through meticulous experimentation and analysis, this work contributed to the refinement and optimization of ARIES, ensuring its integration with the GRIFFIN spectrometer. The coupling of the new ARIES detector within the GRIFFIN spectrometer will facilitate deeper insights into the fundamental properties of nuclei and advance our understanding of the universe at its most fundamental level.
We have developed a longitudinal survey which aims to examine students' attitudes toward their programs and courses in the Department of Physics and Astronomy at McMaster University. We are building on previous survey projects which looked at students’ motivations when selecting their programs and the variations between different introductory physics cohorts. The long-term goal of this project is to evaluate the effectiveness of undergraduate physics programs. The results of the survey will be shared with the department and considered when making program changes. We intend for students’ experiences and perceptions to be a factor in curriculum improvement. This survey will be conducted yearly at the end of the winter term, with the first round of surveys in March-April 2024, and has been distributed to all students enrolled in an undergraduate physics program. As we aim to assess the entire physics student experience, the survey is divided into five sections: Student History, Current Coursework, Student Wellbeing, Plans After Graduation, and Demographic Questions.
The development of our survey involved contributions from department representatives and undergraduate students. We met with department representatives in the early stages of the survey construction to determine what kinds of information are important to program development. After completing the initial draft of the survey, we met with focus groups to get student feedback on the survey. These focus groups included ~15 undergraduate physics students ranging from first-year to 5+ year students. The overall response from the undergraduate students was that they are very motivated to have their opinions incorporated into department decision-making. We wanted to ensure that the survey included information relevant to the department while also giving the students an opportunity to express their feedback.
This work was funded by the MacPherson Institute Student Partners Program.
Over the past decade, physics education has increasingly emphasized computational skills for undergraduates. These skills offer many benefits, fostering problem-solving, analysis, and critical thinking applicable across various professions. This study delves into the relationship between computational activities and enhanced physics learning, specifically explored through coding exercises introduced in a second-year electricity and magnetism course. Students numerically computed vector derivatives for diverse fields, providing a basis for learning gains assessed through pre- and post-quizzes. Interviews with students during code development shed light on their thought processes, confidence levels, and alignment of computed results with their initial conceptions of vector fields.
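As an illustration of the kind of exercise described above (our example, not the actual course code), the sketch below numerically computes the divergence and the z-component of the curl of a 2D vector field sampled on a grid, using finite differences.

```python
import numpy as np

# Sample a 2D vector field F = (Fx, Fy) on a grid, e.g. F = (-y, x) (pure rotation).
x = np.linspace(-2, 2, 101)
y = np.linspace(-2, 2, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
Fx, Fy = -Y, X

# np.gradient returns derivatives along each axis; axis 0 is x and axis 1 is y here.
dFx_dx, dFx_dy = np.gradient(Fx, x, y)
dFy_dx, dFy_dy = np.gradient(Fy, x, y)

divergence = dFx_dx + dFy_dy     # expected: 0 everywhere for this field
curl_z = dFy_dx - dFx_dy         # expected: 2 everywhere for this field
```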
In this poster, I discuss the pedagogical efforts put forth in developing appropriate undergraduate labs while making use of phenomena studied in graduate physics courses.
This work is a part of the undergraduate lab revision process at the University of Waterloo and presents our efforts in bringing inquiry-based lab pedagogy to the upper year undergraduate students.
We discuss the reasoning behind taking specific concepts usually reserved for graduate-level courses, and define criteria and procedures to ensure our students' learning and agency are not superficial.
This investigation delineates the application of Integrated Information Theory (IIT) for the elucidation of neurodynamics underpinning inhibitory control mechanisms, as operationalized within Go/NoGo paradigms, utilizing electroencephalographic (EEG) methodologies to quantify the integrated information (Φ) parameter across the brain's visual and frontoparietal networks. Inhibitory control, a pivotal component of executive functions, facilitates the suppression of prepotent responses to external stimuli, thereby enabling goal-oriented behavior. Contemporary advancements in the domain of cognitive neuroscience have accentuated the efficacy of EEG in mapping the neural substrates of executive functionalities, with a specific focus on the cognitive and attentional correlates discernible within distinct EEG frequency bands. Employing IIT—a theoretical construct positing the emergence of consciousness from the integrated information produced by a network of interrelated elements—this study analytically examines EEG data from a cohort of 14 healthy participants engaged in Go/NoGo tasks. The objective is to delineate the association between the magnitude of integrated information (Φ) within specific neural networks and the proficiency of inhibitory control as evidenced by task performance metrics.
Initial findings indicate a pronounced correlation between elevated Φ values within the visual network and superior task performance, suggesting that augmented information integration within this network may underlie more efficacious inhibitory control. This association was not mirrored within the frontoparietal network, intimating a potential functional specificity of integrated information in relation to cognitive control mechanisms. This research augments the cognitive neuroscience literature by illustrating the applicability of IIT in empirical investigations, furnishing novel insights into the neural architecture of inhibitory control and its interplay with consciousness. Future endeavors should aim to refine methodologies for Φ quantification and broaden the scope of these findings across diverse cognitive tasks and demographic cohorts.
Healthy ears are not only sensitive and selective detectors of sound, but also emit faint sounds at amplitudes typically below the human hearing threshold. These sounds are known as otoacoustic emissions (OAEs) and are considered a byproduct of an active nonlinear amplification process arising from the collective dynamics of the sensory hair cells in the inner ear. OAEs can occur spontaneously in the absence of any stimuli (SOAEs) and can be evoked in response to external acoustic stimuli (eOAEs). It has been established that these emissions correlate with auditory perception. However, much is unknown about OAE generation, as well as the role of noise (e.g., Brownian motion of the fluids in the inner ear). Beyond setting a noise floor, noise could also contribute to improving sensitivity and/or selectivity, a notion we consider here. In order to address this, we extend an established model of coupled nonlinear oscillators (Vilfan & Duke, 2008) to simulate both SOAEs and eOAEs. In particular, we examine the model's response to tones, varying both level and frequency. We look at the impact that the addition of noise has on both the collective and individual responses of the oscillators, and on the sensitivity and selectivity of the system. This model provides insight into how the dynamics of the system as a whole contrast with those of a single part of the system. These insights could provide more information as to how hair cells work collectively to produce OAEs.
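A minimal sketch of the kind of coupled-oscillator model described above is shown below: a chain of Hopf-type (normal-form) oscillators with nearest-neighbour coupling, an external tone, and additive noise, integrated with an Euler-Maruyama step. Parameter values are placeholders, and the details differ from the Vilfan & Duke formulation.

```python
import numpy as np

# Chain of Hopf oscillators:
#   dz_j/dt = (mu + i*w_j) z_j - |z_j|^2 z_j
#             + k (z_{j+1} + z_{j-1} - 2 z_j) + F exp(i*w_drive*t) + noise
N, mu, k = 60, 0.1, 0.5
w = np.linspace(1.0, 3.0, N)         # graded characteristic frequencies along the chain
F, w_drive = 0.05, 2.0               # stimulus amplitude and frequency (placeholders)
D = 1e-3                             # noise strength
dt, n_steps = 1e-3, 100000

rng = np.random.default_rng(2)
z = 1e-3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

for step in range(n_steps):
    t = step * dt
    coupling = k * (np.roll(z, 1) + np.roll(z, -1) - 2 * z)
    drift = (mu + 1j * w) * z - np.abs(z)**2 * z + coupling + F * np.exp(1j * w_drive * t)
    noise = np.sqrt(2 * D * dt) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    z = z + drift * dt + noise       # Euler-Maruyama update

response = np.abs(z)                 # amplitude profile along the chain after driving
```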
Electrical stimulation around the ocular region of the head produces a phenomenon known as phosphenes. Phosphenes are the appearance of white flashes within the visual field when no light has entered the eye. This virtual light arises from the excitation of retinal neurons, triggering action potentials that travel through visual pathways to the visual cortex, leading to the perception of light. Because many visually impaired individuals retain surviving retinal neurons, they are able to perceive phosphenes. Current phosphene stimulation techniques involve either invasive or non-invasive application of current. This project focuses on non-invasive techniques because of their safety, ease of use, and lower cost compared to invasive surgical procedures. However, non-invasive techniques are limited in the spatial accuracy with which phosphenes can be produced in the visual field. This project will first focus on improving this accuracy, with the future goal of producing phosphene shapes. In addition, a plan to develop a mobile phosphene stimulator mask has been outlined. The mask will be worn as an aid by the visually impaired as they navigate the world. Lastly, a 3D simulation of the phosphene stimulator applied to a human's ocular area will be rendered using COMSOL Multiphysics software. This simulation will assess the current's path and dose, aiding further understanding of the phosphene phenomenon.
Tissue-tissue interface (compartment boundary) formation is an essential process during animal development and in disease. It has been shown that mechanical forces are important for both the establishment and maintenance of boundaries. For example, cables formed by actin and the molecular motor Myosin II are often found at compartment boundaries. However, how the boundaries are established or maintained during development and disease remains unclear. In the Drosophila (fruit fly) embryo, the mesectoderm tissue separates the ectoderm and mesoderm tissues, forming the ventral midline of the embryo. Eventually, mesectoderm cells are internalized, becoming part of the central nervous system. It has been shown that during tissue internalization, a tension-bearing supracellular cable is formed at the mesectoderm-ectoderm interface by Myosin enrichment. As the mesectoderm internalizes, this cable straightens even though Myosin levels decrease at the boundary. During the internalization process, the ectoderm cells continue to divide. We hypothesize that increasing cell divisions leads to an increase in “tissue fluidity” and that this fluidity defines the boundary shape, the internalization time, and the Myosin dynamics at the mesectoderm-ectoderm interface. To test this hypothesis, we used mathematical modelling together with in vivo manipulation of the boundary and image analysis. Our results suggest that the Myosin disassembly rate and tissue relaxation time control the internalization time and that tissue fluidity maintains the linearity of the boundary.
The aim of this study is to find a pathway to evaluate and predict the bioactivity of molecular chemicals using first-principles approaches. The main question is: why do molecules with similar structures demonstrate distinct bioactivity when interacting with bio-organisms? Addressing this question holds significance not only for applications to the environmental safety of chemicals and drug design but also for advancing our understanding of kinetic processes in complex systems. The objective is to elucidate such a macroscopic phenomenon through atomic-level physico-chemical properties and ultimately construct a theoretical model capable of delineating the corresponding structure-bioactivity relationships. A directional reactivity pathway has been developed in the first step of this study for preliminary estimation of a ligand's reactivity with the aryl-hydrocarbon receptor (AhR) protein. The general metabolic pathways, specific mechanisms of biochemical transformation, and relevant background knowledge for ligand-protein interactions have been considered. To investigate critical changes stemming from subtle deviations in molecular structures, we utilize two theoretical tools. Specifically, Density Functional Theory (DFT) is used for calculating the electron density distribution and related electric properties, such as dipole moments and localized electrophilicity. Meanwhile, Molecular Dynamics (MD) simulations are applied to investigate and visualize molecular kinetic interactions, emphasizing the influence of steric effects. Theoretical studies on ligand-protein binding orientation, probability, and equilibrium binding positions are conducted and presented, as well as their comparison with experimental bioassay results.
We previously introduced an innovative method to convert the co-variability of a pair of species in biochemical networks into biochemical reaction rates, without perturbation experiments or reliance on time-resolved data, and demonstrated the method numerically. However, our previous examples only addressed fluctuations in stationary states of models that overlooked cell division, approximating cellular growth as first-order dilution. In this work, we extend the method to non-stationary models involving growing and dividing cells. We provide numerical demonstrations in which fluctuations in non-stationary systems effectively enabled the inference of rate functions between stochastically interacting elements.
Background: Arterial occlusion is a ubiquitous medical procedure used in many clinical scenarios. However, there is no standard protocol for selecting the applied pressure. As different pressures may trigger different physiological responses, it is important to understand these differences.
Aim: The current work aims to investigate whether there is any difference in skin tone under occlusion at different applied pressures, and how that potential difference relates to tissue physiology.
Materials and Methods: We used remote photoplethysmography to record arterial occlusion events remotely with an iPhone camera. The hands of 10 healthy volunteers were occluded at the wrist by inflating a pressure cuff to either 150 or 200 mmHg. In the experimental setup, the subject sits with their hands placed side by side, palms facing down, on a raised platform with the iPhone positioned directly above; a white circle is also placed in the frame for normalization during processing. In each iteration of data collection, one hand is designated as the experimental hand and is occluded throughout, while the other acts as a control. A 7-minute continuous video was captured, consisting of three segments with different conditions applied to the experimental hand: rest, arterial occlusion (150 or 200 mmHg), and pressure release. This process is repeated with the right and left hands acting as the experimental hand (150 mmHg applied to the left experimental hand and 200 mmHg to the right). The recorded video footage is divided into three regions of interest (ROIs): one on the experimental hand, one on the control hand, and one around the white circle. For each ROI, the signal is averaged over each channel (R, G, and B) in each frame of each segment, creating a time series, and each channel of each hand ROI is then normalized using the white-circle time series. To generate colour data, the normalized RGB series is converted to the CIE XYZ colour space and further normalized to obtain chromaticity coordinates across time for each segment and each ROI (control and experimental).
Results: The preliminary results provide a good visualization of the colour changes that occur at the different levels of arterial occlusion (150 mmHg vs. 200 mmHg).
Conclusions: These preliminary results could allow proper arterial occlusion to be assessed from remotely recorded and interpreted colour data.
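The colour-space step described above can be sketched as follows, assuming the camera RGB has already been white-normalized and can be treated as linear sRGB; the transformation matrix below is the standard linear sRGB-to-XYZ (D65) matrix, which is an assumption about the pipeline rather than a detail stated in the abstract, and the example values are hypothetical.

```python
import numpy as np

# Standard linear sRGB -> CIE XYZ (D65) matrix.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def rgb_to_chromaticity(rgb_series):
    """rgb_series: (T, 3) array of white-normalized, linear RGB values per frame.
    Returns a (T, 2) array of CIE xy chromaticity coordinates."""
    xyz = rgb_series @ M.T
    s = xyz.sum(axis=1, keepdims=True)
    return xyz[:, :2] / s

# Example: a short hypothetical time series for one region of interest.
rgb = np.array([[0.55, 0.40, 0.38],
                [0.54, 0.41, 0.38],
                [0.52, 0.42, 0.39]])
print(rgb_to_chromaticity(rgb))
```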
The way chromosomes are spatially organized influences their biological functions. Cells orchestrate the action of various molecules toward organizing their chromosomes: chromosome-associated proteins and the surrounding “free” molecules, often referred to as crowders. Chain molecules like chromosomes can be entropically condensed in a crowded medium. A number of recent experiments showed that the presence of the protein H-NS enhances the entropic compaction of bacterial chromosomes by crowders. Using a coarse-grained computational model, we discuss the physical effects that H-NS and crowders bring about on bacterial chromosomes. In this discussion, an H-NS dimer is modeled as a mobile binder with two binding sites, which can bind to a chromosome-like polymer with a characteristic binding energy. Using the model, we clarify the relative roles of biomolecular crowding and H-NS in condensing a bacterial chromosome, offering quantitative insights into recent chromosome experiments. In particular, our results shed light on the nature and degree of the crowder/H-NS synergy: while the presence of crowders enhances H-NS binding to a bacterial chromosome, the presence of H-NS makes crowding effects more efficient, suggesting a two-way synergy in condensing the chromosome.
[1] Trofimenko, et al., Genetic, cellular, and structural characterization of the membrane potential-dependent cell-penetrating peptide translocation pore, eLife, 2021
[2] Lima, et al., Biological Membrane-Penetrating Peptides: Computational Prediction and Applications, Front. Cell. Infect. Microbiol., 2022
[3] Bereau, et al., Folding and Insertion Thermodynamics of the Transmembrane WALP Peptide, J. Chem. Phys., 2015.
[4] Pourmousa, et al., Molecular Dynamic Studies of Transportan Interacting with a DPPC Lipid Bilayer. J. Phys. Chem. B, 2013.
In the early stages of cancer, malignant cells are confined within the boundaries of a tissue. With the rapid division of these cells, large pressure gradients form across the borders. When the force resulting from this pressure overpowers the intercellular adhesion, the cells gain the ability to invade and spread through the adjacent tissues [1]. Understanding the details behind this process is of utmost importance, as it may lead to novel methods for the early detection or treatment of cancer. New experimental data hint at a connection between the cross-sectional area of the compressed cells within benign tumors and the likelihood of them spreading into bordering tissues. To investigate this relation further, we employ computational methods to study such systems in silico. Here we use the CellSim3D [2, 3] off-lattice model to study epithelial tissue growth in the presence of cancerous cells. CellSim3D allows the simulation of the mechanical aspects of growth, division, migration, and the interaction of cells with each other and their environment. Our study focuses on showing the connection between the area distribution of the cells within benign tumors and the likelihood of them metastasizing to new sites in the body. The system of study is an epithelial tissue grown from a single rigid cell, into which a limited number of softer cells are introduced, and the system is left to evolve. The observations from this system are consistent with the experimental findings and demonstrate the effectiveness of employing computational methods in studying malignant tissues.
References
[1] P. Madhikar, J. Åström, B. Baumeier, and M. Karttunen, Jamming and force distribution in growing epithelial tissue, Phys. Rev. Res. 3, 023129 (2021).
[2] P. Madhikar, J. Åström, J. Westerholm, and M. Karttunen, CellSim3D: GPU accelerated software for simulations of cellular growth and division in three dimensions, Comput. Phys. Commun. 232, 206 (2018).
[3] https://github.com/SoftSimu/CellSim3D
In computational neuroscience, understanding the multifaceted dynamics within neural networks remains a pressing challenge, particularly in the context of the brain's staggering complexity, comprising approximately 100 billion neurons. While traditional models have focused on local coupling and function, there is a growing consensus that a network-centric approach is indispensable for a comprehensive understanding of brain function. Against this backdrop, this study introduces a groundbreaking approach that leverages the mathematical principles of quantum mechanics to scrutinize neuroimaging data.
Our quantum-inspired model offers a novel framework for investigating network dynamics, complementing and extending existing network science methodologies. It provides a nuanced perspective on neural information flow, enabling a deeper understanding of brain network topology. A significant feature of this model is its ability to integrate neuro-energetics, thereby enriching our understanding of metabolic processes within these intricate networks. Particularly salient is the model's utility in examining the temporal dynamics of resting-state networks, which are key to understanding the brain's baseline functional connectivity. Using predefined neural networks as a template, our model dynamically tracks network behaviour, aligning with the focus on neural network inference from imaging data.
Notably, this approach uses source-localized electroencephalography (EEG) data, allowing for broader application in both clinical and research settings. A key expansion of this research is the incorporation of Physics-Informed Neural Networks (PINNs) to extend the capabilities of the quantum-mechanical framework. PINNs serve as a computational bridge, facilitating the integration of physical laws into the modelling process, thus enhancing the model's predictive accuracy and interpretability.
INTRODUCTION
MRI provides highly detailed images that enable healthcare professionals to assess the temporomandibular joint (TMJ) and its surrounding structures. While commercial MRI scanners typically come equipped with basic receive coils, such as the head receive array, RF coils tailored for specialized applications like TMJ MRI must be obtained separately. Consequently, TMJ MRI scans often rely on a suboptimal head receive array [1-4] due to the lack of specialized coils.
In this study, we introduce a simple, low-cost, and easy-to-reproduce wireless resonator insert to enhance the quality of TMJ MRI at 1.5 Tesla. The wireless resonator shows a significant improvement in SNR and noticeably better imaging quality compared to the head array alone in both phantom and in vivo images.
METHODS
Figures 1A and 1B show the head-neck receive array and a wireless resonator. Figure 1C illustrates the position of the wireless resonator for TMJ MRI. Figure 1D depicts the circuit diagram of the wireless resonator, which was tuned to 63.67 MHz; the passive detune circuit disables the wireless resonator during the transmit phase, similar to the designs in previous works [5-8]. The centers of the wireless resonator pads are aligned with the TMJ for optimal imaging performance. The body coil is used for RF transmission, while the head array is employed for RF reception.
We perform multiple tests to assess the performance with and without the wireless resonator inserted into the head array:
• The transmit field (B1+) map and RF power calibration for detuning effectiveness
• Phantom image for SNR measurement
• Volunteer image for clinical evaluation
The wireless resonator operates in receive-only mode, so modifying the scanner's default parameter settings is unnecessary.
Human procedures were approved by the local review board, and participants provided written consent. A safety test was conducted before imaging [9].
RESULTS
Figure 2 compares axial B1+ maps with and without the wireless resonator insert. The difference between these two B1+ maps is <1%. Additionally, the RF power change for a 180-degree flip angle was under 1.5% with and without the wireless resonator. These results affirm that the wireless insert remains highly transparent to RF power during the transmit phase.
In the context of TMJ MRI, where the focus is typically on anatomical structures such as the articular fossa, articular eminence, and disc, the average depth rarely exceeds 2.5 cm [10]. The SNR improvement (averaged over the red box in Figures 3C and 3D) achieved with the wireless resonator reaches up to 5.3-fold at this depth. The SNR improvement (averaged over the yellow box in Figures 3C and 3D) remains 2.4-fold even at a depth of 4 cm.
Figure 4 displays volunteer TMJ images acquired using multi-slice sagittal T1-weighted and proton density-weighted sequences. Combining the wireless resonator with the head array significantly improves image quality over using the head array alone. This aligns with our phantom study, where the wireless resonator consistently provided higher SNR. To achieve acceptable quality with just the head array, thicker slices or longer scan times are necessary [11].
DISCUSSIONS
We chose the head array instead of the body coil as the primary coil for the following reasons:
• It offers stronger mutual coupling, higher wireless power transfer efficiency, and lower coil loss.
• This choice combines large array coverage with high local SNR, aiding TMJ MRI localization.
• Parallel imaging functionality. Figure 5 shows g-factor computations performed on a phantom with R-L acceleration factors ranging from 2 to 4, together with a comparison to g-factors obtained from the head array alone. The majority of commercially available TMJ coils do not possess parallel imaging capabilities [11,12].
Prioritizing safety and comfort, a flexible printed circuit board coil is securely embedded in an MRI-compatible foam pad, which is only 20 mm thick and adaptable to different anatomical shapes. The coil operates exclusively in receive-only mode, avoiding interference with transmit signals and eliminating hotspots. For added safety, a fuse is included in the passive detune circuit in case of malfunction.
CONCLUSION
The combination of wireless RF resonators and phased arrays enhances SNR in specific regions and enables parallel imaging within existing MRI setups.
This approach could prove beneficial for imaging other anatomies, such as the thyroid, eye, and carotid artery. Different wireless RF resonators can also be integrated with diverse receive arrays to acquire extremity, breast, and body images tailored to specific anatomies.
Beyond using L/C resonators for wireless inserts, alternative solutions may involve volume-type wireless resonators or metamaterial-inspired designs [13,14].
This advancement ensures affordability, streamlined workflow, and flexibility across different magnetic field strengths.
REFERENCES
INTRODUCTION
Inductive RF coils provide a cost-effective and simple approach for creating wireless RF coils in MRI [1-5]. They streamline MR scan setup and enhance patient comfort by eliminating the need for bulky components like cables, baluns, preamplifiers, and connectors. However, volume-type wireless coils are usually operated in transmit/receive mode due to their complex structure and multiple resonant modes. Adding multiple detuning circuits to these coils would decrease the SNR and increase costs. In this work, we propose an innovative inductive wireless volume coil based on the Litzcage [6] design for 1.5 T head imaging.
METHODS
A uniquely designed wireless birdcage coil was constructed for head imaging, incorporating a Figure-of-Eight (Fo8) conductor pattern within its 16 rungs, each measuring 26.5 cm; the diameter of the cylindrical tube is 26 cm. Eight passive detune circuits were employed (Figure 1), and the equivalent circuit of the wireless coil is shown in Figure 2.
During the receive phase, the cross-diodes remain OFF and the wireless coil operates in the Litzcage volume resonator mode, as shown in Figure 2b.
In the transmit phase, uniform transverse magnetic field flux passing through the upper and lower segments of the Fo8 loops induces counteracting currents, successfully achieving geometric decoupling from the body coil. Furthermore, passive detune circuits are utilized for decoupling the remaining sections of the coil, as shown in Figure 2c.
To quantitatively evaluate the extent of RF transparency of the wireless coil to the body coil, a set of EM simulations was performed using a FEM-based Maxwell solver (Ansys HFSS) [7].
The wireless Birdcage and Litzcage coils were simulated on a cylindrical surface rather than replicating the complex domed structure for simplicity. To evaluate detuning performance, the B1+ of the body coil was compared in scenarios with and without the detuned wireless coils, and the coil configurations were documented in Table 1.
Table 1: EM simulation coil configurations
Coil configuration | Diameter / Height | Rungs | Detune circuits added
Body coil only (a) | 60 / 60 cm | 16 | N/A | N/A
Body coil + wireless birdcage | 26.5 / 26 cm | 16 | 16 (b) | 8 (d)
Body coil + wireless Litzcage | 26.5 / 26 cm | 16 | 16 (c) | 8 (e)
Note: (a) to (e) correspond to the simulation scenarios shown in Figure 6.
All MR measurements were performed using a 1.5T whole-body scanner (Siemens Sempra).
RESULTS
The wireless coil's operating frequency was 63.67 MHz. The unloaded Q-factor was ~350, dropping to ~35 when loaded with a human head.
The system's RF power calibration shows a minimal 0.2% difference with and without the wireless Litzcage coil, indicating its near invisibility in the transmit phase. This aligns with the simulation results in Figure 3e.
SNR maps were generated by processing gradient-recalled echo (GRE) images reconstructed from raw data. Individual receive channel images were combined using the commonly used "Sum-of-Squares" (SoS) technique. The wireless coil exhibited approximately 3.9 times higher SNR compared to the body coil. Notably, there was a 10% increase in SNR in the central region and a 21% decrease at the surface when compared to a 12-channel receive array, as depicted in Figure 4a-c.
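As a minimal illustration of the Sum-of-Squares channel combination mentioned above (array shapes and data below are placeholders, not the actual reconstruction pipeline):

import numpy as np

# Hypothetical per-channel complex images, shape (n_channels, ny, nx)
channel_images = np.random.randn(12, 256, 256) + 1j * np.random.randn(12, 256, 256)

# Root-sum-of-squares combination across receive channels
sos_image = np.sqrt(np.sum(np.abs(channel_images) ** 2, axis=0))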
Figure 5 shows T1/T2-weighted and FLAIR images for the same healthy female volunteer. The wireless Litzcage provided similar image quality to the commercial 12-channel wired local array. The high degree of image uniformity also indicates that the wireless coil was adequately detuned during the transmit phase, ensuring the uniform transmit field of the body coil remained unaffected.
DISCUSSION
The wireless coil is suitable for most applications without compromising patient safety in Rx-only mode. For specific areas like the knee and other body parts where phase wrap needs to be avoided, the Tx/Rx mode (which does not detune during the transmit phase) is appropriate. The wireless Litzcage coil has limitations for parallel imaging with the current MRI system setup; alternative approaches such as compressive sensing or deep learning techniques can be explored in such cases.
CONCLUSION
The domed wireless Litzcage coil offers comparable image quality to a wired receive array while being simple, lightweight, and cost-effective in design. This technology can be extended for application in MRI systems of 0.55T, 3.0T, and 7T. It is applicable for extremity, breast, and body imaging, enhancing patient comfort and allowing more flexible patient positioning. Different types of inductive wireless coils might outperform wired coils in MRI-guided intraoperative and interventional procedures, including laser and microwave ablation surgeries.
REFERENCES
Over the recent decades, molecular dynamics (MD) has emerged as a promising tool for investigating droplet interaction with surfaces and its local effects. We examine the classic problem of spreading liquid droplets on surfaces, which governs myriad processes ranging from coating and printing to biological systems. Spreading is usually studied using only the evolution of the contact radius $r$ with respect to time. In the complete wetting regime, the droplet fully spreads on the surface, whereas in partial wetting, it assumes a cap-shaped form signified by a contact angle. A simple power law of the form $r \propto t^{\alpha}$ can describe spreading in the complete wetting regime [1]. However, empirical parameters are often proposed to model the behavior if the droplet does not completely spread on the surface. Additionally, no universal exponent $\alpha$ has been established in either case. In this study, we use MD simulations of water droplets spreading on a Lennard-Jones surface with various degrees of wettability to investigate a new spreading model. A new method describing the contact radius of droplets in MD simulations is proposed and verified via comparison with established techniques. The applicability of the non-dimensional form of the model to both wetting regimes is explored. Furthermore, the dimensionless quantity $\frac{t}{r}\frac{dr}{dt}$, which reduces to the local exponent $\alpha$ for power-law spreading, is also discussed following the work of Stone $\textit{et al.}$ [2] The dimensionless form reveals the effect of surface wettability and its contact angle on the exponent $\alpha$. The results from our unifying approach to the study of spreading lead to a deeper understanding of the process, particularly in the partial wetting regime.
[1] Nieminen, J. A.; Abraham, D. B.; Karttunen, M.; Kaski, K. Molecular Dynamics of a Microscopic Droplet on Solid Surface. Phys. Rev. Lett. 1992, 69, 124–127.
[2] Bird, J. C.; Mandre, S.; Stone, H. A. Short-time Dynamics of Partial Wetting. Phys. Rev. Lett. 2008, 100, 234501.
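Relating to the spreading analysis above, the following minimal Python sketch (with synthetic placeholder data, not the MD results) illustrates how a global exponent $\alpha$ can be obtained from a log-log fit of $r(t)$ and how the local quantity $\frac{t}{r}\frac{dr}{dt}$ tracks it:

import numpy as np

# Hypothetical contact-radius time series; a real r(t) would come from the MD trajectory
t = np.linspace(1.0, 100.0, 200)                    # time (arbitrary units)
r = 2.0 * t**0.3 + 0.05 * np.random.randn(200)      # synthetic r(t) with alpha = 0.3

# Global exponent from a log-log fit of r ~ t^alpha
alpha_fit, log_prefactor = np.polyfit(np.log(t), np.log(r), 1)

# Local exponent alpha(t) = (t/r) dr/dt, i.e. the dimensionless form discussed above
alpha_local = (t / r) * np.gradient(r, t)

print(alpha_fit, alpha_local.mean())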
Introduction
Understanding how the polarization states of light are affected by the optical components in a confocal scanning light ophthalmoscope (CSLO) is essential for the development of a novel retinal polarimetry imaging instrument, to be used in in vivo retinal imaging for the detection of protein biomarkers of brain diseases. We measured, modeled, and investigated compensation of the changes in light polarization upon interaction with mirrors, lenses and beam splitters in the instrument.
Methods
The influence of different beam splitters (BS) on different states of polarized light (including linear, circular, and elliptical) was measured experimentally, using a standard Stokes polarimeter. The polarization states were measured without and with the components in the light path (λ=633nm). Interactions with polarized light were calculated from measurements. Additional effects of other previously used CSLO components, as a function of angle of view, were modeled using polarization ray tracing in CODE V, an optical design software package.
Results
The non-polarizing BS, mirrors, and lenses have significant (p_adj < 0.05) but relatively small effects on polarized light states. The dichroic BS (which separates different wavelengths) had a much larger effect, systematically reversing the handedness of the light (p_adj < 0.05). We discuss how the larger effects can be compensated and polarization states optimized when using polarized light to create visible retinal biomarkers of brain diseases.
Conclusions
To make retinal biomarkers of interest more visible, the large effects of dichroic beam splitters on the polarization states of light need to be compensated during measurements [1]. Other optical components have smaller effects which can be accounted for following the measurements.
References
[1] Bélanger, E., Turcotte, R., Daradich, A., Sadetsky, G., Gravel, P., Bachand, K., De Koninck, Y., & Côté, D. C. (2015). Maintaining polarization in polarimetric multiphoton microscopy. Journal of Biophotonics, 8(11-12), 884–888. https://doi.org/10.1002/jbio.201400116
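To illustrate the handedness reversal reported above in Stokes-Mueller terms, the sketch below applies an idealized handedness-flipping Mueller matrix (a stand-in for illustration, not the measured dichroic BS matrix) to a right-circular Stokes vector:

import numpy as np

# Stokes vector [I, Q, U, V] for right-circularly polarized light
s_in = np.array([1.0, 0.0, 0.0, 1.0])

# Idealized Mueller matrix that flips handedness (sign of V), e.g. an ideal mirror;
# the experimentally measured component matrix would replace this in practice
M_flip = np.diag([1.0, 1.0, -1.0, -1.0])

s_out = M_flip @ s_in   # -> [1, 0, 0, -1], i.e. left-circular polarization
print(s_out)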
Bacterial meningitis is a life threatening disease resulting from the bacterial infection of the meninges, which are the layers protecting the brain and spinal cord. The bacteria that cause this affliction can be diagnosed via a spinal tap where a sample of the patient’s cerebrospinal fluid (CSF) is taken and tested for bacteria. Currently, it can take up to three days to receive positive test results. Without swift diagnosis, this illness can progress to the point of irreversible brain damage and, in severe cases, death.
To address this issue of long diagnostic wait times and the corresponding prescription of inappropriate broad-spectrum antibiotics, the use of laser-induced breakdown spectroscopy (LIBS) as a rapid diagnostic is being investigated. LIBS can be performed very quickly and with minimal sample preparation. This technique can provide a rapid and accurate pathogen diagnosis by ablating a target specimen and measuring its elemental composition. During the ablation process, a high-temperature microplasma is created. The atomic emission from the plasma is collected and dispersed using a high-resolution Echelle spectrometer to produce a broadband elemental emission spectrum. The ratios of the elements measured have been shown to be unique to a variety of bacterial species. This technique has been used in the past to successfully detect the presence of bacteria in clinical specimens of blood and urine as well as to differentiate between four different species of bacterial pathogens in these fluids.
In the current work, artificial cerebrospinal fluid (aCSF)—a safe and synthetic fluid with ionic concentrations that mimic clinical CSF—was used. LIBS spectra were obtained from aCSF alone and compared with LIBS spectra acquired from aCSF into which known aliquots of several bacterial pathogens, which may include Staphylococcus aureus, Escherichia coli, Streptococcus mitis, Mycobacterium smegmatis, and Enterobacter cloacae, were added. The computerized chemometric algorithms and machine learning techniques used to classify the resulting spectra will be presented. The overall sensitivity and specificity of the diagnostic test will be discussed, as will the overall ability of the diagnostic to accurately identify the bacteria in the aCSF.
Urinary tract infections (UTIs) are the second most common infectious disease for which people seek treatment. The current gold standard for diagnosis requires the culturing of bacteria, a method that is time-consuming, costly, and can result in false-negatives. As an alternative diagnostic technique, laser-induced breakdown spectroscopy (LIBS) is being investigated for the rapid and accurate identification of pathogenic bacteria in clinical specimens of urine.
LIBS utilizes a nanosecond laser pulse to ablate a target, producing a plasma upon which spectroscopic analysis is performed. A broadband high-resolution Echelle spectrometer with an intensified-CCD camera allows for the measurement of a high signal-to-noise optical emission spectrum, which can be used to make a near-instantaneous determination of the elemental composition of a target. To simulate clinical UTIs, sterile urine specimens obtained from four patients at a local hospital were spiked with known concentrations of different bacterial species including Escherichia coli, Staphylococcus aureus, and Enterobacter cloacae. A partial least squares discriminant analysis (PLS-DA) performed on the spectra obtained from these specimens resulted in a 98.3% sensitivity and a 97.9% specificity for the detection of pathogenic cells in urine when single-shot LIBS spectra were tested. When the model was constructed using the average of thirty single-shot spectra acquired from a single target, a 100% sensitivity and a 100% specificity were obtained. Once a sample was identified as bacteria-positive, more advanced machine learning techniques were needed to differentiate the spectra acquired from the three bacterial species. The average sensitivity and specificity of an artificial neural network analysis with principal component analysis pre-processing (PCA-ANN) were 70.9% and 85.5%, respectively. Ongoing work to improve these discrimination algorithms will be presented, as well as efforts to improve the deposition method to increase the repeatability and improve the signal-to-noise ratio of the spectra acquired from urine specimens.
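As a minimal sketch of the PLS-DA detection step described above, using placeholder data and scikit-learn's PLSRegression with a thresholded score (a common way to implement PLS-DA, not necessarily the authors' code):

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Placeholder stand-in data: rows are single-shot LIBS spectra (emission-line intensities),
# labels are 1 = bacteria-positive, 0 = sterile control
X = np.random.rand(200, 15)
y = np.random.randint(0, 2, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# PLS-DA: fit a PLS regression on the binary label and threshold the continuous score
plsda = PLSRegression(n_components=3).fit(X_tr, y_tr)
y_pred = (plsda.predict(X_te).ravel() > 0.5).astype(int)

tp = np.sum((y_pred == 1) & (y_te == 1)); fn = np.sum((y_pred == 0) & (y_te == 1))
tn = np.sum((y_pred == 0) & (y_te == 0)); fp = np.sum((y_pred == 1) & (y_te == 0))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(sensitivity, specificity)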
Rapid pathogen detection is essential for controlling infectious disease outbreaks and minimizing healthcare-associated costs worldwide. For example, delays in the diagnosis of a pathogen present in the blood (bacteremia) can contribute to increased patient mortality if the infection progresses to sepsis. Laser-induced breakdown spectroscopy (LIBS) is a relatively simple and versatile elemental analytic technique that has demonstrated the ability to quickly identify bacterial pathogens in fluids with minimal sample preparation. In this study, LIBS showed high specificity and sensitivity in not only detecting the presence of bacteria in blood samples, but also discriminating between four different species using chemometric and machine-learning algorithms.
Blood samples obtained from patients at a local hospital were intentionally spiked with known aliquots of Escherichia coli, Staphylococcus aureus, Enterobacter cloacae, and Pseudomonas aeruginosa to simulate blood infections. After deposition of these samples on inexpensive, disposable filter media, approximately 30 single-shot LIBS spectra were acquired per filter. The intensities of fifteen emission lines from Ca, Mg, Na, C, and P were obtained from each spectrum, and these intensities were used as variables in the subsequent data analysis. Partial least squares discriminant analysis (PLS-DA) was used to discriminate between spectra acquired from sterile control samples and those infected with bacteria. This test possessed a 96.3% sensitivity and 98.6% specificity. The LIBS spectrum from 200 nm – 590 nm was input into an artificial neural network analysis with principal component analysis pre-processing (PCA-ANN) to diagnose the bacterial species once detected. PCA-ANN performed on these spectra returned an average sensitivity of 85.5%, an average specificity of 95.0%, and a classification accuracy of 92.5%. These results highlight the capability of LIBS to be a rapid and reliable method for the diagnosis of blood infections. This result has the potential to significantly reduce testing times compared to conventional laboratory methods, minimizing patient suffering and reducing global healthcare costs.
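A minimal sketch of the PCA-ANN species-classification step described above, again with placeholder spectra and illustrative component/layer sizes rather than the authors' actual model:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder stand-in data: each row is a broadband LIBS spectrum (e.g. 200-590 nm, binned),
# each label one of four bacterial species (0-3)
X = np.random.rand(300, 1024)
y = np.random.randint(0, 4, 300)

# PCA pre-processing followed by a small artificial neural network classifier
model = make_pipeline(PCA(n_components=10),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))

print(cross_val_score(model, X, y, cv=5).mean())  # cross-validated classification accuracy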
The discovery of the endothelial wall within human vasculature has significantly advanced our comprehension of cardiovascular physiology and pathology. Endothelial function, which governs vascular dilation and constriction in response to stimuli, is crucial for maintaining optimal blood flow dynamics. Any disruptions in endothelial function can lead to dysregulation of blood flow, ultimately contributing to the development and progression of cardiovascular diseases. The goal of this study is to evaluate responses elicited through induced ischemia using a blood pressure cuff and to analyze the reactive hyperemic response by optical means, namely photoplethysmography (PPG) and a muscle saturation oximeter. Existing applications of endothelial function assessment induce occlusion at 50 mmHg above systolic pressure. However, there is no gold-standard pressure for full arterial occlusion, so the current focus of this research is on refining the protocol, exploring different pressures, and attempting to enhance the strength of the signals produced by the PPG sensor and the muscle oximeter. Ten healthy participants were subjected to an 8-minute experiment that encompassed a 1-minute baseline reading, arterial occlusion at 150 mmHg for 3 minutes, and a recovery period of 4 minutes. The procedure was repeated at a pressure of 200 mmHg on the opposing hand. The primary focus of the analysis was the hyperemic response of the muscle oxygen saturation data, specifically the amplitude of the muscle oxygen saturation and the time to maximum saturation among participants. The first-derivative test was also applied to the hyperemic response, and analysis was completed on the peak time of the slope, the peak slope value, and the full width at half maximum. Overall, the two pressures provided no statistically significant differences, suggesting that for forearm occlusion, 150 mmHg can be used, as it is more tolerable and comfortable for participants.
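A minimal Python sketch of the hyperemia metrics described above (amplitude, time to peak, first-derivative peak, and full width at half maximum), using a synthetic placeholder trace rather than measured oximeter data:

import numpy as np

# Hypothetical muscle oxygen saturation trace during the post-occlusion (hyperemic) period
t = np.linspace(0, 240, 2400)                       # seconds
smo2 = 60 + 15 * np.exp(-((t - 40) / 25.0) ** 2)    # synthetic reactive-hyperemia bump (%)

peak_idx = np.argmax(smo2)
amplitude = smo2[peak_idx] - smo2[0]                # rise above baseline
time_to_peak = t[peak_idx]

dsmo2 = np.gradient(smo2, t)                        # first-derivative test
slope_peak_idx = np.argmax(dsmo2)
peak_slope, peak_slope_time = dsmo2[slope_peak_idx], t[slope_peak_idx]

half = smo2[0] + amplitude / 2                      # full width at half maximum of the response
above = np.where(smo2 >= half)[0]
fwhm = t[above[-1]] - t[above[0]]

print(amplitude, time_to_peak, peak_slope, peak_slope_time, fwhm)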
Precise diagnosis of Alzheimer’s disease (AD) is crucial to ensure timely intervention and evaluate patient prognosis. Although integrating multi-modal neuroimaging such as MRI and PET has the potential to improve diagnosis, challenges remain in effectively integrating multi-modal images. To this end, we propose a deep learning-based framework that uses Mutual Information Decomposition to obtain modality-specific information and combines attention mechanisms to learn the optimal multi-modal feature combinations. Our proposed framework includes three parts. First, we design a feature extractor for modality-specific information through mutual information separation. Second, we optimize the combination of modality-specific features by adding attention constraints. Third, we mitigate over-fitting of the model through multi-task learning to improve its generalization ability. Evaluation results on the ADNI dataset highlight the effectiveness of our method. Our work demonstrates the potential of effectively integrating multi-modal neuroimaging data for advancing early AD detection and treatment.
Biopolymers such as collagen and DNA play a fundamental role in cell dynamics, and many physiological functions rely on events that modify their structures. Understanding how mechanical force affects biopolymer structure and function at the molecular level could help elucidate how cellular and extracellular processes are regulated by external stimuli. Furthermore, single-molecule studies can provide mechanistic insight that is relevant at higher-order scales.
In this project, we are merging single-molecule imaging with mechanical manipulation by integrating Total Internal Reflection Fluorescence (TIRF) microscopy and Magnetic Tweezers (MT) to reveal how force and temperature drive mechanisms such as binding of regulatory proteins to biopolymers. Obtaining accurate data on the force exerted by MT is essential for the comprehensive characterization of polymer behavior and its interactions with other molecules. Therefore, the development of a tailored methodology adapted specifically to our experimental goals is crucial. My role in this project includes the configuration of the MT instrument and the refinement of data collection techniques to suit the requirements of our experiments.
The analysis of these single molecule measurements could elucidate mechanisms of regulatory protein binding events to collagen and DNA’s response to force and temperature. In addition, the establishment of these experiments generates a foundation for studies on many other biomolecular systems.
Introduction: The rise of the COVID-19 pandemic brought to light a stark disparity in the reliability of tissue oximeters, as racialized groups saw increased mortality rates due to undetected hypoxic conditions. Commercial tissue oximeters are based on Near Infrared Spectroscopy (NIRS) techniques that use a limited number of wavelengths of light to determine tissue oxygenation. However, it is well established that this approach is biased by high melanin content in the epidermis (i.e., in darker-skinned patients) due to its highly absorbing nature in the near-infrared region. Interestingly, the absorption spectrum of melanin has a monotonic decrease with wavelength, similar to the contribution of light scattering whose confounding effects can be mitigated by using dozens of wavelengths (i.e., hyperspectral oximetry). The objective of this study was to compare the accuracy of commercial and hyperspectral tissue oximetry in estimating the concentration of oxygenated hemoglobin (cHbO) in tissue-mimicking phantoms.
Methods: Solid phantoms of the human epidermis were made with varying concentrations of water-soluble nigrosine (a dye that mimics the absorption of human melanin) to replicate light, medium, and dark skin. These skin phantoms were placed on top of a liquid phantom that simulates fully oxygenated tissue conditions. Optical probes were secured to the surface of the skin layer for hyperspectral NIRS measurements. A no-skin condition (i.e., probes positioned in contact with the liquid phantom) served as the reference for assessing the accuracy of the two methods. Commercial oximeter data were emulated by isolating data from several wavelengths. cHbO levels were determined using spectral derivatives and the Beer-Lambert Law for the hyperspectral and commercial conditions, respectively. Statistical analysis employed independent sample t-tests (p < 0.05).
Results: Hyperspectral oximetry accurately determined cHbO in medium and dark pigmentation conditions, while the commercial system consistently underestimated cHbO across all skin tones.
Discussion: Hyperspectral oximetry is the superior technique to determine cHbO in darker skin pigmentation conditions. Current limitations of this work include the potential heterogeneity of the epidermal layers. Future work will seek to extend this comparison to more clinically relevant measures, such as tissue blood oxygen saturation.
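A minimal sketch of a Beer-Lambert-style estimate of cHbO from attenuation measured at a few wavelengths; the extinction coefficients, path length, and optical densities below are illustrative placeholders, not the study's calibration values:

import numpy as np

# Illustrative placeholder extinction coefficients at three NIR wavelengths (arbitrary units)
eps_hbo = np.array([1.0, 1.3, 1.6])    # oxygenated hemoglobin
eps_hb  = np.array([1.8, 1.1, 0.9])    # deoxygenated hemoglobin
path = 3.0                             # assumed effective optical path length

# Measured optical densities at the same wavelengths (placeholder values)
od = np.array([0.012, 0.010, 0.011])

# Beer-Lambert model: od = path * (eps_hbo*cHbO + eps_hb*cHb); solve for the two concentrations
A = path * np.column_stack([eps_hbo, eps_hb])
c_hbo, c_hb = np.linalg.lstsq(A, od, rcond=None)[0]
print(c_hbo, c_hb)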
Breast cancer is the most common cancer type in women accounting for ~25% of new cases of all cancers and 14% of cancer deaths in Canadian females. The metastatic spread of breast cancer cells from the primary tumour is the dominant contributor to mortality in these patients. The mechanisms by which cancer cells metastasize are diverse with suggestions that in vivo generated electric fields (EF) may contribute to directed breast cancer cell migration (electrotaxis)[1,2]. However, the mechanism of electrotaxis is unknown. Recently, new contactless electrotaxis assays have been developed and wirelessly applied AC EF were shown to alter the directional migration of breast cancer cells[3]. These results motivated us to further investigate the possible effect of wireless DC EF on breast cancer cell migration considering the endogenous occurrence of DC EF in tissues. We used 3D printing and biocompatible metamaterials to develop a wireless DC EF electrotaxis device, which allows for customized EF control and cell migration imaging. Multiphysics modeling characterized the DC EF in the cell chamber, offering improved reproducibility and consistency of EF application to the cells. Using this prototype device, we tested the migration of MDA-MB-231 cells, a human metastatic breast cancer cell line. Our preliminary results showed that the wireless DC EF altered cell migratory turning behaviors. The results of our ongoing research integrating experimental and modeling approaches will be presented. This metamaterial-assisted wireless EF device may be broadly useful for electrotaxis studies with the potential to enable novel therapeutic intervention strategies for cancers.
[1] C. McCaig, et al. “Controlling cell behavior electrically: current views and future potential” Physiological reviews vol. 85,3 (2005):943-78
[2] D. Wu, et al. “DC electric fields direct breast cancer cell migration, induce EGFR polarization, and increase the intracellular level of calcium ions” Cell biochemistry and biophysics vol. 67,3 (2013):1115-25
[3] D. Ahirwar, et al. “Non-contact method for directing electrotaxis” Scientific reports vol. 5 (2015):11005
In lungs, a lipid-protein surfactant layer enables breathing by reducing surface tension at the air-water interface. Lung surfactant function requires cycling of material between bilayer reservoirs and an active surfactant layer, presumably involving transient formation of highly curved lipid structures. Surfactant protein SP-B, a 79-residue protein that forms homodimers, is essential for lung function. To study how lipid-SP-B interactions might contribute to the implied lipid assembly reorganization, GROMACS molecular dynamics simulations were used to study the conformation and orientation of SP-B fragments SP-B$_{1-9}$ and SP-B$_{1-25}$ in DPPC/POPG (7:3) model lipid bilayers. SP-B$_{1-9}$ includes SP-B’s initial 7-residue insertion sequence. SP-B$_{1-25}$ also includes the first of SP-B’s four amphipathic helices. To obtain averages and test for protein-induced bilayer perturbation, each simulation involved multiple copies of each fragment (18 SP-B$_{1-9}$ or 9 SP-B$_{1-25}$ copies per bilayer leaflet). Simulations were also run with no peptide and with only one copy of SP-B$_{1-9}$ per leaflet. The simulation with multiple copies of SP-B$_{1-9}$ started with randomly oriented peptides inserted in the lipid bilayer and ran for 450 ns. SP-B$_{1-25}$ simulations started with (i) randomly oriented peptides, (ii) peptides oriented along the bilayer surfaces, and (iii) peptides inserted across the bilayer (trans). On average, SP-B$_{1-9}$ was found to tilt only slightly into the bilayer, with its N-terminal phenylalanine staying within about 0.75 nm of the bilayer phosphate layer. Comprising the insertion sequence plus the first SP-B helix, SP-B$_{1-25}$ tended to remain in the trans-bilayer orientation when started in that orientation. If started roughly parallel to the bilayer surface, SP-B$_{1-25}$ tended to settle into non-trans orientations but with excursions toward the trans configuration. When starting from random peptide orientations, SP-B$_{1-25}$ largely settled into a mixture of trans and surface orientations. SP-B$_{1-25}$ is nearly the first third of the SP-B monomer. Its capacity to be accommodated in both trans-bilayer and single-leaflet environments may reflect SP-B’s role in promoting the lipid assembly reorganization implied by the cycling between bilayer reservoir and surface-active layer structures. Supported by NSERC, ACENet, and the Digital Research Alliance of Canada.
Introduction
Positron Emission Tomography (PET) is the gold standard for imaging CMRO2, the cerebral metabolic rate of oxygen (Fan et al., NeuroImage, 2020). However, the procedure requires up to three radiotracers and invasive arterial sampling. Incorporating MRI techniques can simplify the procedure (Ssali et al., JNM, 2018; Narciso et al., Phys Med Biol, 2021). PET/MR imaging of oxidative metabolism (PMROx) uses whole-brain (WB) CMRO2 measured by MRI to calibrate simultaneously acquired [15O]O2-PET data, eliminating arterial sampling. Arterial spin labeling (ASL) can be used to replace [15O]H2O-PET, reducing the requisite number of radiotracers to one. PMROx is non-invasive yet maintains the ability of PET to quantify the oxygen extraction fraction (OEF). CMRO2 can be imaged under different metabolic states (e.g., rest and during a functional task) in a single imaging session as the acquisition time is ~5 min. The accuracy of PMROx was previously demonstrated in a porcine model (Narciso et al., JNM, 2021). The aim of the current work was to present initial data translating PMROx to human participants.
Methods
Data were acquired from n = 13 healthy subjects on a Siemens 3T Biograph mMR scanner. Five minutes of list-mode PET data were acquired after inhalation of ~2000 MBq of [15O]O2 while measuring WB CMRO2 by MRI (Jain et al., J Cereb Blood Flow Metab, 2010) at rest and during a finger-tapping task. PET images were reconstructed using MR-based attenuation correction maps, motion-corrected, and smoothed by a 4 mm Gaussian filter. Pseudo-continuous ASL (TR/TE: 4210/37.86 ms, post-labeling delay: 1.7 s, labeling duration: 1.5 s) was collected during the PET acquisition. ASL images were motion-corrected and smoothed by a 6 mm filter. All images were pre-processed in SPM12 and calculations were completed with in-house MATLAB scripts.
Results
Results were normalized to Montreal Neurological Institute (MNI) atlas space. Average resting CMRO2 across subjects was 4.5 and 3.5 mL/100g/min for grey and white matter, respectively. CMRO2 was observed to increase ~25% in the motor cortex during tapping.
Discussion
This preliminary study demonstrates the feasibility of imaging CMRO2 in a span of 5 minutes by combining [15O]O2-PET with MRI, reducing the requisite number of radiotracers to one and eliminating arterial sampling. CMRO2 values were in good agreement with literature values.
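For context, the standard Fick relation linking the quantities discussed above (the abstract does not spell out its exact calibration equation) is

$$\mathrm{CMRO_2} = \mathrm{OEF} \times \mathrm{CBF} \times C_aO_2,$$

where CBF is the cerebral blood flow (here obtained from ASL) and $C_aO_2$ is the arterial oxygen content.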
Breast cancer is the leading cause of cancer in women worldwide and surgery to remove the tumour and stage the cancer is a crucial component of most treatment plans. Radioguided surgery using a hand-held gamma probe that counts gamma photons is a common technique that allows surgeons to locate non-palpable, radiolabeled lesions in the operating room. While gamma probes are effective for detection of low-energy radiotracers, increased scattering at higher energies degrades the probe’s resolution and limits the use of high-energy radiotracers, such as positron emitters. Understanding the physics of photon interactions and their influence on the shape of a detected gamma-ray energy spectrum, we hypothesized that a machine learning model could analyze the energy spectrum recorded by a gamma probe and predict the location from which the gamma photons originated. As such, the goal of the study was to assess how well machine learning improves a gamma probe’s ability to localize high-energy radiotracers.
Using Monte Carlo simulations, we modeled a custom designed multifocal gamma probe featuring a 4-segmented collimator and detector. To simulate surgery, a 511 keV radioactive point source was embedded 35 mm below the probe in phantom breast tissue. The source was positioned at various known x, y locations and a 4-channel energy spectrum was recorded. Simulations were repeated 300 times, and the data was split using an 80:20 ratio for training and testing. A 1D convolutional neural network (CNN) was trained to analyze the recorded energy spectra and predict the x, y location of the radioactive source.
The CNN was able to effectively predict the location of the radioactive source from a 4-channel energy spectrum of a multifocal gamma probe. As desired, there was a strong linear relationship (R2 = 0.93) between the true and predicted coordinate locations. The CNN had a small mean prediction error of 2.9 mm and could predict the location of the radioactive source over a large 40x40 mm field of view. The CNN predictions improved the resolution of the multifocal gamma probe by at least 10-fold compared to existing gamma probes. Overall, this work presents a new, real-time localization technique that offers higher resolution and more efficient directional guidance for detecting high-energy radiotracers with a hand-held gamma probe in surgery.
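A minimal PyTorch sketch of a 1D CNN that regresses a source (x, y) position from a 4-channel energy spectrum, as described above; the layer sizes, number of spectral bins, and placeholder inputs are illustrative assumptions, not the authors' architecture:

import torch
import torch.nn as nn

class SpectrumCNN(nn.Module):
    def __init__(self, n_bins=256):
        super().__init__()
        # Two convolution/pooling stages over the 4-channel spectrum
        self.features = nn.Sequential(
            nn.Conv1d(4, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        # Fully connected head that outputs the predicted (x, y) coordinates
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(32 * (n_bins // 4), 64), nn.ReLU(),
                                  nn.Linear(64, 2))

    def forward(self, x):
        return self.head(self.features(x))

model = SpectrumCNN()
spectra = torch.rand(8, 4, 256)   # batch of simulated 4-channel spectra (placeholder)
xy_pred = model(spectra)          # predicted source coordinates, shape (8, 2)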
Clinicians typically rely on self-reporting outcomes to assess an arthritis treatment’s efficacy, leaving reported data subject to bias. Thus, there is need for devices that can measure and record patients’ daily activity to help assess patient recovery. Inertial measurement units (IMUs) combine an accelerometer and gyroscope to quantitatively measure physical motion; machine learning (ML) algorithms are well-suited to perform detailed time series analysis of the IMU data to identify specific patient activities. The objective of this study is to verify if the LSM6DSOX IMU, which contains a Machine Learning Core (MLC), can classify activities in real-time based on the 3D motion data from the IMU.
Training data were acquired from 5 participants using a single IMU attached at the base of each participant's spine; the IMU measured 3 accelerations (in g) and 3 angular velocities (in degrees/s). Data were collected as the participants performed a series of activities and were then segmented and labelled based on activity type. The MLC is a binary decision tree model that uses a sliding-window approach to contextualize sequences of consecutive time-series data. Scalar features are extracted from each data window to provide representative inputs to the classification model. These features had to be manually selected to help differentiate between the target activities, using knowledge of the unique patterns produced in the data by different motions. After training, the model's accuracy was checked on the test set.
Making predictions on the test set resulted in a test accuracy of 96% and precision scores of 99% for stationary motion, 100% for walking, and 86% for running. The results demonstrate that a single IMU with a built-in MLC can effectively analyze and identify patterns in complex 3D motion time-series data to correctly classify various physical activities. Using edge AI allows for low-power operation, minimizes storage demands, and maximizes privacy, making it ideal for long-term remote applications. This work could lead to a low-cost motion data acquisition system that provides objective activity data and could significantly improve the assessment of therapies for musculoskeletal conditions, like arthritis.
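A minimal sketch of the sliding-window feature extraction and decision-tree classification described above, using placeholder IMU data and scikit-learn as a stand-in for the on-chip MLC (window length and feature choices are illustrative assumptions):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Placeholder IMU recording: columns = 3 accelerations (g) + 3 angular velocities (deg/s)
imu = np.random.randn(6000, 6)
labels = np.random.randint(0, 3, 6000)   # 0 = stationary, 1 = walking, 2 = running (placeholder)

def window_features(data, win=52):
    """Summarize each sliding window with simple scalar features (mean, variance, peak-to-peak)."""
    feats = []
    for start in range(0, len(data) - win, win):
        w = data[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.var(0), w.max(0) - w.min(0)]))
    return np.array(feats)

X = window_features(imu)
y = np.array([np.bincount(labels[s:s + 52]).argmax() for s in range(0, len(labels) - 52, 52)])

# A shallow decision tree mirrors the binary-tree classifier that runs on the IMU's MLC
clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
print(clf.score(X, y))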
Detection of trace concentrations of small molecules in various media is necessary for many applications and uses, creating a need for the development of sensitive and reliable detection methods. Surface-enhanced Raman spectroscopy (SERS) offers a promising solution due to its ability to provide fingerprint-like spectra of molecules, enabling precise identification even at trace concentrations. In this study, we present the fabrication and testing of a SERS sensor tailored for the detection of small molecules such as benzoyl peroxide, gibberellic acid and salicylic acid.
Thin-film gold nanostructures were fabricated using the pulsed laser deposition technique. The thin film-like gold nanoparticle substrates, crafted through a top-down approach, exhibit elevated stability, sensitivity, enhanced accuracy, and heightened precision in measurement compared to colloidal solutions. Their controlled composition, thickness, and other properties facilitate uniform interaction between analyte and substrate, ensuring dependable and consistent performance across experiments.
The sensor's performance was evaluated by testing different concentrations of small molecules. The SERS sensor exhibited high sensitivity and specificity, allowing for the detection of small molecules at trace levels.
Furthermore, the efficacy of the SERS sensor was validated through testing on real small-molecule samples. Complex sample matrices were analyzed, and the sensor successfully detected and identified the molecules present, demonstrating its practical utility in various applications.
Overall, our results highlight the potential of SERS-based sensors as powerful tools for the rapid and reliable detection of small molecules, ensuring the integrity of various media.
In this poster, a newly built capacitively coupled radio-frequency argon discharge is used to explore the behaviours of pristine plasma, dusty plasma, and misty plasma. The primary objective is to analyse the behaviour of the discharges in various regimes, including the single-drop and burst liquid-drop injection regimes. Ultimately, the project aims to advance our understanding of aerosol-assisted plasma processes and their broad applications in nanomaterial synthesis and thin film deposition.
The interactions between the plasma and microscopic liquid droplets ("misty plasma") are of particular interest. Indeed, the droplet charging mechanism and the trapping of the droplets are still open issues in misty plasmas. The project specifically focuses on examining the evolution of the capacitively coupled argon discharge under diverse conditions: pristine discharge, microdroplet injections using different liquids, and nanoparticle and dust injections, including ZnO (6 nm) and SiO2 (20 nm). Essential parameters such as droplet size, quantity, and injected species will undergo systematic testing and comparison to understand their effects on the discharge behaviour. The characterization of the capacitively coupled plasma will involve various techniques, including microwave interferometry, optical emission spectroscopy, and electrical diagnostic measurements.
Discharges in liquids are a growing field of study in the cold plasma community. The non-equilibrium properties of such plasmas enable the production of reactive species in the liquid phase, which trigger chemical reactions not accessible through conventional chemical processes. Such unique properties make in-liquid discharges promising for different applications, namely liquid depollution, dye degradation, or nanoparticle synthesis. The ignition of a discharge in liquid is not straightforward due to the high liquid density, and pulsed high voltages with fast rise times are usually required. For instance, in deionized water with a pin-to-plate electrode configuration separated by a gap of ~300 μm, a voltage of ~20 kV is needed to ignite a discharge. The characterization of such a small discharge, e.g. by imaging, is challenging. Further, the strong discharge emission hinders fundamental understanding of the discharge development. More recently, we have demonstrated that discharge ignition can be facilitated in water by adding a layer of low-dielectric liquid on top of the water. This is due to the difference in the dielectric permittivity of the two liquids, which enhances the electric field magnitude and thus allows the ignition of a discharge several mm in length (up to 4 mm at 20 kV amplitude and a pulse width of 500 ns). In this study, we present the characteristics of such discharges using different imaging techniques. First, we used 1 ns time-resolved ICCD imaging of the discharge emission in the visible range. Second, backlight imaging with a high-speed camera was used to study the bubble dynamics after the discharge on the μs time scale. Finally, betatron x-rays from a laser-plasma accelerator were used to image the first instants of the discharge. This novel imaging technique reveals the dynamics of a low-density region induced by the discharge, which is typically obscured by saturation in the visible range.
This paper introduces a novel technology for the removal of carbon dioxide from the atmosphere using plasma technology, inspired by the dry methane reforming method. Carbon dioxide and methane are the primary greenhouse gases responsible for climate change on Earth's surface. Dry reforming of methane (DRM) entails the simultaneous conversion of methane and carbon dioxide into synthesis gas and higher hydrocarbons. We employ plasma technology to activate chemical reactions and eliminate carbon dioxide, providing a sustainable and efficient alternative to conventional methods. In this article, we explore the potential applications and environmental implications of carbon dioxide removal through various types of hot and cold plasmas. Additionally, we investigate the future prospects of this innovative technology in the realms of nuclear energy and environmental sustainability. For example, the utilization of microwave plasma shows promising implications for carbon capture and storage in nuclear energy applications.
The objective of this article is to examine new scientific methods of plasma technology for carbon dioxide removal and its synergy with energy technologies, such as hot plasma in fusion machines. Furthermore, we will compare the performance, conditions, and consequences of employing different types of cold and hot plasma for carbon dioxide removal, providing explanations for each approach.
Ultimately, through the presentation of the proposed model, we assert that plasma technology has the capacity to effectively eradicate carbon dioxide, demonstrating its innovative nature, adaptability, and ability to address current global challenges. This technology represents a sustainable and long-lasting solution.
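For reference, the overall dry methane reforming reaction underlying the approach discussed above is commonly written as

$$\mathrm{CH_4 + CO_2 \rightarrow 2\,CO + 2\,H_2}, \qquad \Delta H^{\circ}_{298} \approx +247\ \mathrm{kJ\,mol^{-1}},$$

and its strong endothermicity is one reason plasma activation is attractive for driving it.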
Plasma Immersion Ion Implantation (PIII) allows the modification of the surface properties of materials used for the manufacturing of semiconductor devices.
It is based on a target immersed in plasma on which a series of negative high‐voltage pulses (NHVP) is applied in order to accelerate plasma ions into the target surface.
Understanding the evolution of plasma parameters during the implantation process, such as electron and ion temperature, electron density, plasma potential, and ion velocity, is crucial for a good control of PIII.
The objective of this research is to study the sheath evolution during PIII, as it is critical for controlling the implantation dose, the rate of implantation, and the charge accumulation on the surface of the sample.
Experiments are conducted at the USask Plasma Physics Lab using a low-temperature, low-pressure Inductively Coupled Plasma (ICP) radio-frequency plasma source. Using the Laser-Induced Fluorescence (LIF) diagnostic, time-averaged, spatially resolved measurements of the ion velocity distribution function (IVDF) and ion temperature are obtained in the first step. In a second step, time-resolved LIF measurements will be made to obtain the evolution of the IVDF in the sheath region during the implantation process.
This project explores the dynamics of plasma acceleration in low-$\beta$ plasmas, where magnetic energy dominates over internal kinetic energy, confining the plasma within magnetic fields. We investigate the adherence of low-$\beta$ plasmas to Alfvén's theorem, which describes the 'frozen-in' behavior of magnetic field lines. Such plasmas find applications in magnetic confinement fusion reactors, star atmospheres, and plasma-based space propulsion technologies.
Our study uses magnetohydrodynamics (MHD) and Particle-In-Cell (PIC) simulations to analyze plasma acceleration modes. We begin by reviewing Weber-Davis solar wind acceleration, following Parker's theoretical framework. Furthermore, we examine various plasma acceleration modes, including critical points responsible for transitioning solar winds from subsonic to supersonic velocities.
Transitioning from solar wind dynamics to magnetic nozzle scenarios, we investigate a convergent-divergent magnetic field configuration that converts plasma's thermal energy into directed kinetic energy. Through detailed comparisons of PIC and MHD simulations, we aim to elucidate plasma acceleration modes, with a particular focus on torsional Alfvén waves, pressure-induced acceleration, and centrifugal confinement.
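For clarity, the plasma beta referred to above is the standard ratio of thermal to magnetic pressure,

$$\beta \;=\; \frac{p}{B^{2}/2\mu_{0}} \;=\; \frac{2\mu_{0}\,p}{B^{2}},$$

so the low-$\beta$ regime ($\beta \ll 1$) corresponds to the magnetic energy dominating the plasma's internal kinetic energy, as described above.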
The contamination of peanuts by Aspergillus flavus and the subsequent production of aflatoxin B1 (AFB1) pose significant health risks and economic losses to the food industry. High voltage atmospheric cold plasma (HVACP) has emerged as a promising non-thermal technology for mitigating fungal contamination and reducing mycotoxin levels in various food commodities, with short treatment times, low energy consumption, and no chemical residue on the food. Our previous study demonstrated that HVACP can effectively inactivate A. flavus and reduce AFB1 on raw peanut kernels with room air as the working gas. To understand the effect of HVACP on quality, the moisture content, color, hardness, fracture force, peroxide value, and chemical structure of the peanut oil were assessed for peanuts treated with HVACP under the conditions found optimal for A. flavus inactivation and AFB1 reduction (90 kV for 10 min at 80% RH). There were no significant differences in moisture content, color, texture, peroxide value, or the structure of the peanut oil between untreated and treated peanut kernels (P > 0.05). HVACP thus shows great potential as a high-efficiency intervention to decontaminate A. flavus and reduce AFB1 in peanuts without negative effects on quality, which is beneficial to food post-harvest processing and safety, offering a promising solution for the peanut industry to ensure the delivery of safe peanuts to consumers worldwide.
Peanuts are highly susceptible to contamination with Aspergillus spp. mold in the field or during storage, which may lead to moldy peanuts or the generation of aflatoxin, both of which are food safety issues. A. flavus is the main mold that produces aflatoxin B1 (AFB1). High voltage atmospheric cold plasma (HVACP) is an emerging non-thermal technology with short treatment times and low energy consumption that leaves no chemical residue on the food. In this study, peanut samples were inoculated with A. flavus spores and AFB1 toxin. Subsequently, samples were treated with HVACP at 90 kV and a power of 160 W for several treatment times (2, 5, and 10 min), relative humidities (RH; 5, 40, and 80%), and post-treatment storage times (0, 4, and 24 h) in a direct exposure mode using air (78% N2 : 22% O2) as the working gas. A reduction of 2.20 log cfu/sample of A. flavus spores was observed for peanuts treated for 5 min. A reduction of more than 99.9% (3.0 log cfu/sample) of A. flavus was obtained with an HVACP treatment for 10 min at 80% RH and a post-treatment time of 24 h. A 67.8% AFB1 reduction was achieved in pure toxin with a 2-min treatment in 5% RH air and no post-treatment. AFB1 toxin on peanuts was reduced by 71.3% and 84.5% after 2 and 10 min, respectively, of direct HVACP treatment in air with 80% RH at 90 kV. The reduction of AFB1 toxin increased as a function of RH, with no differences in the color, texture, and peroxide value of treated and control peanuts. These results indicate that HVACP is a promising technology to effectively inactivate A. flavus and reduce AFB1 on raw peanut kernels without adversely affecting peanut quality.
The quantum internet is an emerging quantum technology that will enable the networking of quantum computers and secure communications via quantum key distribution. A key element of this network is the quantum repeater which promises to mitigate loss intrinsic to fiber-based communication. Quantum repeaters require a quantum memory capable of high-fidelity storage and retrieval of quantum optical states. Recently, such quantum memories have become commercially available, but are awaiting "field-testing". In this poster, we describe our work that seeks to test and verify the storage and retrieval of a Relative Intensity Squeezed state based on such a commercial quantum memory.
We derive a “classical-quantum” approximation scheme for a broad class of bipartite quantum systems from fully quantum dynamics. In this approximation, one subsystem evolves via classical equations of motion with quantum corrections, and the other subsystem evolves quantum mechanically with equations of motion informed by the evolving classical degrees of freedom. Using perturbation theory, we derive an estimate for the growth rate of the entanglement of the subsystems and deduce a “scrambling time”—the time required for the subsystems to become significantly entangled from an initial product state. We argue that a necessary condition for the validity of the classical-quantum approximation is consistency of the initial data with the generalized Bohr correspondence principle. We illustrate the general formalism by numerically studying the fully quantum, fully classical, and classical-quantum dynamics of a system of two oscillators with nonlinear coupling. This system exhibits parametric resonance, and we show that quantum effects quench parametric resonance at late times. Lastly, we present a curious late-time scaling relation between the average von Neumann entanglement entropy of the interacting oscillator system and its total energy: $S \sim \tfrac{2}{3}\ln E$.
We enter a concept that challenges conventional views on quantum particles by proposing a paradigm shift in the interpretations of their known behavior—introducing the concept of a continuous biphasic state with cyclically discontinuous states of matter. Inspired by advancements in relationalism theory that uses discrete signals rather than continuous motion for time, we delve into a thought experiment exploring biphasic transitions at the quantum level. We propose a zone whereby a biphasic state of matter and energy dynamically transition between distinct states giving rise to discrete quantum signals in our observed dimension. We propose that as speed/vibration increases, a proposed critical threshold is reached for continuous matter, leading to a zone with a cyclic and phasic transition characterized by matter-energy equivalence. In this zone, when in a cyclic energy-equivalent state it is postulated matter has no dimensional properties, including mass and gravity, and where collisions do not occur, challenging our traditional understanding of a continuous state of matter. This follows to suggest that at the speed of light (and beyond), the familiar state of matter ceases to exist, leaving only a stable energy-equivalent state. Conversely, when speed or vibration decreases, the biphasic transition reoccurs, until it slows enough that matter returns to a continuous state. Thus, a photon of light appears to us only when it slows to the speed/vibration of light. This perspective challenges existing paradigms, speculating that observations of discrete quantum phenomena, collisions, and gravity are manifestations of entities in this proposed biphasic state. The exploration of biphasic transitions opens new avenues for thought and invites a deeper exploration of the mysteries that quantum mechanics holds.
Numerous experimental observations have demonstrated that fundamental charges are quantized. Consequently, point charge models are extensively applied in foundational physical theories such as electromagnetism and quantum field theory, achieving significant success.
However, electromagnetic theoretical calculations indicate that the energy of a point charge diverges, a phenomenon recognized as a longstanding fundamental challenge in physics.
This paper proposes the hypothesis of a new fundamental physical constant, the limit electric potential constant. The author discusses the basis for resolving the divergence problem of point charge electric field energy if this hypothesis holds true.
Furthermore, the paper suggests that the limit electric potential constant could be a fundamental physical constant of equal importance to the speed of light and Planck's constant, potentially expanding Maxwell's equations and modern space-time theory.
The author has undertaken theoretical derivations, one of which includes the derivation of new Maxwell's equations. Within this new framework of electromagnetic theory, the infinite energy problem of point charges is completely resolved, and new physical effects are predicted for experimental verification.
In General Relativity, the ``ugly duckling'' of the Segre-Plebanski-Hawking-Ellis classification of stress-energy tensors (type III) is very difficult (and was believed to be impossible) to realise. Effective stress-energy tensors in alternative gravity cover a wider range of possibilities. We report a class of type III realisations in first-generation scalar-tensor and in Horndeski gravity, together with their physical interpretation. The ugly duckling may be a freak of nature of limited importance but it is not physically impossible.
[Based on N. Banerjee, V. Faraoni, R. Vanderwee, A. Giusti 2023, Phys. Rev. D 108, 084047 (arXiv:2307.13846)]
The KEYSTONE (KFPA Examinations of Young STellar Object Natal Environments) Survey observed ammonia gas toward 11 high-mass star-forming regions at distances of 0.7-2.7 kpc. Previous analysis of these data (Keown+, 2019) utilized a single line-of-sight velocity component in fitting the ammonia gas. Here we present results of a multiple-component fit to the same clouds over the NH3 (1,1) inversion transition. We find that at least two components justifiably improve the fit in an average of 20% of fitted pixels, with ~4% necessitating a third component. From this multi-component fitting, we produce a catalogue of dense cores and their associated virial parameters. We examine the dynamical state of these cores, to study the effect of external pressure on boundedness, how well these high-mass star-forming cores are contained under self-gravity. We highlight connections between the properties of these compact sources and those of the broader clouds. This work emphasizes the importance of applying detailed and adaptive models to the complex data generated by observations of highly active regions.
Finding a complete explanation for cosmological evolution in its very early stages (about 13 billion years ago) "can significantly advance our understanding of physics. Several models have been proposed, with the majority falling into a category called inflationary universes, where the universe experiences rapid exponential expansion. Despite numerous achievements of inflationary models in explaining the origin of the universe, it has been shown that inflationary models generically suffer from being geodesically past incomplete, which is a representation of singularity. Motivated by addressing the singularity problem, we study a recent model of the early universe, called Cuscuton bounce. This model utilizes a theory of modified gravity by the same name, i.e., Cuscuton, which was originally proposed as a dark-energy candidate, to produce a bouncing cosmology as opposed to inflationary ones. It has been shown that within the Cuscuton model, we can have a regular bounce without violation of the null energy condition in the matter sector, which is a common problem in most bouncing-cosmology models. In addition, the perturbations do not show any instabilities, and with the help of a spectator field, can generate a scale-invariant scalar power spectrum. We will then set out to investigate if this model has a strong coupling problem or any distinguishing and detectable signatures for non-Gaussianities. We expand the action to third order and obtain all the interaction terms that can generate non-Gaussianities or potentially lead to a strong coupling problem (breakdown of the perturbation theory). While we do not expect the breakdown of the theory, any distinct and detectable sign of non-Gaussianities would provide an exciting opportunity to test the model with upcoming cosmological observations over the next decade.
At the end of its evaporation, a black hole may leave a remnant where a large amount of information
is stored. We argue that the existence of an area gap as predicted by Loop Quantum Gravity removes
a main objection to this scenario. Remnants should radiate in the low-frequency spectrum. We
model this emission and derive properties of the diffuse radiation emitted by a population of such
objects. We show that the frequency and energy density of this radiation, which are measurable in
principle, suffice to estimate the mass of the parent holes and the remnant density, if the age of the
population is known.
We select two protostellar disks that are less than 500,000 years old, Oph IRS 63 and GSS 30 IRS 3, which have evidence of annular rings. For each disk, we use multi-wavelength, between 870 and 2000 microns, observations from ALMA to constrain disk models with and without rings using radiative transfer code, pdspy. We find that the models containing rings produce superior fits to both disks and that the location of the rings match previous studies. Additionally, we find that each ring is approximately 60% denser than its underlying disk, which could make these rings more likely locations for future pebble accretion, resulting in the formation of planetesimals and eventually planets.
We present a general map from Poisson brackets to commutators, motivated by the Koopman-von Neumann formulation of classical mechanics. This map translates the entire apparatus of (Poisson bracket) classical mechanics to a quantum-like language, either in Hilbert space (operators and wavefunctions) or in phase space (star-products and Wigner functions). The setup can be interpreted as a quantum mechanical system with double the degrees of freedom where the extra variables are restricted to appear only linearly in the theory.
A major challenge at the interface between quantum gravity and cosmology is to understand how cosmological structures can emerge from physics at the Planck scale. In this talk, I will provide a concrete example of such an emergence process by extracting the physics of scalar and isotropic cosmological perturbations from full quantum gravity, as described by a causally complete Barrett-Crane group field theory model. From the perspective of the underlying quantum gravity theory, cosmological perturbations will be associated with (relational) nearest-neighbor two-body entanglement, providing crucial insights into the potentially purely quantum-gravitational nature of cosmological perturbations. I will also show that at low energies the emergent relational dynamics of these perturbations are perfectly consistent with those of general relativity, while at trans-Planckian scales quantum effects become important. Finally, I will comment on the implications of these quantum effects for the physics of the early universe and outline future research directions.
In pursuit of a full-fledged theory of quantum gravity, operational approaches offer insights into quantum-gravitational effects produced by quantum superposition of different spacetimes not diffeomorphic to one another. Recent work applies this approach to superpose cylindrically identified Minkowski spacetimes (i.e. periodic boundary conditions) with different characteristic circumferences, where a two-level detector coupled to a quantum field residing in the spacetime exhibits resonance peaks in response at certain values of the superposed lengths. Here, we extend this analysis to a superposition of cylindrically identified Rindler spacetimes, considering a two-level detector constantly accelerated in the direction orthogonal to the identified length. Similarly to previous work, we find resonance peaks in the detector response at rational ratios of characteristic circumferences, which we observe to be accentuated by the acceleration of the detector. Furthermore, for the first time, we confirm the detailed balance condition, expected from the acceleration due to the Unruh effect, in superposition of spacetimes. The resonant structure of detector response in the presence of event horizons, for the first time observed in 3+1 dimensions, may offer clues to the nature of black hole entropy in the full theory of quantum gravity.
An area of active research in today’s particle physics is the search for neutrinoless double beta decay (0νββ). In this hypothetical process, the nucleus of a radioactive isotope decays into a daughter and two electrons, while their associated neutrinos, observed in beta decays, annihilate each other. If observed, this process will provide an answer to the question of whether the neutrino is a Majorana particle, meaning that neutrino is its own anti-particle. The detection of this decay signal could also help establish the absolute scale of neutrino masses.
nEXO is a future experiment that will look for 0νββ in 5 tonnes of liquid xenon enriched to 90% 136Xe using the concept of a time projection chamber (TPC). Its baseline design employs finely segmented detection strips to collect the ionization from xenon interactions, while scintillation light is readout by photosensors. Our research focus on the development of the charge collection, where we plan to understand and validate induction signals as well as further explore the potential for improved spatial resolution.
The Data-Directed paradigm (DDP) is an innovative approach to efficiently probe new physics across a large number of spectra in the presence of smoothly falling standard model backgrounds. DDP circumvents the need for simulated or functionally derived background estimates that are usually used in traditional analysis by directly predicting a statistical significance using a convolutional neural network trained to regress the log-likelihood based significance. A trained network is then used to identify mass bumps directly on the data without the need to completely model the background, thus saving a considerable amount of analysis time. By detecting mass bumps directly in the data, the DDP has the potential to greatly enhance the discovery reach by exploring many unmapped regions. The efficiency of the method has been demonstrated by successfully identifying various beyond standard model signals in simulated data. A detailed presentation of the methodology and recent developments will be presented.
Dark matter experiments are dedicated to unravel the mysteries of the universe's dark abundance. ARGO represents an advancement in the field of liquid argon detectors for dark matter search, building upon the achievements of current detectors such as DarkSide20k and DEAP-3600.
For this presentation, we consider a single-phase cylindical detector measuring 7 meters in diameter and 7 meters in height, equipped with silicon photomultipliers (SiPMs) for signal detection. The strategic placement of these SiPMs, whether internal or external to the detector vessel, depends on factors such as background radiation levels and position reconstruction accuracy.
This presentation will introduce several event reconstruction algorithms based on the charge and time distributions of SiPM signals within the ARGO detector. These algorithms play a crucial role in optimizing SiPM configurations and mitigating background in the detector.
The NEWS-G experiment, located at SNO lab, aims for direct detection of WIMPs via nuclear recoils using Spherical Proportional Counter (SPC). Accurate measurement of the recoil energy requires knowledge of quenching factor (QF). Our past measurements were performed in Ne+CH4 gas mixture at 2 bar. Next, we intend to measure QF for different gas mixtures with different detector parameters. To facilitate these in-beam QF measurements, we recently developed a novel technique to study SPC detector characteristics for different detector parameters for neutron scattering based experiments. We are also exploring the possibility to use the tandem accelerator at UdeM, which has the capability to reach neutron beam energy as low as 5 keV.
The poster will present the past measurement, current status, and the future plans of the NEWS-G collaboration in measuring QF. Along with the highlights of the new backing detector being built at Queen's University.
Precision cosmology has allowed us to learn a great deal about the very early universe through correlations in the primordial fluctuations. New data will abound in the next decade, from which we forecast the potential appearance of features in the correlations that could be due to new high-energy particles, otherwise inaccessible in particle accelerators on Earth. Moreover, the distinctive form of these features could inform us about the evolution of the very early universe, e.g., whether it was inflating or contracting before a bounce. Hardly any other single observable could properly distinguish whole scenarios in a model-independent way. Discriminating evidence for the paradigm (inflation) or an alternative (such as a bounce) would significantly advance our knowledge of primordial cosmology and high-energy physics. This talk will review the latest developments of this program.
Astrophysical black holes are surprisingly simple physical objects. Their gravitational field can be fully described by two parameters: mass and spin. We cannot directly observe black holes as no light escapes from the event horizon. However, we can detect the light from accreting gas, which forms a dense disk around the black hole, known as an accretion disk. The accretion of material by a massive black hole at the center of its host galaxy forms an active galactic nucleus (AGN), the innermost region of which emits X-ray radiation. An AGN is energetically efficient for regulating the growth of galaxies and is crucial for the development of the Universe we see today. One of the most important tools to probe the innermost accretion flow is the detection of X-ray reverberation echoes, where the X-ray photons reflected from the accretion disk are delayed relative to the primary X-ray source. In this talk, I will first discuss how detailed measurements of the reflected X-rays from the accretion disk can be used to probe the innermost regions of accretion flow just outside the event horizon and determine the fundamental properties of the black hole, such as its spin, across the complete mass scales from $\sim10^{5}-10^{10}$ solar masses. Peering into the growth channels of black holes, I will discuss how we can distinguish accretion vs. merger-dominated black hole growth and probe the cosmological evolution of black hole spins in the last 10 billion years of cosmic history. Finally, I will show how enigmatic relativistic winds or Ultra-Fast Outflows (UFOs) launched from the AGN accretion disk can be used to probe the feedback mechanism connecting the central black holes with their host galaxies.
Regular black hole metrics involve a universal, mass-independent regulator that can be up to O(700 km) while remaining consistent with terrestrial tests of Newtonian gravity and astrophysical tests of general relativistic orbits. However, for such large values of the regulator scale the horizon is lost. We solve this problem by proposing mass-dependent regulators. This allows for large, percent-level effects in observables for regular astrophysical black holes. By considering the deflection angle of light and the black hole shadow, we demonstrate the possibility of large observational effects explicitly.
I will present mechanisms to generate primordial magnetic fields in bouncing cosmology.
Pulsars are fast-spinning neutron stars that lose their rotational energy via various processes such as gravitational and electromagnetic radiation, particle acceleration, and mass loss processes. Pulsar energy dissipation can be quantified by a spin-down equation that measures the rate of change of pulsar rotational frequency as a function of the frequency itself. We explore the pulsar spin-down equation and consider the spin-down term up to the seventh order in frequency. The seventh-order spin-down term accounts for energy carried away in the form of gravitational radiation due to a current-type quadrupole in the pulsar induced by r-modes. We derive analytical formulae of pulsar r-mode gravitational wave frequency in terms of pulsar compactness, tidal deformability, r-mode amplitude, and gravitational wave amplitude. We find solutions to the above relationships using the Lambert-Tsallis and Lambert-W functions. We also present an analytic solution of the pulsar rotational period from the spin-down equation and numerically verify it for the Crab pulsar PSR B0531+21. Accurate analysis of pulsar energy loss, spin-down, and gravitational wave emission are relevant for precise pulsar timing. The search for continuous gravitational waves with 3-rd generation ground-based and space-based gravitational wave detectors will provide additional insights to determine a more accurate neutron star equation of state.
Signals with varying frequencies manifest across numerous disciplines within physics, astrophysics, and various other fields. One prevalent example is the occurrence of chirp-like glitches in gravitational wave detector data, which can occasionally trigger false alarms in binary merger waveform detection pipelines. In this study, we propose a novel chirp transform method featuring a waveform model characterized by nonlinearly changing frequencies. We outline both the analytical and discrete representations of this non-linear chirp transform. To enhance efficiency, we implement approximations and mathematical manipulations to allow for the flexible adjustment of transformation parameters such as window sizes and chirp rates. Additionally, we leverage the efficiency of the Fast Fourier Transform algorithm for numerical computations.
Our approach is tailored to identify and classify glitch signals that bear similarity to binary merger gravitational wave signatures. By harnessing the power of the chirp transform technique, we facilitate the comprehensive classification and analysis of detector glitch signals across multiple domains, including time, frequency, and chirp rate.
We welcome you to join a unique symposium, where we will break from convention with an unconferencing style to explore Building Communities of Practice for Equity, Diversity, and Inclusion (EDI) and Outreach in Physics. This event invites students, postdocs, faculty, and professionals alike to exchange insights, showcase best (and wise) practices, and celebrate grassroots EDI efforts within the physics community. Through participant-driven facilitated discussions, we will co-create an agenda that shares resources and fosters connections, emphasizing the importance of inclusivity. With discussions covering strategies for EDI initiatives, decolonization efforts, reaching under-served communities, and impactful outreach and informal education, this symposium offers a space to learn, inspire, and connect as we work towards a more equitable and diverse future in Canadian physics.
The Government of Canada launched the National Quantum Strategy on January 13, 2023, to support Canada’s quantum sector and solidify Canada’s leadership in this fast-growing field. The strategy will amplify Canada’s strength in quantum science and supply of talent, grow its quantum technologies and companies, and advance quantum research and commercialization in Canada. This talk will provide a progress update on the research, talent, and commercialization pillars of the National Quantum Strategy, as well as international initiatives relevant to quantum scientists. The talk will also preview what's on the horizon and make time for questions and feedback from the Physics community.
Mitacs empowers Canadian innovation through effective partnerships that deliver solutions to our most pressing problems. This presentation will highlight our various partnership funding programs, including our joint programming with NSERC Alliance. The application timeline, peer review process, and different funding models will be highlighted. Example projects, initiatives, and key stats on how Mitacs is supporting quantum research will also be presented.
The National Research Council Canada (NRC), as Canada's national laboratory, plays a crucial role in supporting the government of Canada's initiatives. In 2019, the NRC launched the Collaborative Science and Technology Innovation Program (CSTIP) to foster innovation and support Canadian businesses in adopting new technologies. The NRC has also received partial funding from the National Quantum Strategy (NQS) to support commercialization of quantum technologies. Among these initiatives, the Quantum Sensors Challenge Program (QSP) stands out, aiming to commercialize quantum sensors for industrial applications. This presentation will provide an overview of QSP's progress, success factors/metrics, the challenges, and the future directions.
QSP has achieved significant milestones, driving innovation and commercial capabilities in quantum sensing. Moving forward, QSP will leverage from collaboration with other government departments (OGDs), such as the National Sciences and Engineering Research Council (NSERC) and the Department of National Defense (DND), to fulfill its mission. In this talk, we'll also introduce the Applied Quantum Computing (AQC) challenge program, which focuses on developing quantum algorithms and software to enable scientific discovery and technological advancements.
Through these initiatives, NRC facilitates collaboration across private and public sectors to drive quantum innovation in Canada.
With the ongoing energy transition, electricity is becoming the main energy carrier and commodity but the rate of deployment of solar, wind and hydraulic energy harvesting capacity far outpaces the capabilities to immediately use, transport, or store the electricity being produced. This technological bottleneck is because few industrial processes use electricity as main energy input, and that electricity cannot easily be stored and transported at scale. These observations are supported by the World Economic Forum that identified as top cleantech research and development priorities for 2026-2030 zero-carbon fuels, industrial chemical conversion processes and inter-seasonal electricity storage. There is clearly an urgent need for on-demand and scalable renewable electricity conversion processes and storage means. Plasma is a promising processing medium to electrify several industry-relevant chemical processes thanks to its reliance on electricity as sole energy input, seemingly unlimited range of chemical reaction conditions, and natural fit to distributed utilization (small to large scale via parallelization, fast light-up/turn-down cycles, control of reaction times from hours down to microseconds). Through careful control of the electrical power and reactive gas delivery, chemical processing conditions not otherwise achievable can be attained with high energy and material efficiencies. Such novel reactive environment is particularly appealing for hard-to-decarbonize processes (e.g. ammonia and ethylene synthesis) and for the conversion and upcycling of stable and abundant greenhouse gases (e.g. carbon dioxide, methane). In comparison with the semiconductor industry where unique plasma technologies enable most breakthroughs and thus, secured dominance, much remains to be developed, understood and adopted in the chemical process industry characterized by conservatism and massive assets. In this talk, I will review the (non-fusion) plasma technologies being develop and present a personal outlook for the the energy transition context.
The reduced electric field, denoted as E/N, where E represents the magnitude of the electric field within the plasma and N is the total gas number density is a crucial parameter influencing electron-impact driven energy processes and electron kinetics in electrical discharges. This parameter intuitively accounts for scaling the effects of accelerating electric fields by the number density of the available collisional partners. Collisional energy transfers from free electrons to other molecular and atomic species of the gas result in a complex chemistry featuring phenomena such as gas heating, rotational excitation of molecules within the gas, vibrational excitation of those molecules, electronic excitation of both atomic and molecular species, as well as dissociation and ionization. As highlighted in the 2022 Plasma Roadmap, the field of Low Temperature Plasma science and technology heavily relies on our capability to harness, engineer and control these complex energy transfers toward very diverse applications. The accurate measurement of the electric field magnitude, particularly in high-pressure conditions, becomes imperative due to the exponential dependence of rates for electron impact-driven processes on E/N. Furthermore, sub-nanosecond resolved E-field magnitude measurements are often needed under high-pressure conditions because of the very transient electric field dynamics when plasmas are generated using excitation voltages featuring a fast rise time.
In this context, we report on the development of a very sensitive Electric Field Induced Second Harmonic (E-FISH) generation diagnostic setup. This system is capable of measuring electric field magnitudes as low as 5 V/cm in room air and at the picosecond timescale. This advancement represents an improvement by over two orders of magnitude compared to most E-FISH systems encountered in the literature, where reported detection limits are typically around 500 V/cm – 1 kV/cm. This enhanced capability is especially important when characterizing electric field reversals in plasma discharges. Through a comparative analysis with standard E-FISH systems, we explore necessary upgrades and discuss potential avenues for further advances.
In the realm of plasma physics, the intricate interplay between electrical discharges and dielectric materials remains a subject of fascination. This study delves into the propagation of an atmospheric pressure streamer-spark discharge directed towards a water droplet in a Pin-Droplet-Pin configuration, presenting a unique opportunity to explore the interfacial dynamics between electrical conductivity, discharge propagation, and optical emissions. Such discharges exhibit great interest in many applications such nanomaterial synthesis, water treatment, and medical treatments. To enhance the efficiency of water activation, reducing the surface-to-volume ratio (SVR) has proven effective in maximizing the solvation process. Understanding the propagation of the streamer discharge on the droplet's surface and the subsequent transition to a spark is essential for optimizing these processes.
In this communication, we investigate the propagation of nanosecond discharges across a millimetric droplet that has various electrical conductivity from 0.05 to 5 mS/cm. The discharges are characterized electrically as well as optically, using time-resolved (1-ns-integrated) ICCD images and optical emission spectroscopy. The results show great influence of the electrical conductivity on the occurrence of a primary and secondary streamers as well as their transition to a spark.
Gliding arc discharges (GAD) provide an interesting discharge plasma platform where two regimes are theoretically possible – the thermal or equilibrium regime, where the plasma is in thermodynamic equilibrium, and the non-thermal or non-equilibrium regime, where a gradient is observed across different plasma temperatures The non-equilibrium state, characterized by high electron density plasma at atmospheric pressure, has propelled GAD into the forefront of plasma chemistry applications. However, a comprehensive understanding of temperatures, densities, and mechanisms in both regimes, as well as the conditions governing each, remains essential. In the following work, translational (TT), rotational (TR), vibrational (TV), and electron (TE) temperatures are investigated in the two regimes of the GAD plasma using optical emission spectroscopy of argon 2p—1s transitions (Paschen notation) along with collisional-radiative (CR) modeling of argon 2p states in an argon GAD plasma at atmospheric pressure in the presence of naturally occurring or admixtures of water vapor or N2. More specifically, TT is investigated from the line broadening of certain Ar emission lines using a hyperfine spectrometer, TR and TV are deduced from either the OH(A2Σ+ − X2Πi) or the N2+(B2Σu+ − X2Σg+) rovibrational systems, and TE is obtained from comparing measured and simulated Ar spectra via the CR model. Furthermore, electrical diagnostics are used to obtain TE , electron density (nE) and reduced electric field (E/n) in the two regimes of the GAD and compared with the results found from optical methods.
In-liquid pulsed spark discharges are transient plasmas with demonstrated applications, such as electrical discharge machining or nanomaterial synthesis. The characteristics of in-liquid discharges, such as temperature and density of the various present species (electrons, ions, radicals, etc.), are not well known. One way to probe the plasma parameters is to perform Optical Emission Spectroscopy (OES). In the case of in-water discharges, the hydrogen Balmer alpha (Hα) line is emitted and mostly broadened by Stark effect, itself related to the electronic density. However, the fast evolution of the plasma properties on the nanosecond scale makes the measurement and interpretation of the spectrum challenging. In these plasmas, the electronic density varies significantly by a few orders of magnitude during a discharge period of ~1 μs. This leads to an Hα profile that cannot be fitted by a conventional Voigt profile, especially over extended integration times. In this work, we present a method based on Bayesian inferences to exploit the time-integrated emission profile of the Hα line. The method coupled to a model derived with high accuracy the line profile and the evolution of the electronic density. The results are in a good agreement with time resolved measurements performed with shorter integration time (50 ns). This kind of method is a promising inexpensive tool to study plasmas that exhibit more complex spectra and significantly evolve over time.
To facilitate understanding the current state of the art, this introductory session will give a brief overview of the fundamental concepts for medical ultrasound imaging and how current methodologies take advantage of them.
Introduction: Joint laxity has been hypothesized to be a risk factor for thumb osteoarthritis (OA). Previous studies assessing thumb biomechanics have utilized various imaging modalities including radiography, magnetic resonance imaging, computed tomography imaging, and ultrasound (US). However, these imaging techniques provide limited information on joint laxity during motion. This work validates the use of a novel 4DUS imaging system to characterize the laxity of the thumb joint.
Methods: A 4DUS system consisting of a motorized semi-submerged transducer assembly was developed. A high frequency transducer was automatically translated laterally along the location of the thumb joint. 4DUS and 4DCT images of thumb abduction were collected from five healthy volunteers and five thumb OA patients. The distance between the bones of the thumb joint along with the length of the dorsoradial ligament in each image were measured to characterize ligament laxity. Intra- and inter-class correlation coefficients were calculated to determine the reproducibility of the measurements.
Results: The average maximum length of the dorsoradial ligament in the healthy cohort and patient cohort was 12.76 and 15.53 mm, respectively. Registration of the 4DCT images to the 4DUS images validated the 4DUS system’s capability to detect bony landmarks, such as the base of the first metacarpal and the bony angulation of the trapezium. With intraclass and interclass correlation coefficients greater than 0.9, the ligament length measurements indicate excellent repeatability.
Conclusion: In this preliminary study, a 4DUS system for the assessment of ligament behavior during thumb motion was developed, and its reliability and reproducibility were tested. This system will be used in a cohort of thumb OA patients to evaluate the patterns of ligament laxity associated with various stages of disease progression. This imaging system will provide an explanation of the changes to the thumb’s stabilizing ligaments that influence the onset and progression of thumb OA.
Breast cancer is the most common cancer in women worldwide. Two million women are diagnosed annually, resulting in 685,000 deaths. Early diagnosis is critical to reducing mortality. Although mammography is the gold standard, dense breast tissues reduce detection sensitivity, potentially delaying diagnoses. Therefore, there is a need for more accessible and cost-effective supplemental screening technologies, especially for women with dense breasts. To address these challenges, a promising approach involves combining cost-effective and accessible ultrasound imaging with economical hardware and software. Among these technologies, Doppler imaging plays a crucial role in the clinical evaluation of breast abnormalities, as intratumoural blood flow has been shown to correlate with aggressiveness and histological grade of tumours. We have developed a novel, portable, and patient-dedicated 3D automated breast ultrasound (ABUS) system for point-of-care breast cancer supplemental screening. Our proposed system can aid in the early detection of breast cancer in women with dense breasts. Additionally, it offers the advantage of incorporating Doppler imaging for the assessment of blood flow within suspicious lesions, a capability not commonly available with commercial ABUS systems. By leveraging Doppler imaging in conjunction with 3D B-mode ABUS, this innovative approach could improve breast cancer-related health outcomes, especially for at-risk populations.
Introduction: Inflammation of the joint lining, or synovium, is an important aspect of osteoarthritis (OA) and contributes to disease pathogenesis and symptoms. The basal thumb joint is a common site of OA and is important for hand function. Physical therapy treatments aim to improve patient pain and function. However, the vascular changes and response to exercise are not fully understood in thumb OA. Ultrasound (US) imaging provides soft tissue and joint visualization, and Doppler US technologies can detect and visualize blood flow. Previous synovial blood flow investigation has been limited to two-dimensional visualization, lacking the comprehensive three-dimensional (3D) visualization of the synovial vasculature. This work aims to assess synovial blood flow changes with exercise in thumb OA patients using a 3DUS imaging system.
Methods: A 3DUS system was developed with Doppler imaging technologies to detect and visualize blood flow. The 3DUS device translated a US transducer across a linear region of interest. 3DUS images were acquired over a five-centimetre length using a 14L5 linear transducer and a Canon Aplio i800 US machine with superb microvascular imaging (SMI) Doppler technology. Thirteen thumb OA patients were imaged with 3DUS SMI before and after completing resistance thumb exercises. The synovial volume was manually segmented pre- and post-exercise and the coloured voxels were automatically counted with software. Synovial blood flow volumes and fractions were calculated.
Results: 3DUS SMI images acquired pre- and post-exercise detected and visualized synovial blood flow in thumb OA patients. The absolute mean change in US-detectable synovial blood flow volume with exercise was 1.31 $mm^3$ ± 2.59 $mm^3$ and 1.70 $mm^3$ ± 2.87 $mm^3$ for the thumb OA patients with detectable blood flow within the region of synovial inflammation.
Conclusion: This study demonstrated the ability of a novel 3DUS imaging device to investigate and measure the effect of exercise on US-detectable synovial blood flow in thumb OA. This work implemented a novel method of quantifying changes in blood flow to gain insight into the response of the synovial vasculature and its role in the disease process. This novel 3DUS device can provide a new method of measuring active joint inflammation and monitoring changes and responses to treatment.
Brachytherapy is a type of radiation therapy typically used in the treatment course of cervical cancer. During brachytherapy, highly radioactive sources are placed within the patient using specialized applicators or needles. While the applicators are inserted under the guidance of two-dimensional (2D) ultrasound (US), computed tomography (CT) or magnetic resonance (MR) imaging is subsequently used to localize the applicator and plan the radiation dose. These modalities excel at confirming applicator placement and the surrounding anatomy, however, they are costly and inaccessible to underfunded healthcare centers. Our group has previously developed a three-dimensional (3D) US system that acquires volumetric images of the female pelvis during the applicator insertion procedure to overcome these limitations. We now propose the use of 3D US images as a viable dose planning modality. As such, this feasibility study examined two different brachytherapy applicators within anthropomorphic female pelvic phantoms. Both phantoms were individually imaged with the 3D US system and offline contours of the target volume and nearby organs-at-risk (OARs) were completed by two trained observers. Intra- and inter-user variability statistics were obtained for the contours to assess user reliability. The applicators were digitized within our treatment planning system and a 3D US radiotherapy plan was developed. Simultaneously, conventional CT dose plans were developed on the same phantoms following our clinical protocol. The two sets of plans were then compared to ascertain the efficacy and clinical relevance of our proposed 3D US approach. Our preliminary results indicate that the 3D US plans meet the primary objectives of our clinical protocol for dose to the target volume as well as OARs. Our future work involves performing 3D US-based radiotherapy plans on patient images, for which a clinical trial is currently in progress. This work has the potential to enhance the accessibility and affordability of cervical brachytherapy procedures.
Prostate cancer (PCa) is the most diagnosed cancer among Canadian men, with an estimated 24,600 new cases in 2022, making up about 21% of new cases and anticipated to cause approximately 4,600 deaths. Prostate biopsy, essential for PCa diagnosis, often uses 2D transrectal ultrasound (TRUS) for tissue extraction but suffers from a high false negative rate of 21-47% due to its inability to directly visualize PCa. This necessitates repeat biopsies and has led to the development of specialized TRUS-guided biopsy devices for precise targeting. Despite high sensitivity in detecting early high-grade PCa, current PET/CT and PET/MRI techniques still yield false negatives. Collaboratively developed with Lakehead University, the novel prostate-specific PET (p-PET) system offers improved sensitivity and resolution, presenting an opportunity to integrate a biopsy approach for accurate early-stage PCa detection. This project aims to develop and integrate a 3D TRUS and PET-guided system for prostate biopsy, including a robotic system for biopsy needle guidance. Objectives include developing a motorized 3D TRUS mechanism with integrated needle guidance, incorporating 3D TRUS software into the PET system for trans-perineal biopsy, and evaluating the system's accuracy using phantom models. The proposed system, adapted the biopsy system from the 3D TRUS with MR prostate fusion biopsy and for gynecological brachytherapy systems developed in our lab, will feature a side-firing ultrasound probe and a needle guidance template on a stabilized, tracked support for precise biopsy control. Custom software will be used to perform needle segmentation from ultrasound and co-register with the p-PET image. Experiments will assess system accuracy and aim to demonstrate a statistically significant improvement in trajectory and needle tip error rates. This p-PET-3D TRUS approach has the potential to significantly improve the identification and targeting of early-stage PCa and promises to increase the accuracy of PCa diagnosis, reducing false negatives and refining biopsy precision.
Response functions are a fundamental aspect of physics; they represent the link between experimental observations and the underlying quantum many-body state. In particular, dynamical response functions are part of the toolbox that physicists use to unravel the nature of correlated matter. In this talk, I will discuss some aspects of obtaining response functions on quantum computers. First, I will introduce a new method for measuring response functions by using a linear response framework and making the experiment an inextricable part of the quantum simulation. This method can be frequency- and momentum-selective, avoids limitations on operators that can be directly measured, and is ancilla-free. As prototypical examples of response functions, we demonstrate that both bosonic and fermionic Green’s functions can be obtained, and apply these ideas to the study of a charge-density-wave material. The linear response method provides a robust framework for using quantum computers to study systems in physics and chemistry. It also provides new paradigms for computing response functions on classical computers. Second, I will highlight some of our recent work using Lie algebraic methods to analyze and simulate dynamics on quantum computers. Lie algebras are a natural way to investigate certain properties of quantum circuits, and conversely use them to build desired quantum circuits. I will overview some of our work in this area, including building exact unitaries via Cartan decomposition, performing circuit compression, and analyzing barren plateaus.
The role of the particle-particle interaction becomes increasingly important if the spectral band structure of a free system has increasing degeneracy. Ultimately, it will be the role of interactions to choose the state of the system. Examples include the systems with the lowest band having a degenerate minimum along a closed contour in the reciprocal space -- the Moat. Any weak perturbation can set a new energy scale describing the state with qualitatively different properties in such a limit of infinite degeneracy. In this talk, I will discuss the general principles behind the universal properties of correlated bosons on moat bands, which host topological order with long-range quantum entanglement. In particular, I will discuss moat-band phenomena in shallowly inverted InAs/GaSb quantum wells, where we observe an unconventional time-reversal-symmetry breaking excitonic ground state under imbalanced electron and hole densities. I will show that the strong frustration of the system leads to a moat band for excitons, resulting in a time-reversal-symmetry breaking excitonic topological order, which explains all our experimental observations.
Many elusive quantum phenomena emerge from the interaction of a quantum system with its classical environment. Quantum simulators enable us to program this interaction by using measurement operations. Measurements generally remove part of the quantum entanglement built between the qubits inside a simulator. While in simple cases entanglement may disappear at a constant rate as we measure qubits one by one, the evolution of entanglement under measurements for a given class of quantum states is generally unknown. In this talk, I will show how consecutive measurements of the qubits in a quantum processor can lead to entanglement criticality. Specifically, partial measurement of the qubits prepared in an entangled superposition of ground states to a classical spin model drives the qubit array into a spin glass phase of entanglement. Our theory is verified on quantum processors with up to 48 qubits, allowing us to experimentally estimate the vitrification point and its critical exponent, which obey spin glass theory exactly. Finally, I will discuss the potential to exploit the new physics discovered for the development of quantum algorithms.
Accurate measurements of the distribution of dark matter on small scales and the expansion rate of the Universe (H0) have emerged as two of the most pressing challenges in modern cosmology. Strong gravitational lensing serves as a natural phenomenon that can probe both. Specifically, surface brightness modeling of galaxy-galaxy lenses enables detailed mapping of matter distribution in the foreground, while lensed quasar time delays offer a precise method for measuring H0. Upcoming optical surveys like LSST and Euclid are set to discover hundreds of thousands of strong lenses, representing an increase of several orders of magnitude over current samples.
However, accurate and unbiased analysis of these large data volumes using traditional likelihood-based methods has proven to be impractical. Our team leads the development of rigorous machine learning-based statistical frameworks for strong lensing data analysis. I will share some of the latest exciting results, particularly in the inference of high-dimensional variables. I will show that, beyond accelerating the analysis, these methods enable unprecedented levels of accuracy previously deemed unattainable.
I will conclude by discussing how the application of these methods to the H0 and the small-scale problems could cause a paradigm shift in the field of cosmology.
I will discuss how recent astrophysical observations of neutron stars, together with advances in statistical methods, allow us to probe the behavior of matter at the highest densities anywhere in the universe while self-consistently controlling the number and impact of theoretical uncertainties required a priori. We will discuss key take-aways from the astrophysical data and what to watch for over the next few years.
The masses of astronomical systems such as star clusters, galaxies, and star cluster systems are fundamental quantities in astrophysics. For example, a robust estimate of the Milky Way Galaxy's total mass and cumulative mass profile provides insight into the size and mass of our Galaxy's dark matter halo. As another example, estimates of the mass and mass profiles of old star clusters (globular clusters) allows us to test current theories about the dynamical evolution of these systems in the context of the Milky Way potential. However, estimating the masses of these systems is often a challenging statistical and computational problem because of issues such as incomplete data, selection bias, and complex models. In this talk, I will describe the latest statistical and computational methods used in my research group to estimate the mass and mass profile of the Milky Way Galaxy and globular clusters. I will also present our newest research on the relationship between globular cluster system mass and host galaxy mass using generalized linear models such as hurdle models and zero-inflated models. These novel statistical methods not only make the most of the latest space-based and ground-based telescope data, but are also uncovering new questions about the co-evolution of these systems over cosmic time.
Explorations of the origin of dark matter lead to rich experimental opportunities to discover the underlying new physics. I will discuss light sterile neutrino dark matter that is produced from the Standard Model plasma in the early universe. Leading production mechanisms include out-of-equilibrium neutrino oscillation supplemented by neutrino self-interactions (freeze in) and relativistic thermal freeze out followed by an entropy injection. I will show how the different histories of universe are encoded in the relic dark matter and impact their subsequent evolutions. I will present existing experimental constraints on these possibilities and highlight the complementary probes and new opportunities at upcoming cosmic and neutrino frontiers.
The Global Argon Dark Matter Collaboration is working on a series of direct searches for dark matter using liquid-argon targets. We are currently operating DEAP-3600 at SNOLAB and are upgrading it to reach design sensitivity and to prove a background model with a rate of ~1 event/tonne year. We are also building the DarkSide-20k detector, currently under construction at the LNGS laboratory in Italy. DarkSide-20k is a two-phase Time Projection Chamber with low-radioactivity acrylic walls and optical readout with Silicon PhotoMultipliers (SiPMs). Notably, DarkSide-20k will be filled with Underground Argon, low in the cosmogenically-produced background of Ar-39. We will discuss the sensitivity and status of DarkSide-20k. The collaboration is starting early design work on ARGO, a proposed multi-hundred-tonne detector for deployment at SNOLAB. We will discuss early planning and design concepts for ARGO. The Canadian efforts in these international projects will be emphasized.
The nEXO neutrinoless double beta decay experiment aims to detect a hypothetical decay mode in the isotope xenon-136.
A positive observation of this decay mode would serve as direct evidence for lepton number violation and confirm the Majorana nature of neutrinos, representing a breakthrough in physics beyond the Standard Model.
Such an observation could also offer new pathways for understanding the mass generation mechanism of fermions, and potentially provide insights into the matter-antimatter asymmetry problem.
To increase the likelihood of observing neutrinoless double beta decay, nEXO requires stringent measures for background mitigation, such as placing the experiment deep underground to shield it from cosmic rays. Despite these measures, the residual cosmic muon flux remains a concern.
This talk will present an evaluation of the cosmogenic background rate in nEXO as well as the impact of these backgrounds on the experiment's sensitivity to neutrinoless double beta decay.
We introduce an initial design for an anti-coincident water-Cherenkov muon veto aimed at mitigating these cosmogenic backgrounds.
Additionally, the low background environment of nEXO enables the search for other rare interactions at the MeV scale, including those from astrophysical sources.
As such, a preliminary evaluation of nEXO's sensitivity to neutrinos originating from nearby galactic core-collapse supernovae is provided.
We welcome you to join a unique symposium, where we will break from convention with an unconferencing style to explore Building Communities of Practice for Equity, Diversity, and Inclusion (EDI) and Outreach in Physics. This event invites students, postdocs, faculty, and professionals alike to exchange insights, showcase best (and wise) practices, and celebrate grassroots EDI efforts within the physics community. Through participant-driven facilitated discussions, we will co-create an agenda that shares resources and fosters connections, emphasizing the importance of inclusivity. With discussions covering strategies for EDI initiatives, decolonization efforts, reaching under-served communities, and impactful outreach and informal education, this symposium offers a space to learn, inspire, and connect as we work towards a more equitable and diverse future in Canadian physics.
The rapid growth of the quantum industry has brought about a shift in the need to efficiently train the next generation of scientists and engineers, to address the challenges of further developing quantum computers. As part of its multi-pronged approach to drive the quantum computing industry forward and address the demand for trained quantum specialists, Xanadu has been collaborating with professors in nearly 60 universities worldwide (over 15 in Canada) to incorporate practical quantum computing education using PennyLane (Xanadu's open-source software library for quantum programming) in undergraduate and graduate courses. We present an analysis of how quantum computing is being taught across Canada and around the world, and the impact of experiential learning in quantum computing using open-source resources.
As quantum technologies advance, they hold the potential to revolutionize global industries, offering unmatched computational power, secure communication, and cutting-edge sensing capabilities. With this rapid evolution comes a surge in demand for proficient quantum professionals, leading Canadian universities to pioneer inventive education and training initiatives. This presentation explores the quantum education landscape across Canadian universities, highlighting their crucial role in shaping the future quantum workforce.
This presentation will spotlight the Master of Quantum Computing program at the University of Calgary as a case study. This professional master's degree equips students with a deep understanding of quantum computing, enabling them to evaluate cutting-edge research, commercial applications, and business cases. Through a blend of coursework, research projects, and group collaborations, students develop original insights and critical thinking skills essential for professional quantum applications. Moreover, the program integrates practical experiences through professional internships, where students apply their knowledge to real-world business challenges. Research internships further enable students to undertake applied projects, bridging theoretical understanding with practical applications in commercial and public sectors.
By examining initiatives like the Master of Quantum Computing program, this talk aims to provide valuable insights into the evolving quantum education landscape in Canada and its profound implications for shaping the future quantum workforce.
The rapid growth in interest in quantum technologies has resulted in the need to expand traditional course offerings and degree programs to train the next generation of researchers and quantum scientists. Most programs have focused on graduate courses and research opportunities for students with a physics background. Laurier’s combination of physics and computer science within a single undergraduate department, provided a unique opportunity to introduce the first undergraduate 3rd year course in quantum computing, open to all science majors with the required mathematical background. This talk will describe the goals and framework used to build the course, the outcomes over the past ten years of teaching the course and the lessons learned along the way.
In patients with triple-negative breast cancer, the most difficult form of breast cancer to treat, local recurrence rates are as high as 15%, despite the addition of adjuvant radiotherapy to surgery. Non-thermal plasma (NTP) could be used to treat the tumor bed immediately after the removal of the tumor, to eliminate any remaining tumor cells and thus reduce the risk of recurrence, even with positive margins. NTP can be applied directly to cancer cells to increase the intracellular content of reactive oxygen species, leading to cell death.
Cells of the human triple-negative breast cancer line MDA-MB-231 were grown in immunocompromised mice, and the resulting tumors were then either left untreated (control), treated with gas alone (gas control), or treated with NTP. The NTP device used in this study is the Convertible Plasma Jet (CPJ) from the Montreal startup NexPlasmaGen. Here, NTP is sustained by a helium flow, with or without added oxygen, passing through electrodes in a coaxial configuration driven by a 13.56 MHz electric field.
At necropsy, tumors treated with NTP were on average 50% smaller than those untreated or treated with gas alone. Moreover, of 46 tumors treated with plasma, 3 had completely disappeared (6.5%). These results show that NTP is capable of killing triple-negative breast cancer cells in vivo in a single treatment and could therefore help secure the tumor bed post surgery.
A first-in-human clinical study on 24 breast cancer patients is being prepared at CHUM for 2024. Its goal will be to determine the safety of the CPJ and the potential cosmetic effects associated with its use.
This presentation will provide an overview of high-voltage atmospheric cold plasma (HVACP) research in the Sustainable Food Systems Innovation Laboratory at the University of Guelph and highlight potential commercial applications for this cold plasma technology. Plasma, also known as the fourth state of matter, is defined as a partially ionized gas composed of electrons, ions, radicals and atoms (O, N, H) in excited states. Atmospheric cold plasma treatments produce thousands of ppm of reactive gas species (RGS) from air (e.g., peroxides, ozone, nitrates), while maintaining a gas temperature of 40 °C or less. The RGS can achieve rapid decontamination of bacteria, yeast, mold, spores, and other biological contaminants such as toxins and pesticides from food and non-food surfaces, improving food safety and reducing food losses. HVACP is a cold plasma technology that can efficiently generate these RGS in large containers (up to 200 liters) using only a few hundred watts of electricity. Furthermore, when using air as the working gas, the cost of an atmospheric cold plasma treatment is only a few dollars per metric ton. HVACP has been shown to be very effective for microbial decontamination while having minimal impact on the organoleptic characteristics and nutritional value of the treated food. Examples highlighted in this presentation will include the removal of mold and aflatoxin from peanuts, doubling the shelf life of strawberries, improving the safety and shelf life of fresh cheese, and the elimination of viruses, such as SARS-CoV-2, from imported shrimp. Additionally, pilot-scale examples of HVACP treatment prototypes for fresh fruits and vegetables will be presented along with cost information.
A novel treatment for chronic wounds is cold atmospheric plasma. Non-equilibrium plasmas generate highly reactive species at low temperatures. This allows treatment of human tissue in vivo and can induce locally confined redox chemistry in biological organisms. Reactive oxygen and nitrogen species are known to play a vital role in cell signaling and influence a range of mechanisms implicated in all phases of wound healing: inflammation, vascular formation, proliferation, and remodeling of scar tissue [1]. Plasma gives us a tool for controlling and modulating the dosage of the redox species cocktail deposited on the wound bed, by modifying the plasma environment and the mixture in the feed gas [2]. By better understanding the effect of reactive species on cells, we can tailor the plasma reactivity to supply a treatment adapted to the tissue. A special focus is put on fibrosis, a form of disrupted wound healing. Reactive species have dual functions depending on the healing phase, their concentrations, etc. This is why a good characterisation of the plasma composition is essential. Plasma composition was simplified to the hypothesis of two regimes which could have dual effects on fibrosis: an oxygen regime versus a nitrogen regime. Reactive oxygen species are known to be pro-inflammatory by increasing the oxidative stress in the environment of the cells, whereas reactive nitrogen species have anti-inflammatory behavior [3]. Spectroscopy techniques are used to measure key species densities in the plasma, informing us which regime is at play; e.g., UV-absorption spectroscopy at 254 nm is used to quantify ozone. Biological experiments are conducted with each regime, analyzing cell behavior as a response to the modulated plasma treatment. Tailoring the plasma reactivity to biological needs, to reach a biochemical effect on imbalanced healing environments in tissue models, will deepen our understanding of the physiology of chronic wounds. It will also pave the way to a personalized plasma treatment technology greatly benefiting the health sector.
[1] Dubey, S. K., Parab, S., Alexander, A., Agrawal, M., Achalla, V. P. K., Pal, U. N., Pandey, M. M., & Kesharwani, P. (2022). Cold atmospheric plasma therapy in wound healing. Process Biochemistry, 112, 112-123.
[2] Schmidt-Bleker, A., Bansemer, R., Reuter, S., & Weltmann, K.-D. (2016). How to produce an NOx- instead of Ox-based chemistry with a cold atmospheric plasma jet. Plasma Processes and Polymers, 13(11), 1120-1127.
[3] Feibel, D., Golda, J., Held, J., Awakowicz, P., Schulz-von der Gathen, V., Suschek, C. V., Opländer, C., & Jansen, F. (2023). Gas Flow-Dependent Modification of Plasma Chemistry in µAPP Jet-Generated Cold Atmospheric Plasma and Its Impact on Human Skin Fibroblasts. Biomedicines, 11(5).
Hydroponic growth of food plants in greenhouses plays a key role in assuring the future autonomy of food supply, especially in harsher climate zones like Canada. Unfortunately, greenhouse culture yields are drastically reduced by the proliferation of pathogenic microorganisms in the humid greenhouse environment. In Canada, the fungus Pythium ultimum has a particularly large impact on food production by causing root rot. Chlorination and ozone have failed to combat Pythium spp. We thus envision water treatment with non-thermal air plasma, an efficient source of chemically highly reactive oxygen and nitrogen species that are responsible for the anti-microbial activity of plasma. As an additional benefit to the pathogen-inactivating properties of plasma-treated water, the plasma-generated reactive nitrogen species constitute a chemical approach to fixing nitrogen, providing one of the essential plant nutrients.
A chemical study of the reactive species in the liquid phase will be presented, before comparing the efficiency of Pythium ultimum deactivation in the liquid phase following different plasma treatments. This efficiency will first be analysed using ELISA assays on inoculated distilled water, followed by a comparison of hyphal mass growth in dextrose broth medium over a period of one week post treatment. Further proof of Pythium inactivation is investigated with SEM imaging, and the plausible chemical pathways for inactivation are discussed.
Purpose: In quantitative nuclear medicine, the displacement of the positron before annihilation induces a loss of spatial resolution. As the size of the detectors decreases, this effect becomes non-negligible. In this presentation, I shall address a method of quantifying the displacement of the positron, especially its impacts on the segmentations in dynamic acquisitions. This method takes into account the specific radionuclide, through its spatial distribution, and the physical dimensions of the voxels.
Method: Using Monte Carlo simulations, it was possible to validate an analytical solution to the loss of spatial resolution in nuclear medicine. This was computed using the estimated displacement of the positron within a voxel of the image.
Since many reconstruction schemes are commercial, only the resulting image can be used. The present method is applicable to the post-processed image, with minimal information about the reconstruction schemes within the clinical device. It is thus applicable to a wide variety of images and can be applied retrospectively to previously acquired data.
The whole process is included in the TRU-IMP graphical user interface.
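As a rough illustration of the effect being quantified (our own sketch with assumed numbers, not the authors' analytical solution or the TRU-IMP implementation), positron displacement can be mimicked by blurring an ideal point source with a kernel whose width, in voxel units, reflects the effective displacement of the chosen radionuclide:

```python
# Sketch: blur an ideal point source with a Gaussian kernel whose width stands in
# for the effective positron displacement of the radionuclide (assumed values only).
import numpy as np
from scipy.ndimage import gaussian_filter

voxel_mm = 2.0              # assumed voxel size
displacement_mm = 1.0       # assumed effective positron displacement (radionuclide-dependent)

image = np.zeros((64, 64, 64))
image[32, 32, 32] = 1.0     # ideal point source

blurred = gaussian_filter(image, sigma=displacement_mm / voxel_mm)
print(blurred.max())        # peak drops as activity spreads into neighbouring voxels
```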
Results: This method of quantifying the loss of resolution takes into account the radionuclide used through its displacement and the size of the voxels.
The process was used on phantom and pre-clinical acquisitions in the context of segmentation. The results are consistent with expectations, in the sense that a larger segmented volume reduces this loss of resolution, as the central volume is still conserved.
Future Work: Future work will be twofold: investigating the impact of a magnetic field on this loss of resolution, especially in the context of PET-MRI, and developing a segmentation method that uses this analysis as a loss function.
Brain metastases have been effectively treated with stereotactic radiosurgery (SRS) delivered to visible growths, followed by whole brain radiotherapy (WBRT) for microscopic disease. SRS alone is the preferred treatment despite high recurrence, as conventional WBRT is associated with increased cognitive decline. With improved systemic treatments, breast cancer patients are living longer, challenging the decision to withhold WBRT. Cognitive decline has been linked to chronic inflammation; radiation induces inflammation via glial cell (microglia, astrocytes) activation. When glial cells are activated, translocator proteins (TSPO) on the mitochondria are upregulated and promote inflammation. Glial activation has been assessed in neurological disease using positron emission tomography (PET) with 18F-FEPPA (https://pubchem.ncbi.nlm.nih.gov/compound/24875298), a ligand with high affinity for TSPO. The aim of this study is to investigate the neuroinflammatory response to half-brain irradiation using 18F-FEPPA PET.
To evaluate radiation-induced glial activation, half-brain irradiation was performed on non-tumor-bearing immunocompetent mice (BALB/c) using a micro-CT/RT system with sham (n=9), 4 Gy (n=9) and 12 Gy (n=9) delivered in one fraction. Dynamic 18F-FEPPA PET was acquired for 90 minutes at 48 hours, 2 weeks, and 4 weeks post-irradiation to quantify the level and duration of glial activation. Immunohistochemistry was completed with stains for TSPO, the target of 18F-FEPPA.
To date, dynamic 18F-FEPPA PET scans have been acquired for n=8 mice, with n=2 additional mice at each dose and timepoint for immunohistochemistry. Unirradiated and irradiated hemispheres of the brain for all mice showed similar radiotracer uptake with 18F-FEPPA PET and similar histology signal for TSPO, suggesting that partial brain irradiation triggers global inflammation in the brain at 48 hours, 2 weeks, and 4 weeks. However, differences were present in the TSPO stain signal corresponding to different areas of the brain.
This study will map the brain's spatiotemporal dose response to partial irradiation. Further PET and histology data collection with a glial activation immunofluorescent stain and analysis is ongoing. Following this, half-brain irradiation will be investigated in a breast cancer brain metastasis model to provide a comprehensive understanding of radiation and subsequent glial activation.
Tumour associated macrophages (TAMs) constitute up to 50% of the breast cancer microenvironment and are linked to adverse patient outcomes. Conventionally, TAM density is assessed through immunohistochemistry (IHC); however, this relies on invasive biopsies and is not representative of the entire tumour. Thus, there is a need for non-invasive, quantitative imaging for in vivo TAM assessment. Superparamagnetic iron oxide (SPIO) particles have been injected intravenously (IV) to label macrophages in situ for TAM imaging with MRI; however, quantifying TAM density is challenging. Magnetic particle imaging (MPI) is an emerging modality which can detect cells labelled with SPIO nanoparticles and can be used for non-invasive TAM assessment.
MPI has previously been evaluated for TAM cell tracking; however, quantification was only possible for fixed tumour tissues imaged ex vivo due to known dynamic range limitations. When iron samples with large differences in concentration are present in the same field of view (FOV), the higher-concentration sample oversaturates the signal, a consequence of the regularization required for stable reconstruction. This represents a major roadblock for in vivo MPI in applications where two or more sources of signal exist. In this study, we address this challenge by employing an advanced reconstruction algorithm, allowing a small FOV to be focused on the tumour. We then demonstrate the success of this method in an in vivo tumour model and show enhanced image quality and successful quantification of TAMs in mouse mammary tumours with different metastatic potentials (4T1, n=8 and E0771, n=8).
Utilizing in vivo MPI, we did not see significant differences in the MPI signal for 4T1 tumours compared to E0771. This work presents the first demonstration of in vivo imaging and quantification of TAMs using MPI. Our findings highlight the potential of MPI for in vivo TAM quantification despite dynamic range limitations, offering a promising avenue for broader applications in cancer research and potentially overcoming constraints of MPI in other in vivo imaging contexts.
Magnetic resonance imaging generates image contrast via interactions between the nuclear magnetic moment of atoms and applied magnetic fields. Hydrogen has a non-zero magnetic moment and is abundant in the water within human tissue, making it the predominant source of signal in MRI. Moreover, the active nature of MRI, where the spins are “excited” and the magnetic fields they emit are subsequently measured, leads to a rich landscape of potential contrast mechanisms. One such contrast is the molecular diffusion of water, which can be measured using diffusion MRI (dMRI). Diffusion MRI provides unique insight into microscopic tissue structure because the distance water molecules diffuse on the time scales relevant to dMRI is comparable to cell sizes (~µm), and cell membranes inhibit diffusive motion. However, a limitation of the “apparent diffusion coefficient” measured with traditional dMRI is that it is an over-simplification of the complex dynamics of diffusion in tissue. In reality, diffusion dynamics depend on the size, shape, packing density, and permeability of cellular structures, among other tissue properties. Fortunately, it is possible to indirectly probe these different tissue properties using advanced diffusion MRI methods that manipulate additional acquisition parameters. This presentation will introduce diffusion MRI and its clinical applications, along with recent advances that offer improved microstructural specificity.
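For context, the conventional “apparent diffusion coefficient” referred to above is extracted from the standard mono-exponential signal model (a textbook relation, stated here for orientation rather than taken from this talk): $S(b) = S_0 \, e^{-b\,\mathrm{ADC}}$, so that $\mathrm{ADC} = -\ln[S(b)/S_0]/b$, where $b$ summarizes the strength and timing of the diffusion-sensitizing gradients. It is precisely the assumption of simple Gaussian diffusion built into this expression that the advanced methods mentioned above go beyond.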
Background
Sodium (23Na) magnetic resonance imaging (MRI) can detect the increased tissue sodium concentration (TSC) exhibited in several tumour types. For prostate cancer imaging, 23Na MRI is conventionally performed using an endorectal coil which is associated with a nonuniform sensitivity profile and limited field of view, constraining its clinical utility. To address these challenges, we have developed a completely external, non-invasive 23Na MRI coil to measure TSC differences between prostate cancer and normal tissue.
Methods
MR imaging was performed at 3 Tesla in six healthy volunteers and ten patients with biopsy-proven prostate cancer. The radiofrequency system included an external flexible transmit/receive butterfly coil consisting of two loops (diameter = 18 cm each, tuned to 32.6 MHz) built in-house for 23Na MRI. 23Na MRI was acquired using a 3D density-adapted radial projection sequence with a nominal isotropic resolution of 5×5×5 mm³. The normal peripheral zone (PZ), normal transition zone (TZ), and prostate cancer lesions in the PZ and TZ were delineated by a radiologist using only proton MRI datasets. The percent difference in TSC (∆TSC) between each lesion and the surrounding normal PZ and TZ was evaluated using a one-sample t-test. Total TSC was also compared between the patients and volunteers using a one-way analysis of variance and a Tukey test for multiple comparisons.
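A minimal sketch of the statistical comparisons just described, using placeholder numbers rather than the study data, could look as follows:

```python
# Sketch of the statistical tests described above (placeholder values, not study data).
import numpy as np
from scipy import stats

# per-lesion percent difference in TSC relative to surrounding normal tissue
delta_tsc = np.array([-25.1, -18.3, -22.0, -15.7, -30.2])
t_stat, p_val = stats.ttest_1samp(delta_tsc, popmean=0.0)       # one-sample t-test vs 0%
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# total TSC (mM) in normal tissue, patients vs volunteers (hypothetical values)
tsc_patients = np.array([74.0, 80.5, 69.2, 77.8, 71.3])
tsc_volunteers = np.array([65.0, 68.3, 62.1, 70.4, 66.8])
f_stat, p_anova = stats.f_oneway(tsc_patients, tsc_volunteers)  # one-way ANOVA across groups
print(f"F = {f_stat:.2f}, p = {p_anova:.4f}")
```

With more than two groups, a Tukey post-hoc test would follow the ANOVA, as in the study design described above.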
Results
Across ten patients, a total of 13 lesions were detected (8 PZ, 5 TZ). Comparing PZ and TZ lesions to normal PZ and TZ, respectively, the mean ∆TSC (-20.7%) was significantly lower than 0%. There were no significant differences in the TSC of normal tissue between patients (PZ: 74.9 ± 14.4 mM, TZ: 78.9 ± 20.1 mM) and volunteers (PZ: 66.5 ± 12.0 mM, TZ: 63.6 ± 14.4 mM).
Discussion
For the first time, an external 23Na MRI coil was used to quantify TSC in human prostate cancer and normal prostate tissue. In contrast to previous studies employing 23Na MRI endorectal coils, lesions presented with lower TSC compared to surrounding normal tissue. This finding suggests that TSC is influenced in part by differences in cell density. Specifically, many tumour types including prostate cancer are characterized by a denser cellular environment compared to normal tissue, which would decrease total TSC. Future work will focus on establishing the sensitivity of this coil in characterizing tumour aggressiveness using Gleason grade defined by whole-mount histopathology as the ground truth.
Heavy fermion compounds are strongly correlated systems with partially filled 4f or 5f electron bands. The ground states of heavy fermion materials are determined by a competition between the on-site Kondo interaction that screens the local 4f or 5f magnetic moments and the inter-site Ruderman-Kittel-Kasuya-Yosida exchange interaction. Muon spin rotation and relaxation ($\mu$SR) techniques have been used for decades to investigate these ground states. In recent years we have applied $\mu$SR to the study of two heavy-fermion compounds of special interest, namely, the candidate topological Kondo insulator SmB6 and the rare spin-triplet superconductor UTe2. In this talk I will describe some of our experiments on these compounds and forthcoming $\mu$SR capabilities at TRIUMF.
Newly discovered properties of magic angle graphene and other systems from the same family propelled the field of twistronics and motivated new research into tunable unconventional quantum phases. The research is driven in part by the search for robust quantum anomalous Hall insulators, topological superconductivity, correlated electronic states, and fractional statistics and by the prospect of quantum simulation in solid state. Scanning tunneling microscopy (STM) has proved crucial for the progress of the moiré physics research. Through high-resolution magnetic-field scanning tunneling spectroscopy, we demonstrate the importance of the fine details of quantum geometry in moiré quantum matter. Specifically, I will report on the detection of the orbital magnetic moment and the emergent anomalously large orbital magnetic susceptibility in twisted double bilayer graphene.
Time- and angle-resolved photoemission spectroscopy (TR-ARPES) is a powerful technique for exploring intrinsic and light-induced electrodynamics in quantum materials [1]. In this talk, I will present the novel TR-ARPES endstation at the Advanced Laser Light Source (ALLS) user facility. I will show how, by combining sample voltage bias and a hemispherical electron analyzer with next-generation deflector technology, we are able to probe a large fraction of the momentum space of quantum materials even with low-photon-energy ultraviolet light (6 eV).
This technical capability allowed us to investigate electron dynamics driven by mid-infrared light in Bi2Sr2CaCu2O8+x (Bi2212), the prototypical high-temperature cuprate superconductor, far beyond the near-nodal region previously explored [2,3]. I will present preliminary results on the momentum dependence of the light-induced melting of the long-range coherence of the macroscopic superconducting condensate. This study demonstrates the state-of-the-art capabilities of the TR-ARPES endstation at ALLS and provides new insights into the transient evolution of electron interactions in cuprates.
[1] Boschini, Zonno, Damascelli arXiv:2309.03935 (to appear in Reviews of Modern Physics)
[2] Smallwood et al., Science 336, 1137-1139 (2012)
[3] Boschini et al., Nature Materials 17, 416-420 (2018)
To be revised shortly
Monique Rivard is a recent Engineering Physics graduate working at Honeywell Aerospace supporting a wide variety of satellite/space development programs. During her undergraduate degree, Monique worked for the Advanced Aircraft Design Laboratory at Royal Military College, blending physics with prototype fixed-wing vehicles to investigate autonomous gliding modes and navigation with magnetic field mapping. At Honeywell, her work includes embedded flight software development, ground station management and configuration, testing and design of terminal subsystems for optical intersatellite and lunar communication links, and science instruments for geophysical monitoring. This talk will provide an overview of the types of projects new graduates with physics backgrounds can support in the Aerospace Industry as well as Monique's path to industry through academia and a professional internship.
We will present an overview of Rydberg atom-based sensing for applications in metrology, communications and radar. Rydberg atom sensors are a new type of radio frequency sensor that promise to have a wide range of uses. Rydberg atom-based sensors have advantages like electromagnetic transparency, self-calibration, broad carrier bandwidth, and optical readout that are unique when compared to conventional antennas. Experiments on a novel approach to Rydberg atom-based sensing that uses a collinear three-photon read-out and detection scheme will be described in this presentation. The experiments show that the collinear three-photon scheme extends the sensing range of the self-calibrated, Autler-Townes sensing mode to lower electric field strengths, while simultaneously improving sensitivity. We demonstrate proof of concept and present concrete results from first experiments, where the spectral resolution is increased by >18 over conventional methods and the sensitivity is increased by >15 over other all-optical readout experiments. Approaches to engineering vapor cells for specific applications will also be described. Experiments on vapor cells that integrate amplification of the RF target signal will be presented. The vapor cell designs are centered on photonic crystal and metamaterial concepts. Vapor cell engineering is a critical element in the development of any Rydberg atom-based sensor.
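For orientation, the self-calibrated character of the Autler-Townes mode comes from the fact that the measured splitting converts to a field amplitude using only Planck's constant and a calculable transition dipole moment. The toy calculation below (with assumed, illustrative numbers for the splitting and dipole moment, not values from these experiments) sketches the idea:

```python
# Sketch of self-calibrated Autler-Townes Rydberg electrometry: E = h * delta_nu / d.
# The splitting and dipole moment below are illustrative assumptions only.
import scipy.constants as const

splitting_hz = 10e6                                         # assumed measured Autler-Townes splitting
dipole_Cm = 1000 * const.e * const.value("Bohr radius")     # assumed Rydberg transition dipole (~1000 e*a0)

e_field = const.h * splitting_hz / dipole_Cm                # RF field amplitude in V/m
print(f"E ~ {e_field:.2f} V/m")
```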
Decide Competitors for Thurs. PM.
Twenty years ago, I co-authored a review of computational astrophysics with Jon Hakkila and Derek Buzasi. At the time, we were asked to speculate on where the field would go, from data analysis to simulation work, and on what progress would be possible with increased computational power. Having been tasked with presenting a brief review of the current state of the art in computational astrophysics for the CAP Congress, I will use this overview to compare against our predictions of 20 years ago, and at the same time to make some predictions for the future. As you can likely imagine, a lot of what we predicted came true, especially in terms of how the field consolidated, but there are some real surprises in what we didn’t see coming. The next twenty years is harder to predict, with radical changes coming to both algorithms and computational hardware, but I’ll take the risk of outlining a few key paradigms that are likely to change how we do computational astrophysics in the long term.
The formation and evolution of galaxies is inherently a multi-physics and multi-scale problem, involving dark matter, star formation and feedback, collisional relaxation-driven N-body dynamics, stellar and binary evolution, and supermassive black-holes (and much more!). Numerical simulations are an extremely powerful tool to incorporate these physical processes and explore their interplay, but suffer limitations from the large range of scales at play. In this talk, I will review recent advances in numerical simulations of galaxy formation and evolution, with a focus on those that aim to resolve the multiphase nature of the interstellar medium and the stellar body of galaxies.
Nuclear astrophysics combines astronomical observations, computational astrophysics simulations and nuclear physics experiments to reveal how the elements formed in stars and stellar explosions. One of the key problems at the centre of nuclear astrophysics is to understand why the most metal-poor stars that formed in the early universe show enigmatic anomalies, such as two-orders-of-magnitude enhancements of C as well as of many heavy elements, such as Sr, Zr, Ba, Eu and Pb, compared to the solar system abundance distribution. Large observational survey-based campaigns have revealed the statistics of the different types of enhancements of these C-enhanced metal-poor stars. These can be related to different dynamic astrophysical origin events in which neutron-capture nucleosynthesis can be induced over a range of high neutron densities. Large-scale stellar hydrodynamics simulations of the dynamic origin of n-capture elements characterize the astrophysical context of this nucleosynthesis and prompted the discovery of a new intermediate n-capture regime in merging H- and He-shell convection zones. However, because of the high neutron densities involved, the reaction pathways are dominated by unstable species for which the required (n,γ) cross sections are only known from rather uncertain Hauser-Feshbach models. This severely limits our ability to validate our simulations against the astronomical abundance observations. I will briefly describe how we organize the necessary multi-disciplinary interactions between observers, computational astrophysicists and nuclear physics experimenters in the Canadian Nuclear Physics for Astrophysics Network (CaNPAN), and how future measurements of n-capture cross sections in inverse kinematics with a storage ring at TRIUMF would allow us to understand how the elements are made in the first generations of stars.
Signals in dark matter direct detection experiments depend on the dark matter distribution in the vicinity of our Sun. If there is a population of high speed dark matter particles in our Solar neighborhood, it can significantly alter the interpretation of results from direct detection experiments. Cosmological simulations that sample potential Milky Way formation histories are powerful tools, which can be used to characterize the signatures of such high speed particles either originating from massive satellite galaxies or from outside of our Milky Way. I will discuss the impact of the high speed dark matter particles originating from the Large Magellanic Cloud in state-of-the-art cosmological simulations, and their implications for dark matter direct detection. I will also discuss whether the local dark matter velocity distribution contains any extragalactic high speed particles.
Over more than a decade, the IceCube Neutrino Observatory has accumulated enormous datasets of neutrinos with energies in the GeV to PeV-scale, opening a new window in searches for beyond the Standard Model physics at the energy frontier. In this talk I will discuss the latest IceCube results, including on-going new searches for sterile neutrinos and dark matter, and provide a look forward of what to expect from the next generation of neutrino telescopes including the Canada-based Pacific Ocean Neutrino Experiment.
In my talk I will present recent results from the ATLAS Experiment concerning the search for beyond the standard model physics that could explain the particle nature of dark matter.
We welcome you to join a unique symposium, where we will break from convention with an unconferencing style to explore Building Communities of Practice for Equity, Diversity, and Inclusion (EDI) and Outreach in Physics. This event invites students, postdocs, faculty, and professionals alike to exchange insights, showcase best (and wise) practices, and celebrate grassroots EDI efforts within the physics community. Through participant-driven facilitated discussions, we will co-create an agenda that shares resources and fosters connections, emphasizing the importance of inclusivity. With discussions covering strategies for EDI initiatives, decolonization efforts, reaching under-served communities, and impactful outreach and informal education, this symposium offers a space to learn, inspire, and connect as we work towards a more equitable and diverse future in Canadian physics.
Xanadu is a Canadian quantum computing company with the mission to build quantum computers that are useful and available to people everywhere. Xanadu is one of the world’s leading quantum hardware and software companies and also leads the development of PennyLane, an open-source software library for quantum computing and application development. In this talk, attendees will learn about Xanadu’s technologies and R&D activities, the Xanadu Residency Program, and how to prepare for working or doing an internship in the rapidly evolving quantum computing industry.
In this talk, we’ll learn about recent developments in IBM quantum hardware, the quantum programming software stack (Qiskit Patterns and Qiskit Runtime), and some of the utility-scale experiments IBM has been working on in collaboration with academia and industry partners.
There is a wide breadth of careers that physics graduates can go into outside academia: everything from aerospace to finance to computing to engineering to quantum. Come and learn more from our panelists, who include:
Jim Shaffer (Quantum Valley Ideas Lab)
Isaac De Vlugt (Xanadu)
Monique Rivard (Honeywell)
Daniel Cluff (CANmind)
Bill Whelan (Fieldetect).
Dielectric Barrier Discharges (DBDs) can be used for many atmospheric pressure applications, including thin-film coating, sterilisation, treatment of flue and toxic gases, aerodynamic flow control, and energy-efficient lighting devices [1-3]. Depending on the gas, electrical parameters, and electrode configuration, these discharges can operate in the classical filamentary mode or in a homogeneous mode [4-5]. The filamentary mode can be very restrictive for some applications (e.g. surface coating). Nevertheless, conditions to get a homogeneous DBD can also be restrictive. Homogeneous DBDs at atmospheric pressure have been obtained in helium, argon, and nitrogen [5]. In nitrogen, the ionisation level is too low to allow the formation of a cathode fall. Thus, the electrical field is quasi-uniform over the discharge gap, like in low-pressure Townsend discharges, and the obtained discharge is called Atmospheric Pressure Townsend Discharge (APTD) [5]. For a Townsend breakdown to occur, a production source of secondary electrons is necessary to sustain the APTD when the electric field is low.
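For context, the role of secondary electrons can be made explicit through the classical Townsend self-sustainment criterion (a textbook relation, quoted here for orientation rather than taken from this work): $\gamma \, (e^{\alpha d} - 1) = 1$, where $\alpha$ is the first Townsend ionization coefficient, $d$ the gas gap, and $\gamma$ the effective secondary electron emission coefficient. At the low reduced electric fields characteristic of an APTD, electron multiplication in the gap is weak, so sustaining the discharge hinges on an efficient supply of seed and secondary electrons.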
This work aims to synthesise the mechanisms that could be at the origin of the production of seed electrons in various molecular gases, and to understand how to promote a Townsend breakdown in these gases. During this presentation, a non-exhaustive overview of the different pre-ionization mechanisms will be provided, and the effect of the main experimental parameters (dielectric materials, gas flow, impurities, shape and frequency of the applied voltage, …) will be discussed. The presentation will be illustrated with results from APTDs obtained in various gases such as N2, N2 + oxidizing gases (O2, NO, N2O), air [7], CO2 [6], N2O …
[1] S. Samukawa et al., J. Phys. D: Appl. Phys. 45, 253001 (2012)
[2] I. Adamovich et al., J. Phys. D: Appl. Phys. 55, 373001 (2022)
[3] U. Kogelschatz, Plasma Chem. Plasma Process. 23(1), 1-46 (2003)
[4] R. Brandenburg, Plasma Sources Sci. Technol. 26, 053001 (2017)
[5] F. Massines et al., Eur. Phys. J. Appl. Phys. 47, 22805 (2009)
[6] C. Bajon et al., Plasma Sources Sci. Technol. 32, 045012 (2023)
[7] A. Belinger et al., J. Phys. D: Appl. Phys. 55(46), 465201 (2022)
Low-temperature plasmas are currently used in many applications, ranging from surface modifications to life science, where discharges operated at atmospheric pressure are common [1,2]. In many plasma sources, the discharge inception and development take place on the nanosecond time scale, while the discharge dimensions are in the mm scale [2]. Consequently, only very fast diagnostics with high spatial resolution can resolve the spatio-temporal discharge evolution, and thereby gain insight in basic discharge properties, especially with respect to the discharge inception.
In this contribution, the benefits and immense possibilities of sub-ns optical diagnostics will be presented. An overview of emission-based state-of-the-art techniques will be given, with a specific focus on fast imaging by intensified charge-coupled device (iCCD) and streak camera systems. With these systems, it was possible to realise temporal resolutions of 5 ps and spatial resolutions of 2 µm [3].
Furthermore, the effectiveness of synchronised optical and electrical diagnostics as a powerful tool for obtaining essential discharge parameters will be demonstrated through the analysis of two examples of atmospheric pressure discharges: pulsed-driven dielectric barrier discharges [4,5] and sub-ns pulsed spark discharges [3]. By combining the sub-ns optical diagnostics with fast electrical measurements, valuable insights into the dynamics and characteristics of low-temperature plasmas can be obtained. This comprehensive understanding of plasma behaviour is essential for further advancements in plasma-based technologies and their applications.
This contribution was supported by the DFG (German Research Foundation) – project number 466331904.
[1] M. Laroussi, Front. Phys. 8:74 (2020)
[2] R. Brandenburg, Plasma Sources Sci. Technol. 26 053001 (2017)
[3] H. Höft et al., Plasma Sources Sci. Technol. 29 085002 (2020)
[4] H. Höft et al., Plasma Sources Sci. Technol. 27 03LT01 (2018)
[5] H. Höft et al., J. Phys. D: Appl. Phys. 55 424003 (2022)
A rotamak device based on the original Flinders Rotamak [1] is being designed and constructed at Queen’s University. Unlike the original Flinders design, which had a spherical pyrex discharge chamber, the Queen’s University rotamak incorporates a cylindrical pyrex chamber with a diameter of about 31 cm and a height of about 47 cm. The design and modelling of the various rotamak subsystems will be presented, along with experimental results from those subsystems which have already been commissioned. The vacuum and fuelling systems, and the radio frequency pre-ionization system, will be presented in particular detail.
References:
[1] I.R. Jones, Physics of Plasmas 6(5), 1950-1957 (May 1999).
The challenges posed by small and heterogeneous medical datasets significantly impede AI development in biomedical data analysis. My research addresses this issue by utilizing innovative partially personalized federated learning frameworks. These frameworks facilitate collaborative learning across multiple medical centers, enhancing the development of precise, personalized AI models. In this presentation, I will begin by introducing the concept of federated learning. Following this, I will present two straightforward yet effective methods for enabling personalized federated learning to support biomedical data analysis using convolutional neural networks and transformers.
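As a minimal sketch of the basic federated averaging idea underlying such frameworks (our own toy illustration; the partially personalized methods presented in the talk are more involved), each centre trains on its private data and only the model parameters are averaged on a server:

```python
# Sketch of federated averaging (FedAvg) with a toy least-squares model per centre.
import numpy as np

def local_update(weights, data, lr=0.1, steps=5):
    """A few steps of local gradient descent on one centre's private data."""
    X, y = data
    for _ in range(steps):
        grad = X.T @ (X @ weights - y) / len(y)
        weights = weights - lr * grad
    return weights

def fedavg(global_weights, centre_datasets, rounds=20):
    """Server loop: broadcast the model, train locally, average the returned weights."""
    for _ in range(rounds):
        local_models = [local_update(global_weights.copy(), d) for d in centre_datasets]
        global_weights = np.mean(local_models, axis=0)
    return global_weights

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
centres = []
for _ in range(4):                      # four simulated medical centres
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    centres.append((X, y))

print(fedavg(np.zeros(3), centres))     # approaches true_w without pooling raw data
```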
Medical images play a critical role in patient management and are used to decide treatment; they also comprise nearly 80% of all hospital data. We have an opportunity to leverage AI in medical imaging to change the way medicine is practiced. AI tools for medical imaging can offer more accurate diagnosis, improve inter-rater agreement across specialists, and reduce turn-around time, all of which improve quality of care. This talk will discuss some of the design, development and deployment considerations, as well as the progress of AI in medical imaging, for radiology and pathology workflow augmentation.
Targeted Alpha Therapy (TAT) is a mode of cancer treatment in which alpha-emitting radionuclides attached to selective delivery molecules are injected into patients to preferentially kill cancer cells; a promising candidate radionuclide is actinium-225. Due to the relatively low activities (MBq) used in this treatment and the absence of positron emissions in actinium-225’s decay chain, well-established methods such as SPECT or PET are not suitable for imaging in-vivo dose distributions. To address this issue, we are investigating the use of a cylindrical single-volume Compton camera for imaging patients undergoing targeted alpha therapy. Using the kinematics of Compton scattering, Compton cameras can determine the energies and directions of incident gamma rays without mechanical collimation. This allows the detector to have a relatively high sensitivity so that more of the scarce gamma emissions from a TAT radiopharmaceutical can be captured. By performing Monte Carlo simulations with Geant4 and using our implementation of a List-Mode Ordered Subset Expectation Maximisation (LM-OSEM) algorithm for image reconstruction, we present an assessment of different scintillator materials and geometries to demonstrate the feasibility of such a device.
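As an illustration of the kinematics involved (a sketch with assumed energies, not the authors' reconstruction code), the opening angle of each Compton cone follows directly from the measured photon energies:

```python
# Sketch: Compton scattering angle from the incident photon energy and the energy
# deposited in the first interaction (assumed, illustrative values).
import numpy as np

M_E_C2 = 511.0  # electron rest energy, keV

def compton_angle_deg(e_incident_kev, e_deposited_kev):
    """Cone opening angle from cos(theta) = 1 - m_e c^2 (1/E' - 1/E)."""
    e_scattered = e_incident_kev - e_deposited_kev
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_scattered - 1.0 / e_incident_kev)
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# e.g. the 218 keV gamma line from the Ac-225 decay chain (Fr-221),
# depositing an assumed 60 keV in the first interaction
print(compton_angle_deg(218.0, 60.0))
```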
The tunability of the stacking order in van der Waals materials provides a new and powerful method to engineer their physical properties. In parallel-stacked transition metal dichalcogenides, also known as the rhombohedral stacking order, the equilibrium atomic structure is asymmetric between layers, leading to a spontaneous electrical polarization across the vdW gap. Under an external electric field, the layer configuration and its associated polarization can be switched - a phenomenon recently termed sliding ferroelectricity. We experimentally measured the polarization strength and its spatial distribution in chemically synthesized rhombohedral MoS2. We observed that the domain sizes follow a power-law distribution, suggesting that the shear strain occurring during mechanical exfoliation can induce an avalanche of domain wall motion. These pre-existing domain walls were found to be crucial for the polarization switching behavior, and we leveraged them to achieve non-volatile control over the optical response of these layered semiconductors.
Quantum dimer magnets represent a textbook example of quantum magnetism, where nearest-neighbor spins entangle to form spin singlets (dimers). The excitations of the quantum dimer magnet are a triplet of Stot = 1 states, referred to as triplons. An applied magnetic field causes the lowest energy triplet state (Sz = 1) to be driven to degeneracy with the singlet ground state, resulting in Bose-Einstein condensation (BEC) of the triplons. Typically, the magnetic field induced BEC state for quantum dimer magnets presents as a symmetric dome in the field vs. temperature phase diagram and the system can be effectively mapped to a BEC of triplons in the vicinity of the transitions into and out of the dome. Here we will discuss Yb2Si2O7 which exhibits a monoclinic lattice that forms distorted honeycomb layers of magnetic Yb3+ ions that eludes magnetic order down to 50 mK, entering a quantum dimer magnet state. In Yb2Si2O7, an asymmetric dome was observed in the phase diagram and the critical magnetic fields to enter the BEC are significantly smaller than those of other quantum dimer systems. Theoretical explanations of the asymmetric dome in the phase diagram have differed, particularly focusing on possible anisotropy in the exchange interactions and g-tensor. We will present new inelastic neutron scattering measurements on Yb2Si2O7 that aim to quantify the strength and anisotropy of the magnetic exchange interactions. We have combined multiple measurements of the field polarized spinwaves along different crystallographic directions and linear spinwave theory to fit the magnetic exchange interactions in Yb2Si2O7. These fits provide further evidence that the exchange is highly symmetric (Heisenberg or XXZ-type), contrary to theoretical predictions. We will also discuss our measurements and fitting of the crystal field excitations to determine the crystal field ground state and g-tensor anisotropy of the Yb3+ ions. With these results, we aim to provide the necessary ground state parameters to effectively understand the perplexing field-induced state of Yb2Si2O7 and provide the foundation for future studies to tune the ground state towards novel quantum phases.
The quantum nature of materials spans microscopic to macroscopic scales. This enables a wide array of physical properties that facilitate applications such as energy-relevant technologies. The exotic properties arise from intertwined couplings of symmetry, topology, dimensionality and strong correlations, and are sensitive to external stimuli. We can use this sensitivity to unravel the complex interplay between the degrees of freedom by perturbing the interaction parameters and observing the corresponding responses. Pressure provides a clean and effective tuning parameter, but introduces challenges to successfully performing in situ measurements, especially at low temperatures. In this talk, I will discuss how we employ complementary optical techniques to probe the quantum physics of the strongly spin-orbit-coupled Mott insulator and Weyl semimetal under extreme conditions. Taken together, these experiments open new windows into quantum materials by providing multiple probes for accessing and studying novel phases.
Fingerprints of the properties of exotic nuclei on nucleosynthesis observables have been used for decades to frame our picture of how the heaviest elements in our Solar System came to be. The abundance of elements in our Sun, as well as in nearby metal-poor stars, hints at multiple neutron capture nucleosynthesis processes: the slow (s), intermediate (i) and rapid (r) neutron capture processes. While the s-process terminates its heavy element production at Pb-208, we know that the r-process or i-process must be capable of going beyond, since we observe long-lived actinides like U-238 in stars and traces of Cm-247 in meteorites. However, which astrophysical site(s) are responsible for actinide production, and how heavy the actinides that can ultimately be produced are, remains unclear. The path forward relies heavily on computational calculations, with nucleosynthesis network solvers which post-process output from large-scale hydrodynamics simulations. These networks make use of thousands of pieces of nuclear data, from separation energies of isotopes across the nuclear chart to their capture, decay, and fission rates and branchings. The need to navigate large data sets can continue after the network run as well; for example, further post-processing utilizing databases of emission spectra is required to predict light curves and gamma-rays. It is via such efforts to combine results from leading hydrodynamics simulations of proposed sites with carefully considered nuclear data libraries that we can make meaningful comparisons to astrophysical observables. For instance, in this talk I will discuss how, utilizing observations of metal-poor stars rich in r-process elements, our calculations suggest the presence of fission fragments from isotopes with A~260 [1]. Then, turning to MeV gamma-rays, I will discuss how our nucleosynthesis predictions point to a 2.6 MeV emission line of Tl-208 that could be used to hunt locally for in situ neutron capture nucleosynthesis, from both i-process and r-process sources [2].
[1] Roederer, Vassh, et al. Science 382, 6675, 1177-1180, Dec 2023
[2] Vassh, Wang, Lariviere, et al. Phys.Rev.Lett. 132, 052701, Jan 2024
Our Galaxy is filled with complex astronomical phenomena, ranging from star formation, planet formation, stellar interactions and mergers, accretion discs, tidal disruption events and more. Many of these systems and objects involve a diverse array of physics, such as magnetic fields, dust, or general relativity. This can make numerical simulations difficult to perform, as it means creating multi-physics solvers that work together to produce accurate solutions. The range of spatial and temporal scales also makes it challenging to maintain computational efficiency, for example, following the formation of a protostar from a molecular cloud core involves nearly 20 orders of magnitude change in density. I will discuss our efforts to build smoothed particle hydrodynamics (SPH) methods for the general study of astrophysics within the Milky Way, focusing on the multi-physics solvers we have built into the Phantom SPH code and our efforts to scale up parallel performance.
Axion-like particles have been gaining popularity recently as potential solutions to dark matter with fascinating wave-like dynamics. In this talk, I will specifically consider fuzzy or ultralight dark matter (FDM/ULDM): a candidate that keeps the successes of CDM on large scales but alleviates tensions on small scales. This small-scale behavior is due to characteristic observable cores in ULDM called solitons, which also correspond to the ground state of the equations governing ULDM. On the other hand, this same signature makes ULDM halos expensive and onerous to simulate by demanding very high resolution. One promising avenue for computationally studying ULDM dynamics is by treating individual halos as hydrogen atoms and calculating the full spectrum of their eigenstates instead; individual eigenstates can then be linked to the qualitative behavior of the halo. In this talk, I will outline how and why this approach is useful, as well as show its applications to observationally relevant phenomena, including the formation of halos, the core-halo mass relation of ULDM, and the effects of supernovae feedback on spherically symmetric halos.
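To make the hydrogen-atom analogy concrete, the sketch below (a toy illustration in dimensionless units, using a fixed external potential rather than the self-consistent gravitational potential of a real ULDM halo) shows how an eigenstate spectrum can be extracted on a radial grid:

```python
# Sketch: lowest radial (l = 0) eigenstates of a fixed -1/r potential on a grid,
# illustrating the "halo as hydrogen atom" eigenstate decomposition. In actual ULDM
# work the potential is determined self-consistently from the density
# (Schrodinger-Poisson), which this toy example does not attempt.
import numpy as np

n = 2000
r = np.linspace(1e-3, 80.0, n)
dr = r[1] - r[0]
V = -1.0 / r                          # fixed attractive potential, dimensionless units

# Hamiltonian for u(r) = r * psi(r):  -u''/2 + V u = E u  (finite differences)
off = -0.5 / dr**2 * np.ones(n - 1)
H = np.diag(1.0 / dr**2 + V) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)
print(energies[:3])                   # close to the hydrogen-like values -1/2, -1/8, -1/18
```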
This paper proposes a conceptual framework that predicts the rotational velocity of spiral galaxies as a function of the radial distance from their center of mass, without hypothesizing any dark matter. It is based on an emergent modified gravity paradigm derived from Einstein’s general relativity and relies on erfc scalar field metrics. A short recall of the previously published modified gravity model is first presented; then the theoretical equation describing a galaxy velocity profile is established, based on an emergent parameter, the galaxy's proper length. Levenberg-Marquardt curve-fitting optimizations on the 551 galaxies of the Sofue (2018) database are reported. The proposed equation fits the galaxy velocity behavior over the full range of radial distances from the galactic center of mass. Without requiring any dark matter, the model gives very good results, with an SNR > 20 dB in 91% of the 291 galaxies of the C-series (266/291), 81% of the 31 galaxies of the P-series (25/31), and 60% of the 229 galaxies of the S-series (138/229). Moreover, the whole model is consistent with the Tully-Fisher relationship, which can be derived from it. According to the present paradigm, the constant component of the erfc potential associated with a given mass plays the role of a huge baryonic energy reservoir that is involved in the galaxy rotations, fixing for each galaxy a constant upper limit on the velocity. In a sense, this baryonic energy relying on a constant gravitational potential can misleadingly be interpreted as a kind of dark matter. Taking the other side of the coin, the velocity profile of galaxies, with their tendency to become constant at great distances, can be seen as a direct manifestation of and support for the whole emergent erfc potential paradigm.
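As an illustration of the fitting workflow only (the profile below is a placeholder that flattens at large radius, not the paper's erfc-metric expression, and the data are synthetic rather than drawn from the Sofue database), a Levenberg-Marquardt fit and an SNR figure of merit can be set up as:

```python
# Sketch: Levenberg-Marquardt fit of a placeholder erfc-based rotation-curve profile
# to synthetic data, with an SNR figure of merit in dB. Illustrative only.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def v_model(r, v_inf, r_s):
    """Placeholder profile that rises and flattens to v_inf on the scale r_s."""
    return v_inf * (1.0 - erfc(r / r_s))

r = np.linspace(0.5, 30.0, 60)                          # radii in kpc (synthetic)
rng = np.random.default_rng(1)
v_obs = v_model(r, 220.0, 4.0) + rng.normal(0.0, 5.0, r.size)

popt, _ = curve_fit(v_model, r, v_obs, p0=[200.0, 3.0], method="lm")
residual = v_obs - v_model(r, *popt)
snr_db = 10.0 * np.log10(np.sum(v_obs**2) / np.sum(residual**2))
print(popt, f"SNR = {snr_db:.1f} dB")
```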
Low-mass accelerated dark matter (DM) is very well motivated and has been the subject of much attention in the literature. These fast-moving particles can gain enough kinetic energy to surpass the energy thresholds of some large-volume terrestrial detectors. For instance, fast-moving DM can deposit sizable amounts of energy at both large-volume neutrino detectors and dark matter direct detection experiments. In this talk, I will focus on searches for both multi-component "boosted" DM and cosmic-ray accelerated DM. I will present recent and on-going work which explores these accelerated DM scenarios using a variety of probes.
The Super Cryogenic Dark Matter Search (SuperCDMS) SNOLAB is a world-leading direct detection experiment currently under construction, expected to begin full science runs next year. The successor to the SuperCDMS detectors previously operated at the Soudan Underground Laboratory in Minnesota, it will continue the progression of ever-improving dark matter sensitivities using cryogenic Ge and Si crystals. It employs two detector designs, with photolithographically patterned quantum sensors for ionization and phonon signals from particle interactions. The detectors are being deployed in a new radiopure cryostat and shield, which drastically reduce background sources such as radioactive decay products. With the lower energy thresholds enabled by these developments, the focus of the search is widening: from traditional WIMPs at the mass scale of several GeV, to sub-GeV dark matter particle candidates. Detector characterizations, background measurements, and calibrations are already underway at test facilities, including the Cryogenic Underground TEst (CUTE) facility at SNOLAB, aided by gram-scale "HVeV" prototype detectors featuring single electron-hole pair sensitivity. In this talk, I will present the status and plans of the SuperCDMS SNOLAB experiment and associated activities at test facilities, as well as projections for their science results over the next several years.
The microphysical properties of Dark Matter (DM), such as its mass and coupling strength, are typically assumed to retain their vacuum values for any given model when considering DM behaviour across a range of scales. However, DM interactions in different astrophysical and cosmological environments are impacted by the properties of the background, which in turn can substantially affect both DM production and the detection prospects for any given model. This is particularly true for models where a mixing between DM and another field gives rise to oscillations, such as in the case of sterile neutrinos, dark photons and axions.
In this talk, I will provide an overview of some of these effects, especially in the context of DM production. I will detail a general framework for calculating DM abundance when DM is produced through the oscillation of a beyond-the-Standard Model state, in the presence and absence of a resonance. I will discuss the viable parameter space for such a production mechanism and the associated phenomenology.
Big-bang nucleosynthesis (BBN) probes the cosmic mass-energy density at temperatures $\sim 10$ MeV to $\sim 100$ keV. Here, we consider the effect of a cosmic matter-like species that is non-relativistic and pressureless during BBN. Such a component must decay; doing so during BBN can alter the baryon-to-photon ratio, $\eta$, and the effective number of neutrino species. We use light element abundances and the cosmic microwave background (CMB) constraints on $\eta$ and $N_\nu$ to place constraints on such a matter component. We find that electromagnetic decays heat the photons relative to neutrinos, and thus dilute the effective number of relativistic species to $N_{\rm eff} < 3$ for the case of three Standard Model neutrino species. Intriguingly, likelihood results based on Planck CMB data alone find $N_{\nu} = 2.800 \pm 0.294$, and when combined with standard BBN and the observations of D and $^4$He give $N_{\nu} = 2.898 \pm 0.141$. While both results are consistent with the Standard Model, we find that a nonzero abundance of electromagnetically decaying matter gives a better fit to these results. Our best-fit results are for a matter species that decays entirely electromagnetically with a lifetime $\tau_X = 0.89 \ \rm sec$ and pre-decay density that is a fraction $\xi = (\rho_X/\rho_{\rm rad})|_{10 \ \rm MeV} = 0.0026$ of the radiation energy density at 10 MeV; similarly good fits are found over a range where $\xi \tau_X^{1/2}$ is constant. On the other hand, decaying matter often spoils the BBN+CMB concordance, and we present limits in the $(\tau_X,\xi)$ plane for both electromagnetic and invisible decays. For dark (invisible) decays, standard BBN (i.e. $\xi=0$) supplies the best fit. We end with a brief discussion of the impact of future measurements including CMB-S4.
All CAP delegates who support equity in physics are warmly invited to join DGEP and come to our reception at Western’s Grad Club. Light appetizers will be served and drinks will be available for purchase. We appreciate your RSVP in advance (via the CAP registration webpage or by emailing programs@cap.ca) for planning purposes.
Celebration of the new DQI
It is a tremendously exciting time for fusion energy: after six decades of research and experimentation, self-propagating fusion burns have been achieved in the laboratory at the National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory (LLNL) in California. NIF, the world’s largest and most energetic laser, uses 192 laser beams to deliver over two megajoules of energy in nanoseconds, compressing and heating mm-scale fusion fuel capsules to temperatures and densities greater than the center of the sun. On December 5, 2022, for the first time in the history of laboratory fusion research, ignition was achieved, where a target released more energy out than went in to drive it, a key goal of the U.S. stockpile stewardship program and an essential first step on the path to fusion energy. This talk will give an overview of the scientific and engineering advancements that brought about this breakthrough in inertial confinement fusion (ICF), and the next steps being taken to push to higher yields.
The success of the NIF has contributed to a surge of interest in fusion energy, in both the public and private sectors. The progress and challenges for inertial fusion energy (IFE) will be presented, along with the work being done at LLNL and elsewhere to bring about this energy source of the future.
On December 5, 2022, a milestone was reached in fusion energy research with the achievement of scientific breakeven for the first time in a controlled laboratory environment [1]. This was done at the Lawrence Livermore National Laboratory (LLNL) using 2.05 MJ of laser drive energy and releasing 3.1 MJ of fusion energy in an indirect-drive scheme, coupling laser energy into x-rays before imploding the fuel capsule. This demonstration that fusion energy can now be used as a clean, carbon-free source of energy has spurred a significant jump in interest around the world in pursuing routes to fusion energy. We have been studying the physics of laser fusion at the University of Alberta for several decades and recently have been investigating issues related to direct-drive laser schemes using more advanced ignition techniques such as shock ignition and fast ignition. Such schemes would have significantly higher gains than the indirect-drive scheme demonstrated at LLNL. Key issues for shock ignition, which uses a high-intensity laser spike to generate an intense compressional shock wave at the end of the normal fuel compression pulse, are the laser-plasma instabilities leading to backscatter and nonuniform illumination of the fuel capsule, which degrade the fusion yield significantly. For the fast ignition technique, a separate intense 20-40 ps laser pulse is used to generate a beam of protons or electrons which is directed at a small spot on the side of the laser-compressed fuel core to ignite the fusion reactions. For fast ignition using protons, the stopping power of the protons in the dense compressed fuel core is an important parameter. For fast ignition using electrons, guiding of the electrons to the ignition point at the edge of the fuel core will require strong magnetic guide fields. We have been collaborating on a number of international experimental campaigns to investigate these issues. A brief overview of the issues and results from these campaigns will be presented.
Minimizing or eliminating the need for induction from a central solenoid during startup, ramp-up and sustainment of a tokamak plasma is a critical challenge in magnetic fusion energy. Solenoid-free startup techniques such as helicity injection (HI) and radiofrequency (RF) wave injection offer the potential to simplify the cost and complexity of fusion energy systems by reducing the technical requirements of, or need for, a central solenoid. Pegasus-III is a new solenoid-free, extremely low aspect ratio spherical tokamak (ST) (A > 1.22, $I_p$ < 0.3 MA, $B_T$ < 0.6 T, pulse length ~ 100 ms) focused on studying innovative non-solenoidal tokamak startup techniques. Pegasus-III will be equipped with a new local helicity injection (LHI) system capable of $I_p$ < 0.3 MA, sustained coaxial helicity injection (CHI) system, transient CHI and a 28 GHz gyrotron-based system for initial electron Bernstein wave (EBW) and electron cyclotron (EC) heating. Initial experiments have focused on establishing high-Ip LHI scenarios and have successfully produced $I_p$ > 200 kA plasmas with $I_{inj}$ ~ 12 kA with toroidal field of 0.3 T. Near term efforts are focused on increasing $B_T$ to 0.6 T. Follow on experiments will focus on the deployment and testing of transient CHI, modest sustained CHI and low-power microwave studies. Pegasus-III will provide key enabling power plant relevant technology to directly test proposed plasma startup, ramp-up scenarios envisioned for larger scale ST devices, investigating methods to synergistically improve the target plasma for consequent bootstrap and NBI current sustainment.
*Work supported by US DOE grant DE-SC0019008.
Lung cancer remains the leading cause of cancer death in Canada. More accurate risk stratification tools are needed to determine patient prognosis and aid in determining optimal treatment plans. Computed tomography (CT) and positron emission tomography (PET) images are widely used for cancer staging. Artificial intelligence (AI) models integrating quantitative imaging biomarkers have the potential to provide additional information on disease prognosis that is not visible to the radiologists’ eye. This talk will describe the development, validation, and evaluation of a clinical decision-support system integrating AI models utilizing multi-modality imaging and clinical information for the risk stratification of lung cancer patients following surgery. These computational imaging-based models can assist clinicians in decision-making and allow for personalized medicine, with the goal of improving outcomes for cancer patients.
This is an invited talk and the abstract will be posted when available.
Quantifying structural and functional (ventilation and perfusion) abnormalities is relevant to the clinical understanding and management of patients with chronic obstructive pulmonary disease (COPD). There exist several established computed tomography (CT) and magnetic resonance imaging (MRI)-based methodologies that can quantify structural and ventilation/perfusion information, but they require injected or inhaled contrast agents, ionizing radiation or expensive and specialized equipment. The purpose of this presentation is to review emerging imaging methods that provide structural/functional information at low cost, and therefore may be widely utilized for assessment of lung diseases.
Electrides are unconventional ionic crystals with excess electrons that decouple from the atomic nuclei and fill voids in the lattice. With layered electrides, the excess electrons form 2D delocalized sheets confined to the interstitial region between the atomic layers, or on their surfaces when exfoliated to form 2D electrides (known as “electrenes”). The spatial decoupling of the electrons from the lattice results in very low work function, high conductivity, and weak electron-phonon coupling. High throughput screening has expanded the number of known electrides from a few to a few hundred and has identified electrides that are magnetic, semiconducting, topologically nontrivial, and superconducting. As a result, electrides are promising material candidates for applications related to transparent conductors, solid-state dopants, 2D semiconductor contacts, electron emitters, and interconnects. In this talk, I will present our recent efforts exploring the unusual transport and electron-phonon scattering characteristics of layered electrides using density functional theory.
Novel quasi-2D magnets have recently attracted much attention. The MBE synthesis route is highly desirable for interface control when implementing strain engineering and/or hybridizing with other quantum systems. In situ prepared, atomically sharp interfaces further enable fundamentally new phenomena, while providing opportunities in spintronics that leverage interface-driven versatility [1]. Ferromagnetic Cr$_2$Te$_3$ ultrathin films, optimally grown on Al$_2$O$_3$(0001) and SrTiO$_3$(111), manifest an extraordinary sign reversal in the anomalous Hall conductivity as temperature and/or strain are modulated. The nontrivial Berry curvature in momentum space is responsible for this exotic behavior [2]. Moreover, when proximitized with a (Bi,Sb)$_2$Te$_3$-type topological insulator, magnetic ordering in monolayer Cr$_2$Te$_3$ is favorably enhanced via the Bloembergen-Rowland interaction, displaying an increased Curie temperature [3]. Combining advanced scanning tunneling microscopy, magnetic force microscopy, transmission electron microscopy, depth-sensitive polarized neutron reflectometry, magnetotransport and ab initio simulation, Cr$_2$Te$_3$ has been established as a far-reaching platform for further investigating the marriage of magnetism and topology, in both real and reciprocal space. These findings provide new perspectives on magnetic topological materials in general, which are topical for the future development of topological spintronics.
References
[1] H. Chi and J. S. Moodera, "Progress and prospects in the quantum anomalous Hall effect", APL Mater. 10, 090903 (2022). https://doi.org/10.1063/5.0100989
[2] H. Chi, Y. Ou, T. B. Eldred, W. Gao, S. Kwon, J. Murray, M. Dreyer, R. E. Butera, . . . J. S. Moodera, "Strain-tunable Berry curvature in quasi-two-dimensional chromium telluride", Nat. Commun. 14, 3222 (2023). https://doi.org/10.1038/s41467-023-38995-4
[3] Y. Ou, M. Mirzhalilov, N. M. Nemes, J. L. Martinez, M. Rocci, A. Akey, W. Ge, D. Suri, . . . H. Chi, "Enhanced Ferromagnetism in Monolayer Cr$_2$Te$_3$ via Topological Insulator Coupling", arXiv:2312.15028 (2024). https://doi.org/10.48550/arXiv.2312.15028
We report X-ray powder diffraction, elemental analysis, electrical resistivity, magnetic susceptibility, specific heat and angle-resolved photoemission spectroscopy (ARPES) measurements on single crystals of YNiSn$_2$. This compound crystallizes in an orthorhombic structure (space group Cmcm) with lattice parameters a = 4.409 Å, b = 16.435 Å, c = 4.339 Å. YNiSn$_2$ presents a weak Pauli paramagnetic susceptibility $\chi_0 = 2(3)\times10^{-5}$ emu/mol-Oe and a small electronic specific-heat Sommerfeld coefficient $\gamma$ = 4 mJ/mol K$^2$, consistent with a low density of states at the Fermi level. Interestingly, YNiSn$_2$ exhibits, at T = 1.8 K, a giant positive magnetoresistance (MR) of nearly 1000% with a quasi-linear field-dependent increase up to H = 16 T and a field-induced metal-insulator-like crossover at high field. Furthermore, quantum oscillations were observed for particular field orientations. The ARPES experiments reveal interesting features that may be associated with the presence of surface states and Dirac cones in the band structure of the material. This compound could be the first realization of a topological Dirac semimetal involving transition-metal p and/or d electrons.
Dr. Young-Kee Kim, President of the American Physical Society (APS), will provide a brief update on APS initiatives, including recent collaboration and cooperation between the APS and CAP.
The ATLAS detector upgrade for the HL-LHC, scheduled to begin operation in 2029, is an ambitious program to extend the LHC physics program of discoveries and measurements with a record luminosity of high-energy parton collisions. Canadian institutions are playing a leading role in designing, building, and commissioning the upgraded detector, including the charged-particle Inner Tracker, the Liquid Argon Calorimeter, and the Muon Spectrometer. A snapshot of these projects is presented, describing their new cutting-edge technologies, progress on their construction, and how the ATLAS Collaboration is preparing for their hardware and software integration.
As we approach the beginning of the High Luminosity Large Hadron Collider (HL-LHC) by the decade's end, the computational demands of traditional collision simulations have become untenably high. Current methods, relying heavily on Monte Carlo simulations for event showers in calorimeters, are projected to require millions of CPU-years annually, a demand far beyond current capabilities. This bottleneck presents a unique opportunity for breakthroughs in computational physics through the integration of generative AI with quantum computing technologies. We propose a quantum-assisted deep generative model that combines a variational autoencoder (VAE) with a Restricted Boltzmann Machine (RBM) embedded in its latent space. The RBM in the latent space provides additional expressiveness to the model. By designing the RBM nodes and connections to leverage the qubits and couplers available in D-Wave's Pegasus Quantum Annealer, our model is able to combine classical and quantum computing. We will make some initial comments on the infrastructure needed for deployment at scale.
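To make the latent-space construction concrete, here is a minimal sketch, assuming hypothetical layer sizes and random couplings, of block Gibbs sampling from an RBM prior; this is the classical analogue of the sampling step that the quantum annealer would perform, not the collaboration's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical RBM dimensions: visible = latent units of the VAE, hidden = auxiliary units.
n_vis, n_hid = 64, 32
W = rng.normal(scale=0.1, size=(n_vis, n_hid))   # couplings (couplers on the annealer)
a = np.zeros(n_vis)                              # visible biases (qubit biases)
b = np.zeros(n_hid)                              # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sample(n_steps=100, n_chains=16):
    """Block Gibbs sampling of binary configurations from the RBM prior."""
    v = rng.integers(0, 2, size=(n_chains, n_vis)).astype(float)
    for _ in range(n_steps):
        p_h = sigmoid(v @ W + b)                           # P(h = 1 | v)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        p_v = sigmoid(h @ W.T + a)                         # P(v = 1 | h)
        v = (rng.random(p_v.shape) < p_v).astype(float)
    return v                                               # latent samples for the VAE decoder

latent_samples = gibbs_sample()
print(latent_samples.shape)   # (16, 64)
```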
PIONEER is a next-generation pion decay experiment that has been approved as high priority at the Paul Scherrer Institute in Switzerland. Building on the former PIENU experiment at TRIUMF, which to date provides the most precise measurement of the ratio of pion decays to electrons compared to muons $\left(R_{e/\mu}\right)$, PIONEER aims to improve this $R_{e/\mu}$ measurement by at least an order of magnitude. This would match the precision of the Standard Model calculation and provide a stringent test of lepton flavour universality. To achieve this goal, PIONEER will employ a modern detector based on two key components: an active highly segmented target, and a large acceptance, 25-radiation length calorimeter. Two calorimeter technology options are being compared: LYSO crystals, and liquid xenon (LXe). Both technologies are fast responding with high light yield, with a key difference being the homogeneity of LXe compared to the natural segmentation of crystals. Liquid xenon homogeneity allows for better energy resolution and angle-independent response, but the lack of natural segmentation raises potential pileup suppression questions. LXe has been successfully used in several low-rate experiments, for example in dark matter and neutrinoless double beta decay searches, but has rarely been used in high-rate experiments. The MEG experiment’s LXe calorimeter is the closest comparable to PIONEER, operating at a ~MHz rate but with a completely different geometry and background configuration. I will present initial Monte Carlo simulations that assess the performance of the envisioned PIONEER LXe calorimeter in combination with the rest of the detectors using a realistic beam simulation.
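For context, the measured quantity is helicity-suppressed in the Standard Model; at tree level (radiative corrections, included in the full calculation, shift the prediction to approximately $1.23\times10^{-4}$) it reads

$$ R_{e/\mu}^{(0)} = \frac{\Gamma(\pi^{+}\to e^{+}\nu_e)}{\Gamma(\pi^{+}\to \mu^{+}\nu_\mu)} = \frac{m_e^{2}}{m_\mu^{2}}\left(\frac{m_\pi^{2}-m_e^{2}}{m_\pi^{2}-m_\mu^{2}}\right)^{2} \approx 1.28\times10^{-4}. $$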
High-resolution, high-granularity calorimetry plays a crucial role in the advancement of modern particle detectors. These detectors are crucial for precise measurements across a broad spectrum of physics phenomena, including the potential detection of dark matter and supersymmetric particles. The CALICE international collaboration has developed scalable calorimeter prototypes to meet the demanding requirements of such detectors. One such prototype is the Digital Hadronic Calorimeter (DHCAL), optimized for event reconstruction using the Particle Flow algorithm. The cubic-meter DHCAL, consisting of about 500,000 readout pads of 1 cm$^2$ each without absorber plates, has been tested at Fermilab. Thanks to its imaging capabilities, the DHCAL provides a powerful tool for detailed analysis of particle showers. In this study, I report on the performance analysis of the DHCAL specifically for pion measurements. Experimental data in the energy range of 1 to 10 GeV are utilized and will be compared with Monte Carlo simulations. This analysis comprises event selection, particle identification, and calibration procedures. The primary objective of my research is to enhance the performance of future detectors through a better understanding of hadronic showers, combined with improved data readout speed and significant cost reduction.
Long-Lived Particles (LLPs) beyond the Standard Model appear in many theoretical frameworks that address fundamental questions such as the hierarchy problem, dark matter, neutrino masses, and the baryon asymmetry of the universe. The LHC may in fact be producing copious numbers of neutral LLPs with masses above a GeV, only to have these sneaky particles escape the main detectors without being spotted. To fill this gap, we have proposed the MATHUSLA detector (MAssive Timing Hodoscope for Ultra-Stable neutraL pArticles), which would be constructed on the surface above CMS and would take data during High-Luminosity LHC operations. The detector would be composed of several layers of solid plastic scintillator, with wavelength-shifting fibers connected to silicon photomultipliers, monitoring an empty air-filled decay volume. In this talk, we will show a new, smaller MATHUSLA design that could be accommodated by available funding envelopes, while still providing world-leading LLP reach. We will also report on background studies for rare Standard Model processes, and the construction of "demonstrator modules" at the University of Victoria and the University of Toronto.
Dwarf spheroidal galaxies are ideal candidates for indirect dark matter searches. They are dark matter dominated and usually contain no intrinsic astrophysical sources of gamma ray emission. In order to accurately predict the dark matter annihilation signal from dwarf spheroidal galaxies, it is crucial to correctly model the phase space distribution of dark matter in them. Hydrodynamical simulations of galaxy formation provide important information on the dark matter distribution in dwarf spheroidal analogues. I will present the dark matter density profile and velocity distribution of Milky Way’s dwarf spheroidal galaxies extracted from state-of-the-art hydrodynamical simulations, with a focus on the Sagittarius dwarf spheroidal galaxy. I will also discuss the implications for indirect dark matter searches from dwarf spheroidal galaxies for velocity-dependent dark matter annihilation models.
Strong experimental evidence points to the existence of dark matter (DM) but, to date, there has been no direct detection of any DM candidate. However, an extensive effort towards this goal is underway. In this talk I will discuss the phenomenology of possible DM candidates that could be detectable in collider-based experiments. I will first quickly review a class of models known as dark sector models, also known as hidden sector models. Such models are generally connected to the Standard Model through so-called portal interactions. I will then focus on one such interaction, known as the vector portal or kinetic mixing. One interesting feature of vector portal models is that dark sector particles can acquire an effective electric charge that is a small fraction of that of the electron. Such particles are therefore known as milli-charged. I will then present recent results showing the range of parameter space for such models that can be probed by an upcoming LHC experiment, MoEDAL-MAPP, discussing the cases of both milli-charged fermions and milli-charged scalars.
Some aspects of the transition from non-living to living matter are best understood within the context of the theory of phase transitions. A good example is the emergence of homochirality, where stereoisomers of left- and right-handed molecules are believed to have undergone spontaneous symmetry breaking from a racemic mix of both forms to a homochiral set of all left- or all right-handed molecules, akin to symmetry breaking in the physics of non-equilibrium second-order phase transitions. This is among the leading explanations for the observed single chirality of all left-handed amino acids in proteins and all right-handed sugars in DNA and RNA. In this talk, building on earlier theoretical work of Laurent et al., I discuss evidence of an earlier phase transition in the origin of life that would have preceded this one, in which life started as achiral networks of interacting molecules that underwent a phase transition to chiral-dominated molecular networks. This transition is first order, suggesting a phase of early evolution in the emergence of life consisting of coupled phase transitions, with relic evidence of this structure in modern metabolism. I discuss implications for experimental approaches to origins-of-life chemistry and also deeper theoretical implications for the role of chirality and chiral symmetry breaking in a more fundamental understanding of what life is.
Molecules in which one of the constituent atoms contains a short-lived, radioactive nuclide have recently been introduced as intriguing objects of research. These radioactive molecules can be tailored to maximize the sensitivity to new physics beyond the Standard Model of particle physics. For example, when incorporating octupole-deformed (‘pear-shaped’) radionuclides into polar molecules, one obtains captivating probes for molecular electric dipole moments (EDMs) with unparalleled sensitivity to phenomena associated with time-reversal-symmetry breaking, especially inside the atomic nucleus. Uncovering novel sources of time-reversal violation has the potential to resolve one of the most tantalizing puzzles in modern physics, i.e. why there is more matter than antimatter in the universe.
Due to their short half-lives, spanning mere weeks, days, or even less, the radioisotopes of interest do not occur naturally but can be synthesized at radioactive ion beam (RIB) facilities such as at TRIUMF, Canada’s particle accelerator centre. There, the recently formed RadMol collaboration is pursuing a program to fully exploit the science potential of radioactive molecules.
In this talk, RadMol’s scientific vision will be presented along with its recent experimental advances. Among others, these include the formation of molecules or their sympathetic cooling via co-trapped and laser cooled ions, both achieved in cooler-bunchers commonly available at modern RIB facilities.
The TUCAN (TRIUMF Ultra-Cold Advanced Neutron) collaboration aims to measure the neutron electric dipole moment (EDM) with improved precision. The experiment will use a new high-intensity ultracold neutron (UCN) source currently being constructed at TRIUMF (Canada's Particle Accelerator Centre, Vancouver, BC). Our UCN production scheme is based on spallation neutron production and super-thermal UCN conversion with superfluid helium (He-II), and has been successfully demonstrated by a prototype UCN source operated from 2017-2019 at TRIUMF. With our newly upgraded source, a statistical EDM sensitivity of $10^{-27}$ e·cm would be achieved in 400 days of data taking. Core components of the new UCN source, such as optimized neutron moderators, a high-performance helium cryostat, and a nickel-plated UCN production vessel have been built and tested, and are being assembled at TRIUMF. The development of components of the neutron EDM spectrometer, such as a magnetically shielded room, atomic magnetometers, UCN polarization analyzers and a UCN precession chamber are advancing in parallel. In this presentation, an overview of the recent progress by the TUCAN collaboration will be presented, and prospects for the new neutron EDM measurement will be discussed.
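For orientation, the quoted sensitivity follows from the standard counting-statistics estimate for a Ramsey-type EDM measurement,

$$ \sigma_d \approx \frac{\hbar}{2\,\alpha\, E\, T \sqrt{N}}, $$

where $\alpha$ is the spin-analysis visibility, $E$ the applied electric field, $T$ the free-precession time, and $N$ the total number of detected UCN; the projected $10^{-27}$ e·cm reach is therefore set largely by the UCN statistics the new source can deliver over 400 days of running.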
There is renewed interest in the exploration of the nature of low-lying collective excitations in nuclei, since several studies have posed serious questions regarding the veracity of multiphonon quadrupole vibrations. A recent survey of nuclei with low-lying states previously believed to have a spherical vibrational structure found that very few passed the criteria. For the few remaining candidates, which included $^{98,100}$Ru, the available spectroscopic data were insufficient to draw conclusions. To address this issue, we have pursued a variety of studies to explore their nuclear structure, including a study of the $^{100}$Ru nucleus through a proton-transfer-reaction experiment at the Maier Leibnitz Laboratorium (MLL) facility in Garching, Germany. Using a 22 MeV proton beam, we performed the $^{103}$Rh($p,\alpha)^{100}$Ru reaction and the resulting emitted $\alpha$ particles were analyzed with a Q3D magnetic spectrograph. The results of the experiment, including the angular distributions of the population cross section, will be presented.
The FIssion Product Prompt gamma-ray spectrometer (FIPPS) is the new nuclear physics instrument at the Institut Laue-Langevin (ILL). FIPPS takes advantage of an intense "pencil-like" neutron beam (flux 10$^{8}$ n/s/cm$^{2}$) to induce neutron-capture and neutron-induced-fission reactions and to study nuclear structure via high-resolution gamma-ray spectroscopy. The array is composed of eight Compton-suppressed HPGe clover detectors. Ancillary devices are possible, such as LaBr$_{3}$ detectors for fast-timing measurements or additional clover detectors (from the IFIN-HH collaboration) to increase efficiency and granularity.
The instrument's performance will be shown, with particular focus on the technique for correcting the cross-talk effects that degrade the energy resolution of the clover detectors. Using a recently developed Geant4 simulation code, angular-correlation analyses with a hybrid gamma-ray array become possible. Examples from (n,γ) and neutron-induced fission reactions will be shown.
The Geant4 simulations also allowed analysis of the scintillator-based active-target data in order to extract sub-picosecond lifetimes in neutron-rich fission fragments from the shape of the peaks in the energy spectrum. This method will be presented, as well as new results in Zr and Nb nuclei.
In order to extend the number of measurable lifetimes in fission fragments, a plunger device is under development. This device will be the first implementation of such a system for lifetime measurements in fission fragments produced at a neutron beam. The design of the device, including a mass-identification setup (3-5 mass-unit resolution), will be shown, and its implementation for a test with a $^{252}$Cf spontaneous fission source will be outlined. Finally, the results of the test of the mass-identification setup at the LOHENGRIN spectrometer will be presented.
We study the excitation spectrum of the one-dimensional spin-1/2 XXZ chain with antiferromagnetic Ising anisotropy across a magnetic quantum phase transition induced by the application of a site-dependent transverse magnetic field. Motivated by the chain antiferromagnet BaCo2V2O8, we consider a situation where the transverse magnetic field has a strong uniform component and a weaker staggered part. Using a combination of analytical approaches and the numerically exact time-dependent matrix product state method, we determine the nature of the excitations giving rise to the spin dynamical structure factor. Below the quantum phase transition, we identify high-energy many-body two-magnon and three-magnon repulsively bound states which are clearly visible due to the staggered component of the magnetic field. At high magnetic fields and low temperature, single magnons dominate the dynamics. These results are in very good agreement with terahertz spectroscopy measurements.
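For reference, the quantity compared with the terahertz measurements is the spin dynamical structure factor, defined in the usual way for an $N$-site chain with spin component $\alpha$:

$$ S^{\alpha\alpha}(q,\omega) = \frac{1}{2\pi N}\sum_{j,l} e^{-i q (r_j - r_l)} \int_{-\infty}^{\infty} dt\, e^{i\omega t}\, \langle S_j^{\alpha}(t)\, S_l^{\alpha}(0) \rangle . $$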
A theme of our research is to use nanostructures as building blocks to fabricate materials, with the aim of exploiting nanoscale control to tailor materials behaviour – potentially even quantum behaviour – from the nanoscale up. As a testbed, we study strong interactions between localized, unpaired spins and delocalized electrons. Such interactions play a key role in phenomena ranging from the Kondo effect to high-$T_c$ superconductivity. Using short (therefore conducting) butanedithiol (HS(CH$_2$)$_4$SH) molecules as crosslinkers and Au (metal) nanoparticles, we have observed for the first time a Kondo effect in this nanostructured material. Leveraging nanoscale control and using Au nanoshells, which are more insulating, we observe that the Kondo temperature scale increases 10-fold, to > 250 K. Interestingly, the metallic and insulating systems, respectively, exhibit magnetism consistent with paramagnetism and antiferromagnetism. These results point to molecular linker-nanoparticle assemblies as a versatile means to generate materials exhibiting a range of strong electron-electron interactions.
We use the methods of group theory to completely block-diagonalize the general, 4-parameter exchange Hamiltonian of a 16-site spin-1/2 pyrochlore cluster. By using the reduced density matrices, we can calculate the concurrence between different spin sites at T=0 and when T is small.
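As an illustration of the last step, the sketch below evaluates Wootters' concurrence from a two-site reduced density matrix; the example density matrix (a two-spin singlet) is a placeholder, not one of the pyrochlore-cluster results.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-spin-1/2 (two-qubit) density matrix rho (4x4)."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    rho_tilde = YY @ rho.conj() @ YY                 # spin-flipped state
    # Square roots of the eigenvalues of rho * rho_tilde enter Wootters' formula.
    evals = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sort(np.sqrt(np.abs(evals.real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Hypothetical example: the density matrix of two sites forming a singlet.
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho_pair = np.outer(singlet, singlet.conj())
print(concurrence(rho_pair))   # -> 1.0 (maximally entangled pair)
```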
Two-dimensional perovskites with quantum-well structures have demonstrated superior air stability because of the moisture resistance imparted by the organic molecular spacers. Thus, 2D perovskites are promising absorbers for next-generation photovoltaic technology. However, generating free carriers remains a significant challenge because strong quantum confinement leads to exciton formation upon light absorption.
In this report, we demonstrate that engineering the nanostructure, perovskite composition, and organic molecular spacers tunes carrier generation from strongly bound excitons to free carriers. Using novel ultrafast photocurrent spectroscopy, we have characterized carrier dynamics on sub-nanosecond timescales following femtosecond laser excitation. By studying the carrier dynamics as a function of temperature, electric field, and photon flux, we have achieved 100% free-carrier photogeneration quantum efficiency. We have addressed the most fundamental photogeneration question in the perovskite field and paved the way for improving perovskite solar-cell efficiency.
References:
• Kanishka Kobbekaduwa et al. and Jianbo Gao, "Ultrafast Carrier Drift Transport Dynamics in CsPbI3 Perovskite Nanocrystalline Thin Films", ACS Nano (2023).
• Kanishka Kobbekaduwa et al. and Jianbo Gao, "In-situ observation of trapped carriers in organic metal halide perovskite films with ultra-fast temporal and ultra-high energetic resolutions", Nature Communications (2021).
A set of pure quantum states is said to be "distinguishable" if upon sampling one at random, there exists a measurement to perfectly determine which state was sampled. It is well-known that a set is distinguishable if and only if its members are mutually orthogonal. In this talk, we explore some variants of distinguishability such as "antidistinguishability", which asks for the existence of a measurement that perfectly determines some state that was not sampled, and "state exclusion", which asks for the existence of a measurement that perfectly determines some subset of m states that were not sampled. We show that these problems are captured exactly by a linear algebraic concept called the "factor width" of a matrix, and we use this connection to establish several new bounds on antidistinguishability and state exclusion.
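To fix the central notion in standard form (not specific to the new bounds presented in the talk): a set of states $\{\rho_1,\dots,\rho_n\}$ is antidistinguishable exactly when the optimal value of the semidefinite program

$$ \min_{\{M_i\}} \sum_{i=1}^{n} \operatorname{Tr}(M_i \rho_i) \quad \text{subject to} \quad M_i \succeq 0, \quad \sum_{i=1}^{n} M_i = \mathbb{1} $$

is zero, i.e. there exists a measurement whose outcome $i$ never occurs when the sampled state was $\rho_i$; state exclusion generalizes this to ruling out subsets of $m$ states.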
This talk aims to reveal the causal reasoning that underpins both the foundations of quantum theory and the superficially unrelated data-science framework of graphical models, also known as Bayesian networks. We will connect quantum nonlocality, as characterized by Bell's Theorem, with the idea of causal discovery in the presence of latent confounders. Understanding this relationship provides novel dividends to both fields: causal inference sheds new light on device-independent randomness witnesses and measures of multipartite entanglement, and connection-aware statisticians are just beginning to recycle decades of insight around Bell's theorem. This talk is designed to transcend disciplinary boundaries and to enrich our understanding of causality in a quantum world.
All interested attendees of Wednesday’s symposium on “Building Communities of Practice for EDI and Outreach” are welcome to join the organizers of our divisions’ first symposium for an informal discussion, reflection, and planning session. We warmly welcome volunteers of any experience level interested in collaborating to build and maintain a community of practice and to plan a symposium for CAP Congress 2025.
In recent decades, our understanding of quantum matter has deepened significantly through a series of discoveries. Topology and entanglement have emerged as modern frameworks for classifying distinct phases of matter. Notably, the entanglement of many particles can give rise to entirely new phases with topological order. Examples include quantum spin liquids featuring unconventional excitations. However, identifying solid-state materials that exhibit these phases with fractional excitations has proven to be a challenging endeavor. In this talk, I will discuss recent advancements in quantum materials, strategies for designing target materials, and the challenges surrounding the search for quantum spin liquids in strongly correlated materials.
The electronic spins of single atomic defects in diamond can serve as magnetic sensors with exceptional sensitivity and nanoscale spatial resolution. So far, the nitrogen-vacancy (NV) center has been used for sensing external targets, partly due to its exceptional spin coherence under various experimental (including ambient) conditions. In this talk, I will discuss my postdoctoral work (Degen group; ETH Zurich) in creating an NV-NMR platform for molecular sensing. Our team developed fabrication and surface treatments for improving sensitivity while enabling highly generalizable molecular surface functionalization [1]. These techniques were subsequently used to detect conformational changes in few-molecule DNA samples. In parallel, we developed optimized diamond nanopillar structures for improving NV fluorescence collection, yielding a factor-of-three measurement speed-up [2]. I will conclude by outlining plans for my new lab to improve magnetic sensitivity further, enabling single-nuclear-spin detection within functionalized molecules and opening the door for structure elucidation or reaction monitoring on the single-molecule level.
[1] Abendroth et al., Nano Letters 22, (2022).
[2] Zhu et al., Nano Letters 23, (2023).
The nitrogen-vacancy (NV) centres in diamond are solid-state quantum emitters exhibiting unique spin and optical properties at room temperature. They are sensitive to magnetic fields, temperature, pressure, and other physical quantities, making them valuable probes for sensing all of these quantities. In our work, we use NVs to form a magnetic microscope with a high spatial resolution (~250 nm), limited by the diffraction limit, and a sensitivity of < 1 μT/√Hz. This configuration is sometimes referred to as a Quantum Diamond Microscope (QDM) (1).
We use the QDM to reveal and understand the fundamental processes of magnetic domain-pattern formation and their variation with temperature and external bias field, as well as to characterize the Curie temperature (Tc) of recently discovered van der Waals (vdW) magnetic materials, namely Iron Germanium Telluride (Fe$_5$GeTe$_2$, FGT). We exfoliate these vdW materials down to a few nanometres and observe that, depending upon the thickness, fundamental properties such as Tc, magnetization, and domain pattern change.
We focus on measuring the Tc and imaging the domain structure of FGT flakes through out-of-plane magnetization (Fig. 1). Our results (2) indicate structural features affecting magnetic orientation in these flakes, as well as a decrease in Tc in the transition from bulk to 2D, which further decreases as the thickness of the 2D flakes decreases.
References:
(1) Levine, Edlyn V., Turner, Matthew J., Kehayias, Pauli, Hart, Connor A., Langellier, Nicholas, Trubko, Raisa, Glenn, David R., Fu, Roger R. and Walsworth, Ronald L., "Principles and techniques of the quantum diamond microscope", Nanophotonics, vol. 8, no. 11, 2019, pp. 1945-1973.
(2) Bindu, Bindu et al., in preparation.
We use the port-based teleportation protocol to study teleportation over infinitesimally small distances, where the vacuum of a quantum field serves as the source of entanglement. We find that the resulting motion is equivalent to a quantum teleportation-induced Brownian motion. Purifying the interactions, from measurements to unitary operations, leads to motion described by Schrodinger’s equation. Hence, this synthesis brings together three different concepts: teleportation, quantum evolution and classical evolution.
One of the most basic notions in physics is the partitioning of a system into subsystems and the study of correlations among its parts. Operationally, subsystems are distinguished by physically accessible observables which are often implicitly specified relative to some external and/or background structure. In the absence of external relata, as in Page-Wootters dynamics, gauge theories, and gravity, physical observables must be relationally specified relative to some internal dynamical degrees of freedom, which are ultimately quantum, that is, a quantum reference frame (QRF). In this talk, I will discuss how different QRFs identify distinct external-frame-independent/gauge-invariant notions of subsystems. As a consequence, physical properties of subsystems such as entanglement, dynamics (open vs. closed), and thermodynamics are contingent on the choice of internal frame. In particular, such a relational definition of subsystems provides an alternative proposal for defining entanglement entropy in gauge theories.
Minimal uncertainty, also known as the generalized uncertainty principle, is in effect a modification of the algebra of a quantum system and can be considered a deformed-algebra approach to quantum gravity.
I will present the full spacetime effective metric of a Schwarzschild black hole within the minimal uncertainty approach. I will explain how one can quantize the interior, and extend the solution to the full spacetime. I will also present some of the key properties of such a modified spherically symmetric black hole, such as singularity resolution, as well as some of its phenomenological aspects.
If relativistic gravitation has a quantum description, it must be meaningful to consider a spacetime metric in a genuine quantum superposition. But how might such a superposition be described, and how could observers detect it? I will present a new operational framework for studying "superpositions of spacetimes" via model particle detectors. After presenting the general approach, I will show how it can be applied to describe a spacetime that is a superposition of two expanding spacetimes. I will then show how black holes in two spatial dimensions can be placed in a superposition of masses and how such detectors would respond. The response exhibits signatures of quantum-gravitational effects reminiscent of Bekenstein's seminal conjecture concerning the quantized mass spectrum of black holes in quantum gravity. I will provide further remarks concerning the meaning of the spacetime metric, and on distinguishing spacetime superpositions that are genuinely quantum-gravitational, notably with reference to recent proposals to test gravitationally induced entanglement.
We explore the generalized volume complexity of odd-dimensional asymptotically anti-de Sitter (AdS) Myers-Perry black holes with equal angular momenta, following the complexity-equals-anything proposal. We first determine the codimension-one generalized volume complexity by finding the extremum of the generally covariant volume functional. We show that its late-time growth rate aligns with the critical momenta linked to the extremal hypersurface. We then select the Gauss-Bonnet invariant as the scalar function in the definition of generalized volume complexity to examine the complexity's temporal variation. Interestingly, we note the possibility of numerous pseudo-phase behaviors intricately tied to the configurations of the effective potentials related to the codimension-one hypersurface. Nevertheless, the complexity grows linearly in the final phase in every scenario. This suggests the consistency of the complexity-equals-anything proposal for AdS rotating black holes.
Cancer radiotherapy often lowers patients' lymphocyte counts. This radiation-induced lymphopenia (RIL) is significantly correlated with survival for certain treatment sites and cancer types. Despite this, the dose to lymphocytes is not explicitly minimised during clinical treatment planning.
Across patients, a given treatment modality, such as photon or proton therapy (PT), may not consistently provide the minimal blood dose. Which patient parameter, or combination thereof, causes this variation is not well understood. Should a causative parameter be identified, it would provide a clinical indicator of which treatment provides the minimal blood dose. One such parameter to investigate is the target volume.
In line with current models, the dose to circulating blood was calculated as a surrogate for the dose to circulating lymphocytes. By doing so for twenty liver tumour patients from The Radiotherapy Optimisation Test Set (TROTS), it is possible to deduce which, if any, patient parameters are useful indicators for a given treatment modality. For example, as the patient dataset offers a range of target sizes (from 75 to 365 cm$^3$), this parameter can be investigated.
The open-source treatment planning system matRad was used to create treatment plans for both photon therapy and PT. The resulting organ doses were then passed to the haematological dose (HEDOS) framework which calculated the dose to circulating blood. The results were then compared across modalities. Comparison across patient parameters is ongoing.
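To make the surrogate calculation concrete, here is a toy sketch of the idea behind compartment-based blood-dose models such as HEDOS; it is not the HEDOS interface, and all organ fractions and dose rates below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical organ compartments: blood-volume fraction and mean organ dose rate (Gy/s).
organs = {
    "liver": {"blood_frac": 0.10, "dose_rate": 2.0e-3},
    "heart": {"blood_frac": 0.08, "dose_rate": 1.0e-4},
    "lungs": {"blood_frac": 0.12, "dose_rate": 5.0e-4},
    "rest":  {"blood_frac": 0.70, "dose_rate": 5.0e-5},
}
names = list(organs)
probs = np.array([organs[n]["blood_frac"] for n in names])
rates = np.array([organs[n]["dose_rate"] for n in names])

def blood_dose_distribution(n_propagules=100_000, beam_on_time=120.0, dt=1.0):
    """Accumulate dose to discrete blood 'propagules' that hop between compartments."""
    dose = np.zeros(n_propagules)
    for _ in range(int(beam_on_time / dt)):
        # Each step, a propagule sits in an organ with probability proportional
        # to that organ's share of the total blood volume.
        location = rng.choice(len(names), size=n_propagules, p=probs)
        dose += rates[location] * dt
    return dose

d = blood_dose_distribution()
print(f"mean blood dose: {d.mean():.3f} Gy, fraction above 0.1 Gy: {(d > 0.1).mean():.2%}")
```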
Initial results using a sample patient showed, for comparable standard deviations, a 23% decrease in the mean dose to circulating blood from PT as compared to photon therapy. Additionally, 2% of the circulating blood received a dose of 2.8 Gy or higher in photon therapy, versus 2.2 Gy in PT.
Since RIL is correlated with survival, it is to be expected that reducing the dose to circulating blood will improve patient outcomes. The initial results suggest that, since its dose to circulating blood is lower, this may be achieved by using PT.
Ongoing analysis will show whether there are any correlations between patient parameters and the modality that delivers the lower dose to circulating blood. This will subsequently reveal whether any patient parameters are suitable indicators for a treatment modality.
Functional connectivity (FC) has a high energetic demand, as demonstrated by hybrid [18F]-fluorodeoxyglucose (FDG) PET and resting-state functional MRI (rsfMRI) studies (Tomasi et al., 2013). Regional Homogeneity (ReHo), a rsfMRI local connectivity metric, displays the strongest correlation with metabolism in the healthy brain (Aiello et al., 2015); this coupling is reduced in Alzheimer's disease (AD), potentially indicating a bioenergetic effect (Marchitelli et al., 2018). However, the vascular dysfunction associated with AD can also contribute to the reduced coupling. To explore the involvement of a bioenergetic mechanism, we examined changes in this coupling in another form of dementia, behavioral variant frontotemporal dementia (bvFTD), which lacks notable vascular impairment.
A total of 16 bvFTD patients and 16 healthy controls underwent FDG-PET and rsfMRI scans. Preprocessing of rsfMRI data was completed using SPM12, and connectivity maps (ReHo) were generated with the REST toolbox. FDG images were processed using in-house MATLAB scripts and later compared between groups to examine metabolic changes. To study the relationship between connectivity and metabolism, a voxel-wise correlation analysis was performed between FDG and ReHo maps over whole-brain gray matter, followed by comparison of the mean correlation between groups (two-sample t-test; p < 0.05, corrected for multiple comparisons). The entire process was repeated in key regions associated with bvFTD pathology.
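A simplified stand-in for that correlation analysis (the actual work used SPM12, the REST toolbox and in-house MATLAB scripts; the synthetic maps and mask below are placeholders) might look like:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
shape = (40, 48, 40)
gm_mask = rng.random(shape) > 0.5            # hypothetical gray-matter mask

def coupling(fdg_map, reho_map, mask):
    """Per-subject Pearson correlation between FDG and ReHo over gray-matter voxels."""
    r, _ = stats.pearsonr(fdg_map[mask], reho_map[mask])
    return r

def synth_subject(coupling_strength):
    """Synthetic FDG/ReHo maps sharing a common component (placeholder for real data)."""
    shared = rng.standard_normal(shape)
    fdg = shared + rng.standard_normal(shape)
    reho = coupling_strength * shared + rng.standard_normal(shape)
    return fdg, reho

controls = [coupling(*synth_subject(1.0), gm_mask) for _ in range(16)]
patients = [coupling(*synth_subject(0.5), gm_mask) for _ in range(16)]

t, p = stats.ttest_ind(controls, patients)   # two-sample t-test on mean coupling
print(f"mean r (controls) = {np.mean(controls):.2f}, (bvFTD) = {np.mean(patients):.2f}, p = {p:.3g}")
```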
Hypometabolism was found in prominent frontotemporal regions in bvFTD, consistent with the literature. Significant positive correlations were observed between FDG and ReHo in all subjects; however, this coupling was diminished in bvFTD at the whole-brain level as well as in disease-specific regions such as the anterior insula, orbitofrontal cortex and dorsolateral prefrontal cortex.
Reduced functional/metabolic coupling supports a role for insufficient energy production in disrupted neuronal communication. Considering that bvFTD, unlike AD, does not involve notable vascular dysfunction, the results support a bioenergetic mechanism behind disrupted connectivity in dementia.
Near-infrared spectroscopy (NIRS) is a non-invasive tool used to assess cerebral health by estimating tissue blood content and oxygenation from measurements of light absorption. To evaluate the accuracy of NIRS devices and algorithms, tissue-mimicking phantoms (TMPs) are used. TMPs typically consist of light-scattering media and light-absorbing dyes, such as Intralipid and indocyanine green (ICG), respectively, to mimic the optical properties of biological tissue. However, the dyes’ absorption spectra can change based on their relative proportion with respect to the light-scattering media in the TMP, affecting the estimation accuracy of the TMP’s optical properties. The study objective was to investigate ICG absorption properties in Intralipid-based TMPs at varying concentrations.
Four sets of TMPs were prepared with 0.8% Intralipid, and ICG concentration was increased from 0 µM to 0.2 µM in steps of 0.04 µM. An off-the-shelf spectrometer (QE Pro, Ocean Insight) was used to measure the diffusely reflected light from TMPs at each ICG concentration. Measurements were acquired by a spatially-resolved approach at source-detector distances ranging from 2.7 to 3.5 cm. The effective attenuation coefficient (µeff) was estimated at each ICG concentration and used to compute the scattering (µs’) and absorption (µa) coefficients. ICG concentration was then estimated from µa and compared to expected values.
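A minimal sketch of the recovery step, assuming the semi-infinite diffusion-theory relation $\mu_{eff} = \sqrt{3\mu_a(\mu_a+\mu_s')}$ and a nominal ICG extinction coefficient (both the coefficient value and the example numbers are illustrative, not the study's calibration):

```python
import numpy as np

LN10 = np.log(10.0)

def mua_from_mueff(mueff, musp):
    """Invert mueff = sqrt(3 * mua * (mua + musp)) for the absorption coefficient mua (1/cm)."""
    # 3*mua**2 + 3*musp*mua - mueff**2 = 0  ->  take the positive root
    return (-3.0 * musp + np.sqrt(9.0 * musp**2 + 12.0 * mueff**2)) / 6.0

def icg_concentration(mua, mua_baseline, eps_icg=2.0e5):
    """ICG concentration (mol/L) from its added absorption; eps_icg in 1/(M cm)
    is a nominal, illustrative value rather than a calibrated spectrum."""
    return (mua - mua_baseline) / (LN10 * eps_icg)

# Hypothetical example: reduced scattering of the 0.8% Intralipid phantom and two
# effective attenuation coefficients measured before and after adding ICG (all in 1/cm).
musp = 8.0
mua_0 = mua_from_mueff(mueff=1.00, musp=musp)
mua_1 = mua_from_mueff(mueff=1.25, musp=musp)
print(f"recovered ICG concentration: {icg_concentration(mua_1, mua_0):.2e} M")
```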
The estimated ICG concentrations in Intralipid increased with the addition of ICG. However, these concentrations differed significantly from the expected values: the plot of recovered against expected concentrations revealed a linear scaling factor of 3.8.
The current study shows that estimated ICG concentrations in Intralipid-based TMPs increase linearly with the amount of dye in solution. However, the slope of recovered concentrations versus expected values was 3.8 rather than unity, indicating an overestimation. This error could result from changes in the molar extinction coefficient of ICG in water as its concentration increases, due to its interaction with Intralipid, and requires further investigation. Immediate future work will investigate the optical properties of other commonly used dyes, namely methylene blue and India ink, in Intralipid, and the properties of these dyes in inorganic light scatterers such as TiO2 and glass microspheres.
At the organ and tissue level, the circulation relies on branching networks of microvessels to supply oxygen and other nutrients to all cells in support of metabolism, as well as remove metabolic waste, and derangement of the structure or function of these networks is directly linked to tissue dysfunction. Over a wide range of diameters, these networks are binary trees and display distinct geometric and hemodynamic properties. Although experiment-based reconstruction of these vascular structures has improved recently, there remains a strong motivation for developing theoretical models that match measured statistical properties of microvascular networks under healthy conditions and with elevated disease risk (e.g., diabetes) and can be used for computational studies of flow, transport, and regulation. These efforts have the ultimate objective of connecting specific vascular defects to observed modes of tissue dysfunction. In the present study, two-dimensional arteriolar networks in rat skeletal muscle are constructed based on the constrained constructive optimization (CCO) algorithm using published geometric and hemodynamic data obtained via intravital video-microscopy. Results obtained assuming blood is a single-phase Newtonian fluid demonstrate how network geometry, fractal dimension, and flow properties depend on the Murray’s law exponent (g). In addition, using a two-phase (plasma and red blood cells, RBCs) flow model, we show the importance of microvascular blood rheology in determining network properties. Future work will focus on constructing three-dimensional networks, tissues other than skeletal muscle, and determining the effects of both domain shape and g.
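To illustrate the role of the exponent g, here is a minimal sketch, not the CCO implementation used in the study, of how daughter radii follow from a parent radius and an assumed flow split under Murray's law:

```python
def daughter_radii(r_parent, flow_fraction, g=3.0):
    """Split a parent vessel of radius r_parent into two daughters under Murray's law,
    r_p**g = r_1**g + r_2**g, assuming each daughter's radius scales with its flow share."""
    q1, q2 = flow_fraction, 1.0 - flow_fraction
    r1 = r_parent * q1 ** (1.0 / g)
    r2 = r_parent * q2 ** (1.0 / g)
    return r1, r2

# Hypothetical bifurcation: 30 um parent arteriole carrying 60% of flow into daughter 1.
r1, r2 = daughter_radii(r_parent=30.0, flow_fraction=0.6, g=3.0)
print(f"r1 = {r1:.1f} um, r2 = {r2:.1f} um, check: {r1**3 + r2**3:.0f} vs {30.0**3:.0f}")
```

Varying g in such a rule is what changes the branch radii, and hence the fractal and flow properties, of the constructed networks.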
With external beam radiotherapy being a key tool in treating cancer, the possible harmful side effects, e.g. secondary cancer, need to be accounted for and hopefully minimized. As proton therapy (PT) typically spares more healthy tissue than photon therapy, PT may potentially lower the rate of secondary cancers in treated patients. Due to a lack of valid patient data, simulations are required to show this.
To obtain secondary cancer rates, firstly, matRad, an open-source clinical treatment-planning toolkit written in MATLAB, was used to create treatment plans on the patients' CT scans; full-body phantoms were then combined with the CT scans. This method differs from other research as it gives more insight into the dose deposited outside the original CT scan, allowing analysis of the out-of-field organs. Finally, Monte Carlo simulations were run with the MCsquare package over the whole CT scan, using the outputs and particle fluences calculated by matRad. From the results of the Monte Carlo simulations, the Lifetime Attributable Risk (LAR) was calculated, which is the percentage chance that a given patient will develop a secondary cancer due to the radiation received from their treatment.
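As a simplified illustration of the final step (the actual calculation follows published organ-, age- and sex-dependent risk models), total LAR can be pictured as organ doses weighted by risk coefficients; every number below is a hypothetical placeholder.

```python
# Toy aggregation of Lifetime Attributable Risk (LAR) from out-of-field organ doses.
# Risk coefficients are hypothetical placeholders (% per Gy), not published values.
risk_per_gy = {"lung": 0.5, "thyroid": 0.2, "esophagus": 0.1, "brain": 0.05}

def total_lar(mean_organ_dose_gy):
    """Sum organ contributions assuming a linear dose-risk model."""
    return sum(risk_per_gy[o] * d for o, d in mean_organ_dose_gy.items() if o in risk_per_gy)

photon_doses = {"lung": 1.2, "thyroid": 2.5, "esophagus": 1.8, "brain": 0.9}   # Gy, hypothetical
proton_doses = {"lung": 0.4, "thyroid": 0.8, "esophagus": 0.6, "brain": 0.3}   # Gy, hypothetical

print(f"LAR(photon)/LAR(proton) = {total_lar(photon_doses) / total_lar(proton_doses):.2f}")
```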
On average, for ten head-and-neck cancer patients treated with external beam radiotherapy, photons were 2.94 times more likely than PT to cause a secondary cancer.
These results show that treating head and neck patients with PT significantly lowers the chance of secondary cancer developing when compared to treating them with photons.
I will discuss using comparisons to facilitate learning with ComPAIR, an open-source peer-feedback and teaching technology developed at the University of British Columbia (UBC). ComPAIR is currently being used in over 60 courses across all disciplines and Faculties at UBC and at more than six institutions outside UBC. ComPAIR makes use of students' inherent ability and desire to compare: according to the psychological principle of comparative judgment, novices are much better at choosing the "better" of two answers than they are at giving those answers an absolute score. By scaffolding peer feedback through comparisons, ComPAIR provides an engaging, simple, and safe environment that supports two distinct outcomes: (a) students learn how to assess their own work and that of others in a way that (b) facilitates the learning of subtle aspects of course content through the act of comparing.
In this session, I will give a specific example of using ComPAIR in a third-year course on the Physics of Climate and Energy, where we run four-week-long "big picture questions" that have students tackle vaguely defined problems as a class but submit papers individually to ComPAIR. I will also describe a study in a first-year biology class where we measured student learning with a diagnostic after an activity in which students used ComPAIR to reflect on their answers, compared with students who simply had access to the answer key.
For the past decade or so, I have been experimenting with the boundary between art and science. I have repurposed my scientific images of pattern formation experiments and pattern-forming natural phenomena by presenting them as art. I have exhibited images and videos in art galleries and juried art shows. I have brought artists into my research lab for several hands-on workshops. I was the co-organizer of the "ArtSci Salon", an evening meet-up group at the Fields Institute of Mathematical Science in Toronto. I have released a trove of icicle shape data for free use under the Creative Commons. I have collaborated with sound artists and composers to use pattern formation images and videos as input to their creative processes. All these activities can be viewed equally as art-making or as scientific outreach. I call my stuff "scientific folk art". I claim that aesthetics is a valid motivation for scientific studies in pattern formation, and that exhibiting and talking about pattern formation as art is a valid form of scientific outreach. This approach generates wide-ranging conversations across traditionally separate disciplines. The art world offers a new and relatively untapped venue for science outreach activities, as well as being a lot of fun to explore.
Dimer models and loop models have long been studied as prototypes for quantum ordering with local constraints. Robust physical realizations are not known, especially in solid state materials. We propose that a quantum loop model is indeed realized in MX$_2$, where M=Mo, W and X=S, Se or Te. In the single-layer 1T structure, each metal atom is in an octahedral chalcogen cage with two electrons in d orbitals. The geometry of the t$_{2g}$ wavefunctions leads to highly directional overlaps between neighbouring metal atoms. Each metal atom participates in two covalent bonds, oriented towards two of the six nearest neighbours. These bonds connect to form loops on the underlying triangular lattice. We build a minimal model including local resonance processes and potential energy. We map out a phase diagram using exact diagonalization on small clusters. We find a phase that closely resembles the 1T' deformation seen in these materials. We discuss further experimental tests and consequences for other transition metal dichalcogenides.
We consider the square-lattice S=1/2 quantum compass model (QCM) parameterized by $J_x$, $J_z$, under an in-plane field. At the special field value $(h_x,h_z)=2S(J_x,J_z)$, we show that the QCM Hamiltonian may be written in a form such that two simple product states can be identified as exact ground states, below a gap. Exact excited states can also be found. The exact product states are characterized by a staggered vector chirality, which attains a non-zero value in the surrounding phase. The resulting gapped phase occupies most of the in-plane field phase diagram but is clearly distinct from the high-field polarized phase. Using iDMRG and iPEPS techniques in combination with exact diagonalization and analytical arguments, we determine the complete in-plane field phase diagram [1]. Our findings are important for understanding the field-dependent phase diagram of materials with predominantly directionally-dependent Ising interactions, and duality relations connect the QCM to the Xu-Moore model and the toric code.
Many-body localization impedes the spread of information encoded in initial conditions, blocking (or at least radically slowing) thermalization of an isolated quantum system. We examine the potential to tailor the growth of entanglement in the Fermi Hubbard model by tuning disorder in both the charge and spin degrees of freedom. We begin by expressing the Hamiltonian in terms of a set of optimally localized conserved quantities, and examine in detail the growth of entanglement entropy and its connection with the coupling between these local integrals of motion. We demonstrate how the strength of the disorder in charge and in spin controls the time scales seen in entanglement growth. We also show a shift in behaviour between the weakly and strongly interacting limit in which local integrals of motion lose their close association with Anderson localized single-particle states.
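For reference, the "local integrals of motion" picture referred to above writes the localized Hamiltonian in the standard l-bit form,

$$ H = \sum_i h_i\,\tau_i^{z} + \sum_{i<j} J_{ij}\,\tau_i^{z}\tau_j^{z} + \sum_{i<j<k} J_{ijk}\,\tau_i^{z}\tau_j^{z}\tau_k^{z} + \cdots, $$

with quasi-local conserved operators $\tau_i^{z}$ and couplings decaying exponentially with distance; in this picture it is the exponentially weak $\tau^{z}$-$\tau^{z}$ couplings that produce the slow, logarithmic growth of entanglement entropy.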
Graphene is a beautiful and incredibly versatile platform for investigating emergent electronic phenomena. Confining electrons to two dimensions enhances their influence on one another, and empowers us to alter their environment with external fields and additional layers. Strategic combinations and arrangements of layered materials can yield new physics and surprising electronic properties. I will discuss how certain arrangements can lead to superconductivity, magnetism, and topology from layers that have none of these properties on their own, and how these phenomena manifest in transport experiments.
What is life? This is among the most difficult open problems in science. The definitions we have now all fall short. None help us understand how life originates from planetary chemistry, nor do they account for the full range of possibilities for what life on other planets might be like. One approach has been to ask whether our current theories of physics are up to the task. This was the approach adopted by the quantum physicist Erwin Schrodinger in his famous series of lectures addressing the topic "What is Life?", which he delivered in 1943. But what Schrodinger ultimately argued was that while life can be shown to be consistent with the known laws of physics, it also cannot be explained by them. In this talk I briefly review motivations for why our current theories of physics are not suited to solving the problem of life and why solving the origin of life may require radically new thinking and an experimentally testable theory of what life is. I discuss one promising new approach, Assembly Theory, useful for identifying and classifying "life" in terms of universal physics. If proven, the theory should apply not just to biological life on Earth but to any instance of life in the universe, even life as no one yet knows it. I discuss the foundations of the theory, what insights it provides into the origins of biochemistry, and how we might experimentally explore the origins of alien life in the lab with large-scale experiments.
On June 27, 2019, NASA announced its next New Frontiers mission: Dragonfly. This audacious mission will send a rotorcraft to explore Saturn’s largest moon Titan, and evaluate its potential for prebiotic chemistry and (possibly) extraterrestrial life. The Dragonfly mission will also give us countless high-resolution views of this strangely Earth-like moon, showing us how rivers and sand dunes form on an icy moon with a thick atmosphere. In this presentation, I will provide a summary of the history of the Dragonfly mission, its scientific goals, and the next steps forward, from launch in 2028 to landing in the mid-2030s.
https://cap.ca/wp-content/uploads/2024/05/CAP_HS_Teachers_Workshop_Program2024.pdf
Black holes are perhaps the most enigmatic objects in nature. They are the end point of dying stars, form the central core of most galaxies, and can collide to produce ripples in space and time that we know as gravitational radiation. A key property of a black hole is its horizon — the boundary that separates the black hole from the rest of the universe. Understanding the physics of horizons has far reaching consequences, ranging from theoretical (e.g. the laws of black hole thermodynamics in quantum gravity) to experimental (e.g. the production of gravitational radiation in black hole collisions).
In this talk, I will review recent and ongoing work concerning dynamical features of black hole horizons, particularly in the case of mergers. I will discuss how two black holes become one and the physics that mediates this process. I will discuss aspects of both the apparent horizon and event horizon, in the latter case highlighting recent developments of possible relevance to black hole entropy.
Black holes stand as enigmatic phenomena within our universe, yet their precise definition presents a significant challenge. The original definitions are only useful in static situations, since they rely on global properties (we need to know the history of the whole universe to detect a black hole!). Marginally outer trapped surfaces (MOTS) were introduced in an effort to provide a quasilocal definition of a black hole. In recent years, they have turned out to be essential for studying certain aspects of black hole mergers. In this talk, I will show how some classical tools from differential geometry and functional analysis can shed light on the relation between the symmetry and stability of MOTS.
I will discuss a class of time-dependent, asymptotically flat and spherically symmetric metrics, developed by myself and the other listed authors, which model gravitational collapse in quantum gravity. Motivating the work was the intuition that quantum gravity should not exhibit curvature singularities, and indeed the metrics lead to singularity resolution, with horizon formation and evaporation following a matter bounce. We also look at how a matter field behaves on this background.
Gravitational solitons are globally stationary, geodesically complete spacetimes with positive energy. Interestingly, they do not have an event horizon, and according to the Lichnerowicz Theorem, no such electrovacuum solutions exist in four dimensions. In this talk, I will introduce a family of gravitational solitons in anti-de Sitter spacetimes. I will explain their geometric and thermodynamic properties.
In underserved regions, infants face heightened risks of brain injury due to the prevalence of adverse factors like infections and malnutrition, compounded by the absence of suitable monitoring tools. Detecting early signs of neonatal brain injury through monitoring cerebral blood oxygenation offers hope in addressing this critical need for underserved communities [1]. The goal of this project is to develop a noninvasive, wearable optical device for monitoring neonatal cerebral blood oxygenation in low resource settings.
More specifically, we leveraged widely available consumer electronics to develop a low-cost near-infrared spectroscopy (NIRS) system [2], specifically designed for neonatal neuromonitoring in resource-limited settings. The device was based on a fitness tracking smartwatch (MAXM86146, Maxim Integrated) that includes two photodetectors, synchronization algorithms supporting up to four light-emitting diodes (LEDs), and high-speed real-time data acquisition equipped with advanced noise-canceling algorithms. The MAXM86146 was supplemented with a dual-wavelength LED (SMT730D/850D, Marubeni) emitting light at 730 and 850 nm. We subsequently designed a homemade driver to control the LEDs’ power to allow the emitters and detectors to be positioned 3 cm apart for improved sensitivity to deep tissues.
To evaluate our approach, we conducted a cuff occlusion experiment on the forearm of a healthy adult. The device was placed on the subject’s skin, and the light intensity from each wavelength was measured in real-time. Next, the measurements were analyzed using an algorithm based on the modified Beer-Lambert law [2] to quantify changes in oxy- and deoxy-hemoglobin (HbO2 and Hb) concentrations over time. The results showed the expected rapid decrease in HbO2 concentration during the arterial occlusion period. Furthermore, the high sampling rate of the device enabled us to monitor heart pulses throughout the experiment.
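As a rough illustration of the analysis step described above, the sketch below applies the modified Beer-Lambert law to two-wavelength (730/850 nm) intensity data to estimate changes in HbO2 and Hb concentrations. The extinction coefficients, differential pathlength factor, and example intensities are illustrative assumptions, not the values or software used in this work.

# Minimal sketch (not the authors' software): modified Beer-Lambert law for a
# two-wavelength (730/850 nm) NIRS measurement. The extinction coefficients,
# differential pathlength factor (DPF) and example intensities are illustrative.
import numpy as np

SEPARATION_CM = 3.0   # source-detector distance quoted above
DPF = 6.0             # assumed differential pathlength factor
# Approximate molar extinction coefficients [cm^-1 M^-1]; rows are the
# 730 nm and 850 nm wavelengths, columns are (HbO2, Hb).
EPSILON = np.array([[ 390.0, 1102.0],
                    [1058.0,  691.0]])

def delta_hb(i_measured, i_baseline):
    """Return (dHbO2, dHb) concentration changes in mol/L."""
    d_od = -np.log(np.asarray(i_measured) / np.asarray(i_baseline))
    # Solve  d_od = EPSILON @ dC * (separation * DPF)  for dC.
    return np.linalg.solve(EPSILON * SEPARATION_CM * DPF, d_od)

# Example: intensity falls at 730 nm and rises at 850 nm, as expected when
# HbO2 drops and Hb rises during an arterial occlusion.
print(delta_hb([0.95, 1.04], [1.0, 1.0]))

In practice the extinction coefficients are taken from published tables and the differential pathlength factor depends on the tissue and wavelength.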
Future work will include a comprehensive evaluation of the device, including assessing its performance in tissue-mimicking phantoms and in-vivo experiments with healthy volunteers before its deployment in the clinic.
This project is funded by Western University, under a Frugal Biomedical Innovations Catalyst Grant.
[1] Rajaram, A., et al. Scientific Reports 12.1 (2022): 181.
[2] Ferrari, M., et al. Neuroimage 63.2 (2012): 921-935.
A novel hand-held point-of-care technology has been developed for “on-farm” detection of pathogens, using the well-established loop-mediated isothermal amplification (LAMP) assay for replicating DNA at constant temperature. The technology uses off-the-shelf primer sets and either a fluorescent DNA-binding dye or a fluorescently labelled DNA probe as the reporter system. LAMP, first described in the late 1990s, has steadily become an alternative to the polymerase chain reaction (PCR), due to lower costs, faster response times and higher amplification efficiency. The patented technology incorporates a passive gravity-flow cartridge with microchannels and pre-loaded reagents. The cartridge is filled with sample and inserted into a battery-powered hand-held reader that heats the reaction chamber to 60-65 °C. An LED light source excites the fluorescent dye, and a photodiode detector collects the fluorescence in real time. Data is transmitted to a smartphone via Bluetooth. Data from proof-of-concept design testing will be presented. Temperature control was achieved using feedback from four thermistors situated near the reaction volume. Results of heating and stability trials will be presented. The initial study used Lambda DNA, a temperate Escherichia coli bacteriophage, as the target analyte. In a subsequent study, bovine mastitis was simulated by dosing milk with Staphylococcus aureus bacteria. Detection times ranged from 30 to 60 minutes in early trials. Design tests using both liquid and lyophilized reagents will be presented. The current focus is on the agricultural sector, to provide "on-farm" detection of pathogens, which can save time, effort and cost compared to the current process of shipping samples to a laboratory and waiting for results. This could lead to improved and timely decision making for the management of crops and livestock.
Funding: NSERC Discovery Grant to WW and a CIHR COVID-19 Grant to WW and AT.
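A hedged sketch of the kind of thermistor-feedback heating described above; the set point, gain, and simple thermal model are hypothetical placeholders, not the patented design or its firmware.

# Illustrative sketch only (not the patented device's firmware): holding a LAMP
# reaction chamber in the 60-65 C window using proportional feedback from the
# average of four thermistor readings. The thermal model and gains are made up.
import random

SET_POINT_C = 63.0    # assumed target inside the 60-65 C isothermal window
KP = 2.0              # proportional gain: heater duty per degree of error
AMBIENT_C = 22.0

def simulate(minutes=10.0, dt_s=1.0):
    chamber_c = AMBIENT_C
    for _ in range(int(minutes * 60 / dt_s)):
        # Four thermistors near the reaction volume, with a little read noise.
        readings = [chamber_c + random.gauss(0.0, 0.1) for _ in range(4)]
        error = SET_POINT_C - sum(readings) / 4.0
        duty = min(1.0, max(0.0, KP * error))          # heater duty cycle, 0..1
        # Crude thermal model: heating proportional to duty, loss to ambient.
        chamber_c += dt_s * (0.15 * duty - 0.002 * (chamber_c - AMBIENT_C))
    return chamber_c

print(round(simulate(), 1))  # settles just below the set point in this toy model;
                             # a real controller would add integral action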
Rapid, reliable, specific and sensitive detection of biomolecules: these are the four pillars of sensor development, especially for point-of-care devices where quick diagnostic times are key to providing first aid, surgical intervention or emergency treatment. In my talk, I will discuss how sensing, from simple molecules to complex biomolecules, has been achieved in my group. We aim to exploit intrinsic properties of molecules such as their vibrations, color, current response and so on. We also try to be minimally invasive by ensuring that only small amounts of body fluids are required for sensing.
Although the detection of biomolecules can be fairly straightforward, sensing them at trace concentrations and quantifying them can be far more cumbersome. Low sample concentrations may not produce a large signal response, which becomes a limiting factor. Hence, I will give an overview of the approaches used by my team to overcome these challenges. I will present a case study of the sensing and quantification of a large biomolecule, hemoglobin (Hb). Hb is a crucial component of blood responsible for oxygen transport. Hb disorders, such as sickle cell disease and β-thalassemia, are prevalent genetic diseases which can lead to severe complications if not diagnosed and treated promptly.
Introduction: Malaria is a blood-borne parasitic disease with an estimated 247 million new cases and 619,000 deaths worldwide in 2021. While this represents a staggering problem at the global-health level, it is tragic among children under the age of five in sub-Saharan Africa, where it is the number one cause of death. Even when treatments are available, they are less effective if not administered within a few days of the onset of symptoms, which are often non-specific and flu-like, and they are reserved for positive cases due to limited availability and the risk of developing drug resistance. Rapid diagnosis, often in remote regions, is a serious challenge. Malaria is diagnosed from images of parasites in stained red blood cells, requiring laboratory-grade microscopes and trained pathologists or technologists reviewing blood-smear slides. Unfortunately, the cost of transferring suspected cases to testing centres is prohibitive, even where the infrastructure exists, and it is similarly not feasible to distribute microscopes to remote settings in need.
Fourier ptychographic microscopy (FPM) is a new computational-optics technique with the potential to produce high-quality images whose resolution is better than the diffraction limit of the low-NA (low-cost) optics used, providing 1-µm resolution over a field of view larger than 0.25 mm, a combination that has previously been possible only with high-quality laboratory equipment. Our objective is to develop Fourier ptychography for practical implementation and to use existing cell-phone infrastructure to transmit images to regional facilities for diagnosis.
Methods: The ptychography system uses a Raspberry Pi computer and a custom 225-LED matrix light source. FPM images are reconstructed using in-house Python software. The modulation transfer function (MTF) was determined using a 1951 USAF resolution test pattern. A least-squares analysis was used to determine the image modulation at each fundamental frequency plus the first two harmonic terms of a square wave. These were normalized to larger uniform regions of the pattern and combined to generate a pre-sampling MTF, which extends beyond the sampling cut-off frequency imposed by the pixel spacing.
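A minimal sketch of the kind of least-squares modulation fit described in the Methods, assuming a 1D bar-pattern profile and treating the "first two harmonic terms" as the odd (third and fifth) harmonics of a square wave; the function and parameter names are illustrative, not the in-house software.

# Sketch only: least-squares estimate of the modulation of a bar-pattern profile
# at its fundamental frequency plus the first two odd harmonics of a square wave,
# one step toward a pre-sampling MTF point.
import numpy as np

def modulation(profile, pixel_pitch_mm, freq_cyc_per_mm):
    """Return the fitted modulation (fundamental amplitude / mean) of a 1D profile."""
    x = np.arange(len(profile)) * pixel_pitch_mm
    cols = [np.ones_like(x)]
    for harmonic in (1, 3, 5):                      # fundamental + first two odd harmonics
        w = 2 * np.pi * harmonic * freq_cyc_per_mm
        cols += [np.cos(w * x), np.sin(w * x)]
    design = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(design, np.asarray(profile, float), rcond=None)
    mean = coeffs[0]
    fund_amp = np.hypot(coeffs[1], coeffs[2])       # amplitude at the fundamental
    return fund_amp / mean

# Synthetic check: an ideal square wave of contrast 0.5 sampled at a 0.75-µm
# pixel pitch should give a fitted fundamental modulation of about 0.5*4/pi ~ 0.64.
x_mm = np.arange(400) * 0.00075
profile = 1.0 + 0.5 * np.sign(np.sin(2 * np.pi * 100.0 * x_mm))
print(round(modulation(profile, 0.00075, 100.0), 2))

One such modulation value per bar group, normalized to a uniform region of the pattern, gives one point of the pre-sampling MTF.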
Results: In the object plane of raw images, pixel size was 0.75 µm and the pupil function diffraction limit was 280 cycles/mm, increasing to 1200 cycles/mm in the FPM images. The 5% MTF frequency was 350 and 650 cycles/mm in raw and FPM images, respectively. A low-frequency drop of approximately 0.3 was observed in both raw and FPM images.
Conclusions: The limiting pre-sampling MTF frequency of the Fourier ptychography microscope was measured as 650 cycles/mm, approximately twice the expected diffraction limit. This corresponds to a resolution better than 1 µm and close to the sampling cut-off frequency of 670 cycles/mm imposed by pixel spacing.
The Deep Underground Neutrino Experiment (DUNE) will measure neutrino oscillation probabilities as a function of neutrino energy at unprecedented levels of precision. To do this, DUNE must instrument office-building-sized volumes of liquid argon using the time projection chamber (TPC) technique, and at the same time make reliable predictions for how the distribution of measured neutrino energies at these far detectors can be disentangled to determine the incoming neutrino flux, using measurements from a suite of near detectors. This talk will describe both the near and far detectors of the DUNE experiment, as well as the current and near-term prototyping measurements that are being made to ensure success when DUNE starts operations.
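To make the disentangling step concrete, the toy below (not DUNE analysis code; all numbers arbitrary) folds a true flux through a simple Gaussian response matrix and then unfolds the measured spectrum by direct inversion; real analyses use regularized unfolding together with near-detector constraints.

# Toy illustration (not DUNE analysis code; all numbers arbitrary): the measured
# neutrino-energy spectrum is the true flux folded with a detector response
# matrix, and recovering the flux is an unfolding problem.
import numpy as np

n_bins = 20
true_flux = np.exp(-np.linspace(0.0, 4.0, n_bins))        # arbitrary falling spectrum

# Response matrix: probability that an event in true-energy bin j is
# reconstructed in measured-energy bin i (simple Gaussian smearing here).
idx = np.arange(n_bins)
response = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 1.0) ** 2)
response /= response.sum(axis=0)                           # columns sum to 1

measured = response @ true_flux                            # what the far detector records

# Naive unfolding by direct inversion; real analyses regularize this step and
# constrain the flux and cross-section model with near-detector data.
unfolded = np.linalg.solve(response, measured)
print(np.allclose(unfolded, true_flux))                    # True in this noiseless toy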
Hyper-Kamiokande, the successor to Super-Kamiokande, is a third-generation ring-imaging water Cherenkov detector under construction in Japan. Serving as the far detector of the J-PARC neutrino beam with a 295-kilometer baseline, it promises heightened sensitivity for precise oscillation parameter measurements. This presentation outlines Hyper-Kamiokande's current status, construction stages, and its role alongside the beamline and near detectors: the Intermediate Water Cherenkov Detector, ND280 and INGRID.
Hyper-Kamiokande aims to address key questions in neutrino physics, such as CP-violation in the lepton sector and the hierarchy of neutrino masses. Additionally, it holds potential for uncovering physics beyond the Standard Model, including the search for proton decay. This presentation emphasizes Hyper-Kamiokande's significance in advancing our understanding of neutrinos and exploring new frontiers in particle physics.
The Tokai to Kamioka (T2K) experiment is a long-baseline neutrino experiment. A proton beam generated in Tokai, on the east coast of Japan, collides with a fixed graphite target, producing mesons that decay into neutrinos. A near detector suite located 280 meters from the target and a far detector, Super-Kamiokande, located 295 kilometers from the target on Japan's west coast, are $2.5^{\circ}$ off-axis from the incident proton beam direction to optimize neutrino oscillation sensitivity. The appearance of $\nu_e$ ($\bar\nu_e$) at the far detector from an initial $\nu_{\mu}$ ($\bar\nu_\mu$) beam can then be used to determine the mixing angles describing neutrino oscillations, as well as the CP-violating parameter $\delta_{CP}$.
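Schematically, to leading order and in vacuum, the appearance probability behind this measurement has the familiar form (notation ours, not T2K's):

$$P(\nu_\mu \to \nu_e) \;\approx\; \sin^2 2\theta_{13}\,\sin^2\theta_{23}\,\sin^2\!\left(\frac{\Delta m^2_{31} L}{4E}\right) \;+\; \text{(subleading solar and } \delta_{CP}\text{-dependent interference terms)},$$

so that comparing $\nu_e$ and $\bar\nu_e$ appearance rates provides sensitivity to $\delta_{CP}$.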
As a neutrino experiment, T2K has a broad reach: it measures not only the parameters of the neutrino mixing matrix, but also neutrino cross-sections, and it searches for exotic matter. This talk will describe the status of various measurements and searches, and outline recent hardware upgrades.
One recent upgrade is that of the Optical Transition Radiation (OTR) beam monitor, which was designed and built in Canada. Beam monitors in the T2K beam line can measure the primary proton beam position and width, which is key for flux predictions, and provide safety mechanisms to ensure the beam does not hit any critical components. The OTR beam monitor, being just before the T2K target, is crucial in fulfilling both of these purposes.
This talk will provide an overview of how the OTR beam monitor obtains and analyzes this important data for T2K. In addition, the improvements that went into the new OTR, which was installed in 2022 and includes titanium foils designed for higher beam intensity, will be outlined. Finally, efforts to characterize and reduce possible backgrounds from helium scintillation and from secondaries produced along the beam line will be presented.
The LEGEND collaboration is operating the LEGEND-200 experiment at the Gran Sasso National Laboratory (LNGS) in Italy and is designing the LEGEND-1000 experiment for baseline deployment at LNGS, with SNOLAB as a backup site. The experiments use high-purity germanium crystals enriched in Ge-76 in a direct search for neutrinoless double beta decay. The goal for LEGEND-200 is to be sensitive to a double beta decay half-life of $10^{27}$ years; for LEGEND-1000, $10^{28}$ years, which will probe the inverted-hierarchy space. Results from LEGEND-200 will be discussed, as well as the status and design preparations for LEGEND-1000.
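For a sense of scale (back-of-the-envelope only; the exposure and enrichment below are illustrative assumptions, not LEGEND parameters), the number of decays corresponding to a given half-life follows from N = ln 2 · N_atoms · t / T_1/2:

# Back-of-the-envelope only (exposure and enrichment are illustrative, not
# LEGEND parameters): expected number of 0vbb decays for a given half-life,
# N = ln(2) * N_atoms * t / T_half.
import math

N_AVOGADRO = 6.022e23
EXPOSURE_KG_YR = 1000.0        # assumed exposure in kg*yr of detector mass
ENRICHMENT = 0.9               # assumed Ge-76 isotopic fraction
MOLAR_MASS_G = 75.9            # Ge-76 molar mass in g/mol
T_HALF_YR = 1e27               # the LEGEND-200 sensitivity goal quoted above

atoms_per_kg = ENRICHMENT * 1000.0 / MOLAR_MASS_G * N_AVOGADRO
expected_decays = math.log(2) * atoms_per_kg * EXPOSURE_KG_YR / T_HALF_YR
print(round(expected_decays, 1))   # only a handful of decays, hence the need for
                                   # extreme radiopurity and low backgrounds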
A predominantly electric storage ring with weak superimposed magnetic bending is shown to be capable of storing bunches of two different particle types, such as helion (h) and deuteron (d), or h and electron ($e^-$), co-traveling with different velocities on the same central orbit. Rear-end collisions, occurring periodically in a full-acceptance particle detector/polarimeter, allow the (previously inaccessible) direct measurement of the spin dependence of nuclear transmutation for center-of-mass (CM) kinetic energies (KE) ranging from hundreds of keV up toward pion production thresholds. With the nuclear process occurring in a semi-relativistic moving frame, all initial- and final-state particles have convenient laboratory-frame KEs in the tens to hundreds of MeV.
The rear-end collisions occur as faster stored bunches pass through slower bunches. An inexpensive facility capable of meeting these requirements is described, with several nuclear channels as examples. Especially noteworthy are the $e^{\pm}$-induced weak-interaction triton (t) $\beta$-decay processes, t + $e^+ \rightarrow$ h + $\nu$ and h + $e^- \rightarrow$ t + $\nu$. The capability to measure the spin dependence of the induced triton decay is emphasized. For cosmological nuclear physics, the experimental improvement comes from the storage ring's capability to investigate the spin dependence of nuclear transmutation processes at reduced kinetic energies compared to what can be obtained with fixed-target geometry.
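A small illustration of the kinematic point above (the kinetic energies chosen are arbitrary examples, not the proposed ring's operating points): for co-moving bunches, the invariant mass of a rear-end collision sits just above threshold, so the CM kinetic energy is far below the lab energies.

# Illustrative kinematics only (the kinetic energies are arbitrary examples, not
# the proposed ring's operating points): CM kinetic energy for a rear-end
# collision of co-moving helion and deuteron bunches on the same orbit.
import math

M_HELION_MEV = 2808.39     # helion (He-3 nucleus) mass
M_DEUTERON_MEV = 1875.61   # deuteron mass

def cm_kinetic_energy(ke1, m1, ke2, m2):
    """CM kinetic energy (MeV) for two particles moving along the same line."""
    e1, e2 = ke1 + m1, ke2 + m2
    p1 = math.sqrt(e1 ** 2 - m1 ** 2)
    p2 = math.sqrt(e2 ** 2 - m2 ** 2)
    s = (e1 + e2) ** 2 - (p1 + p2) ** 2    # invariant mass squared, same direction
    return math.sqrt(s) - (m1 + m2)

# A 200 MeV helion overtaking a 100 MeV deuteron: the CM kinetic energy is only
# about 1.4 MeV, even though both lab kinetic energies are in the 100 MeV range.
print(round(cm_kinetic_energy(200.0, M_HELION_MEV, 100.0, M_DEUTERON_MEV), 2))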
Processes involving quartic electroweak gauge couplings have become experimentally accessible for the first time in the run-2 dataset of the LHC. In this talk, recent observations of multi-boson production in various channels by the ATLAS experiment are presented, and their measured cross-sections are reported. This includes measurements of $W\gamma\gamma$, $WZ\gamma$ and $WWW$ production. Moreover, differential measurements of $Z\gamma\gamma$ production are highlighted. The results are used to constrain dimension-eight operators affecting quartic electroweak couplings in an Effective Field Theory framework.
Recently, the CMS and ATLAS collaborations have announced results for the Higgs decay into a lepton pair and a photon through the subprocess $H \to Z\gamma$. This semi-leptonic Higgs decay receives loop-induced resonant as well as non-resonant contributions. To probe further features coming from these contributions, we argue that the polarization of the final-state leptons is also an important parameter. We show that the contribution from the interference of resonant and non-resonant terms plays an important role when the polarization of the final-state lepton is taken into account, whereas it is negligible in the case of unpolarized leptons. For this purpose, we have calculated the polarized decay rates and the longitudinal, normal and transverse polarization asymmetries. We find that these asymmetries come purely from the loop contributions and are helpful for further investigating the resonant and non-resonant nature of this decay process. We observe that for a final-state electron, the longitudinal decay rate is highly suppressed around the 60 GeV region when the final lepton spin is -1/2, dramatically increasing the corresponding lepton polarization asymmetries. Furthermore, we analyze another observable, the ratio of decay rates for different lepton flavours. The precise measurement of these observables at CMS and ATLAS can therefore provide fertile ground not only to test the Standard Model (SM) but also to examine the signatures of possible new physics (NP) beyond the SM.
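For orientation, a polarization asymmetry of the kind referred to here is conventionally built from the rates for the two spin projections of the final-state lepton along a chosen axis (longitudinal, normal or transverse); in a generic notation, which may differ from the authors' conventions,

$$A_i \;=\; \frac{\Gamma\!\left(\vec s\cdot\hat e_i = +\tfrac12\right) \;-\; \Gamma\!\left(\vec s\cdot\hat e_i = -\tfrac12\right)}{\Gamma\!\left(\vec s\cdot\hat e_i = +\tfrac12\right) \;+\; \Gamma\!\left(\vec s\cdot\hat e_i = -\tfrac12\right)}, \qquad i = L,\,N,\,T.$$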
Higgs boson production in association with a vector boson provides direct access to the Higgs boson's couplings to vector bosons given knowledge of the other branching fractions of the Higgs. The values of these couplings are predicted by measurements of the electroweak coupling strengths and the vacuum expectation value of the Higgs field, and so measurements of associated production provide stringent tests of the Standard Model of Particle Physics. In this talk, I will present a measurement of associated production of Higgs bosons ($VH$) decaying to pairs of $W$ bosons ($H\rightarrow WW^\ast$) with the ATLAS detector. The measurement utilizes 139 fb$^{-1}$ of proton-proton collision data collected by ATLAS at centre-of-mass energy 13 TeV during the Large Hadron Collider's Run 2. The corresponding analysis is performed in several categories across 2-, 3-, and 4-lepton final states and utilizes a diverse set of machine learning algorithms for signal extraction. The $VH$ production cross sections times the $H\rightarrow WW^\ast$ branching ratio, measured both inclusively and in the context of the Simplified Template Cross Section Framework, are reported and found to agree with their Standard Model expectations. To date, this is the most precise measurement of $VH$ production in the $H\rightarrow WW^\ast$ decay channel ever performed and a near observation of the process at 4.6$\sigma$ above the background-only hypothesis.
While the usual left-right symmetric model (LRSM) is widely studied and well explored, there are alternate possibilities within left-right models which have not been given enough attention. In this talk we shall focus on the charged Higgs bosons arising in different left-right models, such as the LRSM and the alternative left-right model (ALRM), and on the prospects for identifying them at the LHC. With additional quarks and neutral leptons present in the ALRM, compared to the LRSM, having direct interactions with the charged Higgs bosons, their signatures at the LHC can be quite distinguishable. In particular, mass ranges not allowed in the case of the LRSM may still be possible in the ALRM. We shall give an overview of charged Higgs searches at the LHC, and present a detailed analysis of the possibilities within the ALRM.
Isolated gravitational systems, such as stationary vacuum black holes, are described in general relativity by spacetimes whose spatial hypersurfaces asymptotically approach flat Euclidean space. The geometric and physical invariants characterizing these solutions are very well understood. In five dimensions, one can also consider vacuum solutions whose spatial slices asymptotically approach a gravitational instanton geometry, such as the Eguchi-Hanson and Euclidean Schwarzschild instantons. The asymptotic three-sphere at infinity is replaced with a (possibly trivial) circle bundle over a two-sphere. I will discuss invariants and black hole mechanics for families of solutions of this type obtained by Chen and Teo.
Drawing upon the canonical quantization of general relativity (GR) in dimensions higher than two, using the Dirac constraint formalism, we propose the loss of covariance as an intrinsic property of the theory. This loss manifests in the first-order Einstein–Hilbert action, where besides first-class constraints, second-class constraints emerge, giving rise to non-standard ghost fields that disrupt the covariance of the path integral. We explore canonical quantization via the path integral calculation for the equivalent Hamiltonian formulation of GR, where only first-class constraints are present. Despite this, covariance is still compromised due to the loss of diffeomorphism invariance and the introduction of non-covariant constraints in the path integral. However, we find that covariance is restored as a symmetry in the weak limit of the gravitational field, allowing for perturbative calculations. Consequently, we establish that the breakdown of space–time is inherent to GR itself, suggesting its characterization as an effective field theory (EFT). Moreover, we propose that this breakdown occurs non-perturbatively in the strong field limit of the theory. While covariance is preserved in the constraint quantization of non-Abelian gauge theories like the Yang–Mills theory, our results indicate a unique departure in the context of canonical gravity formalism and EFT approach. These findings align with GR singularity theorems, extending their scope to imply breakdowns at the strong field limit, such as those encountered in black holes. In contrast to the asymptotic safety program, our findings support emergent theories of space–time and gravity without necessitating thermodynamics, such as the entropic gravity program. Through the lens of EFT, our results underscore the necessity of new degrees of freedom or principles in the non-perturbative sector of the full theory, where covariance as a symmetry is breached in the high-energy (strong field) regime of GR.
Approach: Although quantum field theories (QFTs) represent a powerful formalism, there is no guarantee that any given QFT proposal will apply to our particular universe. Although a QFT framework has many benefits, by itself it nevertheless offers a speculative road to quantum gravity. By contrast, here I describe an alternative, ontology-driven approach which prioritizes defining an ontology consistent with existing theory. The situation is reversed vis-à-vis QFT: the mathematical machinery is distinctly primitive, whereas the physical motivation and the connection to existing theory are potentially much stronger.
Proposal: I will describe an approach to quantum spacetime originating in an effort to reconcile the multiple conflicting viewpoints of an ensemble of inertial observers. Exploiting the space-time symmetry of the Lorentz boost, a multi-space, single-universe ontology arises. This relativistic spacetime ontology in turn exhibits striking quantum-like properties, but still within a fully classical setting. It then becomes possible to map classical relativistic features onto quantum behaviours to arrive at an integrated quantum spacetime. The result is specific spacetime support for quantum phenomena including superposition, indeterminacy and nonlocality. The spacetime roots of this approach lead to a specific spacetime mechanism for quantum superposition, which, by postulate, is the fundamental basis (reducing to momentum superposition in simpler cases).
Quantum Gravity: The claim is that an understanding of quantum spacetime is a necessary prerequisite for quantum gravity, in parallel with the way SR underpins GR. How does this proposal relate to other QG approaches? This proposal features strong physical motivation; no Lorentz violations; a strong relational character; and a spacetime superposition principle operating within a single universe. This QG groundwork is experimentally testable because the emergence of quantum behaviours from flat spacetime directly addresses the measurement problem. Since this proposal is agnostic on Planck-scale discrete structure, it would be of definite interest to investigate its potential compatibility with other QG approaches.
For further details: https://orcid.org/0000-0002-9736-7487
We generalize the main result of [1] to Lovelock theory. We find that there exist sphere saddle metrics for the partition function at fixed spatial volume in Lovelock theory. These stationary points take exactly the same forms as in [1]. The logarithm of $Z$ corresponding to a zero effective cosmological constant indicates the Bekenstein-Hawking entropy of the boundary area, and the one corresponding to a positive effective cosmological constant points to the Wald entropy of the boundary area. We also find zeroth-order phase transitions between different vacua. [1] https://arxiv.org/abs/2212.10607
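For reference, the two entropies invoked above are, schematically and in our notation (not necessarily the authors'), the area law and its Wald-functional generalization,

$$S_{\rm BH} = \frac{A}{4G}, \qquad S_{\rm Wald} = -2\pi \oint \frac{\partial \mathcal{L}}{\partial R_{abcd}}\,\epsilon_{ab}\,\epsilon_{cd}\; dA,$$

evaluated on the boundary area in the zero and positive effective cosmological constant cases, respectively.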
With insight from examples and physical arguments, the Tolman-Ehrenfest criterion of thermal equilibrium for test fluids in static spacetimes is extended to local thermal equilibrium in conformally static geometries. The temperature of the conformally rescaled fluid scales with the inverse of the conformal factor, reproducing the evolution of the cosmic microwave background in Friedmann universes, the Hawking temperature of the Sultana-Dyer cosmological black hole, and a heuristic argument by Dicke.
[Based on V. Faraoni and R. Vanderwee 2023, Phys. Rev. D 107, 06407 (arXiv:2301.09021)]
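For reference, the Tolman-Ehrenfest criterion and the conformal extension summarized above can be written schematically (our notation) as

$$T(x)\,\sqrt{-g_{tt}(x)} = \text{const.} \qquad\text{and}\qquad \tilde T = \frac{T}{\Omega} \ \ \text{for} \ \ \tilde g_{ab} = \Omega^2 g_{ab},$$

which, for a spatially flat Friedmann universe written conformally to Minkowski space ($\Omega = a$), reproduces the familiar $T \propto 1/a$ scaling of the cosmic microwave background.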
We will introduce the details and concepts of space-to-ground quantum links, and introduce the main technologies required for establishing space QKD, including quantum sources, photon detectors, and photon encoding/decoding techniques. We will give an overview of the state of the art, mention some of the previous and upcoming quantum space missions, and discuss the big-vision outlook for how space-based quantum technologies can help achieve global quantum networking.
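As a toy illustration of the photon encoding/decoding and basis sifting used in prepare-and-measure protocols such as BB84 (a common choice for satellite QKD links), the sketch below is idealized and is not a description of any particular mission.

# Toy BB84 sifting illustration (idealized, lossless, no eavesdropper);
# a real space link must also handle channel loss, background light, pointing
# and finite-key corrections.
import random

N = 20
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("ZX") for _ in range(N)]   # Z: H/V, X: diagonal
bob_bases   = [random.choice("ZX") for _ in range(N)]

# Bob's measurement: correct bit when bases match, random outcome otherwise.
bob_bits = [a if ab == bb else random.randint(0, 1)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: keep only the events where the (publicly announced) bases agree.
sifted = [(a, b) for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
          if ab == bb]
print(len(sifted), all(a == b for a, b in sifted))       # ~N/2 pairs, all matching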
Tired of being an independent particle in the nuclear physics community and want to behave collectively like your favorite nucleus? Time to come out of your shell model and have some fun coexisting with other members (i.e. shapes) through networking and trivia. Show off your trivia knowledge and be the AGB star of the show! Even if your trivia knowledge might just be a perturbation, please come by and help (weakly) break the symmetry of the congress and interact with fellow bosons.
(Nuclear physics and general trivia will be presented by Robin Coleman (University of Guelph) in a relaxed networking atmosphere.)