Welcome to the CAP2024 Indico site. This site is being used for abstract submission and congress scheduling. Abstracts are still being accepted for post-deadline poster submissions, until May 6, 2024. Questions can be directed to programs@cap.ca. The Congress program is available by selecting "Timetable" in the left menu. Congress registration is now available, with early registration closing at 23h59 ET on Monday, May 6. You can access the fees and link to register by selecting the "registration" button in the left menu.
What is the purpose of an introductory physics lab? Often instructional labs are structured such that students perform experiments to observe or discover classic physics phenomena. In this talk, I’ll present data that questions this goal and argues for transforming labs to focus instead on the skills and understandings of experimental physics. I’ll provide several examples of experimentation-focused labs and research on their efficacy for students’ skill development.
SNOLAB is a world-class underground science facility, operated fully as a cleanroom, 2 km deep underground in Vale's active Creighton Mine in Sudbury, Ontario. The program focuses on neutrino science and dark matter searches, but also includes life-science projects and new initiatives in quantum technology. In addition, SNOLAB has a number of analytical capabilities, such as ICP-MS and low-background technologies, including germanium counters and radon mitigation. This presentation will give an overview of the SNOLAB science program and point out some new initiatives.
The NEWS-G experiment uses spherical proportional counters (SPCs) to search for low-mass dark matter. An SPC is a gas-filled metallic sphere with a high-voltage anode at its centre producing a radial electric field. The interaction between a dark matter particle and a nucleus can ionize the gas, which leads to an electron avalanche near the anode and a detectable signal.
The latest NEWS-G detector, S-140, is a 140-cm-diameter copper sphere, which took 10 days of data with methane at the LSM and is now taking data with various gases at SNOLAB. The LSM campaign brought forward some interesting new techniques to build upon, as well as a few issues to mitigate for the future of detector operation and data analysis at SNOLAB.
This talk will describe the NEWS-G experiment, present the latest results from the LSM data and discuss the progress on data taking and analysis at SNOLAB.
The NEWS-G experiment at SNOLAB uses spherical proportional counters, or SPCs, to detect weakly interacting massive particles (WIMPs), a prime candidate for dark matter. Interactions within the gas-filled sphere create a primary ionization. The signal from the resulting electrons is passed through a digitizer, generating raw pulses that are observed as time-series data. However, these signals carry electronic noise, and some are non-physics pulses. I will discuss the use of machine-learning techniques for removing noise from different pulse-shape types and for rejecting non-physics pulses in the data. A large amount of data is available to train and test neural networks; once fully trained, the models can be applied to real data. These models can potentially denoise and clean data more efficiently and with less error than traditional pulse processing, making them an important tool for the NEWS-G experiment.
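As a concrete illustration of this kind of approach, here is a minimal sketch (an assumed architecture of our own choosing, not the collaboration's actual network) of a 1D convolutional denoising autoencoder for fixed-length digitizer traces, written in PyTorch:

    import torch
    import torch.nn as nn

    class PulseDenoiser(nn.Module):
        """Toy 1D convolutional autoencoder for noisy SPC pulse traces."""
        def __init__(self):
            super().__init__()
            # Encoder compresses the noisy trace; decoder reconstructs a clean one.
            self.encoder = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4), nn.ReLU())
            self.decoder = nn.Sequential(
                nn.ConvTranspose1d(32, 16, kernel_size=9, stride=2,
                                   padding=4, output_padding=1), nn.ReLU(),
                nn.ConvTranspose1d(16, 1, kernel_size=9, stride=2,
                                   padding=4, output_padding=1))
        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = PulseDenoiser()
    noisy = torch.randn(8, 1, 1024)   # batch of 8 simulated 1024-sample traces
    # In training, the target would be the clean simulated pulse, not the input.
    loss = nn.functional.mse_loss(model(noisy), noisy)

Training such a network requires paired noisy/clean pulses, which is where the large simulated data set mentioned above would come in.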
The Scintillating Bubble Chamber (SBC) collaboration is combining the well-established technologies of bubble chambers and liquid noble scintillators to develop a detector sensitive to low-energy nuclear recoils, with the goal of a GeV-scale dark matter search. Liquid noble bubble chambers benefit from the excellent electronic-recoil suppression intrinsic to bubble chambers, with the addition of energy reconstruction provided by scintillation signals. The detector to be operated at SNOLAB is currently in development, featuring 10 kg of xenon-doped liquid argon superheated to 130 K at 1.4 bar. Surrounding the active volume are 32 FBK VUV-HD3 silicon photomultipliers to detect the emitted scintillation light. Deploying at SNOLAB allows for excellent cosmogenic suppression from 6010 m.w.e. of overburden; however, radiocontaminants embedded in the rock become a major source of background. Monte Carlo simulations in GEANT4 were performed to study the background event rate imposed by both the high-energy gamma rays and the fast neutrons in the cavern environment. This talk discusses the development of external shielding around SBC to suppress the background flux, with the goal of a quasi-background-free low-mass (<10 GeV/c$^2$) WIMP dark matter search.
The highest-energy range (∼MeV) of the solar neutrino spectrum is dominated by $^8$B neutrinos produced in the pp chain in the Sun, together with hep neutrinos. Previous work by R.S. Raghavan, K. Bhattacharya, and others predicted that neutrinos above 3.9 MeV can be absorbed by $^{40}$Ar, producing an excited state of $^{40}$K. These neutrinos can be identified by detecting the gamma rays emitted as the excited $^{40}$K state deexcites. A search for this process relies on a detailed understanding of the background, namely the radiogenic background from neutron capture and the cosmogenic background from muons interacting with material surrounding the detector. Above around 10 MeV, just past the end of the neutron-capture spectrum, the expected neutrino signal dominates the background, so the search relies on a highly accurate background model to identify excess events that can be attributed to neutrino absorption.
We propose to search for this process using 3 years of data from DEAP-3600, a liquid argon (LAr) direct dark matter detection experiment designed to detect WIMP-nucleon scattering in argon. DEAP-3600's ultra-low background and high sensitivity could make it possible to make the first observation of this neutrino absorption process in LAr.
Our universe is expected to emerge from an era dominated by quantum effects, for which a theory of quantum gravity is necessary. Loop Quantum Gravity, in its covariant formulation, provides a tentative yet viable framework for performing reliable computations about the physics of the early universe. In this talk I will review the strategy to be followed in applying the spinfoam formalism to cosmology. In particular, I review the most recent results concerning the definition of the primordial vacuum state from the full theory and the computation of primordial quantum fluctuations. I consider the singularity-resolution mechanism in this framework and the modelling of a quantum bounce. Finally, I discuss the effective equations obtained in the semiclassical regime of this theory.
I will describe recent work on gravitational collapse of dust using effective equations.
Solutions of these equations exhibit formation of horizons, with a shock wave emerging as the horizons evaporate. The lifetime of a black hole turns out to be proportional to the square of its mass.
Although black holes have recently been detected through gravitational-wave observations and intensively studied over the past decades, we are far from a complete understanding of their life cycle. In this presentation I'll show a loop quantum gravity-based model of stellar collapse in which the classical central singularity is replaced by a quantum bounce occurring when the star's energy density becomes Planckian. Immediately after the bounce, a shockwave of matter arises carrying all the initial stellar mass, which then slowly moves outward. The shockwave requires a time proportional to the square of the original stellar mass to reach the black hole horizon, and when this happens, the horizon disappears. This signals the end of the black hole, while the outgoing shockwave becomes visible to external observers. This picture is robust, as it holds for a wide range of initial data, in particular including non-marginally trapped configurations.
Arguments from general relativity and quantum field theory suggest that black holes evaporate through Hawking radiation, but without a full quantum treatment of gravity the endpoint of the process is not yet understood. Two dimensional, semi-classical theories of gravity can be useful as toy models for studying black hole dynamics and testing predictions of quantum gravity. Of particular interest are non-singular black holes, since quantum gravity is expected to resolve the singularities that are pervasive in general relativity. This talk will present a general model of evaporating black holes in 2D dilaton gravity, with a focus on a Bardeen-like regularized black hole model. I will discuss results from numerical simulations including the dynamics of the apparent horizons and additional trapped anti-trapped regions formed by backreaction.
Non-perturbative quantum geometric effects in loop quantum cosmology (LQC) result in a natural bouncing scenario without any violation of energy conditions or fine tuning. In this work we study numerically an early universe scenario combining a matter-bounce with an ekpyrotic field in an LQC background setting.
We explore this unified phenomenological model for a spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) universe in LQC filled with one scalar field mimicking dust and another scalar field with a negative exponential, ekpyrotic-like potential.
The dynamics of the homogeneous background and the power spectrum of the comoving curvature perturbations are numerically analyzed with various initial conditions. By varying the initial conditions we consider different cases of dust and ekpyrotic-field domination in the contracting phase. We use the dressed metric approach in LQC to numerically compute the primordial power spectra of the comoving scalar and tensor perturbations.
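For orientation, such computations evolve the standard Mukhanov-Sasaki mode equation on the quantum-corrected background; a sketch in conventional notation (primes denote conformal-time derivatives; this is the textbook form, not taken verbatim from this work):

$$ v_k'' + \left(k^2 - \frac{z''}{z}\right) v_k = 0, \qquad \mathcal{P}_{\mathcal{R}}(k) = \frac{k^3}{2\pi^2}\left|\frac{v_k}{z}\right|^2, $$

where $z = a\dot{\phi}/H$ for scalar modes, and the tensor modes obey the same mode equation with $z \to a$.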
This presentation will delve into the latest advancements in X-ray imaging techniques and technologies, with a focus on cutting-edge hardware developments. Key topics will include X-ray computed tomography (CT), X-ray tomosynthesis, multi-energy X-ray imaging, cone-beam computed tomography (CBCT) and real-time X-ray imaging for interventional procedures. The discussion will then shift to explore emerging techniques and technologies in the field, such as photon-counting computed tomography, phase-contrast imaging, cold-cathode X-ray tubes, and multi-layer energy-selective X-ray detectors. Attendees will gain a comprehensive understanding of both the current capabilities and future directions of X-ray imaging technology.
Angioplasty is an interventional procedure for blood-vessel stenosis in which a catheter is navigated to the obstruction under fluoroscopy to place a permanent wire stent that forces the blockage open. Clear stent visualization is critical to ensure a stent has not collapsed or fractured, which could lead to re-stenosis and even more severe complications. Overlapping anatomic structures make stents and vessels difficult to visualize non-invasively. Work by Yamamoto et al. used the maximum pixel value across a set of fluoroscopy frames to create a synthetic mask, but soft-tissue motion was too severe and the method did not succeed. We plan to use dual-energy subtraction (DES) x-ray imaging to eliminate soft tissue, in conjunction with processing techniques similar to those of Yamamoto et al., to enhance visualization of wire stents without catheterization. We created a MATLAB simulation to calculate the nickel signal-to-noise ratio (SNR) for a range of x-ray parameters to determine the optimal settings for DES. We then performed a proof-of-concept experiment using an anthropomorphic chest phantom with an overlaid nitinol stent, using x-ray settings optimized in the simulation. The stent was shifted to simulate cardiac motion, and a set of DES images was acquired to create the synthetic mask. A prototype ultra-low-noise CMOS detector and kV-switching generator were installed in our facilities for the first-ever testing and experimentation of this novel technique. This equipment was characterized using in-house software to generate the detector MTF, DQE, and waveforms of the kV-switching techniques. Simulation results revealed parameters that optimize the nickel SNR per unit dose, and material suppression using weighted DES calculations removed soft tissue. Waveform measurements showed that step kV switching could be achieved within 1 millisecond, enabling consecutive DES images at a rate of 30 frames per second. DES imaging allowed successful mask creation, so that all background structures were suppressed and only the stent was visible. By using DES imaging for this technique, soft-tissue motion is eliminated, allowing a digitally subtracted image of the stent alone. With the use of advanced prototype equipment, this technique may improve confidence in the diagnosis of collapsed and fractured stents in real time, non-invasively.
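For readers unfamiliar with weighted DES, the tissue-cancellation step can be sketched in a few lines (a hedged illustration in Python rather than the authors' MATLAB code; the weight w is a hypothetical tuning parameter chosen so soft-tissue contrast vanishes while the stent remains):

    import numpy as np

    def des_subtract(img_high_kv, img_low_kv, w=0.5):
        """Weighted log subtraction of a high-kV / low-kV image pair."""
        log_h = np.log(np.clip(img_high_kv, 1e-6, None))
        log_l = np.log(np.clip(img_low_kv, 1e-6, None))
        return log_h - w * log_l   # soft tissue cancels for the right w

    # Toy usage with random stand-ins for the acquired image pair:
    high, low = np.random.rand(128, 128), np.random.rand(128, 128)
    stent_only = des_subtract(high, low, w=0.5)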
Zinc and selenium are essential elements necessary for human health. Researchers have found that deficiencies in these elements can significantly affect the human body. In this study, I analyzed nail clippings from mothers and their infants in New Zealand to observe zinc and selenium concentration levels over the first year postpartum. Several biomarkers, including nails, were collected at three, six, and twelve months postpartum. Every mother had two sample cups prepared with nail clippings (one with big-toenail clippings and one with other toenail clippings), each containing four samples, labeled MB and MO, respectively. Each mother had a corresponding infant with one sample cup prepared with four nail-clipping samples, labeled I. This study used portable X-ray fluorescence to examine the nail samples from the 12-month visit; these results were then compared to the 3-month and 6-month visits. The average zinc XOS concentration in this study (3rd visit) for MB, MO, and I was 115 ppm, 96 ppm, and 84 ppm, respectively. The average zinc total area ratio (TAR) for the 3rd visit for MB, MO, and I was 1.51%, 1.35%, and 1.45%, respectively. Selenium TAR results for the 3rd visit for MB, MO, and I were 0.022%, 0.021%, and 0.023%, respectively. Several significant differences were found when comparing the three visits. Between the first and third visits, infant zinc concentrations significantly decreased for both XOS (p=0.035) and TAR (p=0.014). Several significant differences in selenium concentration were also found between visits for MB, MO, and I. Selenium is often below the detection limit for XOS concentration reporting and would benefit from additional measurement time, such as 3 minutes. A correlation was found between the concentrations in mothers' big toenails and their other toenails.
A magnetometer with high temporal (≤1 ns) and spatial (≤1 mm) resolution and a large magnetic field range (0–0.5 T), and that does not perturb the magnetic field, requires an innovative and unprecedented design. Such a magnetometer is key, for example, to measuring magnetic fields produced by transcranial magnetic stimulation (TMS) coils used to neuromodulate the brain in the treatment of various psychological and neurological disorders, such as major depressive disorder and Parkinson's disease. TMS coils placed against the head of a patient produce rapid and intense magnetic field pulses that induce electric fields in the brain, stimulating or inhibiting neural activity for therapeutic applications. With time-resolved magnetic field measurements, time-resolved electric fields can be calculated. Various TMS studies investigate the therapeutic impact of varying the frequency, intensity, and burst count of the pulse, but are limited in studying the time-resolved pulse shape and its ability to neuromodulate. To date, only peak electric fields generated by the coils are measured. Electric-field or magnetic-field pulse shapes can be inferred from the applied current but have not been verified. Since neuron action potentials have temporal pulse shapes unique to their neural task, an important but unanswered question in TMS research is how the TMS temporal pulse shape impacts the efficacy of the therapy.
In this work, we present the design and construction of a fiber-based magnetometer (ENOMIS) based on the magneto-optic Kerr effect and Fabry-Perot interferometry. Our solution is based on a nickel-and-dielectric multilayer deposited onto the tip of an optical fiber. The 0.4° Kerr rotation typical of air-nickel interfaces does not provide sufficient SNR for resolving the typical 1-µs-wide TMS pulses in a single acquisition. Our results show that the Fabry-Perot nanoscale multilayer cavity can theoretically increase the Kerr rotation by over 1000 times. Other studies achieve good SNR at fast and ultrafast time scales but are limited to small magnetic field ranges, unlike the 0–0.5 T range presented in this work. The ~1 ns temporal resolution is limited by the instrumentation used here, whereas the theoretical limit of the sensor is ~100 ps. This work compares modeled enhancement results to the experimental prototype results.
The tin isotopic chain, with its magic 50-proton closed shell, is a benchmark for models of nuclear structure. While the neutron-rich tin nuclei around the magic 82-neutron shell play an important role in the rapid neutron-capture process, the mid-shell region of the tin isotopes can display collective phenomena known as shape coexistence [1]; for example, in $^{116}$Sn$_{66}$, deformed bands based on 2-particle, 2-hole excitations across the proton 50 shell gap exist [2,3]. Furthermore, at energies below the particle threshold, a new phenomenon called the Pygmy Quadrupole Resonance (PQR) has recently been observed in $^{124}$Sn below 5 MeV [4]. Coupled with theoretical calculations, the new excitation mode was interpreted as a quadrupole-type oscillation of the neutron skin. This study prompted investigations of corresponding states in the neighboring $^{118,120}$Sn nuclei populated using thermal neutron capture, $^{117,119}$Sn(n,g).
Thermal neutron capture of $^{117,119}$Sn populates states in $^{118,120}$Sn at the neutron separation energy of about 9 MeV. The capture states in these experiments consist of 0$^+$ and 1$^+$ spins, ideal for populating subsequent 2$^+$ states which could be attributed to the PQR predicted to exist in the 3-5 MeV range.
In the experiments performed at the Institut Laue-Langevin in Grenoble, France, a continuous high flux of thermal neutrons of 10$^8$ s$^{-1}$ cm$^{-2}$ from the 57 MW research reactor was used for capture reactions on enriched odd-A Sn targets. Gamma-ray transitions from excited states in the nuclei of interest were detected by the Fission Product Prompt gamma-ray Spectrometer (FIPPS) [5], consisting of eight large n-type high-purity germanium (HPGe) clover detectors, augmented with eight additional Compton-suppressed HPGe clovers from the Horia Hulubei National Institute (IFIN-HH) in Bucharest, Romania, for enhanced gamma-ray efficiency and additional angular coverage used to produce angular correlations for spin assignments. In addition, 15 fast-response LaBr$_3$(Ce) detectors were used to allow fast-timing measurements of nuclear states using the centroid-shift method as described in [3].
Preliminary results from the $^{117,119}$Sn(n,g)$^{118,120}$Sn experiments will be presented highlighting the newly observed levels within the 3-5 MeV energy range of interest for PQR and lifetimes of excited states in $^{120}$Sn.
[1] K. Heyde and J. L. Wood, Rev. Mod. Phys. 83 (2011).
[2] J. L. Pore et al., Eur. Phys. J. A 53, 27 (2017).
[3] C. M. Petrache et al., Phys. Rev. C 99, 024303 (2019).
[4] M. Spieker et al., Phys. Lett. B 752, 102 (2016).
[5] C. Michelagnoli et al., EPJ Web Conf. 193, 04009 (2018).
Motivated by fundamental symmetry tests, the measurement of a large electric dipole moment (EDM) would represent a clear signal of CP violation, which is connected to the observed imbalance between matter and antimatter in our Universe. Since the Standard Model (SM) of particle physics predicts an EDM (of order $10^{-30}$) below the experimental reach, it is necessary to explore physics beyond the SM, including models at the nuclear level, such as the Schiff-moment model, that provide more precise EDM predictions. The E2 and E3 strengths that connect the ground state of $^{199}$Hg to its excited states are useful inputs for obtaining the EDM, which, in comparison to other species previously measured, provides one of the most precise upper limits on an atomic EDM (of order $10^{-28}$). Performing such an experiment directly on $^{199}$Hg is very challenging. As such, several experiments on $^{198}$Hg and $^{200}$Hg have been conducted at the Maier-Leibnitz Laboratorium of the Ludwig-Maximilians-Universität München. To extract the E2 and E3 matrix elements for $^{198}$Hg from the data collected, a deuteron beam bombarded a $^{198}$Hg$^{32}$S target, producing scattered particles that were separated and detected using the quadrupole three-dipole (Q3D) magnetic spectrograph. Very high-statistics data sets were collected from this reaction, yielding many new states, angular distributions (and therefore spin and parity assignments for new states), and cross sections. We also provide additional insight into the distribution of the matrix elements of $^{199}$Hg.
Details of the analysis of the $^{198}Hg(d,d’)$ reaction to date will be given.
Nuclei away from the line of stability have been found to demonstrate behavior that is inconsistent with the traditional magic numbers of the spherical shell model. This has led to the concept of the evolution of nuclear shell structure in exotic nuclei, and the neutron-rich calcium isotopes are a key testing ground of these theories; there have been conflicting results from various experiments as to the true nature of a sub-shell closure for neutron-rich nuclei around $^{52}$Ca. An experiment was performed at the ISAC facility of TRIUMF; $^{52}$K, $^{53}$K, and $^{54}$K were delivered to the GRIFFIN gamma-ray spectrometer paired with the SCEPTAR and the ZDS ancillary detectors for beta-tagging, as well as DESCANT for neutron-tagging. Using this powerful combination of detectors, we combine the results to construct level schemes for the isotopes populated in the subsequent beta-decay. Preliminary results from the analysis of the gamma, beta, and neutron spectra will be presented and discussed in the context of shell model calculations in neutron-rich nuclei.
Many outstanding fundamental topics in nuclear physics are addressed in the NSERC Subatomic Physics Long Range Plan. For several of these critical research drivers, such as " How does nuclear structure emerge from nuclear forces and ultimately from quarks and gluons?", gamma-ray spectroscopy is the investigative technique of choice. However, analysis of data from large-scale gamma-ray spectrometers is often a bottleneck for progress due to the extremely complex nature of the decays of excited nuclear states. In some cases, thousands of individual gamma rays must be analyzed in order to construct excited state decay schemes. To date, this is largely done laboriously by hand with the final result depending on the skill of the individual performing the analysis.
This project aims to develop an efficient machine-learning algorithm to perform the analysis of large spectroscopic data sets, initially concentrating on the analysis of gamma-gamma coincidence matrices. The essence of this research lies in its multi-pronged approach, enabling a rigorous comparison of two dominant machine learning paradigms: supervised and unsupervised techniques. The ultimate goal is to determine the most effective framework for solving problems of this nature and, if applicable, to subsequently enhance the chosen framework by integrating quantum computing, harnessing the power of qubits and quantum operations to overcome the computational restrictions inherent in classical computing.
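To give a sense of the primitives such an algorithm must automate, here is a toy sketch (our assumption of a plausible first step, not the project's method) of automated peak finding in a gamma-ray spectrum using SciPy:

    import numpy as np
    from scipy.signal import find_peaks

    channels = np.arange(4096)
    # Stand-in spectrum: flat Poisson background plus one synthetic photopeak.
    counts = np.random.poisson(50 + 500 * np.exp(-((channels - 1200) ** 2) / 50))
    peaks, props = find_peaks(counts, prominence=100, width=2)
    print(peaks, props["prominences"])

A real pipeline would feed thousands of such peak candidates, plus gamma-gamma coincidence relationships, into the supervised or unsupervised models described above.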
Research on the learning and teaching of physics has been done in university physics departments for more than 50 years. Unfortunately, much of this work has been done in the United States, and there are structural and cultural differences between the US and Canadian higher education systems. In this talk I will present an overview of PER work recently done at the University of Waterloo, including our revision of undergraduate laboratory courses to refocus them on experimental process skills, as well as our efforts to bring EDI-related principles to collaborative group work in our first-year physics courses. I will make a case for why PER work like this should be supported in Canada, what that support could look like, and how you can get involved.
Examining the motivations and influences impacting undergraduate student program choice not only assists physics departments in recruitment efforts but also enables the development of curricula tailored to the needs and interests of students. In our 2003 first-year physics courses at the University of Guelph, science majors participated in a survey exploring the diverse motivations and influences shaping their choice of undergraduate program. Two decades later, we have conducted the same survey to assess whether the perspectives of undergraduate students have evolved. We incorporated additional questions to delve into the development of student physics identity at various points in their educational journey. We will discuss comparisons between students from 2003 and 2023, with attention given to gender and to majors in the physical and biological sciences.
We will discuss the two most recent iterations of a Physical Science in Contemporary Society course, a senior-level physics course at the University of Toronto that encourages physics students to explore how physics and society influence each other. A different instructor taught each iteration of the course while an education PhD student acting as a “critical friend” assisted in the handover of principles between the iterations. These principles included ungrading, student-led instruction, and student-defined final projects.
In the course, student groups select topics for in-class facilitations and for final projects that may take different formats. Some topics explored include "gender bias in physics careers," "physics funding and politics," and "invention's effects on society." Students were asked to prepare an in-class facilitation in which they should avoid lecturing and instead use active-learning techniques to engage their classmates with the topic. Each facilitation week finished with a 500-word reflective writing assignment (six in total) in which students had to discuss the topics presented that week, link them to another example outside the classroom, and reflect on their learning from the facilitation.
This course used ungrading as the assessment practice for the students' facilitations and reflective essays. Ungrading involves giving students feedback without numerical grades on their assignments, facilitating learning and inclusion; students are then included in discussions to determine final grading decisions based on demonstrated growth. The students' writing abilities in both course iterations showed dramatic improvement through the use of ungrading and feedback-focused assessment by the teaching assistants. Students, despite some initial reluctance about the purpose and design of the course, praised the course's usefulness and were surprised by how much it changed their understanding of physics.
There has been noted concern regarding the retention, academic success, and motivation of students in STEM courses, especially physics. Additionally, problem solving is a highly valued 21st Century workforce skill in Canada (Hutchison, 2022) that recent graduates seem to lack (Cavanagh, Kay, Klein, & Meisinger, 2006; Deloitte & The Manufacturing Institute, 2011; Binkley et al., 2012; Finegold & Notabartolo, 2010). The aim of our project is to address these concerns by implementing novel cognitive strategies – retrieval practice – in physics instruction and assess its impact on students’ academic performance and attitudes of physics learning. Our objectives are: 1) Develop problem solving materials based on retrieval practice. 2) Implement these materials in a first year physics course and prepare teaching assistants to facilitate learning using these materials. 3) Assess the impact of these interventions on success in the course as well as attitudes and approaches to problem solving. Here, we will describe the development of course materials promoting retrieval practice, our implementation strategies, and present student success findings from a first year physics course.
We show theoretically that a modulated longitudinal cavity-qubit coupling can be used to control the path taken by a multiphoton coherent-state wavepacket conditioned on the state of a qubit, resulting in a qubit-which-path (QWP) entangled state [1]. We further show that QWP states have a better potential sensitivity for quantum-enhanced phase measurements (characterized by the quantum Fisher information) than either NOON states or entangled coherent states having the same average number of photons. QWP states can generate long-range multipartite entanglement using strategies for interfacing discrete- and continuous-variable degrees of freedom. Entanglement can therefore be distributed in a quantum network via QWP states without the need for single-photon sources or detectors.
[1] Z. M. McIntyre and W. A. Coish, arXiv:2306.13573 (to appear in Phys. Rev. Lett.)
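For context, the phase-estimation benchmarks invoked above are textbook results (not taken from [1]): the quantum Cramér-Rao bound and the NOON-state quantum Fisher information,

$$ \Delta\phi \geq \frac{1}{\sqrt{\nu F_Q}}, \qquad F_Q^{\mathrm{NOON}} = N^2, $$

where $\nu$ is the number of measurement repetitions; the claim is that QWP states reach a larger $F_Q$ at fixed mean photon number.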
We investigate and compare a number of different strategies for rapidly estimating the values of unknown Hamiltonian parameters of a quantum system. Rapid and accurate Hamiltonian parameter estimation has applications in quantum sensing, quantum control, and quantum computing. We show that an adaptive Bayesian method based on minimizing the Shannon entropy in each shot of a measurement sequence can successfully estimate multiple unknown parameters more efficiently than a simple non-adaptive protocol. The adaptive protocol can be directly applied to ongoing experiments on spin qubits in double quantum dots, where multiple parameters (e.g., exchange and magnetic fields) must be continuously estimated for good performance.
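A minimal sketch of the idea for a single unknown parameter (illustrative only, with assumed numbers; the actual protocol handles multiple parameters): maintain a grid posterior over a qubit frequency and, before each shot, pick the evolution time that minimizes the expected Shannon entropy of the updated posterior.

    import numpy as np

    rng = np.random.default_rng(0)
    omega_grid = np.linspace(0.1, 1.0, 400)     # candidate frequencies (arb. units)
    prior = np.ones_like(omega_grid) / omega_grid.size
    omega_true = 0.63                           # hidden "experimental" value
    t_candidates = np.linspace(1.0, 50.0, 60)   # candidate evolution times
    # P(outcome = 1 | omega, t) for a freely precessing qubit.
    p1 = np.sin(np.outer(t_candidates, omega_grid) / 2) ** 2

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    for shot in range(50):
        m1 = p1 @ prior                               # marginal P(1 | t)
        post1 = p1 * prior / m1[:, None]              # posterior if outcome is 1
        post0 = (1 - p1) * prior / (1 - m1)[:, None]  # posterior if outcome is 0
        # Choose the time minimizing the expected entropy of the posterior.
        exp_H = np.array([m1[i] * entropy(post1[i]) + (1 - m1[i]) * entropy(post0[i])
                          for i in range(len(t_candidates))])
        i_best = np.argmin(exp_H)
        t = t_candidates[i_best]
        # Simulate one shot at the chosen time and update the prior.
        outcome = rng.random() < np.sin(omega_true * t / 2) ** 2
        prior = (post1 if outcome else post0)[i_best]

    print("estimate:", omega_grid[np.argmax(prior)])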
Non-Gaussian operations are essential for most bosonic quantum technologies. Yet, realizable non-Gaussian operations are rather limited in type and generally suffer from accuracy-duration tradeoffs. In this work, we propose to use quantum signal processing to engineer non-Gaussian operations. For systems dispersively coupled to an auxiliary qubit, our scheme can generate a new type of nonlinear phase gate. Such a gate is an extension of the selective number-dependent arbitrary phase (SNAP) gate, but an extremely high accuracy can be achieved within a reduced, fixed, excitation-independent interaction time. Our versatile formalism can also engineer operations for a variety of tasks, e.g., processing rotation-symmetric codes, entangling qudits, deterministically generating multi-component cat states, and converting entanglement from continuous- to discrete-variable encodings.
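For reference, the standard SNAP gate that the proposed nonlinear phase gate extends acts on cavity Fock states $|n\rangle$ as

$$ S(\vec{\theta}) = \sum_{n=0}^{\infty} e^{i\theta_n} |n\rangle\langle n|, $$

imprinting an arbitrary phase $\theta_n$ on each photon-number state.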
Atomic and solid-state spin ensembles are promising quantum technological platforms, but practical architectures are incapable of resolving individual spins. The state of an unresolvable spin ensemble must obey the condition of permutational invariance, yet no method of generating general permutationally-invariant (PI) states is known. In this work, we develop a systematic strategy to generate arbitrary PI states. Our protocol involves first populating specific effective angular momentum states with engineered dissipation, then creating superposition through a modified Law-Eberly scheme. We illustrate how the required dissipation can be engineered with realistic level structure and interaction. We also discuss possible situations that may limit the practical state generation efficiency, and propose pulsed-dissipation strategies to resolve the issues. Our protocol unlocks previously inaccessible spin ensemble states that can be advantageous in quantum technologies, e.g. more robust quantum memory.
Antimicrobial peptides (AMPs) are of growing interest as potential candidates that may offer more resilience against antimicrobial resistance than traditional antibiotic agents. In this work, we perform the first in silico study of the synthetic $\beta$-sheet-forming AMP GL13K. Through atomistic simulations of single- and multi-peptide systems under different conditions, we are able to shine a light on the short timescales of early aggregation. We find that isolated peptide conformations are primarily dictated by sequence rather than charge, whereas changing charge has a significant impact on the conformational free-energy landscape of multi-peptide systems. We demonstrate that the loss of charge-charge repulsion is a sufficient minimal model for experimentally observed aggregation. Overall, our work explores the molecular biophysical underpinnings of the first stages of aggregation of a unique AMP, laying necessary groundwork for its further development as an antibiotic candidate.
Soft colloids are microscopic particles that, when dispersed in a solvent, can adjust their size and shape in response to changes in the local environment. Typical examples are microgels, made of loosely crosslinked networks of polymer chains, which respond to changes in concentration by deswelling and faceting. Practical applications of microgels include drug delivery, chemical and biological sensing, and photonic crystals. Within a coarse-grained model of elastic particles that interact via a Hertzian pair potential and swell according to the Flory-Rehner theory of polymer networks, we explore the response of microgels to two fundamental types of crowding. First, we investigate the influence of nanoparticle crowding on microgel swelling by extending the Flory-Rehner theory from binary to ternary mixtures and adapting polymer field theory to model the entropic cost of nanoparticle penetration. Second, we examine the impact of particle compressibility on liquid-solid phase transitions in microgel suspensions. In both studies, we perform Monte Carlo simulations to model equilibrium properties of single particles and bulk suspensions [1]. Novel trial moves include random changes in microgel size and shape and in nanoparticle concentration. Our results demonstrate that particle softness and penetrability can profoundly affect single-particle and bulk properties of soft colloids in crowded environments. In particular, we find that the addition of nanoparticles can significantly modify microgel swelling and pair structure, and that particle compressibility tends to suppress crystallization. Our conclusions have broad relevance for interpreting experiments on soft matter and guiding the design of smart, responsive materials.
[1] M. Urich and A. R. Denton, Soft Matter 12, 9086 (2016).
Supported by National Science Foundation (DMR-1928073).
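To make the simulation machinery of the abstract above concrete, here is a bare-bones sketch (illustrative parameters, not the authors' code, and omitting their size/shape and concentration trial moves) of a Metropolis displacement move for particles interacting via the Hertzian pair potential $u(r) = \epsilon(1 - r/\sigma)^{5/2}$ for $r < \sigma$:

    import numpy as np

    rng = np.random.default_rng(1)
    N, L = 64, 10.0                  # particles, box length (periodic)
    eps, sigma, beta = 100.0, 1.0, 1.0
    pos = rng.random((N, 3)) * L

    def energy_of(i, p):
        """Hertzian energy of particle i at position p with all others."""
        d = pos - p
        d -= L * np.round(d / L)     # minimum-image convention
        r = np.linalg.norm(np.delete(d, i, axis=0), axis=1)
        overlap = r < sigma
        return eps * np.sum((1.0 - r[overlap] / sigma) ** 2.5)

    for step in range(1000):
        i = rng.integers(N)
        trial = (pos[i] + rng.uniform(-0.1, 0.1, 3)) % L
        dE = energy_of(i, trial) - energy_of(i, pos[i])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            pos[i] = trial           # accept the Metropolis move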
Soft solids play an important role in stretchable electronics, cellular membranes, and water collection. Upon introduction of a liquid contact line, soft solids can deform substantially, causing changes to geometry and dynamics. On the nanoscale, the deformation at the liquid/solid contact line is a capillary ridge. We study these capillary ridges for a system consisting of a thin polymer film in the melt state atop an elastomeric poly(dimethylsiloxane) (PDMS) film. We use a thorough washing procedure to prepare our PDMS films, leaving a true elastomer composed only of the crosslinked network. Our bilayer polymer films sit atop a solid silicon substrate. The liquid polymer layer dewets on the soft elastomer PDMS base. We vary the thickness of the underlying elastomer film, which changes the effective stiffness and therefore the size of the capillary ridge. We use atomic force microscopy to directly measure the shape of the capillary ridge in our system.
The phase behavior of binary blends of AB diblock copolymers of compositions f and 1-f is examined using field-theoretic simulations (FTSs). Highly asymmetric compositions (i.e., f ≈ 0) behave like homopolymer blends, macrophase separating into coexisting A- and B-rich phases as the segregation is increased, whereas more symmetric diblocks (f ≈ 0.5) microphase separate into an ordered lamellar phase. In self-consistent field theory, these behaviors are separated by a Lifshitz critical point at f = 0.2113. However, its lower critical dimension is believed to be four, which implies that the Lifshitz critical point should be destroyed by fluctuations. Consistent with this, the FTSs find that it transforms into a tricritical point with a lower critical dimension of three. Furthermore, the highly swollen lamellar phase near the mean-field Lifshitz critical point is transformed into a bicontinuous microemulsion (BμE), consisting of large interpenetrating A- and B-rich microdomains. The BμE has been previously reported in ternary blends of AB diblock copolymer with its parent A- and B-type homopolymers, but in that system the homopolymers have a tendency to macrophase separate from the microemulsion. Our alternative system for creating the BμE should be less prone to this macrophase separation.
Phase-change materials (PCMs) can change their optical properties by switching between different phases in response to external stimuli such as temperature, light, or electric field. This makes PCMs promising for tunability and reconfigurability of nanophotonic devices, including switches, modulators, and sensors. PCMs can be classified into two categories. The first includes chalcogenide materials like Ge2Sb2Se4Te1 (GSST) and Ge2Sb2Te5 (GST), which change phase without altering their physical state but exhibit variations in their optical characteristics. The second comprises materials such as gallium-based liquid metals (Ga-based LMs) and their alloys, such as Ga-In, Ga-Ag, and Ga-In-Sn, in which both the physical state and the optical properties change during phase transitions. The Ga-based LMs are particularly noteworthy due to their low melting points, allowing solid-liquid phase transitions at room temperature. In this talk, we show how hybridizing PCMs with plasmonic materials like gold (Au) or silver (Ag) enhances their functionality and performance in applications requiring precise control over optical properties. We also show how the phase transition of the PCMs can be actively controlled by the light absorption of the hybrid nanostructure, and how this phase transition affects the optical responses of the nanostructure, such as the absorption, scattering, and extinction cross-sections. We also investigate the induced photothermal process, the heat-transfer mechanism, and the electric-field enhancement of the hybrid nanostructure as functions of laser wavelength and intensity. We employ a self-consistent approach that couples electromagnetism with thermodynamics, using numerical simulations to study the interactions between light and material properties. The findings demonstrate that the hybrid nanostructure can achieve remarkable tunability and reconfigurability of its optical properties.
The development of coherent XUV radiation sources is leading to significant advancements in imaging and ultrafast studies. High harmonic generation (HHG) is one technique used to generate laser-based coherent ultrashort XUV pulses, but it is relatively inefficient. This process is normally carried out at the beam waist of a focused laser and, because of the limited intensity range for efficient HHG, can only generate a small amount of energy per pulse. One strategy to increase the XUV pulse energy is to use a high-power laser and have the HHG process occur upstream of focus. This focal-cone HHG (FCHHG) process also has the advantage of creating a focusing XUV beam, which can be useful in many applications.
We present modeling results and initial experimental results for the development of such an FCHHG beamline at the University of Alberta. A 15 TW Ti:Sapphire laser is used to generate harmonics in a gas target positioned upstream of focus, allowing a high-energy XUV beam to be created in the optimal intensity regime. The fundamental laser is focused with a long-focal-length lens, with the gas target placed at varying positions from focus. The resulting XUV spectra and energy yield are examined, along with other diagnostics such as interferometry of the gas target. Based on previous studies of this FCHHG technique, the wavefront of the driving laser significantly impacts the quality of the resulting harmonics. Thus, the wavefront quality is examined and its impact on the XUV generation is studied.
Identifying a means of efficiently separating the XUV from the pump laser is important for applying such high-energy XUV beams. One technique to achieve this separation is non-collinear HHG, which we are starting to explore. Results of the modeling and experimental investigations will be presented.
The terahertz (THz) frequency band, lying between the microwave and infrared regions of the electromagnetic spectrum, has enabled significant developments in a variety of fields such as wireless communications, product quality control, and condensed matter research. To improve the photonics systems used for these applications, intense efforts are being made to develop faster and more sensitive THz detectors. Conventional detection schemes relying on semiconductor devices fail at frequencies above 1 THz due to limited electronic response time and thermal fluctuations at room temperature. The highest-sensitivity THz detection schemes presently available, such as superconducting tunnel junctions and single-quantum-dot detectors, require cryogenic operation, making them expensive and cumbersome to use. Here, we demonstrate a high-sensitivity room-temperature detection scheme for THz radiation based on parametric frequency upconversion of the THz radiation to higher frequencies (in the near-infrared (NIR)), preserving the spatial, temporal, and spectral information of the THz wave. The upconverted photons, generated by the mixing of a THz pulse with a NIR pulse in a nonlinear optical crystal, are spectrally resolved using a monochromator and a commercial single-photon detector in the NIR. With this technique, we can detect THz pulses with energy as low as 1.4 zJ (1 zJ = 10$^{-21}$ J) at a frequency of 2 THz (or a wavelength of 150 µm) when averaged over only 50k pulses. This corresponds to the detection of about 1.5 photons per pulse and a noise-equivalent power of 1.3 × 10$^{-16}$ W/Hz$^{1/2}$. To demonstrate potential applications of our system, we perform spectroscopy of water vapor between 1 and 3.7 THz with a spectral resolution of 0.2 THz. Our technique offers a fast and sensitive alternative to current THz spectroscopy techniques and could notably be used in future wireless communication technologies.
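For reference, the upconversion obeys the standard sum-frequency relations (conventional notation, not specific to this work): each detected photon at $\nu_{\mathrm{up}}$ witnesses one THz photon of energy $h\nu_{\mathrm{THz}}$,

$$ \nu_{\mathrm{up}} = \nu_{\mathrm{NIR}} + \nu_{\mathrm{THz}}, \qquad E_{\mathrm{photon}} = h\nu_{\mathrm{THz}}, $$

which is why counting NIR photons amounts to counting THz photons.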
With many regions of the electromagnetic spectrum already allocated for wireless communications in the mobile, satellite, and military sectors, there is a growing need to exploit new frequency regions. The terahertz (THz) band, which lies between the microwave and infrared regions, is a possible route to data-transfer rates of terabits per second (Tbps). For transmission in atmospheric conditions, water vapour molecules attenuate the THz signal in certain frequency regions, primarily due to rotational resonances. There are a few spectral windows with negligible absorption, some allowing signal propagation over several meters and others over several hundreds of meters. The short-distance propagation windows can be used for secure communications in a small area with limited possibilities of eavesdropping; the longer-distance windows can be used for transferring data over relatively long distances in turbulent atmospheres. We study the propagation distance of different spectral bands and investigate their potential for each of the above-mentioned applications. Our study relies on a nonlinear optical technique to achieve sensitive detection of THz signals. We demonstrate a parametric upconversion process allowing all the information contained within a THz signal to be retrieved with a commercial optical detector sensitive to near-infrared light. Our optical configuration combines a monochromator and a single-photon avalanche diode to achieve spectral coverage up to 3 THz with <0.2 THz resolution and unprecedented detection sensitivity. These results pave the way towards 6G wireless communication relying on new spectral bands above 1 THz, enabling higher data-transfer rates and increasing the security of local networks.
Join the Canadian Journal of Physics team (including Editors-in-Chief Robert Mann (UWaterloo) and Marco Merkli (MUN), and Journal Development Specialist Jocelyn Sinclair) to discuss current trends and horizons in academic publishing, including peer review, open science and open access, research integrity, and ethical publishing standards. Open discussion to follow; please pre-purchase lunch (through Congress registration) or bring your own!
At CAP 2023, the Canadian Journal of Physics hosted a discussion around Open Access and its current and future impacts on publishing in physics. You can read a summary of the information presented and the following discussion in the attached document.
Silicon photomultipliers (SiPMs) are single-photon-sensitive light sensors. The excellent radiopurity and high gain of SiPMs, along with a high VUV detection efficiency, make them ideal for low-background photon-counting applications, such as neutrinoless double beta decay and dark matter experiments employing noble-liquid targets. The Light-only Liquid Xenon (LoLX) experiment is an R&D liquid xenon (LXe) detector located at McGill University. LoLX aims to perform detailed characterization of SiPM performance and to characterize light emission and transport in LXe to inform future detectors. During Phase-1 of operations, LoLX employed 96 Hamamatsu VUV4 SiPMs in a cylindrical geometry submerged in LXe. Photons detected by a SiPM trigger an avalanche process in the individual photodiodes within the SiPM. The avalanche produces near-infrared photons that are emitted and can travel across the detector to other SiPMs, where they may produce correlated pulses on other channels, a process known as SiPM external crosstalk (eXT). With the Phase-1 LoLX detector, we performed measurements of SiPM external crosstalk in LXe with a geometric acceptance similar to that of future planned experiments. In this presentation, we will present the measurement of SiPM eXT detection within LoLX, with comparisons to GEANT4 eXT simulations informed by ex-situ measurements of SiPM photon-emission characteristics.
Searches for neutrinoless double beta decay conducted with Xe-136 can be improved by detecting the decay's daughter, the Ba-136 ion. This technique offers complete rejection of the residual radioactive background, but its practical implementation remains challenging. At Carleton University, Ba ion tagging R&D is being conducted using a cryogenic liquid xenon setup. As a proof-of-concept, untargeted ion extraction tests are being carried out in argon gas using radioactive ions captured and extracted using a thin capillary probe into an analysis chamber and then detected using a passivated implanted planar silicon detector. To better understand the experimental results, a Monte Carlo simulation of this process has been developed. This talk will present the design considerations, apparatus and procedures used, as well as discuss and compare the experimental results and simulations.
The Light-only Liquid Xenon (LoLX) experiment at McGill University, in collaboration with TRIUMF, examines liquid xenon (LXe) for its potential in detecting rare physical events using silicon photomultipliers (SiPMs). This research seeks to evaluate the long-term stability of Vacuum Ultraviolet (VUV)-sensitive SiPMs in LXe, understand LXe's optical properties, and develop new methods to separate Cherenkov and scintillation light. Outcomes will set benchmarks for SiPMs in LXe environments and enhance particle identification, aiding future rare-event search experiments, such as nEXO, in achieving higher sensitivity.
LoLX2 is a 4 cm cube composed of two types of SiPMs, Hamamatsu VUV4 and FBK HD3, as well as a VUV-sensitive photomultiplier tube (PMT). In this phase of the study, we compare the performance of these two types of SiPMs to the PMT. The initial data acquisition is currently under analysis and will be discussed in this presentation.
The Milky Way's (MW) most massive satellite, the Large Magellanic Cloud (LMC), has just passed its first pericenter. The presence of the LMC has a considerable impact on the position and velocity distributions of dark matter (DM) particles in the MW. This directly affects the expected DM annihilation rate, especially for velocity-dependent annihilation models, since the LMC may boost the relative DM velocity distribution. I will discuss the impact of the LMC using MW-LMC analogues in the Auriga magneto-hydrodynamical simulations.
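To see why the velocity distribution matters, recall the standard form of the annihilation rate (conventional notation, not specific to this work):

$$ \Gamma \propto \int d^3v_1\, d^3v_2\, f(\vec{v}_1)\, f(\vec{v}_2)\, (\sigma v_{\mathrm{rel}}), \qquad v_{\mathrm{rel}} = |\vec{v}_1 - \vec{v}_2|, $$

so models with $\sigma v_{\mathrm{rel}} \propto v_{\mathrm{rel}}^2$ (p-wave) or $\propto 1/v_{\mathrm{rel}}$ (Sommerfeld-enhanced) respond very differently to an LMC-induced boost of the relative velocities.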
We aim to describe the effect of accelerated frames in cosmology and to identify the origins of thermalization in the evolution of the universe. We begin by discussing general relativity and cosmology, their successes and failures, and the resulting need for quantum cosmology. We then discuss the canonical formulation of general relativity, which is the basis of quantum cosmology, and its issues. We construct a wavefunction for the universe whose dynamics are governed by the Wheeler-DeWitt equation.
Semiclassical approximations introduce simplifying assumptions that bring the equation closer to a form that can be more easily analyzed; the WKB method is used to approximate the wavefunction.
We constructed a transformation similar to the Rindler transformation, motivated by the Klein-Gordon equation in Minkowski spacetime. Performing a Bogoliubov transformation yielded a result suggestive of thermalization, even though we were not using creation and annihilation operators. To interpret this result, we calculated the density matrix and its square to determine whether the WKB state is pure or mixed. The density matrix calculation indicates that the WKB state is mixed, supporting the interpretation of the Bogoliubov-transformation result as thermalization.
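The purity test used here is the standard one:

$$ \gamma \equiv \mathrm{Tr}(\rho^2) \leq 1, $$

with $\gamma = 1$ for a pure state and $\gamma < 1$ for a mixed state.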
We study the classical-quantum (CQ) hybrid dynamics of homogeneous cosmology from a Hamiltonian perspective, where the classical gravitational phase-space variables and the matter state evolve self-consistently with full backreaction. We compare numerically the classical and CQ dynamics for isotropic and anisotropic models, including quantum scalar-field-induced corrections to the Kasner exponents. Our results indicate that full backreaction effects leave traces at late times in cosmological evolution; in particular, the scalar energy density at late times provides a potential contribution to dark energy. We also show that the CQ equations admit exact static solutions for the isotropic universe and for the anisotropic Bianchi IX universe, with the scalar field in a stationary state.
In quantum gravity it is expected that the Big Bang singularity is resolved and the universe undergoes a bounce. We argue that matter-gravity entanglement entropy rises rapidly during the bounce, declines, and then approaches a steady-state value higher than before the bounce. These observations suggest that matter-gravity entanglement is a feature of the macroscopic universe and that there is no second law of entanglement entropy.
Using quantum field theory, we calculate the total effect on the photon flux in the microwave background due to some photons being gravitationally scattered toward us and others being gravitationally scattered away from us. The scattering is produced by density fluctuations, which can be of either sign and act like point masses in an FLRW background. The net effect of having masses of either sign is to give a Debye screening of the graviton.
Loop Quantum Cosmology offers a successful quantization of cosmological models using techniques adapted from Loop Quantum Gravity (LQG). However, the connection with LQG remains unclear, primarily due to the absence of the $SU(2)$ gauge symmetry, which is a fundamental aspect of LQG. We aim to address this issue by demonstrating that the Gauss constraint can always be reformulated into abelian constraints within the cosmological framework, indicating the inherent abelian nature of the model in the minisuperspace.
To overcome this challenge, we propose employing a symmetry reduction approach inspired by Yang-Mills theory. This approach compels us to leave the minisuperspace, but, on the other hand, it allows us to construct a classical cosmological sector for the theory within the LQG framework and provide an analogous quantization.
Since the derivation of a well-defined D→4 limit for 4D Einstein Gauss-Bonnet (4DEGB) gravity coupled to a scalar field, there has been interest in testing it as an alternative to Einstein’s general theory of relativity. Using the Tolman-Oppenheimer-Volkoff equations modified for 4DEGB gravity, we model the stellar structure of quark stars using a novel interacting quark matter equation of state. We find that increasing the Gauss-Bonnet coupling constant α or the interaction parameter λ both tend to increase the mass-radius profiles of quark stars described by this theory, allowing a given central pressure to support larger quark stars in general. These results logically extend to cases where λ<0, in which increasing the magnitude of the interaction effects instead diminishes masses and radii. We also analytically identify a critical central pressure in both regimes, below which no quark star solutions exist due to the pressure function having no roots. Most interestingly, we find that quark stars can exist below the general relativistic Buchdahl bound and Schwarzschild radius R=2M, due to the lack of a mass gap between black holes and compact stars in the 4DEGB theory. Even for small α well within current observational constraints, we find that quark star solutions in this theory can describe extreme compact objects, objects whose radii are smaller than what is allowed by general relativity.
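For orientation, the equations being modified are the standard TOV equations, shown here in their general-relativistic form with $G = c = 1$ (the 4DEGB $\alpha$-dependent corrections are not reproduced here):

$$ \frac{dm}{dr} = 4\pi r^2 \rho, \qquad \frac{dP}{dr} = -\frac{(\rho + P)\left(m + 4\pi r^3 P\right)}{r\left(r - 2m\right)}, $$

integrated outward from a chosen central pressure, with the quark-matter equation of state closing the system.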
Introduction: We have recently demonstrated$^1$ a compressed-sensing (CS)-based undersampling method capable of improving the signal-to-noise ratio and image quality of low-field images. An optimal choice of pulse sequence would reduce undersampling artefacts and improve image quality; in this work, different k-space sampling patterns for the X-Centric$^2$ and Sectoral$^{3,4}$ sequences are investigated at high acceleration factors (AF = 7, 10, 14).
Method: The X-Centric sequence acquires each half of k-space separately, in the readout direction, reducing signal loss from diffusion and relaxation. Both halves normally acquire the same phase-encode lines in k-space (non-alternating), but they can also sample a unique set of lines (alternating). The Sectoral sequence splits a circular area of k-space into sectors (here, 64), and acquires each sector from the centre-out, oversampling the contrast-rich centre. The proposed sampling pattern consists of stopping each sector prematurely, ensuring the undersampling is confined to the edges of k-space.
In-vitro $^1$H MRI was performed at 73.5 mT. Seven sets of 9 images each were acquired with X-Centric: one set per AF for each sampling pattern, and one fully sampled set to be retrospectively undersampled using the proposed Sectoral sampling. The Fourier-transformed (FT) images were compared to the CS-based reconstructions using the structural similarity index (SSI); all images were 128 px$^2$ with FOV = 8 cm$^2$.
Results: The FT images acquired using X-Centric had SSI scores around 35%; however, the FT Sectoral images had an SSI score of 96% and virtually no artefacting, with only slight blurring. The CS reconstructions of all 3 sampling patterns had SSI scores around 87%, with Sectoral exhibiting the fewest artefacts.
Conclusion: Although the CS reconstructions of all 3 proposed sampling patterns had similar SSI scores and artefacting, in line with our previous work, the direct FT images of Sectoral were free of artefacts and comparable to the fully-sampled images even at AF = 14 (only 7% of k-space); the artefacts in the CS images are likely due to over-fitting of the reconstruction parameters. These results suggest that the proposed Sectoral sampling pattern is well suited for accelerated low-field MRI.
References:
1. Perron, S. et al., ISMRM (2022); 2. Ouriadov, A.V. et al., MRM (2017); 3. Khrapitchev, A.A. et al., JMR (2006); 4. Perron, S. et al., JMR (2022).
In this work we present the first low-field TRASE technique capable of encoding 2D axial slices without switched gradients of the main magnetic field (B0). TRASE is an MR imaging technique that utilizes phase gradients within the radiofrequency (RF) fields to achieve k-space encoding. In doing so, TRASE dispenses with much of the B0 gradient hardware, significantly reducing the cost and size of the overall system. The TRASE encoding principle ideally requires two and four different RF phase-gradient fields for 1D encoding and 2D imaging, respectively. Preventing interactions between these RF transmit coils has been the primary challenge, especially for 2D imaging. To address this problem, we constructed a head-sized TRASE coil pair capable of 1D encoding along any transverse axis. By rotating the coil pair, the encoding axis can be changed, allowing a full 2D k-space acquisition in a radial-spoke fashion. This radial TRASE technique requires half as many RF transmit coils and accompanying RF electronics as typical Cartesian TRASE imaging. As a first demonstration of this technique, a head-sized coil pair was constructed and experimentally verified on a uniform 8.65 MHz bi-planar permanent magnet with a constant B0 gradient used for slice selection. Decoupling of the two transmit coils is performed geometrically, and a parallel-transmit (PTx) system is presented as a method to reduce any residual coupling. This work demonstrates that 2D slice-selective imaging is feasible without the use of any switched B0 gradients.
Introduction: Ventilation defects in the lungs (1), characterized by impaired airflow and reduced gas exchange, can arise from various factors such as small airway obstructions, mucus accumulation, and tissue damage (2). Hyperpolarized 129Xe/3He lung MRI is an efficient technique used to investigate and assess pulmonary diseases and ventilation defects (3). Current methods for quantifying these defects rely on semi-automated techniques (4), involving hierarchical K-means clustering (5) for 3He MR images and seeded region-growing algorithms (6) for 1H MR images. Despite their effectiveness, these methods are time-consuming. Deep Learning (DL) has revolutionized medical imaging, particularly in image segmentation (7). While Convolutional Neural Networks (CNNs) like UNet (8) are currently the standard, Vision Transformers (ViTs) (9, 10) have emerged as a compelling alternative. ViTs have excelled in various computer vision tasks (11), owing to their multi-head self-attention mechanisms that capture long-range dependencies with less inductive bias. SegFormer (12), a specific ViT architecture, addresses some of the limitations of CNNs, such as low-resolution output and inability to capture long-range dependencies. It also features a positional-encoding-free design, making it more robust.
The purpose of this study is to explore the efficacy of SegFormer in the automatic segmentation and quantification of ventilation defects in hyperpolarized gas MRI. We aim to demonstrate that SegFormer not only outperforms semi-automated techniques and CNN-based methods in accuracy but also significantly reduces training time.
Methods: We collected data from 56 study participants, comprising 9 healthy individuals, 28 with COPD, 9 with asthma, and 10 with COVID-19. This resulted in 1456 2D slices segmented using MATLAB R2021b and the hierarchical K-means clustering method. The dataset was balanced, with an even distribution of data from each participant group across the training (80%), validation (10%), and testing (10%) sets. The code was implemented in PyTorch and executed on two NVIDIA GeForce RTX 3090 (GA102) GPUs in parallel. Proton and hyperpolarized slices were registered using a landmark-based affine image registration approach.
In our research, we utilized the SegFormer architecture, which incorporates hierarchical decoding for enriched feature representation, employs overlapping patches to enhance boundary recognition, uses an MLP head for pixel-wise segmentation mask creation, and integrates a Bottleneck Transformer to reduce computational demands. SegFormer exploits both coarse and fine-grained features in lung MRI: while coarse features distinguish lung from non-lung tissue, fine-grained ones enable precise boundary identification and early disease feature detection, enhancing overall MRI interpretation.
Results: In this study, the efficacy of SegFormer was assessed with various Mix Transformer (MiT) encoders, with MiT-B0 offering rapid inference and MiT-B2 targeting peak performance. Without pretraining, SegFormer achieved a Dice Similarity Coefficient (DSC) of 0.96 overall, and 0.94 for hyperpolarized gas MRI, on the training dataset. Remarkably, with ImageNet (13) pretraining, SegFormer surpassed its CNN-based counterparts while requiring fewer computational resources.
After ImageNet pretraining, MiT-B2 achieved a training DSC of 0.980 for proton MRI and 0.974 for hyperpolarized gas MRI, with testing scores of 0.975 and 0.965, respectively, using 24 million parameters. MiT-B0 recorded training DSCs of 0.973 (proton MRI) and 0.969 (hyperpolarized gas MRI), with test scores of 0.969 and 0.951. In contrast, the pretrained UNet++ with a VGG-16 (14) backbone reported training DSCs of 0.964 and 0.953, and testing DSCs of 0.955 and 0.942, using 14 million parameters. The pretrained UNet with a ResNet-50 (15) backbone yielded training DSCs of 0.971 and 0.962, and test DSCs of 0.960 and 0.951, using 23 million parameters.
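For reference, the DSC quoted throughout is the standard overlap metric between a predicted mask and the ground truth; a minimal implementation (assuming numpy arrays of equal shape) is:

import numpy as np

def dice(pred, truth, eps=1e-7):
    # Dice similarity coefficient: 2|A∩B| / (|A| + |B|); 1.0 = perfect overlap.
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)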
These findings underscore SegFormer's strength, especially in the MiT-B2 configuration, in segmenting and quantifying ventilation defects in hyperpolarized gas MRI. SegFormer's smaller number of learnable parameters also led to reduced training time relative to the CNN-based models, without compromising performance. DSC results are tabulated in Table 1. Case studies for proton MRI segmentation are shown in Figure 1, hyperpolarized gas MRI cases in Figure 2, and a VDP value comparison across select cases in Figure 3.
Discussion and Conclusion: Our study underscores the effectiveness of SegFormer in hyperpolarized gas MRI for segmenting and quantifying ventilation defects. SegFormer not only outperformed UNet and UNet++ with various backbones in DSC but also excelled in training-time efficiency. SegFormer's implicit understanding of spatial context, without traditional positional encodings, is particularly promising for medical imaging. However, our study is limited to a specific patient cohort, warranting further validation for broader applicability. In conclusion, SegFormer presents a transformative approach for efficient and precise quantification of ventilation defects in hyperpolarized gas MRI. Its superior performance in both accuracy and computational efficiency positions it as a promising tool for broader clinical applications.
References:
1. Altes TA, Powers PL, Knight-Scott J, et al.: Hyperpolarized 3He MR lung ventilation imaging in asthmatics: preliminary findings. J Magn Reson Imaging 2001; 13:378–384.
2. Harris RS, Fujii-Rios H, Winkler T, Musch G, Melo MFV, Venegas JG: Ventilation Defect Formation in Healthy and Asthma Subjects Is Determined by Lung Inflation. PLOS ONE 2012; 7:e53216.
3. Perron S, Ouriadov A: Hyperpolarized 129Xe MRI at low field: Current status and future directions. Journal of Magnetic Resonance 2023; 348:107387.
4. Kirby M, Heydarian M, Svenningsen S, et al.: Hyperpolarized 3He Magnetic Resonance Functional Imaging Semiautomated Segmentation. Academic Radiology 2012; 19:141–152.
5. MacQueen J: Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics. Volume 5.1. University of California Press; 1967:281–298.
6. Adams R, Bischof L: Seeded region growing. IEEE Transactions on Pattern Analysis and Machine Intelligence 1994; 16:641–647.
7. Malhotra P, Gupta S, Koundal D, Zaguia A, Enbeyle W: Deep Neural Networks for Medical Image Segmentation. Journal of Healthcare Engineering 2022; 2022:1–15.
8. Ronneberger O, Fischer P, Brox T: U-Net: Convolutional Networks for Biomedical Image Segmentation. 2015.
9. Dosovitskiy A, Beyer L, Kolesnikov A, et al.: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. 2020.
10. Al-hammuri K, Gebali F, Kanan A, Chelvan IT: Vision transformer architecture and applications in digital health: a tutorial and survey. Vis Comput Ind Biomed Art 2023; 6:14.
11. Thisanke H, Deshan C, Chamith K, Seneviratne S, Vidanaarachchi R, Herath D: Semantic segmentation using Vision Transformers: A survey. Engineering Applications of Artificial Intelligence 2023; 126:106669.
12. Xie E, Wang W, Yu Z, Anandkumar A, Alvarez JM, Luo P: SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. 2021.
13. Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L: ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami, FL: IEEE; 2009:248–255.
14. Simonyan K, Zisserman A: Very Deep Convolutional Networks for Large-Scale Image Recognition. 2015.
15. He K, Zhang X, Ren S, Sun J: Deep Residual Learning for Image Recognition. 2015.
Diffusion magnetic resonance imaging (dMRI) is a method that sensitizes the MR signal to water molecule diffusion, probing tissue on a microstructural level not attainable with traditional MRI techniques. While conventional dMRI has proven useful in many research areas, more advanced techniques are necessary to further characterize tissue microstructure at spatial scales not available with conventional dMRI. Encoding diffusion using an oscillating gradient spin echo (OGSE) sequence increases sensitivity to smaller spatial scales (<10 µm), and diffusional kurtosis imaging (DKI) provides a comprehensive representation of the dMRI signal, increasing sensitivity to microstructure. While combining these techniques may allow for probing cellular length scales with high sensitivity, generating the large b-values (strength of diffusion weighting) required for DKI is challenging when using OGSE, and DKI maps are often confounded by noise. In this work, we present a method that combines an efficient diffusion encoding scheme and a fitting algorithm utilizing spatial regularization to address these challenges and provide robust estimates of DKI parameters. DKI data was acquired in 8 mice on a 9.4 Tesla scanner using an OGSE sequence with b-value shells of 1,000 and 2,500 s/mm2 (each with a 10-direction scheme which maximizes b-value), TE/TR=35.5/15,000 ms, 4 averages. For comparison, in one mouse we acquired the same dataset but using a commonly used 40-direction scheme, TE/TR=52/15,000 ms, no averaging. We compared our implementation of spatial regularization with a commonly used denoising technique in dMRI, Gaussian smoothing on diffusion-weighted images (DWIs) prior to fitting. We show that using the efficient 10-direction scheme results in much higher signal-to-noise ratio in non-DWIs (30.6 vs 11.4) and improved DKI map quality compared to the 40-direction protocol. Spatial regularization was shown to outperform Gaussian smoothing in terms of contrast preservation both qualitatively and quantitatively. The presented method allows for DKI fitting when using OGSE sequences by addressing key challenges when combining their use, and we showed the advantages of the various elements over conventionally used methods. This pipeline will allow for investigation of normal and pathological brain microstructure at cellular and sub-cellular spatial scales with high sensitivity.
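For reference, the per-direction signal model fitted in DKI is the standard kurtosis expansion
$$\ln S(b) = \ln S_{0} - b\,D_{\mathrm{app}} + \tfrac{1}{6}\,b^{2}D_{\mathrm{app}}^{2}K_{\mathrm{app}},$$
where $D_{\mathrm{app}}$ and $K_{\mathrm{app}}$ are the apparent diffusivity and kurtosis. The spatially regularized fit described above can be viewed as adding a roughness penalty on the parameter maps to this per-voxel objective (a generic formulation, not necessarily the authors' exact functional).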
Magnetic particle imaging (MPI) is an emerging tracer-based imaging modality that employs magnetic excitation to detect superparamagnetic iron oxide (SPIO) particles. MPI signal is generated only by SPIO, so there is no signal from tissue. As well, the signal is linearly quantitative with SPIO concentration, so the number of SPIO-labeled cells can be calculated from images. The sensitivity and resolution of MPI depend heavily on the type of SPIO used and the imaging parameters. Lower gradient field strength, higher drive (excitation) field amplitude, and signal averaging are known to increase the MPI signal; however, the degree to which these changes improve SPIO and cellular sensitivity has not been tested experimentally. Our goal was to test the effects of changing various MPI imaging parameters on MPI signal strength and cellular detection limits.
Experiments were performed on a Momentum™ MPI scanner (Magnetic Insight Inc.). SPIO (ProMag) samples were imaged using an advanced user interface that allows editing of pulse sequences to change the parameters. 2D images were acquired to compare 2 gradient field strengths, 2 drive field amplitudes, and signal averaging. Stem cells were labeled by overnight incubation with ProMag and collected to create samples of 100K down to 1K cells. 2D images were acquired to compare the 2 gradient field strengths and the 2 drive field amplitudes. An in vivo pilot experiment was performed in which cell pellets of 50K, 25K, 10K, and 5K cells were injected subcutaneously into the backs of nude mice. MPI was performed using the optimal parameters as determined from the in vitro cell sample experiments.
The mean MPI signal of the SPIO samples was 1.7 times higher using the low gradient field strength compared to the high strength, and 4.2 times higher for the high drive field amplitude compared to low, showing improved sensitivity but also lower resolution. Likewise, a low gradient field strength and a high drive field amplitude produced higher signal from SPIO-labeled cells. The highest cellular sensitivity (1K cells) was achieved using a low gradient field strength and a high drive field amplitude. Signal averaging increased the signal-to-noise ratio by approximately the square root of the number of averages. When using a 12 cm FOV to image the whole mouse, the 50K and 25K cell injections could be clearly visualized but the lower cell numbers were faint, a result of the known dynamic-range limitation in MPI. With a 3D acquisition (35 projections) the 10K and 5K cell injections could also be detected.
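For context, the quoted averaging gain is the standard incoherent-noise scaling $\mathrm{SNR}(N_{\mathrm{avg}}) \approx \sqrt{N_{\mathrm{avg}}}\,\mathrm{SNR}(1)$: for example, four averages roughly double the SNR at four times the scan time.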
To conclude, in this study we showed that MPI imaging parameters can be adjusted to improve cell detection limits in vitro and in vivo. Further improvements to our in vivo detection limit are expected as MPI-tailored SPIOs are developed.
Molecular imaging techniques can be used to track tumour cell proliferation, metastasis, and viability. Tumour cells labelled with superparamagnetic iron oxide (SPIO) and transfected with a luciferase reporter gene can be dually tracked using magnetic particle imaging (MPI) and bioluminescence imaging (BLI). MPI is highly sensitive as signal is generated directly from SPIO. This allows for direct quantification of iron mass and cell number. BLI specifically detects live cells. In this study, we directly compared the cellular detection limits of BLI and MPI in vitro and in vivo for the first time. Murine 4T1 cancer cells were labelled with SPIO and transfected with luciferase. For the in vitro study, cells were serially diluted at a 1:2 ratio from 51,200 to 100 cells. BLI images were acquired until each sample reached peak radiance (20 min scan). MPI images were then acquired using a 2D high sensitivity scan (5.7 T/m gradient strength, 20 mT drive field amplitude, 2 min scan). For samples that could not be detected with 2D MPI, 3D images were acquired (30 min scan). For the in vivo study, 6400 cells were injected subcutaneously on the back of three nude mice. Each mouse was imaged with BLI until peak radiance was reached (30 min scan). Then, each mouse underwent 2D and 3D MPI using the high sensitivity scan mode. In vitro, we detected as few as 100 cells with BLI and as few as 3200 cells with 2D MPI. 3D imaging improved the in vitro MPI detection limit to 800 cells. In vivo, 6400 cells were detected using both modalities. However, tissue attenuation prevented the detection of 6400 cells with BLI when mice were imaged in the supine position. Although BLI detected fewer cells in vitro, MPI sensitivity is expected to improve over time with the development of MPI-tailored SPIO. Future work will aim to further assess the in vivo cellular detection limits of BLI and MPI by using lower cell numbers.
Explosive stellar events, such as X-ray bursts, novae, and supernovae, play a pivotal role in synthesizing the chemical elements observed in our galaxy and on Earth. The field of nuclear astrophysics seeks to unravel the mysteries behind the origin of the chemical elements and understand the underlying nuclear processes governing the evolution of stars. Particularly, the investigation of radiative capture reactions, involving the fusion of hydrogen or helium and subsequent emission of gamma rays, is crucial for the understanding of nucleosynthesis pathways in stellar environments.
Continuous advancements in accelerated rare isotope beam production offer a unique opportunity to replicate and study reactions occurring inside stars in the laboratory. However, many astrophysically significant reactions involve radioactive isotopes, thus presenting challenges for beam production and background reduction. Furthermore, direct measurements of radiative capture cross sections are extremely challenging due to the vanishingly small cross sections in the astrophysically relevant energy regime.
To address these challenges, dedicated facilities have been designed to experimentally determine nuclear reaction rates of interest for nuclear astrophysics using inverse kinematics methods: the DRAGON (Detector of Recoils And Gammas Of Nuclear reactions) recoil separator, TUDA (the TRIUMF-UK Detector Array) for charged-particle detection, and the EMMA (ElectroMagnetic Mass Analyser) recoil mass spectrometer, all situated at the TRIUMF-ISAC Radioactive Ion Beam Facility.
In this contribution I will outline the achievements and latest advances of the nuclear astrophysics program at TRIUMF, and present recent highlights from studies utilizing radioactive and high-intensity stable ion beams. Our findings contribute to a deeper understanding of astrophysical processes and pave the way for future breakthroughs in nuclear astrophysics research.
Neutron star mergers are an ideal environment for rapid neutron captures (the r-process) to take place, leading to the production of neutron-rich nuclei far from the valley of stability. Mergers are one encouraging candidate site for the origin of the abundances of the heaviest elements in our Solar System and beyond. We explored the r-process regime in mergers by testing various mass models, fission yields, and astrophysical conditions, covering three distinct hydrodynamic simulations, some of which make use of more than 1000 tracer particles. We considered elemental abundance ratios involving the key indicators barium, lanthanum, and europium, ultimately aiming to investigate the spread in these ratios that the r-process can accommodate, with current conclusions discussed here. Further, we compared to stellar data for metal-poor stars, drawn from literature results compiled by JINAbase. This work has allowed us to better understand the production of elemental abundances in the universe and to further test the expected bounds of known nucleosynthesis regimes.
The equation of state of ultra-dense matter, which relates microscopic and macroscopic quantities of ultra-dense objects and describes the cores of the most energetic events in the universe, remains incompletely understood, particularly under extreme conditions such as high temperatures (of the order of ~10 MeV). To compute a hydrodynamic simulation of a binary neutron star merger, an equation of state must be chosen, and this choice influences the evolution of the system. For instance, the spectrum of neutrinos emitted during the event, which we can detect on Earth, differs between equations of state. Binary neutron star merger neutrinos therefore carry information about the equation of state of ultra-dense matter; their number as well as the shape of their predicted spectrum can be compared to detections in neutrino observatories. However, binary neutron star mergers are rare, and neutrinos are hard to detect. Rather than focusing on the neutrinos coming from a single event, this study examines the contribution of binary neutron star mergers to the diffuse neutrino background. This comparative analysis between theoretical predictions and observed data will allow us to constrain the equation of state of ultra-dense matter for use in simulations.
Nuclear pairing, i.e., the tendency of nucleons to form pairs, has important consequences for the physics of heavy nuclei and compact stars. While the pairing found in nuclei typically occurs between identical nucleons and in spin-singlet states, exotic spin-triplet and mixed-spin pairing phases have also been hypothesized. In this talk, I will present new investigations confirming the existence of these novel superfluids, even in the face of antagonizing nuclear deformation, in regions that can be experimentally accessible. These results also provide general conclusions on superfluidity in deformed nuclei. These exotic superfluid phases can modify proposed manifestations of pairing in nuclear collisions and have clear experimental signatures in spectroscopic quantities and two-particle transfer direct reaction cross sections.
Measurement and uncertainty are important concepts that show up across a standard physics curriculum, from laboratory instruction to quantum mechanics courses. Little work, however, has examined how students reason about uncertainty beyond the introductory level, and the work that exists has generally focused on a single perspective: students' procedural reasoning about measurements. Our team has developed new ways of looking at students' reasoning about measurement and uncertainty that span these contexts, and also explore students' ideas about sources of uncertainty, predictive reasoning about measurements, and ideas about the existence of "true values". I will present our work exploring the interesting variability in student reasoning across these perspectives, classical and quantum mechanics contexts, and introductory and upper-division students.
Laboratory courses are a fundamental part of physics education, with proficiency in scientific writing being one of their key learning outcomes. While research has been conducted into how to teach this skill in various STEM fields, no such effort has been reported for physics. We attempt to address this by measuring the impact of the Writing-Integrated Teaching (WIT) program on students' self-reported confidence in a variety of skills that characterize scientific writing. This program, pioneered at the University of Toronto, has been successfully implemented in several departments within the Faculty of Arts and Science, most recently including the junior laboratory courses (Practical Physics I & II) at the Department of Physics. The course structures have been adjusted to allow for review and resubmission of the laboratory report, giving students space to practice and improve, alongside the development and compilation of writing resources, teaching assistant training, and a focus on feedback. Initial results of the study show improvement but lead to the conclusion that further work and refinement are needed, especially when it comes to providing feedback and curating the repository of resources.
Schöllkopf and Toennies first demonstrated the existence of the helium dimer by making use of matter-wave interference (Journal of Chemical Physics 104, 1155 (1996)). The concept of a molecule composed of two helium atoms may surprise students, based on their secondary-level chemistry knowledge. The process used by Schöllkopf and Toennies to demonstrate the existence of the helium dimer made use of several physics concepts that are already appreciated by beginner physics learners: specifically, diffraction phenomena and the de Broglie matter-wave relationship. The Heisenberg uncertainty principle can also be used to reason about the controversy regarding the existence of the helium dimer. Our work aims to bridge the experiment carried out by Schöllkopf and Toennies with the physics knowledge already available to students. We also introduce an analogy between the helium atoms and molecules using frequency-doubled light: second-harmonic light has half the wavelength of its fundamental counterpart, much as the helium dimer, with twice the mass of a single atom, has half the de Broglie wavelength when the atoms and molecules travel at the same speed. The van der Waals bond itself is the conduit to presenting the application of the concepts already appreciated by physics learners. Our presentation introduces to other physics educators the video lessons and instructional materials that we have created to strengthen the link between the pedagogy of physics and a specific example from the research literature. Ultimately, this presentation will take listeners on a learning journey similar to that of our target audience of formally educated physics students, and potentially general enthusiasts of physics learning. We hope that this will result in further conversation about "declassifying" interesting physics experiments in a way that can extend physics pedagogy to lifelong learning outside the lecture hall or laboratory classroom.
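The analogy rests on the de Broglie relation $\lambda = h/(mv)$: at a common speed $v$,
$$\lambda_{\mathrm{He}_2} = \frac{h}{(2m_{\mathrm{He}})v} = \tfrac{1}{2}\,\lambda_{\mathrm{He}},$$
mirroring the halving of the wavelength in second-harmonic generation.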
Multiple-choice questions are a common and valuable teaching and evaluation tool in large-enrolment introductory physics classes across North American universities. However, they do not provide students with the opportunity to construct and formulate their own ideas. It is desirable to enrich the student experience with activities that reduce the reliance on multiple-choice questions while providing additional opportunities to collaborate on analyzing more open-ended scenarios, preferably with some real-life content. Case studies were developed for use in the introductory physics courses for science students. The case-study scenarios target important concepts of the introductory physics curriculum and focus on common student misconceptions. A case study based on a real-life scenario can captivate students' imagination and increase engagement with the material. The talk will focus on a case study that explores a real-life example of air resistance: a record-setting jump from the stratosphere completed by the Austrian skydiver Felix Baumgartner on October 14, 2012. Baumgartner fell to Earth from an altitude of 39,045 metres, after reaching that elevation in a helium balloon. He broke the existing world records for the highest "freefall" as well as the highest manned balloon flight. He also became the first person to break the sound barrier in "freefall", reaching a maximum speed of 1,357.6 km/h while moving through the stratosphere. The video recording and the data from the fall (elevation and speed versus time) are available as open-source information. Guided by a series of questions, the students analyze the data set from the event; a simple drag model of the kind students can build is sketched below.
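As one illustration of the physics in play, here is a minimal numerical model of freefall with quadratic drag in an exponential atmosphere; the mass, drag coefficient, and frontal area are illustrative assumptions, not the official figures from the jump.

import math

g, rho0, H = 9.81, 1.225, 8500.0       # gravity (m/s^2), sea-level density (kg/m^3), scale height (m)
m, Cd, A = 118.0, 0.7, 0.9             # jumper mass (kg), drag coefficient, frontal area (m^2): assumed
h, v, t, dt = 39045.0, 0.0, 0.0, 0.01  # altitude (m), speed (m/s), time (s), step (s)
vmax, tmax = 0.0, 0.0
while h > 2500.0:                      # integrate down to roughly parachute-opening altitude
    rho = rho0 * math.exp(-h / H)      # isothermal-atmosphere density profile
    a = g - 0.5 * rho * Cd * A * v * v / m
    v += a * dt
    h -= v * dt
    t += dt
    if v > vmax:
        vmax, tmax = v, t
print(f"peak speed ~ {vmax * 3.6:.0f} km/h, reached ~ {tmax:.0f} s into the fall")

With these stand-in parameters the peak speed comes out of order 1000 km/h; students can tune the drag product Cd·A (e.g., for a head-down posture) to approach the recorded 1,357.6 km/h.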
Reservoir Computing (RC) is a simple and efficient machine learning (ML) framework for learning and forecasting the dynamics of nonlinear systems. Despite RC's remarkable successes—for example, learning chaotic dynamics—much remains unknown about its ability to learn the behavior of complex systems from data (and data alone). In particular, real physical systems typically possess multiple stable states—some desirable, others undesirable. Distinguishing which initial conditions go to "good" vs. "bad" states is a fundamental challenge, with applications as diverse as power-grid resilience, ecosystem management, and cell reprogramming. As such, this problem of basin prediction is a key test that RC and other ML models must pass before they can be trusted as proxies of large, unknown nonlinear systems.
Here, we show that there exist even simple physical systems that leading RC frameworks utterly fail to learn unless key information about the underlying dynamics is already known. First, we show that predicting the fate of a given initial condition using traditional RC models relies critically on sufficient model initialization: one must first "warm up" the model with almost the entire transient trajectory from the real system, by which point forecasts are moot. Accordingly, we turn to Next-Generation Reservoir Computing (NGRC), a recently introduced variant of RC that mitigates this requirement (sketched below). We show that when NGRC models possess the exact nonlinear terms in the original dynamical laws, they can reliably reconstruct intricate and high-dimensional basins of attraction, even with minimal training data (e.g., a single transient trajectory). Yet with any features short of the exact nonlinearities, their predictions can be no better than chance. Our results highlight the challenges faced by data-driven methods in learning the dynamics of multistable physical systems and suggest potential avenues to make these approaches more robust.
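For concreteness, a minimal sketch of the NGRC construction (assuming numpy, a delay depth k, and a quadratic feature library; a real study adds forecasting loops and problem-specific feature choices on top of this):

import numpy as np
from itertools import combinations_with_replacement

def ngrc_features(X, k=2):
    # NGRC feature vector: constant + k time-delayed states + their
    # quadratic monomials. X has shape (T, d), one row per time step.
    T, d = X.shape
    lin = np.hstack([X[k - 1 - j : T - j] for j in range(k)])
    pairs = combinations_with_replacement(range(lin.shape[1]), 2)
    quad = np.stack([lin[:, i] * lin[:, j] for i, j in pairs], axis=1)
    return np.hstack([np.ones((lin.shape[0], 1)), lin, quad])

def fit_readout(X, k=2, ridge=1e-6):
    # Ridge-regress one-step increments onto the features; the linear
    # readout is the only trained part of an NGRC model.
    Phi = ngrc_features(X, k)[:-1]
    Y = X[k:] - X[k - 1 : -1]
    return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]), Phi.T @ Y)

The paper's central point maps directly onto this sketch: basin prediction succeeds only when the monomial library happens to contain the system's true nonlinearities.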
A three-component description of nonlinear body waves in porous media is presented. The processes observed and described here have been patented and applied commercially to oil production and groundwater remediation. It is shown here that even if the correct nonlinear equations are used, three-component wave descriptions of porous media cannot be constructed solely from the equations of motion for the components. This is because of the introduction of the complexity of multiple scales into this nonlinear field theory. Information about the coupling between the components is required to obtain a physical description. It is observed that the fields must be coupled in phase and out of phase, and this result is consistent with the description of three- and n-body gravitational fields in Newtonian gravity and general relativity.
The Korteweg-de Vries (KdV) equation is a useful partial differential equation (PDE) that models the evolution of waves in shallow water with weak dispersion and weak nonlinearity. The Kadomtsev-Petviashvili (KP) equation can be thought of as an extension of KdV to two spatial dimensions: in addition to weak nonlinearity and weak dispersion, it is also weakly two-dimensional. Despite the elegance of these integrable models, finding solutions analytically and numerically, although possible, is still challenging. Recent advances in machine learning, specifically physics-informed neural networks (PINNs), allow us to find solutions in a novel way by using the PDE in the network's loss function to regularize the network parameters. We show how to use PINNs to find soliton solutions to the KdV and KP equations, compare the results to the analytical solutions, and present the hyperparameters used.
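As an illustration of the PINN loss construction (a minimal PyTorch sketch, assuming the common normalization u_t + 6uu_x + u_xxx = 0 and omitting the data/boundary terms a full training loop would add):

import torch

# u(t, x) approximator; inputs are (t, x) pairs, output is the scalar field u.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1))

def kdv_residual(tx):
    # PDE residual u_t + 6 u u_x + u_xxx evaluated with autograd.
    tx = tx.requires_grad_(True)
    u = net(tx)
    du = torch.autograd.grad(u.sum(), tx, create_graph=True)[0]
    u_t, u_x = du[:, 0:1], du[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), tx, create_graph=True)[0][:, 1:2]
    u_xxx = torch.autograd.grad(u_xx.sum(), tx, create_graph=True)[0][:, 1:2]
    return u_t + 6.0 * u * u_x + u_xxx

pts = torch.rand(256, 2)                    # random collocation points in (t, x)
pde_loss = kdv_residual(pts).pow(2).mean()  # combined with data/boundary losses in training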
In a prediction market, traders buy and sell contracts linked to the outcome of real-world events such as "Will Donald Trump be re-elected President on November 5, 2024?". Each contract (share) pays the bearer 1 dollar if the event happens by the given date, and expires worthless (0 dollars) otherwise. Because contracts trade between 0 and 1 dollar, the price at any given time represents investors' aggregate perceived likelihood of a given event's outcome (e.g., 0.63 dollars = a 63% probability). In addition, these prices fluctuate quickly in response to new information – such as revealed scandals, political successes or failures, and economic changes – thereby representing a change in investor opinion. Owing to this probability analog, most prediction-market literature focuses on how accurate these "crowdsourced" assessments are in predicting final outcomes. Yet little attention has been paid to how investor interactions and the flow of information can push the price of a contract toward (or away from) an accurate price.
Here, we use an approach rooted in statistical physics and information theory to analyze statistical trends linked to investor behaviors within prediction markets. We analyze over 4,800 unique contracts from a popular online prediction market – PredictIt – covering a wide range of events, including election outcomes, legislative votes, and career milestones of politicians. Our technique uncovers striking universal patterns not only in contract price and trade-volume fluctuations, but also in where these fluctuations occur in time. Moreover, we find that these universal patterns persist regardless of the heterogeneous nature of our dataset. Our findings suggest that the interactions between investors that give rise to price dynamics in prediction markets can be embedded in a relatively low-dimensional space of variables. This work opens the door to mechanistic modeling of apparently high-dimensional socio-financial systems and offers a new way of analyzing economic data.
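A conventional first step when testing fluctuation statistics for universality (an assumption about such pipelines generally, not a description of the authors' exact analysis) is to rescale each contract's price increments before pooling:

import numpy as np

def normalized_returns(prices):
    # Log price increments rescaled to zero mean and unit variance,
    # so fluctuations from heterogeneous contracts can be compared and pooled.
    r = np.diff(np.log(prices))
    return (r - r.mean()) / r.std()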
Real networked systems are fundamentally vulnerable to attacks that fragment the system via removal of a handful of key nodes, akin to percolation transitions in lattice systems. Though the problem of optimally attacking a network is NP-hard [1], deep reinforcement learning is often able to learn near-optimal solutions to similar problems on disordered topologies (graphs) [2,3]. This raises the question: does there exist a strategy to mitigate such an attack? Here, we address this problem by casting network attack/defense as a two-player, zero-sum game. Specifically, we consider an attacker, who aims to fragment the network---reducing its largest connected component below a specified threshold---with a minimum number of node removals, and a defender, who obfuscates the network by strategically hiding links before the attacker makes its decisions [Figure 1]. In this game, concealed links---which are invisible to the attacker---introduce a novel layer of strategic complexity, potentially providing a way to defend networks against attacks.
In our findings, the defender's strategic concealment consistently increases the complexity and uncertainty of the attacker's task. The more links the defender is allowed to conceal, the more challenging it becomes for the attacker to effectively fragment the network [Figure 1]. At low concealment percentages, the defender's actions can successfully confound the attacker relative to heuristics like random concealment. However, the diminution in attacker performance is sublinear; only when essentially all network structure is hidden does the attacker perform no better than random. Our results suggest that network weaknesses are inferable even with only partial topological information available. These results shed light on defense mechanisms that are (in)effective at maintaining network robustness. In conclusion, our study underscores the vital role of strategic planning in network defense, providing a new perspective on enhancing network resilience to malicious AI-equipped agents.
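To make the fragmentation objective concrete, here is a simple degree-based attacker baseline using networkx (the study's attacker is a trained RL agent and its defender hides links; this sketch shows only the attack objective on a fully visible graph):

import networkx as nx

def greedy_attack(G, threshold=0.1):
    # Remove highest-degree nodes until the largest connected component
    # falls below `threshold` of the original size; returns the number of
    # removals (a heuristic baseline, not the trained RL attacker).
    G = G.copy()
    n0 = G.number_of_nodes()
    removed = 0
    while max(len(c) for c in nx.connected_components(G)) > threshold * n0:
        hub = max(G.degree, key=lambda nd: nd[1])[0]
        G.remove_node(hub)
        removed += 1
    return removed

G = nx.erdos_renyi_graph(200, 0.03, seed=1)
print(greedy_attack(G))  # a defender would hand the attacker a view with some links hidden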
The dynamics of a polymer in solution are affected by hydrodynamics. It has often been assumed that these effects are mostly long range and should therefore be less significant in confined environments. However, there are a growing number of experiments on polymers in micro- and nano-fluidic devices where the hydrodynamic flow field is an essential part of the nonequilibrium dynamics of the system and cannot be ignored. My group has created, and maintains, a package for the open-source molecular dynamics code LAMMPS for simulations of particles in a fluid that includes full hydrodynamics, which we use to study these systems. We demonstrate how the interaction between a polymer and fluid flow in a nano-fluidic device can be used to unfold and stretch out a polymer's configuration. This, in turn, can be exploited to maximize the probability of single-file translocation. In contrast, in a different configuration, we show how the flow around a pushed polymer can result in a compacted configuration and coexistence between a jammed and unjammed state for a long polymer.
Phytoglycogen (PG) is a glucose-based polymer with a dendritic architecture that is extracted from sweet corn as a soft, compact, monodisperse, 22 nm radius nanoparticle. Our recent model for a PG particle in solvent (water), based on dynamical self-consistent field theory (dSCFT), was successful in producing a dendrimer with a core-chain morphology, radius, and hydration, in close agreement with observations [1]. However, this model assumed, for simplicity, that the solvent distribution around the particle was spherically symmetric. This prevented us from studying heterogeneous structures on the particle surface. In this talk, we extend our dSCFT model, and consider a fully three-dimensional solvent distribution. We compare the new predictions for the morphology, radius, and hydration of PG to our earlier results. Motivated by experimental investigations of chemically modified versions of PG, we discuss preliminary results for the surface structures produced by the association of small, hydrophobic molecules with PG.
[1]: Morling, B.; Luyben, S.; Dutcher, J. R.; Wickham, R. A. Efficient modeling of high-generation dendrimers in solution using dynamical self-consistent field theory (submitted).
Computer simulations are used to characterize the entropic force of one or more polymers tethered to the tip of a hard conical object that interact with a nearby hard flat surface. Pruned-enriched Rosenbluth method (PERM) Monte Carlo simulations are used to calculate the variation of the conformational free energy, $F$, of a hard-sphere polymer with respect to the cone-tip-to-surface distance, $h$, from which the variation of the entropic force, $f\equiv |dF/dh|$, with $h$ is determined. We consider the following cases: (1) a single freely-jointed tethered chain, (2) a single semiflexible tethered chain, and (3) several freely-jointed chains of equal length, each tethered to the cone tip. The simulation results are used to test the validity of a prediction by Maghrebi {\it et al.} (EPL {\bf 96}, 66002 (2011); Phys. Rev. E {\bf 86}, 061801 (2012)) that $f\propto (\gamma_\infty-\gamma_0) h^{-1}$, where $\gamma_0$ and $\gamma_\infty$ are universal scaling exponents for the partition function of the tethered polymer for $h=0$ and $h=\infty$, respectively. The measured functions $f(h)$ are generally consistent with the predictions, with small quantitative discrepancies arising from the approximations employed in the theory. In the case of multiple tethered polymers, the entropic force per polymer is roughly constant, which is qualitatively inconsistent with the predictions.
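One way to see the form of the tested prediction: if the tethered polymer's partition function crosses over between its limiting scaling behaviours as $Z(h)\propto h^{\gamma_\infty-\gamma_0}$ (an assumption consistent with the exponents defined above), then
$$F(h) = -k_{B}T\ln Z(h) \quad\Rightarrow\quad f = \left|\frac{dF}{dh}\right| = \left(\gamma_\infty-\gamma_0\right)\frac{k_{B}T}{h}.$$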
The study of organic solar cells is intriguing from a fundamental point of view because of the very short lifetime of excitons (strongly correlated electron-hole pairs) in these devices. While the origin of the short exciton lifetime is still an open scientific problem, it is now apparent that it is linked to strong electron-phonon coupling, which in turn depends on the dielectric function of the excitonic environment in these devices. The photoactive layers of organic solar cells are made of polymers, small organic molecules, or their combination. To date, photoconversion efficiencies (PCEs) approaching 20% have been reported for binary organic photovoltaics by modulating the exciton recombination processes, which allows for enhanced electron-hole separation. Here we show that tunable electron transfer is possible between poly[2-(3-thienyl)ethyloxy-4-butylsulfonate]-sodium (PTEBS, a water-soluble organic polymer) and bathocuproine (BCP), a small organic molecule. We demonstrate PTEBS:BCP electron transfer through quenching of the photoluminescence of PTEBS in the presence of BCP in aqueous acidic solutions, and in thin films fabricated from these solutions. As UV-visible spectroscopy shows only moderate changes of the optical band gap of PTEBS with the pH of the starting solution, the dramatic change in PTEBS:BCP electron transfer when the pH of the solutions changes from basic to acidic is assigned to the increase of the exciton Bohr radius at lower pH (of 4 or more). We corroborated this effect by direct measurements of the dielectric constant of PTEBS, which is shown to decrease with increasing pH, while electron spin resonance (ESR) measurements on PTEBS show increasing free-carrier concentrations in the polymer chain. All of these data have been used to design organic solar cells with PTEBS:BCP as the active layer, C60-fullerene as the electron transport layer, and nickel oxide as the hole-blocking layer, with energy levels matching, respectively, the conduction and valence bands of PTEBS. The photoconversion efficiency is about 2.8% for PTEBS:BCP active layers prepared from acidic water solutions, while dropping to significantly lower values (PCE < 0.5%) for basic solutions. Our study therefore presents among the best organic photovoltaics obtained to date from water-based polymer solutions, and highlights the importance of the dielectric environment and exciton dissociation at the donor-acceptor interface in designing high-quality organic solar cells.
Hyperspectral infrared (IR) images contain a large amount of spatially resolved information about the chemical composition of a sample. However, the analysis of hyperspectral IR imaging data for complex heterogeneous systems can be challenging because of the spectroscopic and spatial complexity of the data. We implement a deep generative modeling approach using a β-variational autoencoder to learn disentangled representations of the generative factors of variance in our large data set of IR spectra collected on crosslinked polyethylene (PEX-a) pipe. We identify three distinct factors of aging and degradation learned by the model and apply the trained model to high-resolution hyperspectral IR images of cross-sectional slices of unused virgin, used in-service, and cracked PEX-a pipe. By mapping the learned representations of aging and degradation to the IR images, we extract detailed information on the physical and chemical changes that occur during aging, degradation, and cracking in PEX-a pipe. This study shows how representation learning by deep generative modeling can significantly enhance the analysis of high-resolution IR images of complex heterogeneous samples.
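For context, the β-VAE objective underlying the disentangled-representation learning described above takes the standard form below (a PyTorch sketch; the encoder/decoder for the IR spectra and the value of β used in the study are omitted, and β = 4 is purely illustrative):

import torch

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    # Reconstruction error plus beta-weighted KL divergence of the
    # approximate posterior N(mu, exp(logvar)) from the unit-Gaussian prior;
    # beta > 1 pressures the latent factors toward disentanglement.
    recon = torch.nn.functional.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl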
We report an improved variational upper bound for the ground-state energy of H$^-$ using Hylleraas-like wave functions in the form of a triple basis set having three distinct distance scales. The extended-precision DQFUN package of Bailey, allowing for 70-decimal-digit arithmetic, is implemented to retain sufficient precision. Our result improves upon the previous record [1], indicating that the Hylleraas triple basis set exhibits convergence comparable to the widely used pseudorandom all-exponential basis sets, while its numerical stability against roundoff error is much better. It is argued that the three distance scales have a clear physical interpretation. The new variational bound for infinite nuclear mass is -0.527 751 016 544 377 196 590 814 478 a.u. [2]. New variational bounds are also presented for the finite-mass cases of the hydrogen, deuterium, and tritium negative ions H$^-$, D$^-$ and T$^-$, including an interpolation formula for the mass polarization term.
[1] A. M. Frolov, Eur. Phys. J. D 69, 132 (2015).
[2] E. M. R. Petrimoulx, A. T. Bondy, E. A. Ene, Lamies A. Sati, and G. W. F. Drake, Can. J. Phys., in press (2024).
In this work, we study Bragg scattering in a metallic nanohybrid made of an ensemble of metallic nanorods doped into a substrate; the substrate can be any suitable gas, liquid, or solid. A theory was developed to describe the relation between the external incident laser intensity and the Bragg-scattered light intensity. When the external laser is applied to the metallic nanohybrid, photons from the laser interact with the surface polaritons in the nanorods and produce surface plasmon polaritons (SPPs). At the same time, the incident photons induce dipoles in the ensemble of nanorods, so the nanorods interact with each other via dipole-dipole interactions (DDI). The theory uses the coupled-mode formalism based on Maxwell's equations in the presence of the SPP and DDI fields, and analytical expressions for the SPP/DDI coupling constants were obtained in a manner similar to [1]. It is found that the intensity of Bragg scattering depends on the susceptibility induced by the SPP and DDI fields. The susceptibility was calculated by the quantum mechanical density matrix method [2]. Combining these methods yields an analytical expression for the Bragg scattering intensity as a function of incident laser intensity. The theory was then compared with experimental data for a nanohybrid made by doping Au nanorods into water, and decent agreement between the theoretical model and the experimental data was observed. Several numerical simulations were performed to investigate the effects of SPP/DDI coupling, laser detuning, and the phase factor, and the model was used to predict the Bragg intensity for different parameters. The Bragg scattering intensity was found to be enhanced at higher SPP/DDI coupling constants; this enhancement can be interpreted through the extra coupling mechanism of the SPP and DDI polaritons with acoustic phonons. In addition, the peaks of the Bragg scattering intensity can split into many peaks due to the SPP/DDI coupling and the phase constant; this splitting can be explained by the Bragg factor in the theory. In conclusion, the enhancement effect can be used to fabricate new nano-sensors, and the splitting effect can be used to design new nano-switches in which the peaks serve as the ON position.
References:
[1] Singh, M.R. and Black, K., J. Phys. Chem. C 122, 26584-26591 (2018).
[2] Singh, M.R., Electronic, Photonic, Polaritonic and Plasmonic Materials, Wiley Custom, Toronto (2014).
One of the major discoveries resulting from the invention of the laser was the existence of nonlinear optical processes: phenomena described only by nonlinear dependencies of a material's electric polarization on the electric field of incident light. Two of these processes are second harmonic generation (SHG) and third harmonic generation (THG), which are frequency-doubling and frequency-tripling processes, respectively. Metallic nanoparticles (MNPs) are a promising host for these effects, as they exhibit surface plasmon resonance, which can enhance the harmonic-generation signals. In this project, we develop a theory for SHG and THG in nanohybrids of gold, aluminum, and copper sulfide MNPs. We utilize a semi-classical theory in which the coupled-mode formalism of Maxwell's equations is used to describe the input and output light and the quantum mechanical density matrix formulation is used to calculate the nonlinear susceptibilities of the material. This theory agrees with recent experiments. Furthermore, a hybrid system including quantum dots is considered, where the harmonic-generation signals are further enhanced by the dipole-dipole interaction between the MNPs and quantum dots. The enhanced harmonic generation in MNPs allows for a wide array of potential applications spanning several areas of science and technology, including photothermal cancer treatments in nanomedicine.
Stimulated Raman spectroscopy in the femtosecond (1 fs = 1$\times 10^{-15}$~s) regime provides a versatile route to measuring the dynamics of molecules on the timescale at which they occur. A tunable and broadband probe pulse allows for detecting molecular signatures across a wide range of energies (frequencies). We develop a novel method for generating the probe pulse that results in the broadest and most tunable probe pulse reported to date.
Four-wave mixing (FWM) occurs when two pump photons ($\omega_p$) amplify a signal photon ($\omega_s$) to create an idler ($\omega_i$): $\omega_p+\omega_p=\omega_s+\omega_i$. We show that at high intensities, FWM can be extended to include the nonlinear response of the gain medium. We exploit the $\chi^{(3)}$ (Kerr) nonlinearity of materials to amplify broad spectra. We use the resulting amplified spectrum as the probe pulse for stimulated Raman scattering. The benefits of this approach are twofold. First, there is an inherent tunability of the amplified spectrum, defined by the phase-matching condition. Second, we generate Raman frequencies that span the terahertz, fingerprint, and OH-stretching regimes in a single shot.
We demonstrate the usefulness of our method by measuring the methyl stretching mode of 1-decanol, shown in Fig. 1.
Nipun Vats, Assistant Deputy Minister (ADM) of the Science and Research Sector at ISED, will discuss the various ways in which policy intersects with science and the role different factors play in helping to shape science policy discourse within the federal government.
Current MPP for Kingston and the Islands, former MP, and former party leadership candidate, Ted Hsu, answers your questions about what to do, and what not to do, in order to get the attention of elected officials.
Liquid scintillators are a commonly used detection medium for particle and rare-event search detectors. The vessels containing the liquid scintillator are often made of transparent acrylic. In the case of a UV-emitting scintillator, the acrylic can be coated with a wavelength shifter like 1,1,4,4-tetraphenyl-1,3-butadiene (TPB) to make the scintillation light observable. Another coating of particular interest is Clevios, a conductive material that is optically transparent in thin films. The high conductivity of Clevios makes it a useful material for transparent electrodes in Time Projection Chambers (TPCs). Additionally, the optical transparency of the material allows scintillation light to pass through, making Clevios a good candidate for dual-phase detectors.
Materials used in the construction of the detector can emit fluorescence or scintillation light that can produce higher background signals and modify the pulse shape of events. The fluorescence properties of Clevios have been studied as a function of temperature and compared to the known fluorescence of acrylic and TPB. I will present the experimental methodology and the results of this study.
Radon is one of the most troublesome backgrounds in dark matter and neutrino detectors. Nitrogen is commonly used in cover gas systems at SNOLAB, such as in the SNO+ detector. To determine the concentration of radon in these systems, a method of extraction and counting has been developed using radon traps at cryogenic temperatures. I present our methodology and the progress made on understanding the efficiency of an activated charcoal trap at high gas flow rates and with varying extraction parameters.
High-purity germanium detectors are used in the search for rare events such as neutrinoless double-beta decay, dark matter and other beyond Standard Model physics. Due to the infrequent occurrence of signal events, extraordinary measures are taken to reduce background interactions and extract the most information from data. An efficient signal denoising algorithm can improve measurements of pulse shape characteristics, resulting in better energy resolution, background rejection and event classification. It can also help identify low-energy events where the signal-to-noise ratio is small.
In this work, we demonstrate the application of a Cycle Generative Adversarial Network (CycleGAN) with deep convolutional autoencoders to remove electronic noise from high-purity germanium p-type point-contact detector signals. Building on the success of denoising with a convolutional autoencoder, we show that CycleGAN applied to autoencoders allows for more realistic model training conditions. This includes training with unpaired simulated and real data, as well as training with only real detector data, without the need for simulation.
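A minimal sketch of the cycle-consistency term at the heart of CycleGAN training (PyTorch; the adversarial discriminator losses and the autoencoder architectures are omitted, and the names G and F are illustrative):

import torch

def cycle_loss(G, F, noisy, clean, lam=10.0):
    # G: noisy -> clean and F: clean -> noisy generators. Translating a
    # pulse through both should return the original; batches are unpaired,
    # which is what allows training on real data without simulation.
    l1 = torch.nn.functional.l1_loss
    return lam * (l1(F(G(noisy)), noisy) + l1(G(F(clean)), clean))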
Aerogel threshold Cherenkov counters have been developed to identify pions and muons in the range of 240-980 MeV/c for the T9 beam test facility at the CERN PS East Hall. These counters are part of the Water Cherenkov Test Experiment (WCTE) particle identification system. The WCTE is a test-beam experiment to test the design and capabilities of the photosensor system under development for the Hyper-Kamiokande Intermediate Water Cherenkov Detector. In this talk, I will cover the WCTE goals, the T9 beam monitor system, and particle identification, with a focus on aerogel threshold Cherenkov counters. Results obtained from a beam test using prototypes of the T9 beam monitor system in the summer of 2023 will also be presented.
The High Energy Light Isotope eXperiment (HELIX), a multistage balloon-borne detector, aims to measure the composition of light cosmic-ray isotopes up to 10 GeV/n. One of the primary scientific objectives of HELIX is to study the propagation of cosmic rays in our galaxy by measuring the ratio of $^{10}$Be and $^9$Be fluxes. The detector's first stage, which will measure cosmic rays with energies up to 3 GeV/n, is scheduled to launch in the summer of 2024 from Kiruna, Sweden. To obtain information about the isotopic composition, the detector must measure particle properties such as mass, energy, charge, and velocity with high precision.
For particles exceeding 1 GeV/n, HELIX will utilize a Ring Imaging Cherenkov (RICH) detector to measure the velocity of incident particles. The RICH detector employs 10 cm × 10 cm × 1 cm aerogel tiles with a refractive index of 1.15 as a radiator. To distinguish between the beryllium isotopes, a 2.5% mass resolution is required. This requirement mandates a comprehensive understanding of the refractive index as a function of position on the aerogel tile.
This presentation proposes a novel method to measure the refractive index of aerogel tiles based on Optical Coherence Tomography (OCT). The OCT method uses an interferometer to obtain micrometer-level depth resolution. In this talk, I will present the results of measuring the refractive index of aerogel with the OCT method.
The TRIUMF Ultracold Advanced Neutron (TUCAN) collaboration is building a surface-coating facility at the University of Winnipeg. The primary purpose of this facility is to prepare ultracold-neutron (UCN) guides to transport UCNs from the TUCAN source to the TUCAN Electric Dipole Moment (EDM) experiment. UCN losses during transport can be minimized by the application of special coatings. The facility specializes in providing diamond-like carbon (DLC) coatings on the inside of tubes using a high-power excimer laser and a custom vacuum-deposition chamber. This facility provided DLC-coated UCN guides for the LANL UCNA experiment in the 2000s and was moved from Virginia Tech to Winnipeg in June 2023. The first DLC guide samples are expected to be made in the spring of 2024, after which coating properties will be assessed using various surface-science tools. This talk will discuss the progress of the facility setup and the surface-science results for the coated samples.
We adapt a machine-learning approach to study the many-body localization transition in interacting fermionic systems on disordered 1D and 2D lattices. We perform supervised training of convolutional neural networks (CNNs) using labelled many-body wavefunctions at weak and strong disorder. In these limits, the average validation accuracy of the trained CNNs exceeds 99.95%. We use the disorder-averaged predictions of the CNNs to generate energy-resolved phase diagrams, which exhibit many-body mobility edges. We provide finite-size estimates of the critical disorder strengths at $W_c\sim2.8$ and $9.8$ for 1D and 2D systems of 16 sites respectively. Our results agree with the analysis of energy-level statistics and inverse participation ratio. By examining the convolutional layer, we unveil its feature extraction mechanism which highlights the pronounced peaks in localized many-body wavefunctions while rendering delocalized wavefunctions nearly featureless.
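To illustrate the supervised classification setup (a minimal PyTorch sketch; the layer sizes, input length, and two-class labelling are illustrative assumptions, not the paper's exact architecture):

import torch

# Small 1D CNN that labels a wavefunction-derived input vector as
# weak-disorder (delocalized) or strong-disorder (localized).
model = torch.nn.Sequential(
    torch.nn.Conv1d(1, 16, kernel_size=5, padding=2), torch.nn.ReLU(),
    torch.nn.MaxPool1d(4),
    torch.nn.Conv1d(16, 32, kernel_size=5, padding=2), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool1d(1), torch.nn.Flatten(),
    torch.nn.Linear(32, 2))

x = torch.rand(8, 1, 4096)              # batch of |psi|^2 vectors (length illustrative)
phase_probs = torch.softmax(model(x), dim=1)

Disorder-averaging such per-wavefunction predictions over energy windows is what produces the energy-resolved phase diagrams described above.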
The one-body density matrix (ODM) for a d-dimensional non-interacting Fermi gas can be approximately obtained in the semiclassical regime through different $\hbar$-expansion techniques. One would expect any method of approximating the ODM to yield equivalent density matrices which are both Hermitian and idempotent to any order in $\hbar$. The method of Grammaticos and Voros does ensure these properties at any order in $\hbar$. Meanwhile, the Kirzhnits and Wigner-Kirkwood methods do not yield these properties when truncated, which would suggest that these methods provide non-physical ODMs. Here we show explicitly, in arbitrary dimension $d\geq1$ and through an appropriate change to symmetric coordinates, that the methods are not only identical but also Hermitian and idempotent. This change of variables resolves the inconsistencies between the various methods discussed in previous literature. We show that the non-Hermitian and non-idempotent behaviour of the Kirzhnits and Wigner-Kirkwood methods is an artifact of performing a non-symmetric truncation of the semiclassical $\hbar$-expansions.
The Triamond lattice is the only maximally isotropic lattice in which three links meet at each vertex, and for technical reasons it provides an elegant bookkeeping method for quantum field theories on a lattice. Considering that, until now, most researchers have not attempted to simulate Hamiltonians in three spatial dimensions, this work is an important step toward large-scale simulation on quantum computers. Specifically, we studied the geometry of the Triamond lattice, derived its Hamiltonian, and calculated the ground state of the unit cell of this lattice by imposing periodic boundary conditions on each face of the unit cell.
Analyzing the long-term behaviour of solutions to a model gives insight into the physical relevance and numerical stability of the solutions. In our work, we consider the formulation presented by Blyth and Părău (2019), in which they derive the water-wave problem exclusively in terms of the free boundary of a cylindrical geometry, and use it to solve for periodic travelling waves on the surface of a ferrofluid jet. We use this formulation to compute travelling waves in various parameter regimes and analyze their stability using the Fourier-Floquet-Hill method, presenting both our methodology and the numerical stability results of the solutions. This stability analysis technique generalizes to a wide range of physically motivated problems, making it a useful method for analyzing the viability of models.
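To give a flavour of the Fourier-Floquet-Hill machinery on a toy problem (a Hill-type operator with a periodic coefficient, not the ferrofluid-jet linearization itself), one expands in Fourier modes and sweeps the Floquet exponent across the first Brillouin zone; all parameters below are illustrative assumptions.

```python
import numpy as np

def hill_spectrum(q, mu, n_modes=32):
    """Eigenvalues of L = -d^2/dx^2 + 2q cos(2x) for Floquet exponent mu,
    in the truncated Fourier basis e^{i(mu + 2n)x}, |n| <= n_modes."""
    n = np.arange(-n_modes, n_modes + 1)
    A = np.diag((mu + 2.0 * n) ** 2)          # kinetic part, diagonal in Fourier space
    off = q * np.ones(2 * n_modes)            # 2q cos(2x) couples c_n to c_{n +/- 1}
    A = A + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(A)

# Sweep the Floquet exponent and collect the lowest bands; the resulting
# band/gap structure is the kind of spectral information a
# Fourier-Floquet-Hill stability computation is built on.
mus = np.linspace(-1.0, 1.0, 101)
bands = np.array([hill_spectrum(q=1.0, mu=mu)[:4] for mu in mus])
print("band minima:", bands.min(axis=0))
print("band maxima:", bands.max(axis=0))
```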
Dirac crystals are zero-bandgap semiconductors in which the valence and conduction bands are linear in the crystal momentum (and therefore non-dispersive) in the proximity of the Fermi level at the Brillouin zone boundary. They are therefore the quantum-material analogue of the light cone in special relativity. Understanding a number of properties of 2D Dirac crystals (including their electron-related lattice thermal conductivity) demands models that treat the interaction between valence electrons and acoustic phonons beyond perturbation theory in these strongly correlated quantum systems. It is commonly assumed that the exceptionally high thermal conductivity of two-dimensional (2D) Dirac crystals is due to nearly ideal phonon gases. Therefore, electron-phonon collisions, when present, may control the thermal transport. Nonetheless, their accurate description beyond first-order collisions has seldom been undertaken. The Fermi level, and therefore the concentration of conduction electrons in 2D Dirac crystals, can be tuned by many forms of doping, which also controls the acoustic-phonon scattering rate by electrons.
Here, we use a modified formulation of the Lindhard model for electron screening by phonons in strongly correlated systems to demonstrate that a proportional relationship exists between electron-lattice thermal conductivity and the phonon scattering rate, for bands of electrons and phonons that are linear in the crystal momentum. Furthermore, although the phonon scattering rate is usually calculated in the literature only at first order (i.e., with EP-E and E-EP processes consisting of 2 electrons and 1 phonon), we present an accurate expression for the phonon scattering rate and the electron-phonon interaction calculated at higher order, where electron-in/phonon-in, electron-out/phonon-out (EP-EP) processes are also considered. We show that, even at temperatures as low as 300 K, EP-EP processes become critical to the accurate determination of the phonon scattering rates and, therefore, the electron-lattice thermal transport. Collectively, our work points to the necessity of an accurate description of the electron-phonon interaction to comprehensively understand the electron-related lattice properties of strongly correlated 2D Dirac crystals.
Rigorous derivations of the approach of individual elements of large isolated systems to a state of thermal equilibrium, starting from arbitrary initial states, are exceedingly rare. We demonstrate how, through a mechanism of repeated scattering, an approach to equilibrium of this type actually occurs in a specific quantum system.
In particular, we consider an optical mode passing through a reservoir composed of a large number of sequentially-encountered modes of the same frequency, each of which it interacts with through a beam splitter. We analyze the dependence of the asymptotic state of this mode on the assumed stationary common initial state of the reservoir modes and on the transmittance τ = cos λ of the beam splitters. These results allow us to establish that at small λ such a mode will, starting from an arbitrary initial system state, approach a state of thermal equilibrium even when the reservoir modes are not themselves initially thermalized.
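A minimal numerical sketch of this repeated-scattering mechanism, tracking only the mean photon number: with transmittance τ = cos λ, each encounter with a fresh reservoir mode updates ⟨n⟩ → τ²⟨n⟩ + (1 − τ²)⟨n_res⟩, assuming (as an illustration only) that each reservoir mode is uncorrelated with the system and carries no mean field.

```python
import numpy as np

# Mean photon number of the system mode under repeated beam-splitter
# interactions with fresh, uncorrelated, zero-mean-field reservoir modes.
lam = 0.1                     # beam-splitter angle; transmittance tau = cos(lam)
tau2 = np.cos(lam) ** 2
n_sys, n_res = 5.0, 0.5       # illustrative initial occupations
history = [n_sys]
for _ in range(500):
    n_sys = tau2 * n_sys + (1.0 - tau2) * n_res
    history.append(n_sys)
print(history[0], history[-1])   # relaxes toward the reservoir occupation n_res
```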
Magnetotactic bacteria are ubiquitous motile single-cell organisms that biomineralize magnetic nanoparticles, allowing them to align with the Earth’s magnetic field and navigate their aquatic habitats. We are interested in the swimming mechanism of one particular type of magnetotactic bacterium, Magnetospirillum magneticum, which has a helical body and uses two helical flagella to move up and down magnetic field lines. We take advantage of both the helical shape of the cells and the possibility of aligning them with magnetic fields to precisely measure their translational and rotational motions from phase-microscopy images. From these we determine the translational and rotational friction coefficients of these micron-sized chiral particles, and in turn calculate the propulsion forces exerted by the body and the flagella of the cell. Our results suggest that for this bacterial species cell body rotation significantly contributes to cellular propulsion.
We examine the kinetic process of an anionic A-block from an ABA triblock copolymer hopping between the solvophilic, cationic A-domains of an ABA triblock copolymer membrane. One motivation is to use this toy model to provide insight into the nature of a rapid, charge-mediated reconstitution mechanism observed for anionic membrane proteins reconstituted into cationic ABA triblock copolymer membranes. We use dynamic self-consistent field theory (dSCFT) to efficiently simulate this interacting, many-chain system, and we introduce screened electrostatics by coupling the Poisson-Boltzmann equation into the dSCFT equations. We equilibrate membranes by imposing the condition of isotropic stress and find that, under this condition, the area per A-block is an increasing function of the charge on the A-block. dSCFT enables us to track the position of each polymer bead, and to observe rapid hopping events as the anionic A-block traverses the solvophobic membrane mid-block. By measuring many such events, we create a probability distribution for the time interval between hops. We will present results for the behaviour of this distribution as we change the charge per A-block, and the charge asymmetry between A-blocks. Our results could suggest whether it is direct charge interactions, or indirect effects like softening of the membrane, that are mainly responsible for modifications to the free-energy barrier to A-blocks hopping across the membrane.
Solid-state nanopore sensors continue to hold great potential in addressing the increasing worldwide need for genome sequencing. However, the formation and translocation of folded conformations known as hairpins pose readability and accuracy challenges. In this work, we investigate the impact of applying a pressure-driven fluid flow and an opposing electrostatic force as an approach to increase the single-file capture probability. By optimizing the balance between the forces, we show that single-file capture can be increased to almost 95%. We identify two mechanisms responsible for the increase in the single-file capture probability.
Introduction: Endothelial cells (ECs) form the innermost lining of blood vessels and can sense and respond, via mechanotransduction, to local changes in wall shear stress (WSS) imposed by blood flow. Blood flow through a vessel can become disturbed when passing through bifurcations or plaque-burdened regions, which disrupts the direction and magnitude of WSS experienced by cells. ECs in these regions show activation of pro-inflammatory phenotypes, manifesting in the development and progression of atherosclerosis. The earliest cell responses to these flow disturbances – particularly the mechanisms by which ECs sense and respond to variations in direction and magnitude of WSS – are not well understood. Excessive increases in reactive oxygen species (ROS) generation within endothelial cells are an early indicator of a disruption of homeostasis and are thought to accelerate the progression of vascular diseases such as atherosclerosis and diabetes. It is hypothesized that ECs will exhibit indications of oxidative stress and damage within minutes of being exposed to WSS disturbances.
Methods: A novel microfluidic device has been designed and fabricated (from polydimethylsiloxane Sylgard-184) for recapitulating the various forms of WSS observed in regions of disturbed flow within the vasculature. It consists of a small channel for fluid to pass over cultured ECs with two opposing jets to create varying levels of bi-directional and multi-directional WSS scrubbing. ECs cultured in this device are grown to confluence and loaded with a ROS dye (5 μM CM-H2DCFDA). Cells are imaged with a confocal inverted microscope (Nikon Ti2-E) while applying disturbed-flow WSS.
Results: Within 30 minutes of being exposed to disturbed flow, ECs exhibited 65% signal increases in ROS, with detectable changes beginning at just 10 minutes. Notably, a differential response was seen for different types of WSS scrubbing, where regions with higher magnitude mean stress and more multidirectional WSS patterns correlated with larger increases in ROS generation.
Conclusion: The results of this experiment will contribute to the understanding of the differential response of endothelial cells to different forms of WSS. The characterization of EC responses to varying flow patterns is essential in strengthening the link between blood flow dynamics and atherosclerotic development.
The microcirculation serves to deliver oxygen (O2) to tissue as red blood cells (RBCs) pass through the body’s smallest blood vessels, capillaries. Imaging techniques can quantify the O2 present in capillaries but lack effective modalities for quantifying the O2 entering tissue from capillaries. Thus, mathematical simulation has been used to investigate how O2 is distributed locally over a range of metabolic demands, and to investigate mechanisms regulating capillary blood flow to meet such metabolic tissue O2 demands. Being present throughout the microcirculation, RBCs have been hypothesized as potential candidates for initiating signals at the capillary level that are transmitted upstream to arterioles, thereby altering capillary blood flow. It has been found that RBC deformation, as well as oxyhemoglobin desaturation, can cause release of adenosine triphosphate (ATP). It has been theorized that as RBCs deform with local blood flow, released ATP modulates upstream vessel diameter, but a model is required to investigate this systematically. At baseline, RBCs possess unique shapes formed from a balance between the phospholipid bilayer membrane’s surface tension, the surrounding fluid osmolarity, and the curvature-dependent Canham-Helfrich-Evans (CHE) energy. To investigate how RBCs deform under blood flow stresses, a novel algorithm for red blood cell (RBC) equilibrium geometry was developed as the first step of a quantitative model for RBC ATP release. This condensed-matter-theory model relied on the developing coordinate-invariant computational framework of discrete exterior calculus (DEC). Using this algorithm, several RBC geometries were obtained at different surface tension (area) and osmolarity (volume) constraints. In what is, to our knowledge, a first in the literature, the algorithm was expressed as an implicit system and utilized a Lie-derivative-based vertex-drift method to ensure the RBC meshes were well behaved throughout deformation. The algorithm was shown to be highly stable, quantified by tracking the RBC membrane energy. Equilibrium geometries were shown to agree with in vivo observations in the literature, and qualitatively reproduced phenomena seen in in vivo experiments where RBCs are subjected to solutions of varying osmolarity. Future work will allow investigation of how RBCs behave under flow stresses to simulate combined shear-O2-dependent ATP release.
When force is applied to tissue in a healthcare setting, tissue perfusion is reduced in response to the applied force; it is perfusion that is important in assessing tissue health and potential injury from the force [1,2]. Traditional means of measuring force involve quantifying the mechanical strain or electrical responses of a sensor; these techniques do not necessarily correspond to the physiological responses to the applied force.
It is also known that contact force is a confounding issue in reflectance-type optical measurements of tissue, such as Near Infrared Spectroscopy (NIRS) and Photoplethysmography (PPG) [3-6]. We propose that the signal from reflectance-type optical measurements can be used to predict sensor contact force, due to the physiological response of the underlying perfused tissue.
There is a complex relationship between the reflected optical signals and the underlying physiological response; there is no simple biophysical model to apply. Because of this, we used machine learning to explore this relationship. We used a PPG sensor to collect reflected optical data from the index finger of a participant (n=1). The applied force was measured simultaneously with a load cell. We collected 240,000 data points over a range of 0 to 10 N of applied force.
While many models worked well for estimating the applied force, we selected the random forest model, which achieved a median absolute error of 0.05 N and an R² score of 0.97 between the machine-learning predictions and the measured ground truth. From this, we have determined that it is possible to predict the amount of applied force on a vascularized tissue from reflected optical signals. This has potential applications in neurosurgery or robotic surgeries, where careful sensing of the amount of applied force on delicate tissues may reduce injuries.
[1] Roca, E., and Ramorino, G., “Brain retraction injury: systematic literature review,” Neurosurg Rev, 46(1), 257 (2023).
[2] Zagzoog, N., and Reddy, K. K., “Modern Brain Retractors and Surgical Brain Injury: A Review,” World Neurosurg, 142, 93-103 (2020).
[3] Chen, W., Liu, R., Xu, K. et al., “Influence of contact state on NIR diffuse reflectance spectroscopy in vivo,” Journal of Physics D: Applied Physics, 38(15), 2691 (2005).
[4] May, J. M., Mejia-Mejia, E., Nomoni, M. et al., “Effects of Contact Pressure in Reflectance Photoplethysmography in an In Vitro Tissue-Vessel Phantom,” Sensors (Basel), 21(24), (2021).
[5] Reif, R., Amorosino, M. S., Calabro, K. W. et al., “Analysis of changes in reflectance measurements on biological tissues subjected to different probe pressures,” J Biomed Opt, 13(1), 010502 (2008).
[6] Teng, X. F., and Zhang, Y. T., “The effect of contacting force on photoplethysmographic signals,” Physiol Meas, 25(5), 1323-35 (2004).
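A schematic sketch of the regression pipeline described above, using scikit-learn on synthetic stand-in data; the feature construction below is an illustrative assumption, not the measured PPG dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import median_absolute_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: a nonlinear, noisy map from windowed PPG-like
# features to applied force (0-10 N), mimicking the structure of the task.
rng = np.random.default_rng(0)
force = rng.uniform(0.0, 10.0, 20_000)
features = np.column_stack([
    np.exp(-force / 4.0) + 0.02 * rng.standard_normal(force.size),  # DC reflectance
    1.0 / (1.0 + force) + 0.02 * rng.standard_normal(force.size),   # pulse amplitude
    0.1 * force**0.5 + 0.02 * rng.standard_normal(force.size),      # baseline shift
])

X_tr, X_te, y_tr, y_te = train_test_split(features, force, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"median abs. error = {median_absolute_error(y_te, pred):.3f} N, "
      f"R^2 = {r2_score(y_te, pred):.3f}")
```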
The goal of the ALPHA experiment at CERN is to perform high-precision comparisons between antihydrogen and hydrogen to test the fundamental symmetries that underpin the Standard Model and General Relativity. For decades, there has been much speculation about the gravitational behaviour of antimatter. The ALPHA collaboration has developed the ALPHA-g apparatus to measure the gravitational acceleration of antihydrogen. We have recently shown, directly for the first time, that the antihydrogen gravitational acceleration is compatible with the corresponding value for hydrogen [Nature 621, 716 (2023)]. To push antihydrogen research into an entirely new regime, new techniques, such as anti-atomic fountains and anti-atom interferometers, must be developed. The HAICU experiment at TRIUMF in Vancouver aims to use laser-cooled hydrogen atoms [Nature 592, 35 (2021)] to do just that. In this talk, we will report our first measurement of antihydrogen gravity with ALPHA-g and discuss the status of development towards an atomic hydrogen fountain and atomic hydrogen interferometer with HAICU.
The ALPHA-g experiment at CERN aims to perform the first-ever direct measurement of the effect of gravity on antimatter, determining its weight to within 1% precision. At TRIUMF, we are working on a new deep learning method based on the PointNet architecture to predict the height at which the antihydrogen atoms annihilate in the detector. This approach aims to improve upon the accuracy, efficiency, and speed of the existing annihilation position reconstruction. In this presentation, I will report on the promising preliminary performance of the model and discuss future development.
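A minimal sketch of a PointNet-style regressor of the kind described above; the layer sizes and the random stand-in events are illustrative assumptions, not the ALPHA-g model.

```python
import torch
import torch.nn as nn

# Each detector hit is an (x, y, z) point. A shared per-point MLP is followed
# by a symmetric max-pool, so the prediction is invariant to the ordering of
# hits; a small head then regresses the annihilation height.
class PointNetRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, hits):              # hits: (batch, n_hits, 3)
        feats = self.point_mlp(hits)      # (batch, n_hits, 128)
        pooled = feats.max(dim=1).values  # order-invariant global feature
        return self.head(pooled).squeeze(-1)

model = PointNetRegressor()
hits = torch.randn(8, 200, 3)             # 8 stand-in events, 200 hits each
z_pred = model(hits)                       # predicted annihilation heights
```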
The development of new techniques to trap and cool antimatter is of interest for fundamental studies that use antimatter as a testbed for new physics. The HAICU experiment, which is in its initial phase at TRIUMF, ultimately aims to cool and trap antihydrogen in such a way that quantum effects used in the precision measurements of normal atoms could also be exploited for measurements on antihydrogen. One such precision measurement technique is the “atomic fountain”, which is the focus of HAICU. Following a brief overview of the HAICU experimental setup, this talk will focus on the technical challenges and procedures associated with the construction and testing of a “Bitter” style electromagnet that will be used to confine neutral hydrogen in the first stage of the experiment.
The proposed nEXO experiment is a tonne-scale liquid xenon (LXe) time projection chamber that aims to uncover properties of neutrinos via neutrinoless double beta decay ($0\nu\beta\beta$) of the isotope Xe-136. The observation of $0\nu\beta\beta$ would point to new physics beyond the Standard Model and imply lepton number violation, indicating that neutrinos are their own antiparticles. The nEXO detector is expected to be constructed at SNOLAB in Sudbury, Canada, with a projected half-life sensitivity of $1.35\times10^{28}$ years. The collaboration has been pursuing the development of new technologies to further improve the detection sensitivity of nEXO, such as barium (Ba) tagging. This extremely challenging technique aims to extract single Ba ions from a LXe volume. Ba-tagging would allow for an unambiguous identification of true $\beta\beta$-decay events and, if successful, would result in an impactful improvement to the detection sensitivity. Groups at McGill University, Carleton University, and TRIUMF are developing an accelerator-driven ion source to implant radioactive ions inside a volume of LXe. Additional extraction and detection methods are under development by other groups within the nEXO collaboration. In the first phase of this development, ions will be extracted using an electrostatic probe for subsequent identification using $\gamma$-spectroscopy. In this contribution, I will provide a status update on the commissioning of the Ba-tagging setup at TRIUMF and present results of ion extraction efficiency simulations using an electrostatic probe.
Potassium-40 (40K) is one of the largest sources of natural radioactivity we are exposed to in daily life. It is the only isotope that decays by electron capture, beta-, and beta+. The KDK collaboration has carried out the first measurement of the electron capture of 40K to the ground state of 40Ar, finding a branching ratio of $I_{EC0}$ = (0.098±0.025)% [1,2]. In order to confirm theoretical predictions of the EC/beta+ ratio, the KDK+ collaboration will remeasure the even smaller beta+ decay branch, which has not been studied since the 1960s [3]. This will be done by dissolving potassium in a liquid scintillator vessel surrounded by a sodium iodide detector. Triple coincidences between the scintillation caused by the positron and the two back-to-back 511 keV gammas from its annihilation will be used to distinguish the signal from the background. We will present work on optimizing the compatibility of potassium with a liquid scintillator, as well as the design of the experimental setup to carry out the measurement.
[1] M. Stukel et al. (KDK Collaboration), “Rare 40K decay with implications for fundamental physics and geochronology”, Phys. Rev. Lett. 131, 052503 (2023).
[2] L. Hariasz et al. (KDK Collaboration), “Evidence for ground-state electron capture of 40K”, Phys. Rev. C 108, 014327 (2023).
[3] D. W. Engelkemeir et al., “Positron emission in the decay of K40”, Phys. Rev. 126, 1818 (1962).
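A toy sketch of the triple-coincidence selection described above; the coincidence window, energy cuts, and event format are illustrative assumptions, not the KDK+ analysis.

```python
import numpy as np

# Keep events with a liquid-scintillator hit in coincidence with two NaI
# hits consistent with back-to-back 511 keV annihilation gammas.
def is_triple_coincidence(event, window_ns=50.0, e_win_kev=(450.0, 570.0)):
    t0 = event["scint_time_ns"]
    nai = [(t, e) for t, e in zip(event["nai_times_ns"], event["nai_energies_kev"])
           if abs(t - t0) < window_ns and e_win_kev[0] < e < e_win_kev[1]]
    return len(nai) >= 2

events = [
    {"scint_time_ns": 100.0, "nai_times_ns": [105.0, 108.0],
     "nai_energies_kev": [511.0, 508.0]},                       # signal-like
    {"scint_time_ns": 100.0, "nai_times_ns": [400.0],
     "nai_energies_kev": [511.0]},                              # background-like
]
print([is_triple_coincidence(ev) for ev in events])   # [True, False]
```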
A number of recent $\beta$-decay studies of neutron-rich rubidium isotopes utilising Total Absorption Spectroscopy (TAS) revealed significant discrepancies in $\beta$-feeding probabilities with respect to High Resolution Spectroscopy (HRS) studies performed over 40 years ago. These discrepancies can be attributed to the $pandemonium$ effect, which was a significant challenge in spectroscopy studies performed with early-generation Ge(Li) detectors. Given their large cumulative yields from nuclear fission and their large $Q_{\beta}$ values, incorrect $\beta$-feeding patterns of these isotopes have a significant impact on reactor physics.
While TAS studies are free of the pandemonium effect and the measured $\beta$-feeding probabilities are confidently considered robust, the method is a largely insensitive probe of the nature of these levels, and much key spectroscopic information is missed.
We report results of a new $\beta$-decay study of $^{92}$Rb with the GRIFFIN spectrometer at TRIUMF, providing complementary data to recent TAS studies. These results significantly expand the known level scheme of $^{92}$Sr, with over 180 levels and 850 $\gamma$-ray transitions identified, providing one of the most complex decay schemes across the nuclear chart. As $^{92}$Rb has a $0^-$ ground state and a large $Q_{\beta}$ value, the decay populates numerous high-lying $1^-$ levels associated with the Pygmy Dipole Resonance (PDR), which is responsible for an enhancement of $E1$ strength below the neutron separation energy at the low-energy tail of the Giant Dipole Resonance. The PDR is interpreted as an out-of-phase oscillation between the neutron skin and an isospin-saturated core. From this, the PDR can be connected to the symmetry term of the nuclear binding energy and the nuclear equation of state. This interpretation, however, is a matter of debate.
As the underlying nature of the PDR remains uncertain, $\beta$-decay offers an alternative probe to the often-employed Nuclear Resonance Fluorescence method and provides further complementary data.
Perimeter Institute’s Education and Outreach team has developed a suite of innovative, world-class resources that have been used by millions of students around the world. These resources take standard science curriculum topics and connect them to open questions in physics, from quantum mechanics to cosmology, using a hands-on, collaborative approach. A core piece of our success has been engaging teachers in every step of resource development and offering extensive training through a network of teachers who have attended Perimeter workshops. This presentation will touch on key aspects of our teacher training process and share some insights gathered over 20 years of introducing novel topics into high school physics classrooms.
Project-based courses have huge pedagogical potential. They provide an opportunity for students to integrate knowledge acquired in previous courses and in various disciplines. By working on a project over a few months, students can inquire, formulate plans, hypothesize, and develop and evaluate solutions. They have to make decisions on what information should be acquired and how to apply it. The process leads them toward a deeper understanding of the concepts and principles necessary to realize the project.
This talk will look at the case study of a course I developed over the last five years, aimed at introducing cegep students to the world of multidisciplinary research. It is given in the last semester before entering university and intertwines physics, chemistry, biology, math and psychology in projects studying brain behaviour. The students went through the complete process of an experiment: literature search, choice of research question and hypothesis, writing of a letter of informed consent, and design, execution, analysis, and dissemination of the results of the experiment. To captivate their interest over a full term, they were given some control over the choice of project and the way to proceed. However, as they had never done anything this extensive before, they needed a fair amount of guidance. A structured framework was designed to lead them through the several steps of the process. In teams of three or four, they investigated a hypothesis of their choice involving a cognitive process by using a behavioural task and a simple, portable electroencephalogram recording system. In so doing, they learned about the production and transmission of electric fields in the brain and how these relate to the cognitive process studied. They tested their hypothesis on some thirty to sixty participants, usually other students from the cegep.
This presentation will focus on the learning of physical concepts, their relation to the chemistry and physiology of the brain and their application in a realistic situation. It will describe some best practices that were developed to “teach” this course. The course has been given five times so far and has been refined each time. Hopefully, these best practices will inspire other professors interested in a holistic approach to physics education.
In this presentation, I discuss the efforts at the University of Waterloo in developing upper year inquiry lab materials based on the SQILabs as designed by Dr. Natasha Holmes and Dr. Carl Wieman (Physics Today 71 (1), 38–45 (2018)).
In our work, we have proposed a set of experiments that make use of ultrafast lasers to situate students in an environment in which they can test their agency as it relates to learning in the physics lab. We have begun preliminary analysis on the impact of these labs on undergraduate students through the use of qualitative methods. We hope to use these methods to develop quantitative assessments to evaluate the impact these upper year inquiry labs have on student learning and engagement with experimental physics.
These experiments were designed based on the results of the replication studies of the Sense of Agency Survey and the Physics Lab Inventory of Critical Thinking, both completed at the University of Waterloo. These surveys were originally developed and validated by Dr. Natasha Holmes et al.
This work will have an accompanying set of manuscripts that will be available upon request, and hopefully published in the near future.
Purpose: For many students, a visual aid for the material presented in a physics curriculum is essential for a good understanding. In many cases, a simple diagram is sufficient and the student is able to intuit the impact of the various parameters.
In more complex topics, the role of each parameter can be difficult to perceive.
Method: This interface, completely built in Python, aims to present graphical representations of physical phenomena. Whether used by a student or an instructor, it is possible to modulate the parameters and see the impacts on the whole process.
For instance, topics currently available include refraction through multiple parallel interfaces, wavefunctions of basic quantum mechanical potentials, Riemann integrals, operations on complex numbers, attenuation law for photons in medical physics, 1D and 2D convolutions and their Fourier representations, and more.
The tool is free and presently available in both English and French.
Results: This tool was used in various classroom contexts, by the author acting both as a tutor and as an instructor. Students were able to use it by themselves and develop a better qualitative understanding of the physical processes. Instructors also have the possibility of creating precise diagrams quickly, which can alleviate the workload when preparing material.
The tool allows users to view complex phenomena without the need to resort to programming skills, which would otherwise be necessary for specific topics such as Fourier transforms, convolutions, and filtering. This permits the presentation of the material to groups of students who are not yet able to work everything out in detail, but might be interested in qualitative aspects. Although developed with the physics curriculum in mind, it should be of interest to students in neighbouring disciplines, such as engineering, computer science, and mathematics.
It has also made it easy to create new material quickly.
Future Work: The whole project is still actively in development. In the short term, modules for classical mechanics and electromagnetism will be included.
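To give a flavour of what such a module can look like, here is a minimal, hypothetical sketch of an interactive single-interface refraction diagram using matplotlib sliders; this is an illustration in the same spirit, not the actual tool.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

# A ray crossing a horizontal interface at y = 0; Snell's law gives the
# refracted direction, updated live as the slider changes n2.
fig, ax = plt.subplots()
plt.subplots_adjust(bottom=0.25)
theta1 = np.radians(40.0)            # fixed angle of incidence
ax.plot([-np.sin(theta1), 0.0], [np.cos(theta1), 0.0], "b")     # incident ray
refracted, = ax.plot([0.0, 0.0], [0.0, -1.0], "r")              # refracted ray
ax.axhline(0.0, color="k")
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)

ax_n2 = fig.add_axes([0.2, 0.1, 0.6, 0.03])
s_n2 = Slider(ax_n2, "n2", 1.0, 2.5, valinit=1.5)

def update(_):
    n1, n2 = 1.0, s_n2.val
    theta2 = np.arcsin(np.clip(n1 * np.sin(theta1) / n2, -1.0, 1.0))
    refracted.set_data([0.0, np.sin(theta2)], [0.0, -np.cos(theta2)])
    fig.canvas.draw_idle()

s_n2.on_changed(update)
update(None)
plt.show()
```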
This presentation will describe the pilot offering of a program that aims to develop both cognitive and psychomotor skills in students between the ages of 10 and 14 by exploring physics through the use of hand tools and general woodworking techniques. Weekly activity sessions taking place over a six-month period allowed students to work their way through the concepts of force, pressure, friction, torque, mechanical advantage and other topics in elementary-level physics, and culminated in each student creating a useful object of their own design. The development of lessons and activities was guided not only by student interest and the skills necessary to complete a project, but also by shortcomings in student knowledge, understanding and capability that presented themselves in each session. In this way, each new activity attempted to overcome an obstacle discovered in previous activities. This presentation will briefly describe how concerns about student safety and behaviour are addressed, as well as some of the traditional methods of manual training (or educational handwork) that help to continually inform the development of this project.
Glass-formers represent an important family of natural and manufactured materials ubiquitous in nature, technology, and our daily lives. On approaching their glass transition temperature ($T_g$), they come to resemble solids while lacking long-range structural order, like liquids. Careful detection of the glass transition and accurate measurement of the $T_g$-value constitute fundamental steps both in fully resolving the enigma of this phenomenon and in making application-oriented choices and advancements for glass-formers. Given the complexities of experimental synthesis and characterization, modern computer simulation methods based on chemically realistic models can play a pivotal role in tackling the glass transition. Building on our previous studies of polymeric systems [1,2], we will cover common approaches to evaluating the $T_g$-value from simulations and discuss their pros and cons. We will then introduce promising machine learning (ML) methods that may permit exploration of molecular patterns of the glass transition, fully utilizing the microscopic details available within complex high-dimensional datasets from simulations. Finally, we will overview our progress in the development of a novel framework that fuses atomistic computer simulations and several ML methods for computing $T_g$ and studying the glass transition in a unified way from various molecular descriptors for glass-formers.
[1] A.D. Glova, S.G. Falkovich, D.I. Dmitrienko, A.V. Lyulin, S.V. Larin, V.M. Nazarychev, M. Karttunen, S.V. Lyulin, Scale-dependent miscibility of polylactide and polyhydroxybutyrate: molecular dynamics simulations, Macromolecules, 51, 552 (2018)
[2] A.D. Glova, S.V. Larin, V.M. Nazarychev, M. Karttunen, S.V. Lyulin, Grafted dipolar chains: Dipoles and restricted freedom lead to unexpected hairpins, Macromolecules, 53, 29 (2020)
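One of the common approaches alluded to above determines $T_g$ as the intersection of linear fits to the glassy and liquid branches of a property such as the specific volume from a cooling run. A sketch on synthetic data (all numbers are illustrative assumptions):

```python
import numpy as np

# Synthetic specific-volume-vs-temperature data with a kink at the "true" Tg.
rng = np.random.default_rng(1)
T = np.linspace(200.0, 500.0, 61)                     # K
tg_true, v_g, slope_g, slope_l = 370.0, 0.95, 2e-4, 6e-4
v = np.where(T < tg_true,
             v_g + slope_g * (T - tg_true),
             v_g + slope_l * (T - tg_true)) + 1e-4 * rng.standard_normal(T.size)

glass = T < 330.0                                     # fit windows away from Tg
liquid = T > 420.0
a1, b1 = np.polyfit(T[glass], v[glass], 1)
a2, b2 = np.polyfit(T[liquid], v[liquid], 1)
tg_est = (b2 - b1) / (a1 - a2)                        # line intersection
print(f"estimated Tg = {tg_est:.1f} K (true {tg_true} K)")
```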
We are studying stable polystyrene (PS) glasses prepared by physical vapour deposition (PVD) with N up to ~12. These glasses have fictive temperatures as low as Tg − 20 K with respect to the supercooled liquid line, and retain kinetic stability down to deposition temperatures of ~0.84 Tg. By exploiting enhanced surface dynamics, vapour deposition can yield an efficiently packed amorphous material in a layer-by-layer fashion. In our lab, we have recently started determining the elastic modulus of PS films via atomic force microscopy (AFM). We examined the elastic modulus of PS films ~100 nm thick as a function of Mn (11,200, 60,000, and 214,000 g/mol) to determine whether molecular size impacts the mechanical properties of the films, and observed a decrease in the elastic modulus with decreasing Mn. We also studied a PS film with Mn = 214,000 g/mol as a function of annealing time, annealed at Tg + 20 K. The non-destructive nature of AFM allows us to determine the moduli of the as-deposited glass, the supercooled liquid, and the ordinary glass from a single sample. We will explore the mechanical properties of stable vapour-deposited PS glasses as a function of stability (down to Tg − 20 K) and film thickness (50-200 nm). We expect to observe an increase in the elastic modulus (20-30%) of the stable vapour-deposited PS glasses compared to ordinary PS glass with the same N.
We measure the isothermal rejuvenation of stable glass films of poly(styrene) and poly(methyl methacrylate). We demonstrate that the propagation of the front responsible for the transformation to a supercooled-liquid state can serve as a highly localized probe of the local supercooled dynamics. We use this connection to probe the depth-dependent relaxation rate with nanometric precision for a series of polystyrene films over a range of temperatures near the bulk glass transition temperature.
The analysis shows the spatial extent of enhanced surface mobility and reveals the existence of an unexpected large dynamical length scale in the system.
The results are compared with the cooperative-string model for glassy dynamics. The data reveals that the film-thickness dependence of whole film properties arises only from the volume fraction of the near-surface region. While the dynamics at the middle of the samples shows the expected bulk-like temperature dependence, the near-surface region shows very little dependence on temperature.
When continuum materials with cohesive forces are perturbed from an equilibrium configuration, they relax over time tending toward the lowest energy shape. We are interested in studying the physics of a similar ageing process in a two-dimensional granular system in which individual particle rearrangements can be directly observed. We present an experiment in which a two-dimensional raft of microscopic cohesive oil droplets is elongated then allowed to relax back to a preferred shape. As the droplet raft is gently confined by a curved meniscus, we can study the relaxation toward equilibrium for hours to days. Over sufficiently long times, coalescence plays a crucial role introducing disorder in the system through local defects, and promotes particle rearrangements. Varying the size of droplets and strength of cohesive forces, we investigate the geometry and dynamics of short- and long-term structure ageing due to large scale relaxation and local coalescence events.
Granular systems can serve as useful analogues of the molecular structure of materials, and introducing an intruder to the system can provide novel insight into their dynamics. Here, we study the response of a disordered, bi-disperse, two-dimensional aggregate of oil droplets to a moving ferrofluid droplet which acts as a controlled intruder. The frictionless and cohesive oil droplets form a compact two-dimensional disordered aggregate. The mobile ferrofluid droplet is controlled with a localised magnetic field, and as the intruder is moved through the aggregate, it forces rearrangements within the aggregate. The speed of the intruder, the disorder of the 2D aggregate, and the adhesion between the oil droplets are controlled, and we probe the extent of the rearrangements caused by the intruder as it moves through the aggregate.
Collective properties of granular materials are determined by both interparticle forces and packing fraction. The conical shape of a pile of granular material, such as a sand pile, depends on the interparticle friction and is characterized by the angle of repose of the pile. Surprisingly, we observe the formation of conical piles for aggregates of frictionless particles. Our model system is composed of monodisperse oil droplets that are frictionless but cohesive. Previous studies on this system have shown that aggregation of the droplets against an unbounded barrier resembles a liquid puddle rather than a sand pile: rather than growing taller as more droplets are added to the aggregate, a characteristic height is reached after which the aggregate simply spreads. In contrast, when the barrier is bounded, we see that the aggregate exhibits a conical growth pattern reminiscent of sand piles. We systematically measure the angle of repose across varying cohesion strengths and droplet sizes and present a theory that explains our findings.
γ and β radiation emitted from fission and activation products in the UO$_2$ fuel matrix decay to insignificant levels after 1000 years, leaving α particles as the primary source of radiation. α radiation induces α radiolysis of water, a well-known key contributor to the oxidative dissolution of the UO$_2$ fuel matrix. Extensive studies have been conducted to investigate the effect of water radiolysis on fuel dissolution in the unlikely event of used-fuel container failure. In contrast, this study explores the direct impact of residual α radiation on the solubility of uranium fuel in the solid state. Controlled doses of α radiation (at 40 keV and 3000 keV) are applied to uranium fuel to investigate nuclear and electronic interactions near the surface and in the bulk for varying irradiation damage (in DPA). The goal is to replicate the hypothetical tailed radiation dose rate expected for uranium fuel in deep geological repositories (DGR) simulated for 1000 years, and to investigate possible effects of the irradiation damage on the uranium fuel in the solid state. X-ray photoelectron spectroscopy (XPS) analysis was used to track changes in UO$_{2+x}$ oxidation states before and after irradiation. The results reveal a reduction of UO$_{2+x}$, with an increased percentage of U(IV) states alongside reduced percentages of U(V) and U(VI) states. Our findings suggest that prolonged exposure of uranium fuel to α radiation in simulated DGR conditions, without container failure, decreases the availability of U(VI), the soluble form of uranium (as U$^{VI}$O$_2^{2+}$). This outcome does not raise additional safety concerns regarding nuclear waste containment. Changes in the oxidation states after irradiation in vacuum will be compared to the changes induced by irradiation in an aqueous environment in the next steps.
Zinc and cadmium compounds are indispensable to critical sectors such as corrosion control, energy, and manufacturing. In applications ranging from coatings to battery electrodes and photovoltaic devices, the ability to precisely characterize different zinc and cadmium compounds is essential. This ability aids our understanding of changes in surface chemistry, surface mechanics, and material properties. X-ray photoelectron spectroscopy (XPS) has repeatedly been demonstrated to be a powerful analytical tool for achieving such speciation, provided sufficient-quality reference data are available. Typically, speciation is achieved by analyzing shifts in photoelectron binding energies and, occasionally, Auger electron kinetic energies. Due to overlapping main photoelectron binding energies in many zinc and cadmium compounds, Auger electrons and the modified Auger parameter are also crucial for reliably detecting changes in chemical state. Despite the prevalence of zinc and cadmium in surface applications, there is a notable scarcity of high-quality XPS reference data for these compounds beyond the metals and oxides. The available data often lack the breadth and reliability required for precise chemical state analyses, with inconsistencies, uncertainties, and issues of reproducibility. Existing literature also frequently overlooks Auger signals and Auger parameters, despite their proven utility.
In this presentation, recent work to extend upon previously published XPS data and curve-fitting procedures will be detailed for a wide range of high-purity zinc- and cadmium-containing compounds. This will include a summary of current literature data, with careful exclusion of any sources that contain issues related to reliability. A summary of novel XPS data collected for forty unique zinc and cadmium materials including photoelectron binding energies, Auger kinetic energies, Auger parameters, and counterion binding energies will also be highlighted. Lastly, the applicability of curve-fitting Auger signals to analyze unknown mixed-species systems that contain zinc or cadmium will also be showcased.
The growth of nanomaterials in a biphasic system is an intriguing physical diffusion process in which two immiscible, or partially miscible, phases are used to disperse two distinct precursors that merge at the interface, leading to the directional growth of crystals. In our method for the synthesis of spirocyclic nanorods, an aqueous phase (containing hydroxylated molybdenum disulfide nanosheets and thioglycolic acid) is interfaced with a butanol phase containing ninhydrin. The diffusion of these two phases into one another creates a system in which the synthesis of spirocyclic nanorods occurs. Using advanced imaging techniques such as electron and atomic force microscopy, we show that this process allows for the controlled synthesis of nanorods with specific length and diameter depending on the concentration of precursors and diffusion-promoting additives, making it a promising approach for nanomaterial growth applications. Surface chemical features were examined using FTIR, UV-visible spectroscopy, Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), and atomic force microscopy (AFM). Our method for growing spirocyclic organic nanorods was applied to fabricate nanorod sensors capable of detecting a variety of proteinogenic amino acids, pointing to the unique physico-chemical properties of our system.
Understanding the effect of active-layer morphology on the operation of photovoltaics is crucial to the development of higher-efficiency devices. A particular parameter with a complex dependence on local environment is the mobility of photogenerated charge carriers, upon which carrier extraction, and therefore overall device performance, strongly depends. Bulk device photo-carrier mobility is available through several single-point measurements, and cross-sectional mobility mapping with sub-micron-scale resolution is achievable on moderately thin-film devices. However, nano-scale-resolution lateral imaging of intrinsic optoelectronic properties has so far extended only as far as surface-photovoltage-based measurements, which garner recombination information and are speculative on carrier dynamics. Here, we present a novel integration of scanning near-field optical microscopy (SNOM) with charge extraction by linearly increasing voltage (CELIV) for direct mobility mapping, acquired in conjunction with atomic force microscopy (AFM) topography scans. By utilizing near-field illumination and nano-probe charge extraction via a conducting cantilever, our technique is both photonically and electronically localized, offering improved resolution and eliminating incidental measurement of delocalized material properties. This technique allows for measurements on a range of photoactive samples: measurements on exposed active-layer surfaces of PN homojunctions allow for investigation of morphological influence on free-charge extraction, and measurements on bulk-heterojunction samples allow for correlation of charge extraction with phase-interface morphology. Freedom to change the extraction voltage polarity and DC offset allows for variability in the probed carrier type and device operation mode. Together, this yields a versatile method for direct measurement of photogenerated charge dynamics in photovoltaic devices with nano-scale resolution.
Chaotic classical systems exhibit extreme sensitivity to small changes in their initial conditions. In a spin chain, chaos can be tracked not only in time, but in space. The propagation of small changes in the initial conditions results in a “light cone” bounding the spatial region and time interval over which the trajectories have diverged. For nearest-neighbour interactions, the light cone produced is linear, defining a “butterfly velocity” that characterizes the speed at which chaos propagates. Realistic systems are more complicated, and can include interactions beyond immediate neighbours. We examine how more realistic, longer-range interactions affect the spread of chaos in spin chains, and how the light cone is modified by their presence. Using a classical analogue of the out-of-time-ordered correlator (OTOC), we measure the decorrelation of two spin chains in time and space, modifying the equations of motion to incorporate further-neighbour interactions. We explore two cases: exchange interactions with exponential and with power-law decays. For the exponentially decaying case, we find the slope of the front at long times is modified even for very small interactions, but there is a critical decay constant below which we recover the nearest-neighbour result. For the power-law case, the front becomes logarithmic at long times, independent of the power-law exponent. We demonstrate that this behaviour emerges from the superposition of nearest-neighbour linear cones with the initial disturbances, giving rise to an envelope defining the front of the modified light cone. Finally, we discuss potential future directions in understanding chaotic behaviour in higher-dimensional classical systems and with realistic interaction terms, such as anisotropy.
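A minimal sketch of the classical decorrelator measurement described above, for a Heisenberg chain with exponentially decaying exchange; the chain size, integration scheme, and all parameters are illustrative assumptions.

```python
import numpy as np

L, steps, dt, xi = 64, 1000, 0.02, 0.5     # sites, time steps, step size, decay length
r = np.arange(1, 6)                        # neighbour distances kept in the sum
J = np.exp(-(r - 1) / xi)                  # exponentially decaying exchange

def field(S):
    """Local exchange field B_i = sum_r J_r (S_{i+r} + S_{i-r}), periodic chain."""
    B = np.zeros_like(S)
    for Jr, rr in zip(J, r):
        B += Jr * (np.roll(S, rr, axis=0) + np.roll(S, -rr, axis=0))
    return B

def step(S):
    """Euler precession step dS/dt = S x B, renormalizing the spin lengths."""
    S = S + dt * np.cross(S, field(S))
    return S / np.linalg.norm(S, axis=1, keepdims=True)

rng = np.random.default_rng(2)
S = rng.standard_normal((L, 3))
S /= np.linalg.norm(S, axis=1, keepdims=True)
S2 = S.copy()
S2[L // 2] += 1e-3 * rng.standard_normal(3)          # small kick on the middle spin
S2[L // 2] /= np.linalg.norm(S2[L // 2])

D = np.empty((steps, L))
for t in range(steps):
    S, S2 = step(S), step(S2)
    D[t] = 1.0 - np.sum(S * S2, axis=1)              # decorrelator D_i(t)
print("sites decorrelated above 1e-8 at final time:", int(np.sum(D[-1] > 1e-8)))
```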
We theoretically investigate Weyl superconductivity in quasicrystals. Weyl superconductivity is a topological phase in three-dimensional crystals with topologically protected point nodes in the Brillouin zone called Weyl nodes, at which the Chern number changes its value [1]. Quasicrystals (QCs) are materials whose structure is aperiodic with a long-range order. As they lack translational symmetry and hence the Brillouin zone, the Chern number cannot be defined for their topological characterization. Accordingly, a theory of Weyl superconductivity has not been established for QCs in spite of recent extensive studies on quasicrystalline topological phases.
We extend the concept of Weyl superconductors to periodically stacked, two-dimensional quasicrystalline topological superconductors. To visualize this new concept, we examine quasicrystalline Weyl superconductivity realized in layered Ammann-Beenker and Penrose quasicrystals with spin-orbit coupling under an external magnetic field. We calculate the Bott index in real space as a reliable topological invariant [2] to characterize quasicrystalline Weyl nodes [3]. In the presence of surface boundaries, zero-energy Majorana surface modes emerge between two Weyl nodes in momentum space corresponding to the stacking direction. We find that the Majorana zero modes are decomposed into an infinite number of components resolved in momentum in the direction along surfaces within each layer. The distribution forms quasiperiodic arcs, which we call aperiodic Majorana arcs. We show that, in layered Ammann-Beenker (Penrose) quasicrystals, the position of the aperiodic Majorana arcs is characterized by the silver (golden) ratio associated with the quasicrystalline structure.
[1] T. Meng and L. Balents, Phys. Rev. B 86, 054504 (2012).
[2] R. Ghadimi, T. Sugimoto, K. Tanaka, and T. Tohyama, Phys. Rev. B 104, 144511 (2021).
[3] A. G. e Fonseca et al., Phys. Rev. B 108, L121109 (2023).
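For concreteness, here is a sketch of the real-space Bott-index computation used as the topological invariant [2], demonstrated on a small periodic Chern insulator (the QWZ model) rather than the layered quasicrystalline BdG Hamiltonians of the study; the lattice size and mass parameter are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import logm

def bott_index(P, x_frac, y_frac):
    """Bott index from an occupied-state projector P and fractional
    coordinates in [0, 1) assigned to each orbital."""
    Ux = np.diag(np.exp(2j * np.pi * x_frac))
    Uy = np.diag(np.exp(2j * np.pi * y_frac))
    Id = np.eye(P.shape[0])
    U = P @ Ux @ P + (Id - P)
    V = P @ Uy @ P + (Id - P)
    return np.imag(np.trace(logm(V @ U @ V.conj().T @ U.conj().T))) / (2 * np.pi)

# Real-space QWZ Chern insulator on a periodic Lx x Ly lattice, two orbitals
# per site; half filling gives Bott index +/-1 for 0 < |m| < 2.
Lx = Ly = 8
m = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
N = Lx * Ly
H = np.zeros((2 * N, 2 * N), dtype=complex)
idx = lambda x, y: (x % Lx) + Lx * (y % Ly)
for x in range(Lx):
    for y in range(Ly):
        i = idx(x, y)
        H[2*i:2*i+2, 2*i:2*i+2] += m * sz
        for dx, dy, s in ((1, 0, sx), (0, 1, sy)):
            j = idx(x + dx, y + dy)
            T = (sz + 1j * s) / 2          # hopping matrix on the bond i -> j
            H[2*j:2*j+2, 2*i:2*i+2] += T
            H[2*i:2*i+2, 2*j:2*j+2] += T.conj().T

evals, evecs = np.linalg.eigh(H)
occ = evecs[:, evals < 0]                  # half filling (spectrum gapped at E = 0)
P = occ @ occ.conj().T
sites = np.arange(N)
xf = np.repeat((sites % Lx) / Lx, 2)       # each site carries two orbitals
yf = np.repeat((sites // Lx) / Ly, 2)
print("Bott index:", round(bott_index(P, xf, yf), 3))   # expect +/-1
```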
Majorana-based quantum computing harnesses the non-Abelian exchange statistics of Majorana zero modes (MZMs) in order to perform gate operations via braiding. It is paramount that braiding protocols keep a given system within its ground state subspace, as transitions to excited states lead to decoherence and constitute a “diabatic error.” Typical braiding protocols are envisioned on networks of superconducting wires where MZMs are shuttled by using electric gates to tune sections of a wire (“piano keys”) between topologically trivial and non-trivial phases. The focus of our work is to further study the diabatic error, defined as the transition probability to excited states, as MZMs are shuttled using piano keys through a single wire. Previous work has established that the behavior of the error can be adequately captured by Landau-Zener physics [1] and that the use of multiple piano keys may be optimal in reducing the error in certain situations [2]. We extend upon these works and consider MZM transport through superconducting wires which are disordered and subjected to external noise. We numerically calculate the diabatic error for these cases and, in particular, we demonstrate how disorder and noise change the optimal piano key picture presented in Ref. [2].
[1] B. Bauer, T. Karzig, R. V. Mishmash, A. E. Antipov, and J. Alicea, SciPost Phys. 5, 004 (2018)
[2] B. P. Truong, K. Agarwal, T. Pereg-Barnea, Phys. Rev. B 107, 104516 (2023)
Many topologically non-trivial systems have local topological invariants that cancel over the full Brillouin zone. Yet such systems could be platforms for non-abelian physics, for example nodal superconductors potentially hosting Majorana modes. Experimentally distinguishing signatures of local non-trivial topology from similar trivial features is not a clear-cut process. Our work extends the method developed by Dutreix et al., which detects the local Berry phase of the Dirac cones in graphene. Extended here to a general Hamiltonian with chiral symmetry, the method is applicable to nodal superconductors. We have found that for two Dirac cones with a difference in topological winding there exists a theoretical ideal impurity and STM tip for which Friedel oscillations capture that winding difference. This information is accessible directly in the complex phase of the Fourier-transformed local density of states. We have further derived conditions under which a physical impurity can capture the winding difference. As a proof of concept, we applied the conditions to the topological nodal superconductor predicted in monolayer NbSe$_2$ under an in-plane field. Furthermore, we have predicted an experiment in which STM can detect the winding of each of the 12 nodes. We conclude that this method of designing impurity scattering can be a powerful tool to determine local topological invariants and superconducting symmetries in 2D systems.
Stabilizer codes are the most widely studied class of quantum error-correcting codes and form the basis of most proposals for a fault-tolerant quantum computer. A stabilizer code is defined by a set of parity-check operators, which are measured in order to infer information about errors that may have occurred. In typical settings, measuring these operators is itself a noisy process and the noise strength scales with the number of qubits involved in a given parity check, or its weight. Hastings proposed a method for reducing the weights of the parity checks of a stabilizer code, though it has previously only been studied in the asymptotic regime. Here, we instead focus on the regime of small-to-medium size codes suitable for quantum computing hardware. We provide both a fully explicit description of Hastings's method and propose a substantially simplified weight reduction method that is applicable to the class of quantum product codes. Our simplified method allows us to reduce the check weights of hypergraph and lifted product codes to at most six, while preserving the number of logical qubits and at least retaining (in fact often increasing) the code distance. The price we pay is an increase in the number of physical qubits by a constant factor, but we find that our method is much more efficient than Hastings's method in this regard. We benchmark the performance of our codes in a photonic quantum computing architecture based on GKP qubits and passive linear optics, finding that our weight reduction method substantially improves code performance.
Multi-qubit parity checks are a crucial requirement for many quantum error-correcting codes. Long-range parity checks compatible with a modular architecture would help alleviate qubit connectivity requirements as quantum devices scale to larger sizes. In this work, we consider an architecture where physical (code) qubits are encoded in stationary degrees of freedom and parity checks are performed using state-selective phase shifts on propagating light pulses, described by coherent states of the electromagnetic field. We optimize the tradeoff between measurement errors, which decrease with measurement strength (set by the average number of photons in the coherent state), and the errors on code qubits arising due to photon loss during the parity check, which increase with measurement strength. We also discuss the use of these parity checks for the measurement-based preparation of entangled states of distant qubits. In particular, we show how a six-qubit entangled state can be prepared using three-qubit parity checks. This state can be used as a channel for controlled quantum teleportation of a two-qubit state, or as a source of shared randomness with potential applications in three-party quantum key distribution.
Atomic and solid-state spin ensembles are promising platforms for implementing quantum technologies, but the unavoidable presence of noise imposes the need for error correction. Typical quantum error correction requires addressing specific qubits, but this requirement is practically challenging in most ensemble platforms. In this work, we propose a quantum error correction scheme that does not require individual spin resolution. Our scheme encodes quantum information in superpositions of excitation states, even though these are fundamentally mixed. We show that our code can protect against both individual and collective errors of dephasing, decay, and thermalization. Furthermore, we illustrate how our scheme can be implemented with realistic interactions and control. We also exemplify the application of our formalism in robust quantum memory and loss-tolerant sensing.
Motivation: The significant progress that quantum theory has made in recent years has occurred despite the conspicuous absence of any consensus interpretation of quantum mechanics, in particular on the measurement problem, which is essentially Wheeler’s question: Why the quantum? The resolution of the debate surrounding this issue would likely pay dividends in experimental quantum science. For example, a better understanding of the measurement process may allow the design of longer-lasting coherences.
Fundamental Basis of Superposition: From spacetime considerations (see references), the fundamental basis for quantum superposition is proposed to be spacetime superposition of spaces related by the Lorentz boost. In many scenarios this is equivalent to momentum superposition. Although quantum systems can be represented in many different forms (momentum basis, position basis, energy basis, etc.), the definition of a fundamental basis renders these alternatives no longer equivalent. For example, although an electron in an atomic orbital may be in an energy eigenstate, it is seen fundamentally as being in a persistent state of momentum superposition.
Measurement Criterion: Measurement (operation of the probabilistic Born rule) is interpreted as any process which asks a quantum system an unanswerable momentum question, i.e., a question demanding a more specific momentum answer than the momentum superposition can deterministically provide. Measurement is an attempt to extract non-existent momentum information. If no deterministic answer is available, but some answer is demanded, then an indeterministic symmetry-breaking process must occur. An example is any diffraction experiment in which the final screen interrogates the lateral momentum of the diffracted particle. Conversely, entanglement occurs when quantum systems interact in a manner not making such demands upon each other.
Experimental Implications: The definition of a fundamental basis dictates the types of quantum system that may exist (superselection). A specific measurement criterion distinguishes probabilistic vs. entangling interactions. Both have experimental implications.
References: For further details: https://orcid.org/0000-0002-9736-7487
In the past 30 years, telescopes in space and on the ground have discovered thousands of extrasolar planets, providing us, for the first time, with a representative sample of the worlds that orbit other stars in our galaxy. However, our knowledge of these planets is limited to no more than a few datapoints for each one by the vast distances that separate us. Yet, though these places live mainly in our mind’s eye, we can construct remarkably accurate pictures of the processes which dominate their environments. We can do this because of the understanding of planetary processes that we have gained through 62 years of robotic solar system exploration. This hard-won experience, like a celestial Rosetta Stone, allows us to translate our sparse information about the exoplanetary realm into the language of our familiar solar family of planets. However, unlike the famous artifact, we can still write new chapters to the translation. Exoplanets tell us about the full diversity of worlds and their circumstances, while robotic space exploration missions consider a single representative world from that set up close. Thus, exoplanetary astronomy and solar system exploration are disciplines in dialogue. By deeply interrogating our nearest neighbors we can expand our understanding of planets everywhere.
Those who lead industry and educational institutions and particularly those who teach need to acknowledge that their own STEM education is characterized by (1) the exclusion of non-Whites from positions of power, which almost completely erases Indigenous theories and contributions to STEM; (2) the development of a White frame that organizes STEM ideologies and normalizes White racial superiority; (3) the historical construction of a curricular model based on the thinking of White elites, thus disregarding minoritized cultures that contributed to STEM globally; and (4) the assertion that knowledge and knowledge production are neutral, objective, and unconnected to power relations. STEM education and occupations were designed to attract White men who are heterosexual, able-bodied, middle class, and upper class, and, more recently, some East Asian groups designated as acceptable. Therefore, the curriculum and products of this culture contribute to an inhospitable environment for students, faculty, and employees who do not fit these criteria.
The subsequent segment of the presentation aims to delineate an innovative STEM curriculum that acknowledges and validates the racial identities and firsthand experiences of students who have been historically relegated to the periphery of mainstream education. The centrality of this curriculum lies in its unabashed focus on pressing social matters, utilizing these as the pivotal catalyst around which STEM education is designed and delivered. The significance of this curricular approach lies in guiding the shift away from a traditional, monocultural lens of teaching STEM, which often inadvertently buttresses systemic barriers, towards a more culturally responsive and socially conscious pedagogical design. By locating the lived experiences and racial identities of marginalized students at the paradigm’s core, the curriculum serves to affirm their voices and perspectives, thereby fostering a more inclusive and equitable educational environment.
Further, by intertwining STEM learning with real-world social issues, the curriculum fosters the development of critical thinking and problem-solving skills, crucial competencies for the 21st-century workforce. It empowers learners to understand, engage with, and propose solutions to real-world challenges using STEM principles. Intrinsically, it instigates a more holistic understanding of STEM, one that transcends the conventional boundaries of textbook learning and plants the seeds for nurturing socially conscious, scientifically literate individuals. Therefore, this innovative, context-driven approach to STEM instruction not only serves as a powerful tool to counter educational exclusion and disparity, but it also equips students with the aptitude and motivation to apply learned concepts in addressing socially relevant issues, thereby redefining the landscape of meaningful and impactful education.
Nature appears to respect certain laws to exquisite accuracy; for example, information never travels faster than light. These laws, codified in quantum field theory, underwrite the Standard Model of particle physics. Recently it has been appreciated that this structure is so rigid that there is often a unique quantum field theory compatible with a few additional assumptions. This gives an important new tool to theorists: internal consistency enables precise calculations. I will describe my contributions to this vast effort, and what it teaches us about strongly interacting field theories that appear in two surprisingly related situations: critical phenomena and quantum gravity.
The PIENU experiment at TRIUMF has provided, to date, the most precise experimental determination of $R^\pi_{e/\mu}=\frac{\Gamma(\pi^+\rightarrow e^+\nu(\gamma))}{\Gamma(\pi^+\rightarrow \mu^+\nu(\gamma))}$, the ratio of pions decaying to positrons relative to muons. While the measured $R^\pi_{e/\mu}$ is more than an order of magnitude less precise than the Standard Model (SM) calculation, the PIENU result is a precise test of charged-lepton universality, a key principle of the SM; it constrains a large range of new-physics scenarios and allows dedicated searches for exotics such as sterile neutrinos. I’ll give a short overview of $R^\pi_{e/\mu}$ measurements and introduce the next-generation precision pion decay experiment in the making: PIONEER!
This newly proposed experiment aims at pushing the boundaries of precision on $R^\pi_{e/\mu}$ and expanding the physics reach by improving on the measurement of the very rare pion beta decay $\pi^+\rightarrow \pi^0 e^+ \nu$. This will provide a new and competitive input to the determination of $|V_{ud}|$, an element of the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix.
Located at SuperKEKB, an asymmetric $e^{+} e^{-}$ collider and the world’s first super B-Factory, the Belle II experiment is searching for evidence of new physics at the precision frontier. Since recording of physics data commenced in 2019, SuperKEKB has claimed the record as the world’s highest-luminosity particle collider while steadily approaching its target integrated luminosity of 50 ab$^{-1}$, a factor of 40 larger than the combined datasets of the previous B-Factory experiments! The unique, experimentally clean environment, coupled with enhanced detector performance and specialised dark-sector triggers, allows Belle II to pursue a vast physics program. This talk will present highlights of recent Belle II physics results and also report on the ongoing activities of Canadian groups contributing to Belle II.
The strange quark is the lightest sea quark in the proton after the up and down quarks, and its production at the LHC is crucial for understanding the proton's internal structure and fragmentation processes. In this work, strange particles are reconstructed using minimum-bias data from $pp$ collisions at 13 TeV taken by the ATLAS detector. Their kinematic distributions and production cross-sections are studied. In particular, the $K_s$ and $\Lambda$ ($\overline{\Lambda}$) give clean signatures and high yields in the detector, while the $\Xi^{-}$ ($\overline{\Xi}^{+}$), despite its lower yield, could be a strong indicator of strangeness content, as it contains two strange quarks. The reconstructed data samples are then compared with Monte Carlo samples to calculate detector acceptance and efficiency, to estimate the sensitivity of the data, and to better understand strangeness production processes.
This study investigates the impact of vector-like quarks on rare B decays, focusing on recent experimental searches. Vector-like quarks, an intriguing feature of many extensions of the Standard Model (SM), offer a unique avenue for probing physics beyond the SM. We consider extending the SM by adding a vector-like isosinglet down-type quark. Experiments at LHCb and Belle II are actively studying rare B transitions such as the exclusive semileptonic $B \rightarrow K \nu\bar{\nu}$ decays. Therefore, by analyzing the underlying $b \rightarrow s$ semileptonic quark transitions, we investigate deviations from the SM due to vector-like quarks, utilizing the latest experimental constraints on the model parameters.
Integral transform methods are a cornerstone of applied physics in optics, control, and signal processing. These areas of application benefit from physics techniques not just because the techniques are quantitative, but because the quantitative knowledge that physics generates provides concrete insight. Here, we introduce an integral transform framework for optimization that puts it on an analogous physical footing to problems in optics, control, and signals. We illustrate the broad applicability of this framework on example problems arising in additive manufacturing and land-use planning. We argue that this framework both enlarges the interface between physics and new areas of application and enlarges what we consider to be physical systems.
Land-use decision-making processes have a long history of producing globally pervasive systemic equity and sustainability concerns. Quantitative, optimization-based planning approaches, e.g., Multi-Objective Land Allocation (MOLA), seemingly open the possibility of improving objectivity and transparency by explicitly evaluating planning priorities by land use type, amount, and location. Here, we primarily show that optimization-based planning approaches with generic planning criteria generate a series of unstable “flashpoints” whereby tiny changes in planning priorities produce large-scale changes in the amount of land use by type. We give quantitative arguments that the flashpoints we uncover in MOLA models are examples of a more general family of instabilities that occur whenever planning accounts for factors that coordinate use on- and between-sites, regardless of whether these planning factors are formulated explicitly or implicitly. Building on this, our current research extends into the realm of environmental change, revealing that common features across non-convex optimization problems, like MOLA, drive hypersensitivity to climate-induced degradation, resulting in catastrophic losses in human systems well before catastrophic climate collapse. This punctuated insensitive/hypersensitive degradation–loss response, traced to the contrasting effects of environmental degradation on subleading local versus global optima (SLO/GO), suggests substantial social and economic risks across a broad range of human systems reliant on optimization, even in the absence of extreme environmental changes.
The advent of additive manufacturing techniques offers the ability and potential to (literally) reshape our manufactured and built environment. However, key issues, including questions about robustness, impede the use of additive manufacturing at scale. In this talk, we present a high-performance code that extends topology optimization, the leading paradigm for additive manufacturing design, via a novel Pareto-Laplace filter. This filter has the key property that it couples the physical behaviour of actual, physical products to analogues of physical processes that occur in the space of possible design solutions. We show that the solution space "physics" gives insight into key questions about robust design.
In this talk, we explore solutions to models describing waves under ice generated by moving disturbances, such as trucks moving on ice that is frozen on top of large bodies of water. We start by showing how the problem can be reformulated in surface variables, reducing the number of unknowns and resulting in a nonlinear integro-differential system of equations. To solve these equations, we use an iterative solver whose convergence is sped up by a novel hybrid preconditioner. Finally, we examine different regimes, such as varying pressure distributions, heterogeneities in the ice, and bottom topography, and present how these influence the types of solutions we obtain.
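As a generic illustration of such a preconditioned iterative solve (a stand-in sparse system in Python, not the talk's integro-differential equations or its hybrid preconditioner):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, spilu

# Stand-in sparse linear system A x = b (illustrative only).
n = 500
A = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete-LU factorization used as a preconditioner for GMRES:
# applying M ~ A^{-1} at each iteration accelerates convergence.
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = gmres(A, b, M=M)
print("converged" if info == 0 else f"gmres info = {info}")
print("residual norm:", np.linalg.norm(A @ x - b))
```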
A new approach for operationally studying the effects of spacetime in quantum superpositions of semiclassical states has recently been proposed by some of the authors. This approach was applied to the case of a (2+1)-dimensional Bañados-Teitelboim-Zanelli (BTZ) black hole in a superposition of masses, where it was shown that a two-level system interacting with a quantum field residing in the spacetime exhibits resonant peaks in its response at certain values of the superposed masses. Here, we extend this analysis to a mass-superposed rotating BTZ black hole, considering the case where the two-level system co-rotates with the black hole in a superposition of trajectories. We find similar resonances in the detector response function at rational ratios of the superposed outer horizon radii, specifically in the case where the ratio of the inner and outer horizons is fixed. This suggests a connection with Bekenstein's seminal conjecture concerning the discrete horizon spectra of black holes in quantum gravity, generalized to the case of rotating black holes. Our results suggest that deeper insights into quantum-gravitational phenomena may be accessible via tools in relativistic quantum information and curved spacetime quantum field theory.
The behaviour of apparent horizons throughout a black hole merger process is an unresolved problem. Numerical simulations have provided insight into the fate of the two horizons. By considering marginally outer-trapped surfaces (MOTSs) as apparent horizon candidates, self-intersecting MOTSs were found in the merger process and play a key role in the merger evolution [arXiv:1903.05626]. A similar class of self-intersecting MOTSs has since been investigated in explicitly known black hole solutions, including the Schwarzschild solution [arXiv:2005.05350; 2111.09373; 2210.15685]. We present findings from our investigations of MOTSs in the maximally-extended Kruskal black hole spacetime [arXiv:2312.00769]. The spacetime contains an Einstein-Rosen bridge that connects two asymptotic regions. This allows for novel MOTSs that span both asymptotic regions with non-spherical topology, such as that of a torus. These MOTSs are comparable to those found in numerical simulations and exhibit unexpected behaviour with regard to their stability spectrum.
One of the most important results in mathematical general relativity in the last half century is the inequality, conjectured by Penrose in 1973, that the mass inside a black hole has a lower bound determined by the area of the black hole's event horizon, and that the minimal case is realized by the Schwarzschild black hole. While a fully general proof of the conjecture does not yet exist, it has been proved in the case of extrinsically flat spatial slices (the Riemannian Penrose inequality) and in the general case under the assumption of spherical symmetry. We seek to extend the spherically-symmetric proofs of the conjecture to include electric charge (Einstein-Maxwell theory in $(n+1)$ dimensions) in an anti-de Sitter background, where the rigidity case of the inequality is now Reissner–Nordström-AdS. In the future, our goal is to extend our proof to Gauss-Bonnet gravity. This is on-going work which is the subject of the author's PhD thesis.
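For reference, in four dimensions and geometric units ($G=c=1$) the conjectured bound reads
\[
M \;\geq\; \sqrt{\frac{A}{16\pi}},
\]
where $M$ is the mass and $A$ the horizon area, with equality realized by the Schwarzschild slice; the charged, asymptotically AdS setting studied here modifies this bound accordingly.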
Treating the horizon radius as an order parameter for thermal fluctuations, the free energy landscape model sheds light on the dynamic behaviour of black hole phase transitions. Here we carry out the first investigation of the dynamics of the recently discovered multicriticality in black holes. We specifically consider black hole quadruple points in D = 4 Einstein gravity coupled to non-linear electrodynamics. We observe thermodynamic phase transitions between the four stable phases at a quadruple point, as well as weak and strong oscillatory phenomena, by numerically solving the Smoluchowski equation describing the evolution of the probability distribution function. We analyze the dynamic evolution of the different phases at various ensemble temperatures and find that the probability distribution of a final stationary state is closely tied to the structure of its off-shell Gibbs free energy.
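Schematically, such dynamics are governed by a 1D Smoluchowski (Fokker-Planck) equation for the probability distribution $P(r,t)$ over the order parameter $r$. A minimal explicit finite-difference sketch in Python, with a hypothetical double-well landscape standing in for the off-shell Gibbs free energy, is:

```python
import numpy as np

# Explicit finite-difference sketch of a 1D Smoluchowski equation
#   dP/dt = d/dr [ D (dP/dr + beta * G'(r) * P) ]
# G(r) below is an assumed toy double-well landscape, not the paper's
# off-shell Gibbs free energy (order parameter r = horizon radius).
D, beta = 1.0, 1.0
r = np.linspace(0.1, 3.0, 300)
dr = r[1] - r[0]
G = (r - 1.0) ** 2 * (r - 2.0) ** 2          # two minima, at r = 1 and r = 2
dG = np.gradient(G, dr)

P = np.exp(-0.5 * ((r - 1.0) / 0.05) ** 2)   # start localized in first well
P /= P.sum() * dr                            # normalize

dt = 0.2 * dr ** 2 / D                       # stability-limited explicit step
for _ in range(20000):
    flux = D * (np.gradient(P, dr) + beta * dG * P)
    flux[0] = flux[-1] = 0.0                 # reflecting (zero-flux) ends
    P += dt * np.gradient(flux, dr)

print("probability in second well:", P[r > 1.5].sum() * dr)
```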
We study the free evolution of dilute Bose-Einstein condensate (BEC) gases that are initially trapped and then released from variously shaped confining potentials. By numerically solving the Gross-Pitaevskii equation and analytically solving the hydrodynamic Thomas-Fermi theory for each case, we find the presence of acoustic horizons within rarefaction waves which form at the outer edges of the BECs. We comment on the horizon dynamics, the formation of oscillations near the horizon, and connections to acoustic Hawking radiation.
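A minimal 1D split-step Fourier sketch of such a free expansion (dimensionless units $\hbar=m=1$; the initial state and interaction strength are illustrative assumptions, not the paper's configurations):

```python
import numpy as np

# Split-step Fourier evolution of the 1D Gross-Pitaevskii equation after
# release from a trap (illustrative parameters; hbar = m = 1).
N, L = 1024, 40.0
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
g = 50.0                                     # assumed interaction strength

psi = np.exp(-x ** 2 / 2).astype(complex)    # cloud released from a toy trap
psi /= np.sqrt((np.abs(psi) ** 2).sum() * dx)

dt = 1e-3
kin = np.exp(-0.5j * k ** 2 * dt)            # kinetic step in Fourier space
for _ in range(5000):                        # free evolution: trap removed
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    psi *= np.exp(-1j * g * np.abs(psi) ** 2 * dt)   # nonlinear step

density = np.abs(psi) ** 2
print("rms width after expansion:", np.sqrt((x ** 2 * density).sum() * dx))
```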
We constructed a family of static, vacuum five-dimensional solutions with two commuting spatial isometries describing a black hole with an $S^3$ horizon and a 2-cycle `bubble' in the domain of outer communications. The solutions have been obtained by adding dipole and quadrupole distortions to a seed asymptotically flat solution. We showed that the conical singularities in the undistorted geometry can be removed by an appropriate choice of the distortion.
Phytoglycogen (PG) is a naturally occurring polysaccharide produced as compact, highly branched nanoparticles in the kernels of sweet corn. Because PG is biocompatible, non-toxic and digestible, it is attractive for applications involving the delivery of bioactive compounds. In the present study, we evaluate the association of PG with the hydrophobic bioactive astaxanthin (AXT), which is a naturally occurring xanthophyll carotenoid with reported health benefits, e.g., acting as an antioxidant and anti-inflammatory agent. However, the extremely poor solubility of AXT in water presents challenges in realizing its full potential for improving human and animal health. In the present study, we describe a method to improve the effective solubility of AXT in water through its physical association with PG, i.e., without the use of added chemicals such as surfactants. We combine PG dispersed in water with AXT dissolved in acetone, evaporate the acetone, and lyophilize to remove the water. The result is a stable AXT-PG complex that can be readily redispersed in water, with aqueous dispersions of AXT-PG stable for long periods of time (several months at 4℃). Using UV-Vis spectroscopy, we characterize the absorbance due to different aggregation states of the AXT molecules in the AXT-PG complex, and this has allowed us to determine the maximum loading of AXT onto PG to be ~ 10% by mass, with a corresponding maximum effective concentration of AXT in water of ~ 0.9 mg/mL. Our results demonstrate the promise of using PG as an effective solubilizing and stabilizing agent for hydrophobic compounds in water.
Purpose: With advancements in high dose rate radiotherapy techniques such as FLASH therapy, radiochromic films have been proposed as key dosimeters due to their relative dose-rate independence when used with standard read-out methods. Our group is interested in understanding the real-time behaviour of these materials in order to develop radiochromic optical probes for real-time dosimetry, with utility across a broad range of beam qualities and applications.
Methods: Three radiochromic formulations were made with 10,12-pentacosadiynoic acid (PCDA) and its lithium salt (LiPCDA), with varying Li+ ratios (PCDA, 635LiPCDA, and 674LiPCDA). The formulations, coated onto polyethylene, were irradiated within a custom real-time jig equipped with optical fibres for continuous data collection before, during, and after irradiation. The light source was a tungsten-halogen lamp, and the light transmitted through the film was collected by a CCD camera. The three radiochromic formulations, along with commercial EBT-3 film for benchmarking, were irradiated to 0-25 Gy with a 74 MeV proton beam (TRIUMF), a 6 MV photon beam (clinical linear accelerator (LINAC), University Health Network), and an electron FLASH beam (decommissioned LINAC). The transmitted light was processed to calculate the optical density around the main absorbance peak for each formulation.
Results: All in-house films and commercial EBT-3 showed an immediate sharp increase in optical density with absorbed dose, including under FLASH conditions. For all three beam modalities, 635LiPCDA (comparable to current commercial products) exhibited the highest sensitivity, followed by 674LiPCDA and then PCDA (comparable to older products). As previously observed for commercial radiochromic films, all formulations demonstrated a lower response per dose when irradiated with protons, due to quenching effects.
Conclusions: We demonstrate that LiPCDA crystals can be selectively grown to exhibit tailored dose responses. For the first time, we show that the real-time response in standard proton beams and under electron FLASH conditions is characterized by an immediate sharp increase in optical density with absorbed dose, followed by an expected asymptotic shoulder due to post-exposure polymerization.
Hemoglobin (Hb), the cornerstone of oxygen transport in the body, holds crucial diagnostic significance for disorders like β-Thalassemia and sickle cell anemia. Conventional blood assays often grapple with issues of delays, cost, and accessibility. In this study, we unveil an innovative nano-biosensor leveraging surface-enhanced Raman spectroscopy (SERS), offering swift, real-time detection of iron-containing molecules, with a primary focus on Hb, the predominant iron-containing compound in blood. This detection can be performed with minimal sample volumes and high sensitivity.
Our sensor's foundation involves gold and silver thin film substrates, crafted through pulsed laser ablation and electrochemical deposition techniques, precisely tuned to resonate with 633 and 532 nm Raman lasers. Functionalization with a novel heteroaromatic ligand L, a derivative of alpha-lipoic acid and 2-(2-pyridine)imidazo[4,5,f]-1,10-phenanthroline, enables the creation of a highly selective Hb sensor. The sensing mechanism hinges on the coordination bonds formed between the phenanthroline unit of L and the iron center in the heme unit of the Hb protein.
Our sensor chip exhibits stability over a week, maintaining high sensitivity to Hb. Leveraging the characteristic SERS band of L observed at 1390 cm$^{-1}$, associated with the porphyrin methine bridge, we discern fluctuations in intensity corresponding to varying concentrations of normal Hb. This dynamic information is harnessed to assess iron content, facilitating the diagnosis of iron excess or deficiency indicative of various diseases. Furthermore, the SERS spectra distinguish Fe$^{2+}$/Fe$^{3+}$ redox species, providing insights into the oxygen-carrying capacity of Hb. Validation through electrochemical SERS, utilizing a silver nanofilm on ITO, scrutinizes changes in Fe$^{2+}$/Fe$^{3+}$, potentially enabling early diagnosis of health conditions manifesting alterations in the oxidative states of iron in Hb.
Distinctive SERS bands in the "fingerprint region" allow discrimination between normal Hb and abnormal Hb variants. Density Functional Theory–Molecular Dynamics (DFT-MD) calculations correlate with the experimental vibrational peaks, enhancing the robustness of our findings. This study lays a pioneering foundation for extending our approach towards developing a lateral flow assay, promising a rapid and accurate diagnosis of Hb disorders. Our nano-biosensor holds transformative potential, heralding a new era in hemoglobin analysis and associated disorder diagnostics.
Introduction: Sepsis is a life-threatening host response to an infection that disproportionately affects vulnerable and low-resource populations. Since early intervention increases the survival rate, there is a global need for accessible technology to aid with early sepsis identification. Peripheral microvascular dysfunction (MVD) is an early indicator of sepsis that manifests as impaired vasomotion in the skeletal muscle, that is, low-frequency oscillations in microvascular tone independent of cardiac and respiratory events. Previous studies have used oscillations in hemoglobin content (HbT), oxygenation (StO2), and perfusion (rBF) as sensitive markers of vasomotion. These physiological parameters can be monitored non-invasively with near-infrared spectroscopy (NIRS) and diffuse correlation spectroscopy (DCS). The objective of this study was to use a hybrid NIRS/DCS system to continuously monitor peripheral and cerebral vasomotion in a rat model of early sepsis.
Methods: 14 Sprague-Dawley rats were used for this study. Control animals (n=4) received an intraperitoneal (IP) injection of saline, while the experimental group (n=10) received an IP injection of fecal slurry to induce sepsis. Optical probes were secured on the scalp and hind limb of each animal for simultaneous NIRS and DCS measurements. Peripheral and cerebral HbT, StO2, and rBF were quantified from NIRS/DCS measurements using algorithms developed in MATLAB. A continuous wavelet transform was used to dynamically isolate low-frequency oscillations from the three parameters. Two-way ANOVAs were used to investigate the power of vasomotion in all three hemodynamic parameters for differences across condition (control, septic) and time (period 1 = 0.5-2 h, period 2 = 2-4 h, period 3 = 4-6 h).
Results: Power of peripheral vasomotion was significantly higher in septic animals as reflected in all three parameters during periods 2 and 3. Power of cerebral vasomotion was significantly higher in septic animals only in the HbT signal.
Conclusions: Optical spectroscopy can be used as a non-invasive tool to detect peripheral MVD. Importantly, our results suggest that while the brain is partly protected, the skeletal muscle is a consistent early diagnostic target for sepsis. Limitations include the use of a homogeneous animal model. Future work will seek to validate these techniques in ICU patients.
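A Python sketch of the wavelet step described in the Methods above (the signal, sampling rate, and band edges below are illustrative assumptions, not the study's values):

```python
import numpy as np
import pywt

# Isolate low-frequency oscillations from a hemodynamic time series with a
# continuous wavelet transform and estimate their power (synthetic data).
fs = 10.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 600, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(t.size)

freqs = np.linspace(0.02, 0.5, 100)          # frequencies of interest (Hz)
fc = pywt.central_frequency("morl")
scales = fc * fs / freqs                     # convert target freqs to scales
coeffs, _ = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

band = (freqs >= 0.05) & (freqs <= 0.15)     # assumed vasomotion band
power = np.mean(np.abs(coeffs[band]) ** 2)
print(f"mean power in vasomotion band: {power:.3f}")
```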
Introduction: A promising approach for detecting early-stage mild cognitive impairment (MCI) is identifying changes in cerebrovascular regulation prior to overt changes in cognition. Low-frequency oscillations (LFO) in cerebral perfusion and oxygenation, originating from neurogenic and myogenic regulation of hemodynamics, may be altered in patients with MCI. Previous work has shown increased LFO in oxygenation of Alzheimer’s and MCI patients compared to healthy, older adults. For this study, we hypothesized that MCI patients would exhibit increased power of LFO in cerebral (1) perfusion, (2) oxygenation, and (3) metabolic rate of oxygen (CMRO2) consumption.
Methods: 12 MCI (74 ± 6 years) and 8 cognitively intact control (CTL) participants (69 ± 7 years) were recruited. An in-house built diffuse correlation spectroscopy (DCS) and time-resolved near-infrared spectroscopy (trNIRS) system was used to record microvascular perfusion and oxygenation, respectively. Data were acquired from the forehead for 480 seconds during seated rest. DCS and trNIRS measurements were analyzed with custom scripts (MATLAB) to calculate relative changes in cerebral blood flow (rCBF), tissue oxygen saturation (StO2), and relative CMRO2 (rCMRO2). A continuous wavelet transform was used to decompose time courses into time-varying frequency components. The power of neurogenic (0.02-0.06 Hz) and myogenic (0.06-0.16 Hz) oscillations was isolated. Mann-Whitney tests were used to compare MCI and CTL. Effect sizes are reported as Cohen’s d.
Results: MCI patients had lower neurogenic power in rCBF (p = 0.03, d = 0.89) but greater myogenic power in StO2 (p = 0.03, d = 1.00). Although not significant, this pattern remained for myogenic power in microvascular perfusion (p = 0.09, d = 0.52) and neurogenic power in StO2 (p = 0.08, d = 0.86). There were no differences in neurogenic or myogenic LFO power for rCMRO2 (both p ≥ 0.3, d = 0.16).
Discussion: Participants with MCI have lower oscillatory power in cerebral microvascular perfusion but greater power in cerebral oxygenation. Interestingly, these opposing responses counteract each other, resulting in similar metabolic oscillations, which suggests potential adaptations that occur to support neural metabolism in people with MCI. Immediate future work will analyze macrovascular perfusion and blood pressure oscillations to understand systemic differences.
Cell migration is a fundamental process in various physiological scenarios such as cancer metastasis, wound healing, immune responses, and embryonic development. Among environmental cues, physical factors, especially the electric field (EF), have been widely demonstrated to guide the migration of various cell types. EF-guided cell migration, termed ‘electrotaxis’, has traditionally been studied in vitro using contact-based direct current (DC) or alternating current (AC) EFs, applied by placing electrodes directly in the media. More recently, non-contact AC EF-guided electrotaxis has also been explored. Since DC EF is closer to physiological conditions, the availability of non-contact, wireless DC EF-guided electrotaxis would be highly valuable. In this study, we developed a customizable experimental platform based on a parallel plate capacitor that facilitates the use of non-contact DC EFs to guide cell migration. COMSOL Multiphysics modeling shows that our platform can generate a relatively uniform EF in the center region of the cell chamber. This uniformity is important as it allows for more consistency and reproducibility of the experimental results. The design of the parallel plate capacitor apparatus allows for complete customization during use, including the flexibility to adjust the distance between electrode plates, removable petri-dish holders, and seamless integration with an optical microscope for live cell imaging. The developed platform was validated with several cell types, including human metastatic breast cancer cells and human peripheral blood immune cells. With the developed platform, interesting cell migratory behaviors were observed through various quantitative analyses of time-lapse cell migration image data. We have also begun to further explore the mechanism behind non-contact DC EF-guided electrotaxis.
The Electron-Ion Collider (EIC) is envisioned as an experimental facility to investigate gluons in nucleons and nuclei, offering insights into their structure and interactions. The Electron-Proton/Ion Collider Experiment (ePIC) Collaboration was formed to design, build, and operate the EIC project detector, which will be the first experiment at the collider. The unique physics goals at the EIC necessitate specific design considerations for the electromagnetic calorimeter in the barrel region of ePIC. Precise measurements of electron energy and shower profiles are crucial for effectively distinguishing electrons from background pions in Deep Inelastic Scattering processes at high $Q^2$ within the barrel region. Furthermore, the calorimeter must accurately gauge the energy and coordinates of photons from processes such as Deeply Virtual Compton Scattering, while identifying photon pairs from $\pi^0$ decays.
In this presentation, I will discuss the design of the Barrel Imaging Calorimeter of ePIC. Our hybrid approach combines scintillating fibers embedded in lead with imaging calorimetry based on AstroPix sensors, a low-power monolithic active pixel sensor. Through comprehensive simulations, we have tested the calorimeter design against the key requirements outlined in the EIC Yellow Report. I will focus on the anticipated performance of the calorimeter, detailing progress in design and prototyping. Additionally, I will provide insights into the development timeline and collaborative efforts involved in this endeavor.
The Electron-Ion Collider (EIC) is a new US$2.5B particle collider facility to be built at Brookhaven National Laboratory (BNL), on Long Island, New York, by the US Department of Energy (US-DOE). The EIC is the next discovery machine offering high science impact but with significant technical challenges. In the 2022–2026 Canadian Subatomic Physics Long Range Plan, the community named the EIC as a “flagship program with broad outcomes.” Similar to Canadian involvement in other large international science projects of global scale like the High Luminosity upgrade at CERN, we anticipate delivering key enabling components, expanding on existing Canadian strengths in particle accelerator technology. Canada, through expertise at TRIUMF, has significant relevant experience in superconducting radio-frequency (SRF) technology. Through discussions with the EIC, we have identified an in-kind contribution with high technical complexity that would provide a significant and challenging deliverable to the EIC project. The scope consists of the design and production of 394 MHz crab cavities and cryomodules that will increase the probability of collision of the circulating beams and are essential for reaching the scientific aims of the EIC. The present layout of the EIC foresees two 394 MHz cavities per interaction point (IP) per side for the Hadron Storage Ring (HSR), and one 394 MHz cavity per IP per side for the Electron Storage Ring (ESR). TRIUMF’s experience in SRF technology is already being exploited to supply similar cryomodules to the high luminosity upgrade project at CERN. The EIC deliverables will expand Canada’s core competencies in accelerator technology, benefiting fundamental research and industry. TRIUMF is presently engaged in design studies on the 394 MHz cavities. The presentation will briefly summarize the existing TRIUMF SRF program in supporting international accelerator projects and present the proposed contribution to the EIC.
In order to search for physics beyond the Standard Model at the precision frontier, it is sometimes essential to account for Next-to-Next-to-Leading Order (NNLO) theoretical corrections. Using the covariant approach, we calculated the full electroweak leptonic tensor up to quadratic (one-loop squared) and reducible two-loop NNLO ($\alpha^3$) order, which can be used for processes like $e^-p$ and $\mu^-p$ scattering relevant to the EIC, MOLLER (background studies), and MUSE experiments, respectively. In the covariant approach, we apply unitarity cuts to Feynman diagrams and separate them into leptonic and hadronic currents; hence, after squaring the matrix element, we can obtain the differential cross section up to NNLO.
In this presentation, I will briefly review the covariant approach and provide our latest results for the quadratic and reducible two-loop QED and electroweak corrections in the case of the $e^-p$ scattering process.
One of the unique aspects of the Electron-Ion Collider (EIC) detectors is the extensive integration of the far-forward and far-backward detectors with the EIC ring components. This is based primarily on experience from the only prior electron-proton collider, HERA, where far-forward detector infrastructure was only partially installed initially, and it was difficult to install highly efficient and hermetic detector coverage as the needs of the physics program evolved. In contrast, the ePIC detector is envisaged to have a highly sophisticated Zero Degree Calorimeter (ZDC) far downstream of the interaction region, supplemented with tracking and calorimetry $inside$ the first downstream dipole, the B0 detector, and Roman pots. The talk will present a summary of feasibility studies utilizing the $\pi^+$ and $K^+$ deep exclusive meson production reactions. These provide well-defined but challenging final states that test the far forward event reconstruction, and shed vital information on the detector requirements needed to deliver the physics program. The $p(e,e'\pi^+n)$ reaction reconstruction is relatively straightforward, but the $K^+$ reactions are particularly challenging, as they involve the reconstruction of both 4 and 5 final particle states, $p(e,e'K^+)\Lambda/\Sigma^0$, where the hyperon decays into the far forward detectors via $\Lambda(\Sigma^0)\rightarrow p\pi^-(p\pi^-\gamma)$ or $\Lambda(\Sigma^0)\rightarrow n\pi^0(n\pi^0\gamma)$.
Quantum information processing, at its very core, is effected through unitary transformations applied to states on the Bloch sphere, the standard geometric realization of a two-level, single-qubit system. That said, to a geometer, it may be natural to replace the original Hilbert space of the problem, which is a finite-dimensional vector space, with a finite-rank Hermitian vector bundle, through which unitary transformations are replaced very naturally with parallel transport along a connection. This imparts new degrees of freedom into the generation of quantum gates. A new approach to quantum matter — relying upon exotic hyperbolic geometries — that has emerged in my work over the past half decade with mathematicians, theoretical physicists, and experimentalists suggests that this setup may be achievable as an actual computing platform. I'll describe these developments, and there will be lots of pictures.
The resource theories of separable entanglement, non-positive partial transpose entanglement, magic, and imaginarity share an interesting property: an operation is free if and only if its renormalized Choi matrix is a free state. We refer to resource theories exhibiting this property as Choi-defined resource theories. We demonstrate how and under what conditions one can construct a Choi-defined resource theory, and we prove that when such a construction is possible, the free operations are all and only the completely resource non-generating operations.
The time-dependent Schrödinger equation in one dimension has a remarkable class of shape-preserving solutions that are not widely appreciated. Important examples are the 1954 Senitzky coherent states, harmonic oscillator solutions that offset the stationary states by classical harmonic motion. Another solution is the Airy beam, found by Berry and Balazs in 1979. It has accelerating features in the absence of an external force. Although these solutions are very different, we show that they share many important properties. Furthermore, we show that they belong to a more general class of form-preserving (solitonish) wave functions. We conclude with an analysis of their dynamics in phase space with their Wigner functions.
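For orientation, the accelerating Airy solution is commonly quoted in the form (standard notation; the constant $B$ sets the Airy scale)
\[
\psi(x,t)=\mathrm{Ai}\!\left[\frac{B}{\hbar^{2/3}}\left(x-\frac{B^{3}t^{2}}{4m^{2}}\right)\right]\exp\!\left[\frac{iB^{3}t}{2m\hbar}\left(x-\frac{B^{3}t^{2}}{6m^{2}}\right)\right],
\]
so the Airy features accelerate along $x = B^{3}t^{2}/4m^{2}$ even though no external force acts.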
Finding the ground-state energy of many-body lattice systems is exponentially costly due to the size of the Hilbert space, making exact diagonalization impractical. Ground-state wave functions satisfying the area law of entanglement entropy can be efficiently expressed as matrix product states (MPS) for local, gapped Hamiltonians. The extension to a bundled matrix product state describes excitations, but a formal proof has been lacking despite excellent performance in practical computation. We provide a formal proof of this claim. We define a bundled density matrix as a set of independent density matrices which are all written in a common (truncated) basis. We demonstrate that the truncation error is a practical metric that determines how well an excitation is described in a given basis common to all density matrices. We go on to demonstrate that states with volume-law entanglement are not necessarily more costly to include in the bundle. The same is true for gapless systems if sufficiently many lower-energy solutions are already present. This result implies that bundled MPSs can describe low-energy excitations without significantly increasing the bond dimension over the cost of a ground-state calculation, subject to some conditions that we explain.
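As a generic illustration of the truncation-error metric (a random bipartite state in Python, not one of the systems studied): the error is the Schmidt weight discarded when a state is truncated to bond dimension $\chi$.

```python
import numpy as np

# Truncation error of a bipartite state kept to bond dimension chi:
# the total squared Schmidt coefficient (probability weight) discarded.
rng = np.random.default_rng(0)
psi = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
psi /= np.linalg.norm(psi)                  # normalized bipartite state

s = np.linalg.svd(psi, compute_uv=False)    # Schmidt coefficients
chi = 8
truncation_error = np.sum(s[chi:] ** 2)     # discarded probability weight
print(f"truncation error at chi={chi}: {truncation_error:.3e}")
```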
Fast ignition in Inertial Confinement Fusion (ICF) is an important technique to enhance the coupling efficiency of the laser to the core [1]. One of the primary challenges faced in fast ignition is electron divergence, which leads to reduced laser-core coupling [2]. A key solution to this problem is the generation of intense megagauss magnetic fields to guide the ignition electrons, resulting in an improvement in the energy coupling efficiency of the laser with the compressed fuel. Capacitor coils present themselves as excellent candidates for producing magnetic pulses of approximately 0.1-0.5 kT with a duration of around 5 ns, driven by high-energy, high-intensity (on the order of a few $10^{15}$ W/cm$^2$) nanosecond laser pulses [3-4]. At the University of Alberta, we have characterized gas jet nozzle targets to investigate the instantaneous magnetic fields produced by capacitor coils, based on high-resolution measurements of Zeeman splitting. For optimal Zeeman splitting, plasma conditions such as temperature and density should be controlled to minimize broadening and maximize the brightness of the spectral lines. We explore the response of the UV spectral line C III 229.78 nm ($1s^2 2s2p$-$1s^2 2p^2$) via modelling and experiments under various spatiotemporal plasma conditions. The aim is to identify optimal plasma conditions that avoid the large line broadening, due to high plasma density and temperature, which can exceed the Zeeman splitting.
Recently, orbital angular momentum (OAM) beams have been demonstrated at relativistic intensities at several high-power laser facilities around the world using off-axis spiral phase mirrors. The additional angular momentum carried by OAM beams, even when linearly polarized, introduces a new control parameter in laser-plasma interactions and has shown promise for introducing new and exciting phenomena not possible with a standard Gaussian beam.
Of particular interest is the relativistic inverse Faraday effect, whereby laser angular momentum is absorbed by a plasma, generating large axial magnetic fields collinear with the laser k-vector. Our recent work has demonstrated that magnetic fields on the order of hundreds of tesla, extending hundreds of microns and lasting on the order of 10 picoseconds, can be generated with laser powers of less than 5 terawatts. In this work we explore this phenomenon through theory and simulations, and present results from a recent campaign at the COMET laser at Lawrence Livermore National Laboratory in which we used a linearly polarized Laguerre-Gaussian laser to drive such magnetic fields for the first time in the laboratory. Experimental results will be compared and validated against theory and simulations.
Betatron x-rays from a laser wakefield accelerator provide a new avenue for high-resolution, high-throughput radiography of dense materials. Here, we demonstrate the optimization of betatron x-rays for high-throughput x-ray imaging of metal alloys at the laser repetition rate of 2.5 Hz. Using the Advanced Laser Light Source in Varennes, QC, we characterized the x-ray energy spectrum, spatial resolution, beam stability, and emission length from helium, nitrogen, and mixed gas (99.5% He, 0.5% N) targets to determine the conditions for optimized imaging quality with minimized acquisition time. The optimized betatron x-ray source at 2.5 Hz was used for high-resolution imaging of micrometer-scale defects in additively manufactured metal alloys, demonstrating the potential of these sources for high-throughput data collection, accelerating the characterization of complex mechanical processes in these materials.
Cold plasma technology finds diverse applications spanning microfabrication, medicine, agriculture, and surface decontamination. The precision required in these applications usually necessitates tight control over the electric field of plasma sources, allowing for tailored targeting of specific chemical pathways. To determine the electric field, high-resolution detection techniques are essential for time- and spatially-resolved diagnostics. We propose to use electric field-induced second harmonic generation (E-FISH), a well-established non-perturbative technique, for measuring the amplitude and orientation of cold atmospheric plasma electric fields. Although E-FISH allows for good and tunable time resolution, it has been shown to present some issues with spatial resolution and sensitivity. While spatial resolution can be improved by overlapping two non-collinear optical beams, the interaction volume is then greatly reduced, leading to a significant loss of signal. To overcome this signal reduction, coherent Amplification of Cross-beam E-FISH (ACE-FISH) is introduced, mixing the weak E-FISH signal with a phase-locked bright local oscillator. The enhancement of the signal is demonstrated by introducing the local oscillator, and the polarity of the electric field is determined through the phase of the homodyne signal. In a groundbreaking application, we employ ACE-FISH to measure, for the first time, the magnitude and direction of the electric field in a cold atmospheric-pressure plasma jet, which dynamically follows the profile of the applied bias current. The ACE-FISH method not only overcomes spatial resolution challenges but also enhances sensitivity, presenting a promising avenue for improved diagnostics and applications across various domains of cold plasma technology [1-2].
[1] J.-B. Billeau, P. Cusson, A. Dogariu, A. Morozov, D. V. Seletskiy, and S. Reuter, “Coherent homodyne detection for amplified crossed-beam electric-field induced second harmonic (ACE-FISH),” Applied Optics, (Unpublished), 2023.
[2] J. Hogue, P. Cusson, M. Meunier, D. V. Seletskiy, and S. Reuter, “Sensitive detection of electric field-induced second harmonic signals,” Optics Letters, vol. 48, no. 17, p. 4601, Aug. 2023.
The nontrivial topological features in non-Hermitian systems provide promising pathways to achieve robust physical behaviors in classical or quantum open systems. Recent theoretical work discovered that the braid group characterizes the topology of non-Hermitian periodic systems.
In this talk, I will present our experimental demonstrations of the topological braiding of non-Hermitian band energies, achieved by implementing non-Hermitian lattice Hamiltonians along a frequency synthetic dimension formed in coupled ring resonators undergoing simultaneous phase and amplitude modulations. With two or more non-Hermitian bands, the system can be topologically classified by nontrivial braid groups. We demonstrated such braid-group topology with two energy bands braiding around each other, forming nontrivial knots or links. I will also show how such braid-group topology can be theoretically generalized to two and three dimensions. Finally, I will show how such non-Hermitian topology can manifest in the dynamical matrices describing bosonic quadratic systems associated with the squeezing of light, where our latest results reveal a highly intricate non-Hermitian degeneracy structure that can be classified as the swallowtail catastrophe.
The enhancement of light-matter interaction through localized surface plasmon resonances (LSPRs) in heterostructures of noble metal and copper sulfide nanoparticles has attracted wide attention. Higher-order nonlinear processes have also gained considerable interest for their efficient enhancement of harmonic generation in harmonically resonant heterostructures. In this work, a theory of fourth-harmonic generation (4HG) and fifth-harmonic generation (5HG) is developed for metallic nanohybrids. Theoretical calculations were performed for a triple-layer nanohybrid in an ensemble of Au, Al, and CuS metallic nanoparticles. When a probe field is applied to the nanohybrids, photons couple to the surface charges, forming surface plasmon polaritons (SPPs). The applied field also induces dipoles, and these dipoles interact with each other, giving rise to the dipole-dipole interaction (DDI). With the resulting SPP and DDI fields, the intensities of the output 4HG and 5HG fields are calculated using a coupled-mode formalism based on Maxwell's equations. The susceptibilities of the different metallic nanoparticles are determined by the density matrix method at their localized SPP resonance frequencies. It is found that the 4HG and 5HG intensities depend on the fourth- and fifth-order magnetic susceptibilities. In the presence of SPP and DDI, the light-matter interaction is significantly enhanced by the coupling of the LSPRs. The output 4HG and 5HG intensities of the Al/Au/CuS triple-layer nanohybrids formed by the coupled LSPRs are calculated and compared with experimental data, showing consistency with the theoretical model. The findings illustrate the effectiveness of producing higher harmonic generation within resonant plasmonic structures. This hybrid system can also be applied to the manufacture of optical nano-switching devices.
We report the observation of frequency nonlinearity during amplitude stabilization of a gain-embedded resonator that was previously interpreted as a van der Pol oscillator. Our investigation reveals that this specific nonlinear oscillation is more accurately described by the van der Pol-Duffing oscillator model. We initially observed this phenomenon in a gain-embedded circuit oscillator and noted bistable behaviour upon coupling with a damped resonance. Then, in a gain-embedded cavity, we experimentally verified this nonlinear phenomenon. The bistable behaviour of the cavity-magnonic polariton is well fitted by this van der Pol-Duffing model.
The SGM (200-2000 eV) and SXRMB (1.7-10 keV) spectroscopy beamlines at the Canadian Light Source allow for a variety of novel in-situ and operando measurement techniques. This talk will cover the experimental data acquisition modes available at both beamlines and highlight the ways in which their unique capabilities allow for answering specific scientific questions. The emphasis will be on showcasing how spectroscopy can be used in a myriad of ways to address important topics in environmental and materials science.
Berry curvature manifests as current responses to applied electric fields. When time reversal is broken, a Berry curvature ''monopole'' gives rise to a Hall current that is proportional to the applied field. When time reversal is preserved, a Berry curvature ''dipole'' may result in a Hall current that is second order in the applied field. In this work, we examine a current response arising from a Berry curvature ''quadrupole'', which arises at third order in the applied field. However, it is the leading response when the following symmetry conditions are met: the material must not be separately symmetric under time reversal ($ \mathcal{K} $) or four-fold rotation ($C_{4n}$), but it must be invariant under the combination of these two operations ($C_{4n} \mathcal{K}$). This condition is realized in altermagnets and in certain magnetically ordered materials. We argue that shining light on the material is a particularly suitable approach for observing this effect: in the presence of a static electric field, light gives rise to a dc electric current that can be easily measured.
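Schematically (illustrative notation, not the paper's), successive Berry-curvature moments enter the Hall-type response at successive orders in the applied field,
\[
j_a=\underbrace{\sigma^{(1)}_{ab}E_b}_{\text{monopole}}
+\underbrace{\sigma^{(2)}_{abc}E_bE_c}_{\text{dipole}}
+\underbrace{\sigma^{(3)}_{abcd}E_bE_cE_d}_{\text{quadrupole}}+\cdots,
\]
with the quadrupole term becoming the leading contribution when the $C_{4n}\mathcal{K}$ symmetry conditions above force the first two to vanish.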
An electric dipole moving in a magnetic field acquires a geometric phase known as the He-McKellar-Wilkens (HMW) phase, which is the electromagnetic dual of the Aharonov-Casher phase. The HMW phase was first measured in 2012 using an atom interferometer [1]. In that experiment the electric and magnetic fields were static. We propose a modification where these fields are generated by laser beams.
[1] Lepoutre et al., PRL 109, 120404 (2012)
We compare the results of Electromagnetically Induced Transparency (EIT) and Four-Wave Mixing (4WM) in both thermal rubidium vapor and cold-atom-based systems. Our aim is to balance simplicity and fidelity in systems that aim to produce atom-resonant quantum states of light. We discuss the construction of a Magneto-Optical Trap (MOT) on an extremely low budget and strategies for implementing a cold atom system with limited resources. As a next step, we plan to employ a cavity-enhanced 4WM system with minimal optical power to generate squeezed quantum states. In order to achieve the required phase stability between the involved fields, we have tested both electronic phase-lock systems and a sideband approach using an electro-optic modulator. In the proposed work, a cavity is locked to a laser which in turn is locked to an atomic ensemble, enabling strong photon-atom interactions.
We consider a dilute gas of bosons in a slowly rotating toroidal trap, focusing on the two-mode regime consisting of a non-rotating mode and a rotating mode corresponding to a single vortex. This system undergoes a symmetry-breaking transition as the ratio of interactions to `disorder potential' is varied and spontaneously chooses one of the two modes, an example of macroscopic quantum self-trapping. Analyzing elementary excitations around the BEC using Bogoliubov theory, we find regions of energetic instability with negative excitation frequencies, as well as dynamical instabilities, where excitations have complex frequencies. For the latter, amplitudes grow or decay exponentially. Instabilities can occur at bifurcations, where the classical field theory provided by the Gross-Pitaevskii equation predicts that two or more solutions appear or disappear. These complex eigenvalues confirm that the Bogoliubov Hamiltonian is non-Hermitian, as picking a phase for the BEC breaks U(1) symmetry. In non-Hermitian quantum theory, the requirement of self-adjointness is replaced by the less stringent condition of PT symmetry, which still ensures that Hamiltonians exhibit real and positive spectra if PT symmetry is unbroken. We are investigating how the occurrence of the dynamical instability is connected to a PT-symmetry-breaking phase transition.
Coherent anti-Stokes Raman scattering (CARS) is a nonlinear optical process that is used for spectroscopy and imaging. The stimulated CARS signal is orders of magnitude stronger than in spontaneous Raman scattering, enabling CARS to achieve substantially faster acquisition speeds. This has positioned CARS as a desirable alternative to spontaneous Raman scattering as a contrast mechanism for chemical imaging. However, CARS suffers from the presence of a so-called non-resonant background (NRB) that distorts peak shapes and intensities, thus hindering the broader adoption of this powerful technique. The NRB makes quantitative analysis of CARS spectra nontrivial and reduces image contrast. NRB removal techniques that retrieve Raman-like signals from CARS spectra have thus become a central focus of the CARS literature. We present an original and accessible approach to NRB removal based on gradient boosting decision trees.
Gradient boosting decision trees are increasingly being used to win machine learning competitions, demonstrating their potential to compete with neural networks. Here, we apply the open-source gradient boosting framework XGBoost to NRB removal. A dataset of 100,000 stochastically generated CARS (input) and Raman-like (label) spectra was used for the training of the decision trees with a train-validation split of 80/20, while a dataset of 1000 independently generated pairs of spectra was used for testing. After hyperparameter tuning, the best decision tree yielded a Pearson correlation coefficient of r = .97 (p < .001) between retrieved and ground-truth Raman-like spectra, corresponding to a mean squared error (MSE) of 0.00047. When the trained model is applied to experimental CARS spectra obtained from samples with well-known Raman peaks, the model reproduces all of the expected Raman peaks for each of the samples that were tested. Our results establish gradient boosting decision trees as an effective tool for CARS NRB removal in lieu of neural networks.
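A Python sketch of the training setup described above (placeholder data and example hyperparameters; the actual spectra are generated stochastically and the hyperparameters found by tuning):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Placeholder data: X holds simulated CARS spectra (inputs), Y the matching
# Raman-like (NRB-free) label spectra. Sizes here are illustrative; the
# actual work used 100,000 stochastically generated training pairs.
rng = np.random.default_rng(1)
n_spectra, n_channels = 1000, 640
X = rng.random((n_spectra, n_channels))   # stand-in CARS spectra
Y = rng.random((n_spectra, n_channels))   # stand-in Raman-like labels

# 80/20 train-validation split, as in the abstract.
X_tr, X_va, Y_tr, Y_va = train_test_split(X, Y, test_size=0.2, random_state=0)

# Recent XGBoost versions accept 2D targets (multi-output regression);
# hyperparameters below are examples only.
model = XGBRegressor(n_estimators=200, max_depth=6, learning_rate=0.1)
model.fit(X_tr, Y_tr)

mse = float(np.mean((model.predict(X_va) - Y_va) ** 2))
print(f"validation MSE: {mse:.5f}")
```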
The Abraham-Minkowski controversy refers to the ambiguity in defining the momentum of light within a dielectric medium. The choice one has in the partitioning of the total stress energy tensor of a system into a “material” portion and an “electromagnetic” portion has historically led to vigorous debate. The difference between Abraham’s formulation of the momentum density of light in a medium and Minkowski’s version of the same quantity leads to either the presence or absence, respectively, of the so-called “Abraham force” at the level of the equations of motion. We propose an atom-interference experiment for measuring the quantum geometric phase which ultimately gives rise to the Abraham force.
NSERC Chairs for Inclusion in Science and Engineering brings together three leaders to change the face of STEM in the Atlantic region: Dr. Svetlana Barkanova (Physics, Memorial University of Newfoundland), Dr. Kevin Hewitt (Physics, Dalhousie University), and Dr. Stephanie MacQuarrie (Chemistry, Cape Breton University). CISE-Atlantic will present an overview of our initiatives and focus on two important directions: incorporating and accounting for outreach and EDI in the Tenure and Promotion (T&P) process, including leading a call to action to recognize service to the community in T&P; and "Physics in Rural Classrooms", a key initiative the team is leading. In Atlantic Canada, some rural communities have no physics teachers at all, or the teachers assigned to science or physics classes may struggle with some of the physics topics. By directly connecting presenters from all over Canada with remote schools, we are providing a welcome resource to teachers and exciting role models to students. Where relevant, the curriculum may refer to regional priorities and include Indigenous knowledge. The program offers four online guest talks per year addressing specific curriculum for students in Grades 7 to 11. The talk will outline the motivation, logistics, and possible ways for our physics community to engage in the program.
The Tokai-to-Kamioka (T2K) experiment consists of an accelerator complex that collides protons on a graphite target, generating mesons which decay to neutrinos, and detects these neutrinos at both a near detector and a far detector, Super-Kamiokande (SK), 295 km away. SK is a water Cherenkov detector: Cherenkov radiation from charged particles is detected by roughly 11000 photomultiplier tubes, the output of which is reconstructed to infer particle type and kinematics.
The current reconstruction algorithm in SK, fiTQun, uses classical likelihood maximization to estimate particle type and kinematics from Cherenkov rings produced when a neutrino interaction produces a charged lepton or hadron. This reconstruction algorithm has excellent performance in the most important T2K metrics - for example, separating electron-neutrino from muon-neutrino events - but improvements to charged pion separation from muons, to vertex and momentum reconstruction, and to computation time would greatly benefit many T2K and SK analyses.
The Water Cherenkov Machine Learning (WatChMaL) collaboration seeks to update classical reconstruction processes with machine learning. For SK data, investigations have centered on using either ResNet or PointNet architectures for both particle identification and vertex and momentum reconstruction. This talk will outline the data processing which the SK data must undergo to ensure adequate training, challenges in adapting state-of-the-art machine learning algorithms to our target problem, and current performance and comparisons with the classical algorithm. Future steps, including the potential of adversarial networks to mitigate detector systematics in Super-Kamiokande, will also be discussed. Finally, other efforts in the WatChMaL collaboration will be described, including those on upcoming neutrino detectors.
T2K (Tokai to Kamioka) is a long-baseline neutrino experiment designed to investigate neutrino oscillations. The experiment employs a neutrino beam generated by colliding a proton beam with a graphite target. This target area is enclosed within a helium vessel containing the Optical Transition Radiation (OTR) monitor. The OTR monitor plays a crucial role in measuring the profile and position of the proton beam, essential for characterizing neutrino production and ensuring target protection. However, we observe a discrepancy between the beam width measured by the upstream beam monitors and OTR which could be caused by a broad background present in OTR images. We hypothesize this background light originates from scintillation induced by the proton beam. In order to understand the background in OTR images, we have built a Geant4 simulation to test two scintillation mechanisms. We model primary scintillation from excitation of the helium gas by the proton beam as well as secondary scintillation from the proton beam interacting with the upstream collimator and target. By confirming Geant4 simulation results through comparison with ray-tracing studies and experimental data we have developed an accurate model of the background light essential for improving OTR measurements. Minimizing uncertainty in OTR light production mechanisms is critical for fine-tuning the proton beam orbit at the onset of the T2K experiment, while also providing significant insights for physics analysis.
The DEAP-3600 dark matter experiment is at the forefront of our efforts to uncover the mysteries of the universe's dark matter abundance. In this presentation, we explore significant developments in energy calibration techniques used within the DEAP-3600 experiment, showcasing an innovative approach that uses high-energy gamma rays from both the background spectrum and the AmBe calibration spectrum. This new method not only improves the precision of energy calibration but also strengthens the experiment's ability to search for dark matter particles.
We demonstrate the effectiveness of using high-energy gamma rays from the background spectrum to refine our understanding of the detector's response across a wider energy range, thus enhancing the DEAP-3600 experiment's capacity to identify potential dark matter interactions. Furthermore, it enables us to extend the utility of the detector to other rare event searches, including searches for 5.5 MeV solar axions and boron-8 neutrinos, broadening the scientific impact of our work.
This presentation will examine these alternative energy calibration techniques, providing insights into the recent results achieved by the DEAP-3600 experiment. Furthermore, we will explore the promising horizons offered by our detector upgrade. By doing so, we aim to emphasize the significance of these developments in advancing our understanding of dark matter.
Cryogenic (O(mK)) technologies are used for a variety of applications in astroparticle, nuclear, and quantum physics. The Cryogenic Underground TEst facility (CUTE) at SNOLAB provides a low-background and vibrationally isolated environment for testing and operating these future devices. The experimental stage of CUTE can reach a base temperature of ~12 mK and can hold a payload of up to 20 kg. The facility has been used to test detectors for SuperCDMS and is transitioning to become a SNOLAB user facility. This talk will discuss the main design features and operating parameters of CUTE, as well as the current and future status and availability of the facility.
In this presentation, we introduce an innovative method for achieving comprehensive renormalization of observables, such as theoretical predictions for cross sections and decay rates in particle physics. Despite previous efforts to address infinities through renormalization techniques, theoretical expressions for observables still exhibit dependencies on arbitrary subtraction schemes and scales, preventing full renormalization. We propose a solution to this challenge by introducing the Principle of Observable Effective Matching (POEM), enabling us to attain both scale and scheme independence simultaneously. To demonstrate the effectiveness of this approach, we apply it to the total cross section of electron-positron to hadrons, utilizing 3- and 4-loop MS scheme expressions within perturbative Quantum Chromodynamics (pQCD). Through POEM and a process termed Effective Dynamical Renormalization, we achieve full renormalization of these expressions. Our resulting prediction, $1.052431^{+0.0006}_{-0.0006}$ at $Q = 31.6$ GeV, closely aligns with the experimental value of $R^{\text{exp}}_{e^+e^-} = 1.0527^{+0.005}_{-0.005}$, showcasing the efficacy of our method.
The nature of dark matter is one of the most important open questions in the Standard Model, and dark matter direct detection holds exciting promise of new physics. By operating state-of-the-art kilogram-scale detectors at millikelvin temperatures in one of the world's deepest laboratories, SuperCDMS SNOLAB will be sensitive to a large range of dark matter masses. From October 2023 to March 2024, one SuperCDMS tower, consisting of six High Voltage detectors, was deployed at the Cryogenic Underground TEst facility (CUTE). This marks the first time that the new-generation SuperCDMS detectors have been operated in an underground, low-background environment, allowing for a comprehensive detector performance study and possibly early science results. In this talk, I will detail the detector testing efforts and present our first findings about these detectors.
The incorporation of foreign atoms into low-dimensional materials such as graphene is of interest for many applications, including biosensing, super-capacitors, and electronic device fabrication. In such processes, controlling the nature of the foreign atom incorporation is a key challenge, as different moieties can contribute differently to doping and present different reactivities. With plasma processing increasingly requiring atomic-level precision, a detailed understanding of the mechanisms by which ions, electrons, reactive neutrals, excited species, and photons interact simultaneously with materials such as graphene has become more important than ever.
In recent years, we studied the interaction of low-pressure argon plasmas with polycrystalline graphene films grown by chemical vapor deposition. Spatially-resolved Raman spectroscopy conducted before and after each plasma treatment showed defect generation following a 0D defect curve, while the domain boundaries developed as 1D defects. Surprisingly, and contrary to common expectations of plasma-surface interactions, damage generation was slower at the grain boundaries than within the graphene grains, a behavior ascribed to a new preferential self-healing mechanism. Through judicious control of the properties of the flowing afterglow of a microwave N2 plasma, characterized by space-resolved optical emission spectroscopy, we further demonstrated an aromatic incorporation of nitrogen groups in graphene with minimal ion-induced damage. The use of both reactive neutral atoms and N2 excited states (mostly metastable states) was a radical departure from the state of the art in atomic manipulation, mainly because excited species can provide sufficient energy for the activation of adatom covalent incorporation while leaving the translational energy of both the impinging species and the low-dimensional materials undisturbed. A selective nitrogen doping due to preferential healing of plasma-generated defects near grain boundaries was also highlighted.
Very recently, a new setup was specifically designed to examine plasma-graphene interactions. In-plasma Raman spectrometry is used to monitor the evolution of selected Raman peaks over nine points of the graphene surface. On one hand, for high-energy ions, defect generation progressively rises with the ion dose, with no significant variations after ion irradiation. On the other hand, for very-low-energy ions, defect generation increases at a lower rate and then decreases over a very long time scale after ion irradiation. Such self-healing dynamics cannot be explained by a simple carbon adatom-vacancy annihilation. Using a 0D model, it is demonstrated that various mechanisms are in play, including carbon adatom trapping by Stone-Wales defects and dimerization. These mechanisms compete with the self-healing of graphene at room temperature, and they slow down the healing process. Such features are not observed at higher energies, for which carbon atoms are sputtered from the graphene surface, leaving no significant populations of carbon adatoms. We believe that these experiments can be used as building blocks to examine the formation of chemically doped graphene films in reactive plasmas using, for example, argon mixed with traces of either N- or B-bearing gases.
Tungsten-based materials are the currently favoured choice for the first-wall/Plasma Facing Components (PFC) in plasma fusion devices such as the ITER tokamak. The behaviour of tungsten-based materials under high-fluence ion bombardment is therefore highly relevant for fusion device engineering problems. The USask Plasma Immersion Ion Implantation (PIII) system has been optimized for high-fluence ion bombardment of candidate PFC materials. PIII can be used to simulate the high-fluence ion bombardment encountered in plasma fusion devices, and therefore provides a useful tool for PFC testing. This talk will discuss a recent study of tungsten-based materials (pure tungsten, W-Ni-Cu heavy alloy, and W-Ta) PIII-implanted with helium and deuterium. The post-implant analysis of these materials was carried out using synchrotron-based Grazing-Incidence X-ray Diffraction (GIXRD) and Grazing-Incidence X-ray Reflection (GIXRR) at the Canadian Light Source. These data reveal important aspects of the effect of helium and deuterium ion bombardment on tungsten-based PFC materials, and shed light on their suitability for fusion devices.
The magnetic-field-dependent fluorescence properties of NV$^{-}$ center defects embedded within a diamond matrix have made them a candidate for solid-state qubits for quantum computing as well as for magnetic field sensing. Microwave plasma assisted chemical vapor deposition (MPCVD) of diamond with \emph{in situ} nitrogen doping has provided reproducibility and uniformity in the production of NV$^{-}$ centers on multiple substrates [1]. What has yet to be understood is the impact of the nitrogen doping time on the MPCVD process and on the creation of NV$^{-}$ centers.
Analysis of the NV$^{-}$-containing diamond films has been carried out using Scanning Electron Microscopy (SEM), X-ray Diffraction (XRD), Raman spectroscopy, photoluminescence spectroscopy, and optical microscopy. In addition, calculated plasma parameters and models have been used to quantify the properties of the MPCVD process. This study investigates the effect of nitrogen doping time on the spectral lines associated with the 1333 cm$^{-1}$ diamond Raman peak, the 637 nm NV$^{-}$ photoluminescence peak, and the <111> and <220> diamond XRD peaks. This investigation aims to quantify a relationship between spectral peaks, NV$^{-}$ density, and nitrogen doping time in terms of MPCVD process parameters.
[1] H. A. Ejalonibu, G. E. Sarty, and M. P. Bradley, ``Optimal parameter(s) for the synthesis of nitrogen-vacancy (NV) centres in polycrystalline diamonds at low pressure", \emph{Journal of Materials Science: Materials in Electronics} (2019). https://link.springer.com/article/10.1007/s10854-019-01376-z
The vast majority of attempts at synthesizing novel two-dimensional (2D) materials have relied on growth methods that work under thermodynamic equilibrium conditions, such as chemical vapor deposition, because these techniques have proven successful in yielding a plethora of technologically attractive, albeit thermodynamically stable, 2D materials. Out-of-equilibrium synthesis techniques are used much more rarely for 2D materials, which limits the variety of 2D systems that can be obtained. For example, 2D tungsten semi-carbide (W2C) is a metallic quantum material that has been theoretically predicted but had yet to be experimentally demonstrated, because the corresponding full carbide (WC) is energetically favored under thermodynamic equilibrium conditions. Here, we report a novel dual-zone remote plasma deposition reactor specially conceived to grow 2D carbides out of thermodynamic equilibrium. For tungsten carbide, this has led to deposits with well-tuned ratios of W and C precursors, as demonstrated by optical emission spectroscopy (OES) of the plasma precursors, ultimately allowing us to obtain few-layer 2D W2C. In the second part of our talk, we will discuss the behavior of remote-plasma-grown W2C 2D crystals under strain, and their investigation with scanning tunneling microscopy (STM) and spectroscopy (STS). We show that, in agreement with theoretical predictions, plasma-grown W2C offers a tunable density of electronic states at the Fermi level, a property that may be uniquely suited for obtaining fractional quantum Hall effects, superconductivity, and quantum thermal transport. Collectively, our study points to the critical relevance of out-of-equilibrium remote-plasma techniques for the growth of unprecedented 2D materials.
I will present our recent progress in designing algorithms that depend on quantum-mechanical resources – superposition, interference, and entanglement – for the solution of computational problems. Combined, these algorithms cover a large variety of challenging computational tasks spanning combinatorial optimization, machine learning, and model counting. First, I will discuss an algorithm for combinatorial optimization based on stabilizer states and Clifford quantum circuits. The algorithm iteratively builds a quantum circuit that maps an initial easy-to-prepare state to approximate solutions of optimization problems. Since Clifford circuits can be efficiently simulated classically, the result is a classical quantum-inspired algorithm. We benchmark this algorithm on synthetic instances of two NP-hard problems, namely MAXCUT and the Sherrington-Kirkpatrick model, and observe performance competitive with established algorithms for the solution of these problems. Next, I will present a quantum machine learning (QML) model based on matchgate quantum circuits. This restricted class of quantum circuits is efficiently simulable classically through a mapping to free Majorana fermions. We apply our matchgate QML model to commonly studied datasets, including MNIST and UCI ML Breast Cancer Wisconsin (Diagnostic), and obtain better classification accuracy than corresponding unrestricted QML models. Finally, I will outline ongoing work on algorithms for hard problems in #P, the computational complexity class encompassing counting problems. These examples demonstrate that (a) using restricted quantum resources as an algorithmic design principle of classical algorithms may lead to significant advantages even without a quantum computer, and (b) the frontier of near-term quantum advantage may lie further in the future than anticipated by some.
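For reference, a minimal sketch of the two classical objective functions mentioned above (problem sizes and instances are illustrative; the Clifford-circuit optimizer itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def maxcut_value(adj, spins):
    """Weight of the cut induced by a +/-1 spin assignment:
    an edge (i, j) contributes adj[i, j] when the spins differ."""
    return 0.25 * np.sum(adj * (1 - np.outer(spins, spins)))

def sk_energy(J, spins):
    """Sherrington-Kirkpatrick energy H = -sum_{i<j} J_ij s_i s_j,
    for symmetric J with zero diagonal."""
    return -0.5 * spins @ J @ spins

n = 8
J = rng.normal(size=(n, n)) / np.sqrt(n)
J = np.triu(J, 1); J = J + J.T                  # SK couplings
adj = (rng.random((n, n)) < 0.5).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T        # random unweighted graph

spins = rng.choice([-1, 1], size=n)             # candidate solution
print(maxcut_value(adj, spins), sk_energy(J, spins))
```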
Biological systems need to react to stimuli over a broad spectrum of timescales. If and how this ability can emerge without external fine-tuning is a puzzle. This problem has previously been considered in discrete Markovian systems, where results from random matrix theory could be leveraged. Here, we consider a generic model for Markovian dynamics with parameters controlling the dynamic range of matrix elements via the uniformity and correlation of state transitions. Analytic predictions were obtained for critical parameter values at which transitions between random and non-random dynamics occur, before the model was applied to real data. The model was applied to electrocorticography data of monkeys at wakeful rest undergoing an anesthetic injection to induce sleep; an antagonist injection was then administered in order to bring the monkey back to wakefulness. These data were processed into discrete Markov models at regular time intervals throughout the task. The Markov models were then analyzed with respect to the uniformity and correlation of transition rates, as well as the resulting entropy and entropy rate measurements. The results were quantitatively understood in terms of the random model, and the brain activity was found to cross over a predicted critical regime. Moreover, the interplay between the uniformity and correlation parameters coincided with predictions of maintaining criticality across a task. The results are robust enough that the states of consciousness of the monkey were identifiable through parameter values, with sudden changes correlating with transitions between wakefulness and rest.
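A minimal sketch of the entropy-rate calculation for such a discrete Markov model, assuming a row-stochastic transition matrix estimated from the data (the 3-state matrix below is illustrative):

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of the row-stochastic matrix P with eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def entropy_rate(P):
    """Entropy rate h = -sum_i pi_i sum_j P_ij log2 P_ij (bits/step)."""
    pi = stationary_distribution(P)
    with np.errstate(divide="ignore"):
        logP = np.log2(P)
    logP[np.isneginf(logP)] = 0.0   # convention: 0 * log 0 := 0
    return -np.sum(pi[:, None] * P * logP)

P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])    # illustrative transition matrix
print(entropy_rate(P))
```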
We show how thin wall magnetic monopoles can exist in a false vacuum, hence the name false monopoles, and how they can trigger the decay of the false vacuum.
In physics, spacetime is always assumed to be a smooth $4$-manifold with a fixed (standard) differential structure. Two smooth $n$-manifolds are said to be exotic if they have the same topology but different differential structures. S. Donaldson showed that there exist exotic differential structures on $\mathbb{R}^4$. In the compact case, J. Milnor and M. Kervaire classified exotic differential structures on $n$-spheres $\mathbb{S}^n$. A fundamental question remains to be answered: do exotic differential structures on spacetime play any role in physics? The possibility of applications of exotic structures in physics was first suggested by E. Witten in his article "Global gravitational anomalies". Trying to give a physical meaning to exotic spheres, Witten conjectured that exotic $n$-spheres should be thought of as gravitational instantons in $n$-dimensional gravity and should give rise to gravitational solitons in $(n+1)$ dimensions. In this talk, we will address these questions in two steps. First, we construct Kaluza-Klein $SO(4)$-monopoles on Milnor's exotic $7$-spheres (solutions to the 7-dimensional Einstein equations with cosmological constant). Secondly, taking exotic $7$-spheres as models of spacetime, we address the physical effects of exotic smooth structures on the energy spectra of elementary particles. Finally, we discuss other possible applications of exotic $7$-spheres in other areas of physics.
A generally covariant gauge theory is presented which leads to the Gauss constraint but lacks both the Hamiltonian and spatial diffeomorphism constraints, and possesses local degrees of freedom. The canonical theory therefore resembles Yang-Mills theory without the Hamiltonian. We describe its observables, canonical quantization, and some generalizations.
Active matter is a term used to describe matter that is composed of a large number of self-propelled active ‘particles’ that individually convert stored or ambient energy into systematic motion. Examples include a flock of birds, a school of fish, or at smaller scales a suspension of bacteria or even the collective motion within a human cell. When viewed collectively, active matter is an out-of-equilibrium material. This talk focuses on active matter systems where the active particles are very small, for example bacteria or chemically active colloidal particles. The motion of small active particles in homogeneous Newtonian fluids has received considerable attention, with interest ranging from phoretic propulsion to biological locomotion, whereas studies on active bodies immersed in inhomogeneous fluids are comparatively scarce. In this talk I will show how the dynamics of active particles can be dramatically altered by the introduction of fluid inhomogeneity and discuss the effects of spatial variations of fluid density, viscosity, and other fluid complexity.
Enzymes are valuable because they catalyze reactions: by binding transiently, they greatly enhance the probability for "substrate" molecules to convert to "product" molecules. But do they receive a physical kick while this reaction is proceeding? This would make them substrate-driven nanomotors, or nanoscale active matter. Numerous fluorescence-based measurements (and a few others) say yes; several other measurements now say no!
We examine the diffusion of enzymes attached to nanoparticles (NPs) by multiple techniques. We also measure the enzyme activity of these enzyme-functional NPs. I will talk about the interesting behaviour of the enzyme activity of enzyme-functional NPs. And I might even answer the question in the title!
In recent years, there has been a surge of interest in minimally invasive medical techniques, with magnetic microrobots emerging as a promising avenue. These microrobots possess the remarkable ability to navigate through various mediums, including viscoelastic and non-Newtonian fluids, thereby facilitating targeted drug delivery and medical interventions. However, while many existing designs draw inspiration from micro-swimmers found in biological systems like bacteria and sperm, they often rely on a contact-based approach for payload transportation, which can complicate release at the intended site. Our project aimed to explore the potential of helical microrobots for non-contact delivery of drugs or cargo. We conducted a comprehensive analysis of the shape and geometric parameters of the helical microrobot, with a specific focus on its capacity to transport passive filaments. Through our examination, we propose a novel design comprising three sections with alternating handedness, including two pulling and one pushing microhelices, to enhance the capture and transportation of passive filaments in Newtonian fluids using a non-contact method. Furthermore, we simulated the process of capturing and transporting the passive filament and evaluated the functionality of the newly designed microrobot. Our findings offer valuable insights into the physics of helical microrobots and their potential applications in medical procedures and drug delivery. Additionally, the proposed non-contact approach for delivering filamentous cargo holds promise for the development of more efficient and effective microrobots in medical applications.
Molecular motors are nanoscale machines capable of transducing chemical energy into mechanical work. Inspired by biology, our transnational team has conceived different designs of artificial motors comprised of protein building blocks – proteins, because these are Nature's choice of such functional units. We have recently characterized the motility of one of these designs – the Lawnmower – and found that its dynamics demonstrate motor-like properties. I’ll describe the burnt-bridge ratchet principle of Lawnmower motility and our simulations and experiments that explore its motion.
Work in my group on this project was led by PhD graduate Chapin Korosec, with funding from NSERC.
Publication: Korosec et al., Nature Communications 15, 1511 (2024)
Muon capture is a nuclear weak process in which a negatively charged muon, initially in an atomic bound state, is captured by the atomic nucleus, reducing the atomic number by one and emitting a muon neutrino. Thanks to the high momentum transfer involved in the process, it is one of the most promising probes of the yet-unobserved neutrinoless double-beta decay. To help the planned muon-capture experiments, reliable theory predictions are of paramount importance.
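Schematically, for a nucleus with mass number A and atomic number Z, the capture process reads (standard notation, written out here for reference):

```latex
\mu^- + {}^{A}_{Z}\mathrm{X} \;\longrightarrow\; {}^{A}_{Z-1}\mathrm{Y} + \nu_\mu
```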
To this end, I will discuss recent progress in ab initio studies on muon capture in light nuclei, focusing in particular on the ab initio no-core shell model. These systematically improvable calculations are based on nuclear interactions derived from chiral effective field theory. The computed rates are found to be in good agreement with available experimental counterparts, motivating future experimental and theoretical explorations in light nuclei.
Recent analysis of Fermi decays by C.Y. Seng and M. Gorshteyn and the corresponding $V_{ud}$ determination have revealed a degree of tension with Cabibbo-Kobayashi-Maskawa (CKM) matrix unitarity, confirmation of which would indicate several potential deficiencies within the Standard Model (SM) weak sector. Extraction of $V_{ud}$ requires electroweak radiative corrections (EWRC) from theory to be applied to experimentally obtained $ft$-values. Novel calculations of corrections sensitive to hadronic structure, i.e., the $\gamma W$-box, are at the heart of the recent tension. Moreover, to further improve on the extraction of $V_{ud}$, a modern and consistent treatment of the two nuclear structure dependent corrections is critical. These corrections are (i) $\delta_C$, the isospin symmetry breaking correction (ii) and $\delta_{NS}$, the EWRC representing evaluation of the $\gamma W$-box on a nucleus. Preliminary estimations of $\delta_{NS}$ have been made in the aforementioned analysis, however, the approach cannot include effects from low-lying nuclear states which require a true many-body treatment. Via collaboration with C.Y. Seng and M. Gorshteyn and use of the Lanczos subspace method, these corrections can be computed in ab initio nuclear theory for the first time. We apply the no-core shell model (NCSM), a nonrelativistic quantum many-body theory for describing low-lying bound states of $s$- and $p$-shell nuclei starting solely from nuclear interactions. We will present preliminary results for $\delta_{NS}$ and $\delta_{C}$ determined in the NCSM for the $^{10}\text{C} \rightarrow {}^{10}\text{B}$ beta transition, with the eventual goal of extending the calculations to $^{14}\text{O} \rightarrow {}^{14}\text{N}$ and $^{18}\text{Ne} \rightarrow {}^{18}\text{F}$.
In recent years, there has been a dramatic improvement in our ability to probe the nuclear many-body problem, due to the availability of several different powerful many-body techniques and sophisticated nuclear interactions derived from chiral effective field theory (EFT). In a recent paper [1], we investigated the perturbativeness of these chiral EFT interactions in a many-body context, using quantum Monte Carlo (QMC). QMC techniques have been used to probe a variety of nuclear many-body systems, ranging from light nuclei to neutron matter [2]. There are a variety of ways in which the Monte Carlo method can be applied to the many-body problem. The diffusion Monte Carlo method, which propagates a many-body system through imaginary time, can be used in the continuum, where it is often improved with the application of auxiliary fields to handle complicated nuclear correlations, as well as in a lattice formalism, where particles are allowed to hop between lattice sites and interact with each other when they occupy the same site. In a recent publication, we began investigating how this lattice formulation, which is typically used to study condensed matter systems, can be applied to systems of interest to nuclear physics [3]. This presentation will discuss recent work involving the application of QMC approaches to the nuclear many-body problem, as well as a further discussion of how these methods can be improved to help expand our understanding of nuclear physics.
[1] R. Curry, J. E. Lynn, K. E. Schmidt, and A. Gezerlis, Second-Order Perturbation Theory in Continuum Quantum Monte Carlo Calculations, Phys. Rev. Res. 5, L042021 (2023).
[2] J. Carlson et al., Quantum Monte Carlo Methods for Nuclear Physics, Rev. Mod. Phys. 87, 1067 (2015).
[3] R. Curry, J. Dissanayake, S. Gandolfi, and A. Gezerlis, Auxiliary Field Quantum Monte Carlo for Nuclear Physics on the Lattice, arXiv:2310.01504.
Anomalies in the systematics of nuclear properties challenge our understanding of the underlying nuclear structure. One such anomaly emerges in the Zr isotopic chain as a dramatic ground-state shape change, abruptly shifting from spherical to deformed at N=60. Only a few state-of-the-art theoretical models have successfully reproduced this deformation onset in $^{100}$Zr and helped to establish the shape coexistence in lighter Zr isotopes [1, 2]. Of particular interest is $^{98}$Zr, a transitional nucleus lying at the interface between the spherical and deformed phases. Extensive experimental and theoretical research efforts have been made to study the shape coexistence phenomena in this isotope [3,4,5,6]. Although they provide an overall understanding of $^{98}$Zr's nuclear structure, uncertainties remain in interpreting its higher-lying bands. Specifically, two recent studies utilizing Monte Carlo Shell Model (MCSM) [3] and Interacting Boson Model with configuration mixing (IBM-CM) [4] calculations have presented conflicting interpretations. The MCSM predicts multiple shape coexistence with deformed band structures, whereas the IBM-CM favours multiphonon-like structures with configuration mixing.
To address these uncertainties, a $\beta$-decay experiment was conducted at the TRIUMF-ISAC facility utilizing the 8$\pi$ spectrometer with $\beta$-particle detectors. The high-quality, high-statistics data obtained enabled the determination of branching ratios for weak transitions, which are crucial for assigning band structures. In particular, the key 155-keV $2_{2}^{+} \rightarrow 0_{3}^{+}$ transition was observed, and its branching ratio measured, permitting the $B$(E2) value to be determined. Additionally, $\gamma$-$\gamma$ angular correlation measurements enabled the determination of both spin assignments and mixing ratios. As a result, the $0^+$, $2^+$, and $I=1$ natures of multiple newly observed and previously known (but not firmly assigned) states have been established. The new results revealed the collective character of certain key transitions, supporting the multiple shape coexistence interpretation provided by the MCSM framework. These results will be presented and discussed in relation to both MCSM and IBM-CM calculations.
References
[1] T. Togashi, Y. Tsunoda, T. Otsuka, and N. Shimizu, Phys. Rev. Lett. 117, 172502 (2016).
[2] N. Gavrielov, A. Leviatan and F. Iachello, Phys. Rev. C 105, 014305 (2022).
[3] P. Singh, W. Korten et al., Phys. Rev. Lett. 121, 192501 (2018).
[4] V. Karayonchev, J. Jolie et al., Phys. Rev. C 102, 064314 (2020).
[5] J. E. Garcia-Ramos, K. Heyde, Phys. Rev. C 100, 044315 (2019).
[6] P. Kumar, V. Thakur et al., Eur. Phys. J. A 57, 36 (2021).
Classical chaos arises from the inherent non-linearity of dynamical systems. However, quantum maps are linear; therefore, the definition of chaos is not straightforward. To address this, we study a quantum system that exhibits chaotic behavior in its classical limit. One such system of interest is the kicked top model [Haake, Kuś, and Scharf, Z. Phys. B 65, 381 (1987)], where classical dynamics are governed by Hamilton's equations on phase space, while quantum dynamics are described by the Schrödinger equation in Hilbert space. In the kicked top model, non-linearity is introduced through the exponent of the angular momentum term, denoted as J^p. Notably, when p = 1, the system remains integrable. Extensive research has focused on the case where p = 2. In this study, we investigate the critical degree of non-linearity necessary for a system to exhibit chaotic behavior. This is done by modifying the original Hamiltonian such that a non-integer value of p is allowed. We categorize the modified kicked top into two regimes, 1 ≤ p ≤ 2 and p > 2, and analyze their distinct behaviors. Our findings reveal that the system loses integrability for any p > 1, leading to the emergence of chaos. Moreover, we observe that the intensity of chaos amplifies with increasing non-linearity. However, as we further increase p (> 2), we observe unexpected behavior: chaos is suppressed, and the chaotic sea is confined to a small region of phase space. This study sheds light on the complex interplay between non-linearity and chaos, offering valuable insights into their dynamic behavior.
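A minimal sketch of a kicked-top Floquet operator with tunable torsion exponent, assuming the form U = exp(-i k |J_z|^p / (2 j^(p-1))) exp(-i alpha J_y); using |m|^p keeps non-integer p well defined on negative J_z eigenvalues and reduces to the standard kicked top at p = 2 (both the normalization and this choice of generalization are our illustrative assumptions, not necessarily those of the study):

```python
import numpy as np
from scipy.linalg import expm

def kicked_top_floquet(j, k, alpha, p):
    """One-period Floquet operator of a kicked top with torsion |J_z|**p."""
    m = np.arange(j, -j - 1, -1.0)            # J_z eigenvalues m = j, ..., -j
    # Raising operator: <j, m+1| J+ |j, m> = sqrt(j(j+1) - m(m+1)).
    jplus = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), 1)
    Jy = (jplus - jplus.conj().T) / 2j
    torsion = np.abs(m) ** p / (2 * j ** (p - 1))
    U_kick = np.diag(np.exp(-1j * k * torsion))   # diagonal in the J_z basis
    return U_kick @ expm(-1j * alpha * Jy)

U = kicked_top_floquet(j=20, k=3.0, alpha=np.pi / 2, p=1.5)
print(np.allclose(U @ U.conj().T, np.eye(U.shape[0])))  # unitarity check: True
```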
Bell's inequalities provide a practical method for testing whether correlations observed between spatially separated parts of a system are compatible with any local hidden variable description. For $2$-qubit pure states, entanglement and nonlocality as measured by Bell inequality violations are directly related. However, for multiqubit pure states, the much more complex relation between N-qubit entanglement and nonlocality has not yet been explored in much detail. In this work, we analyze the violation of the Svetlichny-Bell inequality by N-qubit generalized GHZ (GGHZ) states, and identify members of this family of states that do not violate the inequality. GGHZ states are a generalization of the well-known GHZ state, which is a useful entanglement resource. GGHZ states are hence natural candidates for extending various quantum information protocols, like controlled quantum teleportation, to more than three parties. Our results raise interesting questions regarding the characterization of genuine multipartite correlations using Bell-type inequalities.
Among the different approaches to studying the structure of atomic nuclei comprising protons and neutrons, the nuclear shell model formalism is widely successful across different regions of the nuclear chart. However, applying the shell model formalism becomes difficult for heavier mass regions, as the Hilbert space needed to define such a problem scales exponentially with increasing number of nucleons. Quantum computing is a promising way to deal with such a scenario; however, for systems of practical relevance the amount of quantum resources required is beyond the capabilities of today's hardware. Quantum entanglement provides a distinctive viewpoint into the fundamental structure of strongly correlated systems, including atomic nuclei. There is a growing interest in understanding the entanglement structure of nuclear systems, and in leveraging this knowledge to simulate many-nucleon systems more efficiently.
In this work, we apply entanglement measures to reduce the quantum resources required to simulate a nuclear many-body system. We calculated the single orbital entropies as more neutrons were added for selected p-shell (Z = 2, 3, and 4) nuclei within the nuclear shell model formalism. In the case of the Li (Z = 3) isotopic chain, the proton single orbital entanglement of the 0p1/2 orbital in $^6$Li (1+) is 1.7 times larger than in $^7$Li (3/2-) and $^8$Li (2+). Also, the single orbital entanglement of the proton 0p1/2 orbital in $^9$Li (3/2-) is five times smaller than that of $^6$Li (1+). Hence, if the less entangled orbitals are treated differently, more efficient simulation circuits with fewer qubits and fewer quantum gates are possible for nuclei like $^9$Li (3/2-). Moreover, other entanglement metrics like mutual information can provide valuable insight into the underlying structure of a few-nucleon system. This method of reducing quantum resources could be useful for other neutron-rich nuclei of different isotopic chains.
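For reference, a minimal sketch of a single-orbital entropy, assuming the orbital's reduced density matrix is diagonal with occupation probability gamma (the occupations below are illustrative, not the shell-model values quoted above):

```python
import numpy as np

def single_orbital_entropy(gamma):
    """Von Neumann entropy S = -Tr(rho ln rho) of one orbital whose
    reduced density matrix is diag(1 - gamma, gamma)."""
    p = np.array([1.0 - gamma, gamma])
    p = p[p > 0]                     # convention: 0 * ln 0 := 0
    return -np.sum(p * np.log(p))

# Entropy vanishes for empty/full orbitals and peaks at half filling.
for gamma in (0.0, 0.05, 0.5, 0.95):
    print(f"gamma = {gamma:.2f}: S = {single_orbital_entropy(gamma):.3f}")
```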
Many-body entanglement is essential for most quantum technologies, but generating it on a qubit platform is generally experimentally challenging. On the other hand, continuous-variable (CV) cluster states have recently been realized among over a million bosonic modes. In our work, we present a hybrid CV-qubit approach to generate entanglement between many qubits by downloading it from efficiently generated CV cluster states. Our protocol is based on hybrid CV-qubit quantum teleportation in the displaced Gottesman-Kitaev-Preskill (GKP) basis. We develop an equivalent circuit model to characterize the dominant CV errors: finite squeezing and loss. Our results show that only 6 dB of squeezing is sufficient for robust qubit memory, and 12 dB of squeezing is sufficient for fault-tolerant quantum computation. We also show the correspondence between loss and qubit dephasing. Our protocol can be implemented with operations commonly found in many bosonic platforms and does not require strong hybrid coupling.
Studying emergent phenomena in classical statistical physics remains one of the most computationally difficult problems. With an appropriate algorithm to renormalize the system, tensor networks are among the most effective methods to study these problems. In the context of research areas like condensed matter, the result is a coarse-grained and truncated system where only the most relevant states, ranked by entropy, have been retained. An explosion of numerical algorithms which compute general properties of a statistical physics system, such as specific heat, magnetization, and free energies, is available; however, an overview of which tensor algorithms are best and where they must be improved would be highly advantageous for the scientific community. With our newly coded library of open-access tensor network algorithms, we make recommendations of which algorithms to use, speculate on improvements for future algorithms, and provide information on how to implement novel tensor networks using our framework, the DMRjulia library.
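As a self-contained taste of the kind of contraction such libraries perform, here is a minimal sketch computing the exact free energy per site of the 1D classical Ising chain from its 2x2 transfer matrix (a far simpler calculation than the coarse-graining algorithms in DMRjulia, and purely illustrative):

```python
import numpy as np

def ising_free_energy_1d(beta, J=1.0, h=0.0):
    """Free energy per site f = -ln(lambda_max) / beta of the 1D Ising
    chain, from the largest eigenvalue of the 2x2 transfer matrix."""
    s = np.array([1.0, -1.0])
    # T[s, s'] = exp(beta * (J s s' + h (s + s') / 2))
    T = np.exp(beta * (J * np.outer(s, s) + h * (s[:, None] + s[None, :]) / 2))
    return -np.log(np.linalg.eigvalsh(T).max()) / beta

beta = 1.0
print(ising_free_energy_1d(beta))           # transfer-matrix result
print(-np.log(2 * np.cosh(beta)) / beta)    # closed form at h = 0: identical
```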
M.R.G.F. acknowledges support from the Summer Undergraduate Research Award (SURA) from the Faculty of Science at the University of Victoria and the NSERC CREATE in Quantum Computing Program, grant number 543245. This research was undertaken, in part, thanks to funding from the Canada Research Chairs Program. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). This work has been supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) under grants RGPIN-2023-05510 and DGECR-2023-00026.
Superconducting radiofrequency (SRF) cavities are an enabling technology for modern high-power accelerators serving materials science (e.g. Canadian Light Source), nuclear physics (e.g. TRIUMF), and particle physics (e.g. LHC, Electron-Ion Collider) experiments. The behaviour of superconductors under radiofrequency fields is distinctively different from the DC case, being intrinsically dissipative at temperatures above 0 K and strongly dissipative above the lower critical field Hc1. This requires dedicated research and development for reliable operation and for advancing the technology beyond the state of the art. One particular technical challenge is efficient recovery and mitigation of performance degradation during operation, to maximize availability for experiments. Under ideal conditions, state-of-the-art SRF cavities reach fundamental limitations in terms of accelerating gradient (energy gain per unit length) and power dissipation. Further performance increases require specialized chemical and surface treatments, tailored to specific cavity types (optimized in shape for different charged particles from electrons to heavy ions), and the exploration of heterostructure nanomaterials. I will highlight recent research from TRIUMF and UVic, including results from testing new surface treatments on unique multimode coaxial resonators and materials science investigations using beta-detected nuclear magnetic resonance (beta-NMR) and muon spin rotation and relaxation (muSR), combined with state-of-the-art material analysis techniques (transmission electron microscopy, secondary ion mass spectrometry). The very low dissipation of SRF technology is also of interest for applications in quantum technology. Based on SRF cavity data, we have developed a model for two-level-system losses.
The Department of Physics, together with the NEWS-G collaboration at Queen's University, is developing Spherical Proportional Counters (SPC) aimed at dark matter detection research. The response of SPCs to nuclear recoils, such as those expected from interactions of hypothetical dark matter particles, can be best calibrated with a high-intensity beam of low-energy neutrons (~10 keV - 100 keV). Presently, the number of facilities with such neutron sources is quite small. This project aims to design and build a low-energy neutron source at the proton accelerator facility of the Reactor Materials Testing Laboratory (RMTL).
This new neutron source consists of a proton beam of 1.89 MeV - 2 MeV energy which bombards a lithium fluoride (LiF) target. The target is made by evaporating LiF onto a tantalum substrate. This target plate is then mounted on an aluminium nitride backing plate, which together sit on a 304L stainless steel flange that seals the vacuum chamber.
According to theoretical calculations, LiF produces a good yield of neutrons, best at an angle of 45°. However, the energy spectrum of these neutrons ranges from ~31 keV upward. To achieve a monoenergetic source of 24 keV neutrons, we are also developing a collimator with an iron filter.
The collimator will consist of a combination of shielding materials, particularly borated (B-PE) and non-borated polyethylene (PE), and lead (Pb). The B-PE will thermalize neutrons leaving the source at undesirable angles, and the Pb shielding will absorb the gamma radiation created by the B-PE, which would otherwise induce undesirable background in the SPC.
A thin layer of PE will also be used to decrease the energy of the neutrons, originally in the range above 31 keV, down to a suitable energy range before they reach the iron filter, producing a 24 keV neutron beam.
The High Energy Light Isotope eXperiment (HELIX) is a balloon-borne payload designed to measure the isotopic abundances of light cosmic-ray nuclei. Precise measurements of the $^{10}$Be isotope from 0.2 GeV/n to beyond 10 GeV/n will allow the refinement of cosmic-ray propagation models, critical for interpreting excesses and unexpected fluxes reported by several space-borne instruments in recent years. Beryllium isotopes will be observed by HELIX with the first in a series of long-duration balloon flights this summer in the Arctic. Upon completion of its maiden voyage, the detectors that make up the payload will be upgraded for a second flight to enhance performance and increase statistics. Potential upgrades for the HELIX hodoscope, an instrument contributing to the measurement of particle paths in the experiment, are being developed for this purpose.
The hodoscope is a position-measuring detector that uses ribbons of scintillating fibres coupled to silicon photomultipliers to provide the location of incident particles with high resolution. A prototype for an updated optical sensor readout system is being constructed at Queen's University without fibre weaving. In this presentation, I will discuss the design and development status of the prototype hodoscope for future HELIX payloads.
A Laser Ablation Source (LAS) can be used as an adaptable tool for ion production in mass spectrometry experiments [1]. The choice in ablation material allows for diverse production of ion species. This flexibility particularly complements online ion-trap-based mass spectrometry experiments, which require a variety of calibrant species across a wide range of masses. A LAS is currently being developed as an ion source for TRIUMF's Ion Trap for Atomic and Nuclear Science (TITAN). The LAS will couple to TITAN's Multiple-Reflection Time of Flight Mass Spectrometer (MR-TOF-MS) [2] to enhance the variety of stable and long-lived species for calibration during on-line experiments, off-line experiments, and technical developments. The LAS will additionally aid the other ion traps at TITAN in tuning prior to experiments through the production of chemical or mass analogs of targeted isotopes. Optimization of the ion optics and the overall design have been completed. Manufacturing is underway at the University of Calgary, where assembly and off-line testing will be completed before installation onto the on-line TITAN facility. The status of the LAS will be discussed, including characterizations of the assembled system such as the spatial resolution of the laser ablation spot on multi-material targets. The addition of the LAS to TITAN will not only improve the precision of online ion-trap-based mass spectrometry experiments through the introduction of isobaric mass calibrants, but also open new pathways for TITAN to engage in a variety of environmental and medical studies.
References
1. K. Murray et al., "Characterization of a Spatially Resolved Multi-Element Laser Ablation Ion Source", International Journal of Mass Spectrometry 472 (2022), p. 116763. ISSN: 13873806. DOI: 10.1016/j.ijms.2021.116763
2. T. Dickel et al., "Recent upgrades of the multiple-reflection time-of-flight mass spectrometer at TITAN, TRIUMF", Hyperfine Interactions 240.1 (2019). ISSN: 0304-3843. DOI: 10.1007/s10751-019-1610-y
Polycyclic hydrocarbons (PHs) are carcinogens often present in water due to contamination from oil and vehicle exhaust, and their removal is difficult due to their resistance to conventional water purification methods. Here, we present a thorough synchrotron-based characterization of carbon nanoparticles derived from different parts of the cannabis plant (hurd and bast) and of their ability to adsorb PHs in aqueous environments, with anthracene as a case study. The synthesis of carbon nanoparticles was carried out by pyrolysis at varying temperatures followed by strong acid (HNO3:H2SO4) treatment. The goal is to establish a structure-function relationship between the synthesis parameters and the ability of these nanoparticles to promote PH adhesion at their surface via pi-pi electron stacking. Synchrotron-based X-ray absorption spectroscopy (XAS) is used to investigate the composition of these nanoparticles as well as their electronic structure, which differs profoundly from graphene oxide and carbon dots and more closely resembles amorphous carbons. Along with dynamic light scattering, XAS also demonstrates that defect-free sp2 carbon clusters (with limited hydroxyl and carboxyl groups at their surface) are necessary for the interfacial adhesion of anthracene at their surfaces. Our XAS results are also corroborated by benchtop techniques including Fourier-transform infrared (FTIR), photoluminescence (PL) and UV-visible optical spectroscopies, as well as atomic force microscopy (AFM). We demonstrate that a unique advantage of our biomass-derived carbon nanoparticles rests in the rapidity of their anthracene capture process, which requires only a few seconds, as opposed to several hours for other systems proposed in the literature. Collectively, our study demonstrates the importance of advanced XAS techniques for the characterization of pi-pi electron stacking in carbon nanosystems.
The ability to measure small deformations or strains is useful for understanding many aspects of materials, especially in soft condensed matter systems. Systematic shifts of speckles arising from small-angle coherent X-ray diffraction, when analyzed, enable flow patterns of particles in elastomers to be inferred. This information is obtained from cross-correlations of speckle patterns. This speckle tracking technique measures strain patterns with an accuracy similar to X-ray single crystal measurements, but in amorphous or highly disordered materials.
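A minimal sketch of the underlying operation, recovering a rigid speckle shift from the peak of an FFT-based cross-correlation (synthetic speckle with a known integer-pixel shift stands in for measured patterns):

```python
import numpy as np

def speckle_shift(a, b):
    """Integer-pixel displacement of pattern a relative to b, from the
    peak of their circular cross-correlation computed via FFTs."""
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak indices to signed shifts.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
speckle = rng.random((256, 256))
shifted = np.roll(speckle, shift=(3, -5), axis=(0, 1))
print(speckle_shift(shifted, speckle))   # expect (3, -5)
```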
Epitaxy of group IV semiconductors is a key enabler for electronics, telecommunications, and quantum devices. In the case of Sn, the growth challenges posed by lattice mismatch and the low solid solubility of Sn (<0.1%) in Si and Ge are significant. This research addresses these challenges by investigating ion implantation as a non-equilibrium growth technique combined with post-implantation annealing. A range of Sn concentrations was explored using Sn ions implanted into Si (001) at different doses (5E14 - 4E16 atoms/cm$^2$) and annealed at 600$^o$C and 800$^o$C (30 mins, dry $N_2$). The structural and optical properties of the samples were analyzed using Rutherford Backscattering Spectrometry (RBS), Scanning Electron Microscopy (SEM), X-ray Photoelectron Spectroscopy (XPS), Positron Annihilation Spectroscopy (PAS), and Spectroscopic Ellipsometry (SE). RBS and SEM results indicate a maximum Sn dose of 5E15 atoms/cm$^2$ for avoiding segregation during annealing at 600$^o$C and 800$^o$C, with Sn substitutionality reaching ~95 ±1%. SE results demonstrate an increased optical absorption coefficient (𝛂) of Si for all implanted Sn doses (for λ = 800 - 1700 nm), with the highest 𝛂 values recorded for the highest Sn dose (4E16 atoms/cm$^2$). Evidence of segregated Sn contributing to changes in the optical properties of Si is observed by etching the SiSn sample with the 4E16 dose of Sn. The results show a reduction in the initial 𝛂 values; however, values obtained after etching were still higher than for pure Si. In conclusion, our study identifies Sn compositions that achieve high (~95%) substitutionality in Si without onset of segregation at 600$^o$C and 800$^o$C annealing temperatures. We analyze the implications of these findings for the optical properties of Si.
We may expect lithium to be the simplest metal, as it has only a single $2s$ valence electron. Surprisingly, lithium's crystal structure at low temperature and ambient pressure has long been a matter of debate. In 1984, A. W. Overhauser proposed a rhombohedral $9R$ structure. Subsequent neutron experiments by Schwarz et al. in 1990 favour a disordered polytype. More recently, in 2017, Elatresh et al. argued against the $9R$ structure while Ackland et al. found fcc ordering. In this work, we seek to understand the physical principles that could lead to such conflicting findings. We describe metallic bonding in an arbitrary close-packed structure within the tight-binding approximation. Close-packed structures, also called Barlow stackings, are infinite in number. They can be codified by a stacking sequence (e.g. fcc $\leftrightarrow ABC$) or by a Hägg code (e.g. fcc $\leftrightarrow +++$). From the point of view of an atomic orbital, all close-packed structures offer similar local environments with the same number of nearest neighbours. When hoppings are short-ranged, the tight-binding description shows a surprising gauge-like symmetry. As a result, the electronic spectrum is precisely the same for every close-packed structure. This results in competition across a large class of structures that all have the same binding energy.
A preference for one ordering pattern can only emerge from (a) long-ranged (third-neighbour and further) hoppings or (b) phonon free energies at finite temperatures. Our results could explain the observed fcc structure in lithium under high pressure.
There is a critical knowledge gap in understanding the kinetics and mechanisms of mineral formation and degradation in the context of potential technologies that are targeted for carbon capture, utilization, and storage [1]. Both crystallization and dissolution of carbonate minerals figure prominently in many such climate-change-mitigation strategies that aim for carbon dioxide removal. For example, different approaches to ocean-based alkalinity enhancement involve processes that depend on mineral surface and interfacial effects in order to increase water pH with concomitant atmospheric carbon removal. In this context, I will describe my team’s work related to tracking changes in carbonate mineral phases, including surfaces and bulk structures, due to dissolution and recrystallization processes [2]. In doing so, I will emphasize the urgent need for collaborations between researchers who do foundational materials physics with those involved in developing monitoring, reporting, and verification protocols for potential carbon dioxide removal strategies.
[1] Basic Energy Sciences Roundtable, Foundational Science for Carbon Dioxide Removal Technologies, US Department of Energy (2022) DOI: 10.2172/1868525
[2] B. Gao, K. M. Poduska, S. Kababya, A. Schmidt. J. Am. Chem. Soc. (2023) 48, 25938-25941. DOI: 10.1021/jacs.3c09027
This research aims to enhance the performance of thermoelectric systems through a multifaceted approach combining computational modeling and machine learning techniques. The study focuses on analyzing quantum statistics within thermoelectric systems to uncover novel insights into alloy doping. We verified key derivations concerning the extrema of the thermal, lattice thermal, and electrical conductivities as functions of temperature. Utilizing the analytical equations proposed by Yadav et al. (2019), we numerically verified and validated these equations and discussed the theoretical predictions given in that paper. We developed a machine-learning model to predict thermoelectric figures of merit. Using the polylogarithm and Lambert W functions, the model aims to provide optimal doping values for thermoelectric alloys and seeks to identify compositions that can significantly enhance thermoelectric performance. This study involves a comprehensive analysis of the interplay between doping concentration, material properties, and thermoelectric efficiency. Our study endeavours to provide valuable insights into materials that advance thermoelectric technology towards more efficient and sustainable energy conversion systems.
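As an illustration of the special functions mentioned, the sketch below uses the standard identity F_j(eta) = -Li_{j+1}(-e^eta) relating complete Fermi-Dirac integrals to the polylogarithm, together with SciPy's Lambert W (generic, illustrative usage, not the model developed in this work):

```python
import numpy as np
from scipy.special import lambertw
from mpmath import polylog, exp as mpexp

def fermi_dirac(j, eta):
    """Complete Fermi-Dirac integral F_j(eta) = -Li_{j+1}(-exp(eta)),
    a building block of thermoelectric transport coefficients."""
    return float((-polylog(j + 1, -mpexp(eta))).real)

print(fermi_dirac(0.5, 0.0))   # F_{1/2} at reduced Fermi level eta = 0

# Lambert W inverts x * exp(x) = y, the form taken by some
# closed-form optimal-doping conditions.
y = 2.0
x = lambertw(y).real
print(x * np.exp(x))           # recovers y = 2.0
```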
The dynamics of particles residing at a liquid-gas interface have been shown in recent years to be of high importance, both in fundamental studies and in technological applications. Interfacial particles are amply found in artificial material manufacturing and in biological systems. A better understanding of the unique physical properties of particles at the interface requires extensive attention to the surface interaction between the particle and the fluid. In this research, a computational method is employed, first, to successfully simulate the coexistence of liquid and gas and, second, to study the wetting properties of spherical particles at a liquid-gas interface. Different wetting boundary conditions will be tested to analyze the adsorption of the particle onto the interface. The simulations will be performed using a modified version of the lb/fluid package in LAMMPS, which is an implementation of the lattice Boltzmann method for simulating fluid mechanics. These results can provide us with enough insight to study interfacial particles under more complex conditions.
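For context, a minimal sketch of the D2Q9 equilibrium distribution at the core of lattice Boltzmann solvers like the lb/fluid package (generic textbook form, not the package's internal code; simulating liquid-gas coexistence would additionally require a non-ideal equation of state or forcing scheme):

```python
import numpy as np

# Standard D2Q9 lattice velocities and weights (c_s^2 = 1/3).
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order equilibrium populations f_i^eq(rho, u)."""
    cu = c @ u                    # c_i . u for each of the 9 directions
    usq = u @ u
    return rho * w * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

feq = equilibrium(rho=1.0, u=np.array([0.05, 0.0]))
print(feq.sum())                  # zeroth moment recovers rho = 1.0
print(c.T @ feq)                  # first moment recovers rho * u
```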
To date, there are very few all-optical techniques, if any, suitable for acquiring, with nanoscale lateral resolution, quantitative maps of the thermal conductivity and thermal expansivity of 2D materials and nanostructured thin films, despite huge demand for nanoscale thermal management, for example in designing integrated circuitry for power electronics. Here, we introduce ω-ω and ω-2ω near-field thermoreflectance imaging as an all-optical and contactless approach to map thermal conductivities and thermal expansion coefficients at the nanoscale with precision. Testing of our technique is performed on nanogranular films of gold and multilayer graphene (ML-G) platelets. As a case study, our recently invented ω-ω near-field scanning thermoreflectance imaging (NeSTRI) technique is here applied to multilayer graphene thin films on glass substrates. The thermal conductivity of micrometre-size multilayer graphene platelets is determined and is consistent with previous macroscopic predictions. As far as the thermal expansion coefficient (TEC) is concerned, our method demonstrates that the TEC of ML-G is (-5.77 ± 3.79) × 10^-6 K^-1 and is assigned to in-plane vibrational bending modes. A vibrational-thermal transition from graphene to graphite is observed, where the TEC becomes positive as the ML thickness increases. Overall, our nanoscale method demonstrates results in excellent agreement with its macroscopic counterparts, as well as superior capabilities to probe 2D materials and interfaces.
The intense confinement of electromagnetic fields between metallic bispheres remains a subject of ongoing technological interest. Similarly, light can be concentrated into near-field subwavelength hotspots in dimers of high refractive index dielectric resonators. Micro-resonators made of silicon and germanium are often exploited to form exceedingly strong axial hotspots in dimers in the visible spectral region, facilitated by the hybridization of morphology-dependent resonances (MDRs) in the individual objects. With an index of refraction approaching 9 at microwave frequencies, water has a large index contrast with the surrounding air, making it a particularly suitable material for obtaining strong Mie resonances. As a result, cm-sized aqueous dielectric dimers such as grapes can exhibit sufficiently strong axial hotspots to ignite plasma within household microwave ovens. Since individual grapes are never observed to spark, an understanding of the hybridization of isolated MDRs in dimers (and clusters) is of interest from both a fundamental and a technological (nano)photonic perspective.
We employ a combination of experimental, analytical, and computational methods to investigate MDR hybridization in water, with a focus on the formation of axial hotspots in aqueous dimers. Experimentally, we use hydrogel beads and thermal imaging to explore the polarization and size dependence of hybridization. An analytical approach applying vectorial addition of spherical harmonics provides geometric insight into which modes interact most strongly to form an electromagnetic hotspot. Finally, we employ finite element method (FEM) simulations to further investigate mode concentration and hotspot formation in dimers of various sizes, orientations, and separations.
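As an illustration of the single-sphere resonances that hybridize in a dimer, the sketch below computes the Mie extinction efficiency of a homogeneous sphere in the Bohren-Huffman notation; treating the refractive index as purely real, m = 9, is a simplifying assumption for water at microwave frequencies, where the true index is complex (lossy).

    import numpy as np
    from scipy.special import spherical_jn, spherical_yn

    # Extinction efficiency of a homogeneous sphere (Bohren-Huffman notation).
    def q_ext(m, x):
        nmax = int(x + 4*x**(1/3) + 2)
        n = np.arange(1, nmax + 1)
        jx = spherical_jn(n, x); jpx = spherical_jn(n, x, derivative=True)
        yx = spherical_yn(n, x); ypx = spherical_yn(n, x, derivative=True)
        jm = spherical_jn(n, m*x); jpm = spherical_jn(n, m*x, derivative=True)
        psi_x, dpsi_x = x*jx, jx + x*jpx          # Riccati-Bessel psi, psi'
        psi_m, dpsi_m = m*x*jm, jm + m*x*jpm
        xi_x = x*(jx + 1j*yx)                     # Riccati-Bessel xi = x*h1
        dxi_x = (jx + 1j*yx) + x*(jpx + 1j*ypx)
        a = (m*psi_m*dpsi_x - psi_x*dpsi_m)/(m*psi_m*dxi_x - xi_x*dpsi_m)
        b = (psi_m*dpsi_x - m*psi_x*dpsi_m)/(psi_m*dxi_x - m*xi_x*dpsi_m)
        return (2/x**2)*np.sum((2*n + 1)*(a + b).real)

    # Scan the size parameter x = 2*pi*r/lambda to locate sharp MDR peaks
    for x in np.linspace(0.3, 1.2, 10):
        print(f"x = {x:.2f}, Q_ext = {q_ext(9.0, x):.2f}")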
During this talk, I aim to facilitate a critical conversation about Black women's educational experiences in post-secondary physics and astronomy education in Canada. To accomplish this, I develop a framework to understand the normative physics and astronomy curriculum, wherein 'normative curriculum' refers to the learning and performance expectations that extend beyond what texts like syllabi, course outlines, and standard educational materials convey. Drawing on the literature reviewed in my doctoral study, I describe the educational experiences of women in North America and use these experiences to characterize typical curricular expectations. To begin, I explore how education in post-secondary physics and astronomy programs is understood within research. A comprehensive review of critical thinkers in science education leads to a conceptualization of how individuals encounter the curriculum. Subsequently, I operationalize the notion of curriculum as experiences, as revealed by the research on and about White women, Women of Color, and Black women who study, research, and work in the physical and astronomical sciences. In doing so, I gather themes from the literature to highlight the often-overlooked commitments and tasks that women must fulfill to be recognized as legitimate physicists and astronomers. Following this, I describe areas of the normative physics and astronomy curriculum, detailing critical perspectives on thinking and learning in science education. Throughout this talk, I will make connections to findings from my current study on Black women's educational experiences, including how they navigate predominantly White and male-dominated spaces in Canada. By the end of this discussion, I hope to deepen our overall understanding of physics and astronomy education within a national context.
Culture shapes how we learn and therefore influences students' career interests and performance. In Nigeria, females are often regarded as the weaker sex and are therefore expected to choose careers deemed not meant for men. This results in gender inequality, especially in the sciences and related disciplines. The purpose of this study is to determine the cultural effect of gender on the admission and performance of physics students in six selected universities from the six geopolitical zones in Nigeria. Simple statistical analysis showed that the female-to-male admission ratio overwhelmingly favours males. On performance, however, the female students did not show any gender difference. The cultural effects on the difference in gender admission and their non-significant effects on performance are discussed.
The International Union of Pure and Applied Physics (IUPAP) is an organization deeply committed to promoting EDI among the worldwide community of professional physicists. As Chair of the 2022-2024 C14 (Physics Education) Commission of IUPAP, I had the privilege to participate in and co-organize several events that took place during the mandate of the current commission and were aimed at promoting gender equality in physics education. In my talk I will report on the Education Workshops at the 8th International Conference on Women in Physics 2023 (ICWIP2023) that I was tasked to co-organize in collaboration with my colleagues from IUPAP's Women in Physics Working Group (WG5). More broadly, I will touch upon IUPAP's EDI principles and initiatives that benefit physics education worldwide.
Diversity is lacking, by most measures, in most STEM fields, including physics. A 2021 survey of Canadian physicists, called CanPhysCounts, found that the percentage of white men only increases as one moves up the ranks in physics: the undergraduate level has the most diversity, while people in physics careers or faculty positions are the least diverse, with over 50% of those surveyed identifying as white men.
Increasing diversity, equity, inclusion, and accessibility (DEIA) in physics and other STEM fields is critical to producing good science. If there are more voices at the table, new and interesting questions will be asked, and if we include more diverse thinkers in our science, that science will become better. To get these voices involved, we have to prioritize DEIA within our physics communities. A diverse group of people will not stay in physics if the physics space is not welcoming, inclusive, equitable, and accessible to them. In an effort to prioritize DEIA, I have created a practical guide that will help meeting and conference organizers make their meetings more inclusive and accessible. This guide was written as a complement to the 500 Women Scientists' Inclusive Scientific Meetings Guide. Scientific meetings and conferences are a good place to do DEIA work, because they are where many early career scientists find opportunities to advance their careers, from presenting their work, to engaging with collaborators, to meeting potential future advisors and employers. The same concepts can, however, be generalized to many scientific environments. Here I will present the motivation for my guide, the work that has previously been done in this area, what my guide brings to the table, and how I hope my guide will be used.
Since the discovery of the Higgs boson by the ATLAS and CMS Collaborations in 2012, a major focus in particle physics has been the understanding of its interactions. In recent years, huge progress has been made in determining the strength of the Higgs boson's couplings to fermions and vector bosons, but its self-interaction has yet to be established. The Higgs self-interactions are tightly related to the form of the Higgs potential, thus representing an extremely important measurement for our understanding of the origin of electroweak symmetry breaking and our universe. The most natural way to probe the self-interaction and the shape of the Higgs potential is through searches for Higgs boson pairs (HH) at particle colliders. This talk aims to summarize the most recent Higgs boson pair results of the ATLAS experiment, as well as the prospects for future measurements.
The Large Hadron Collider (LHC) at CERN is the largest and most powerful particle collider in the world, and the only machine capable of producing Higgs bosons. Interactions with the Higgs field give particles mass, and a particle's coupling to the Higgs boson is proportional to its mass. The Standard Model particles that make up matter can be grouped into different generations, and previous measurements of the Higgs boson's couplings have focused on the third generation of particles, which are the most massive. The best opportunity to measure the Higgs coupling to a second-generation particle at a lower, untested mass scale is by measuring the Higgs boson decay into two muons.
The Higgs to dimuon decay is a very rare process, and there are many other processes that can mimic this signature, making it very difficult to measure. Advanced methods are required to identify this small signal from a large continuous background in the data collected by the ATLAS detector at the LHC. An important technique to increase the signal-to-background ratio is splitting the data into distinct categories, based on the properties and kinematics of the events. The Higgs signal can then be extracted separately from several datasets with different signal-to-background ratios, resulting in a large increase in overall statistical significance of the measurement. Using the latest advancements in machine learning, I will use a deep neural network (NN) to optimize these categories. Various observables measured by the ATLAS detector will be provided to this NN, and it will determine the optimal way to separate the data into categories to maximize the statistical significance.
After the data has been split into optimal categories, the Higgs boson resonance peak can be extracted from the background. With improvements in analysis techniques and data currently being taken during Run 3 of the LHC, we hope to measure the Higgs to dimuon decay with at least 3 sigma significance with the ATLAS detector, which would establish evidence for this process.
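As a toy illustration of the categorization idea described above, the sketch below scans a single boundary on a discriminant score and keeps the one maximizing the combined significance; the scores, event weights, and the simple per-category approximation Z = s/sqrt(b) are all synthetic assumptions, not ATLAS data or the analysis's actual figure of merit.

    import numpy as np

    # Split events at a score boundary into two categories and maximize the
    # combined significance, adding per-category Z = s/sqrt(b) in quadrature.
    rng = np.random.default_rng(0)
    sig_scores = rng.beta(5, 2, 1000)       # signal peaks at high score
    bkg_scores = rng.beta(2, 5, 100000)     # background peaks at low score
    w_s, w_b = 0.05, 1.0                    # hypothetical event weights

    def combined_significance(boundary):
        z2 = 0.0
        for lo, hi in [(0.0, boundary), (boundary, 1.0)]:
            s = w_s*np.sum((sig_scores >= lo) & (sig_scores < hi))
            b = w_b*np.sum((bkg_scores >= lo) & (bkg_scores < hi))
            if b > 0:
                z2 += s**2/b
        return np.sqrt(z2)

    boundaries = np.linspace(0.1, 0.9, 81)
    best = max(boundaries, key=combined_significance)
    print(f"best boundary = {best:.2f}, Z = {combined_significance(best):.2f}")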
There exists a large body of indirect evidence for the existence of Dark Matter (DM) but, to date, no direct evidence has been found. Because of this, there is a wide range of open parameter space, which has given rise to many different models. One class of models proposes that dark matter is composed of particles that have their own interactions and only minimally couple to the standard model through one or more "portal" interactions. One category of such models includes a vector portal term that kinetically mixes dark gauge fields with standard model gauge fields. These models are characterized by Dark Matter having a milli-charged component: particles with an effective electric charge that is a fraction of the electron's electric charge. Direct detection of dark matter at accelerators is a high priority to narrow down possible models. Detecting or ruling out some possible DM models is part of the experimental program for the MoEDAL experiment located at the LHC. The MAPP extension to the MoEDAL experiment, now approved for Run 3, focuses on searching for milli-charged particles (mCPs) and long-lived particles (LLPs). The vector portal that gives rise to milli-charged Dark Sector components has two possible phases: the Holdom phase, which is characterized by a massless dark vector gauge field, and the Okun phase, which has a massive dark vector gauge field. This talk will focus on a 'mixed' phase, which assumes both a massless and a massive dark vector field. We will then look at Drell-Yan production of dark mCPs and explore their phenomenology within the context of MoEDAL-MAPP.
Recent progress in understanding the algebraic structure of Feynman integrals has led to a new "tropical" numerical integration algorithm introduced by Borinsky and collaborators. For the first time, it is possible to systematically study the numerical values of very many Feynman integrals from a relatively broad class. I will present the findings of such a study that involved all subdivergence-free vertex-type Feynman graphs of phi^4 theory in 4 dimensions up to 13 loops, and partial data up to 18 loops. In total, more than 1.5 million vacuum integrals have been computed, amounting to over 20 million vertex-type integrals. The resulting data indicate that at high loop order, most Feynman integrals follow a smooth distribution, but higher moments of that distribution diverge. This has severe consequences for the accuracy of randomly sampling Feynman graphs. Moreover, this study has led to new numerical data for the subdivergence-free contribution to the beta function up to 18 loops, confirming a longstanding prediction for the leading asymptotic growth of these coefficients.
Based on JHEP 2023.160
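The sampling issue mentioned above can be illustrated with a toy example (this is not the tropical algorithm itself): when higher moments of the sampled distribution diverge, Monte Carlo estimates of the mean converge far more slowly than the usual 1/sqrt(N).

    import numpy as np

    # Heavy-tailed sampling demo: tail index alpha = 1.5 gives a finite mean
    # but infinite variance, so the sample mean converges poorly.
    rng = np.random.default_rng(1)
    alpha = 1.5
    for N in [10**3, 10**5, 10**7]:
        sample = rng.pareto(alpha, N)        # Lomax/Pareto II samples
        print(f"N = {N:>8}: sample mean = {sample.mean():.3f} "
              f"(exact {1/(alpha - 1):.3f})")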
We study Unruh phenomena for an accelerating qudit detector coupled to a quantized scalar field, comparing its response to that of a standard qubit-based Unruh-DeWitt detector. We show that there are limitations to the utility of the detailed balance condition as an indicator for Unruh thermality of higher-dimensional qudit detector models. This can be traced to the fact that a qudit has multiple possible transition channels between its energy levels, in contrast to the 2-level qubit model. We illustrate these limitations using two types of qutrit detector models based on the spin-1 representations of SU(2) and the non-Hermitian generalization of the Pauli observables (the Heisenberg-Weyl operators). https://arxiv.org/abs/2309.04598
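For reference, the detailed balance (KMS) condition referred to above relates a detector's de-excitation and excitation response; in its standard form (units with $\hbar = c = k_B = 1$), $\mathcal{F}(-\Omega)/\mathcal{F}(\Omega) = e^{\Omega/T}$, so that a uniformly accelerated detector satisfying it for all gaps $\Omega$ registers the Unruh temperature $T_U = a/2\pi$.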
In the 1970s, it was discovered that a uniformly accelerated detector, interacting with the vacuum state of a quantum scalar field in flat spacetime, has a thermal response with a temperature proportional to its proper acceleration. This phenomenon, known as the Unruh effect, is considered a signpost in the search for a quantum theory of gravity. Since the discovery of the effect, efforts have been dedicated to the study of quantum detectors in curved spacetime because their response encodes information about fluctuations of the vacuum state of the field and hence of the underlying spacetime. However, despite more than four decades of dedicated research, little is known about the response of quantum detectors as they freely fall into black holes. I present results detailing the response of a detector interacting with the Hartle-Hawking vacuum state of a massless scalar field in a Bañados-Teitelboim-Zanelli (BTZ) black hole as the detector freely falls toward and across the event horizon. I also discuss how this response changes for the geon counterpart of a BTZ black hole, an object identical to the BTZ black hole outside its horizon but having a different topology inside. Our results suggest that the detector can potentially serve as an ‘early warning system’ that indicates the presence of the event horizon and discerns the interior topology of the black hole.
We consider the transition rate of a static Unruh-DeWitt particle detector in a variety of spacetimes built out of quotients of $\text{AdS}_3$ spacetime. In particular, we contrast the behavior of an Unruh-DeWitt detector interacting with a quantum scalar field in the $\mathbb{R}\text{P}^{2}$ geon spacetime and in a spacetime constructed by Aminneborg et al. The Wightman functions of these spacetimes are obtained using the method of images. We find a number of features that distinguish the two spacetimes, which are identical outside of the black hole's event horizon, most notably in the response functions of gapless detectors in the sharp-switching limit. This points to a way in which the interior topology of a black hole may be discerned by an external observer.
Recent studies have shown that an Unruh-DeWitt (UDW) detector coupled to a massless scalar field in (3+1) Schwarzschild and (2+1) non-rotating BTZ spacetimes exhibits a local extremum in its transition rate at the horizon. This non-monotonicity is of interest, as it suggests that the event horizon is distinguishable to a local probe when QFT is taken into consideration. In this study, we calculate the transition rate of a freely falling UDW detector in (2+1)-dimensional rotating BTZ spacetime. We explore different values of the black hole mass, black hole angular momentum, and boundary conditions of the field at infinity. The results that we obtain are consistent with previous studies in the limit as the black hole angular momentum vanishes; however, the presence of rotation introduces new phenomena, and our results provide a more general profile for the infalling detector problem in BTZ spacetime. There is now a growing body of evidence for detector excitation across black hole event horizons, and we anticipate that further searches will be conducted in other spacetimes to better understand its physical meaning.
In appropriate semiclassical limits, the so-called Island Formula computes the entropy of non-gravitational quantum systems entangled with a gravitational theory. This is a special case in which the quantum-corrected Ryu-Takayanagi formula has been shown to compute a von Neumann entropy using only properties of the gravitational path integral and, in particular, without relying on the existence of a holographic dual field theory. It is thus natural to claim that a similar conclusion should hold more broadly, and that any asymptotically-AdS gravitational theory will define an algebra for any boundary region such that, in appropriate limits, the entropy of any state on that algebra is computed by the quantum-corrected Ryu-Takayanagi formula. Recent works by Chandrasekaran, Pennington and Witten have used the theory of von Neumann algebras to derive results of this form in various special contexts. We argue here that the above claim holds more generally, whenever the Euclidean path integral of the gravitational theory satisfies a set of standard axioms. We thus allow finite values of all coupling constants and do not require taking any special limits. Since our axioms do not restrict ultra-violet bulk structures, they may be expected to hold equally well for successful formulations of string field theory, spin-foam models, or any other approach to constructing a UV-complete theory.
Dynamins are an essential superfamily of mechanoenzymes that remodel membranes and often contain a "variable domain" important for regulation. For the mitochondrial fission dynamin, dynamin-related protein 1 (Drp1), a regulatory role for the variable domain (VD) is demonstrated by gain- and loss-of-function mutations, yet the basis for this is unclear. Here, the isolated VD is shown to be intrinsically disordered and to undergo liquid-liquid phase separation under in vitro crowding conditions. MD simulations suggest this liquid-liquid phase separation arises from weak, multivalent interactions similar to other systems involving intrinsically disordered regions. These crowding conditions also enhance binding to cardiolipin, a mitochondrial lipid, which appears to further promote phase separation. Since Drp1 is found assembled into discrete punctate structures on the mitochondrial surface, the inference from the present work is that these structures might arise from a condensed state driven by interactions between VD domains and between cardiolipin and the VD. These findings support a model in which the variable domain mediates phase separation, enabling rapid tuning of Drp1 assembly necessary for fission.
Introduction: We have previously demonstrated, using polarized light, that we can image amyloid protein deposits in the retina without a dye. Postmortem, their numbers predict the load of amyloid in the brain and severity of Alzheimer’s disease (AD). Here we differentiate retinal deposits of presumed amyloid beta, associated with AD, from presumed retinal deposits of alpha synuclein, associated with two other neurodegenerative diseases (multiple system atrophy, MSA and dementia of Lewy bodies, DLB). We also image precursors to these deposits.
Methods: Eyes and brains were obtained post-mortem in compliance with the Declaration of Helsinki from 10 donors with AD, and from 2 donors with MSA or DLB in whom alpha synuclein had been found in the brain. Individuals with multiple post-mortem brain pathologies were excluded from this study. Eyes were fixed in 10% formalin. Retinas were stained with 0.1% Thioflavin-S and counterstained with DAPI, flat mounted in quadrants and imaged using a microscope, custom fitted with a polarimeter. In each subject, deposits found in association with the neural retina as well as the surrounding retinal area were imaged. For each imaged region, 10 polarized light interactions were examined. The presence of interactions with polarized light was measured both in the deposits and surrounding tissue.
Results: Although their size distributions overlapped, retinal deposits were significantly smaller in retinas in which amyloid beta deposits were expected, compared with the size of the presumed alpha synuclein deposits. After correction for repeated measures, the averages and standard deviations of four polarimetric properties differed significantly between the presumed amyloid deposits and the presumed alpha synuclein deposits. Using machine learning (random forest and convolutional neural networks), we were able to separate the two deposit types with accuracies of >85%. Interactions with circularly polarized light were also detected.
Conclusions: Interactions with polarized light can separate deposits in the retina due to Alzheimer’s disease from those due to diseases with alpha synuclein pathology (MSA and DLB), early in the disease. Polarized light also detects two circular signals which are presumed to be precursors to deposits. These findings could lead to earlier and simpler diagnosis and differentiation of multiple brain diseases.
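As a sketch of how the random-forest part of the classification described in the Results could look, the snippet below trains on per-deposit polarimetric feature vectors; the feature count, labels, and values are hypothetical stand-ins, not the study's data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Each deposit is reduced to a vector of polarimetric summary features and
    # labelled by presumed type. Features and labels below are synthetic.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))                  # 8 polarimetric features
    y = (X[:, 0] + 0.5*X[:, 1] > 0).astype(int)    # 0: amyloid, 1: synuclein

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)      # cross-validated accuracy
    print(f"accuracy = {scores.mean():.2f} +/- {scores.std():.2f}")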
Molecular azobenzene photoswitches have long been attractive in the design of photoresponsive materials owing to their reversible light-triggered photoisomerization about the azo bond (N=N) between trans and cis isomeric configurations. Towards more versatile materials applications, azopyridines have been designed as a next-generation azobenzene photoswitch possessing pH sensitivity, hydrogen-bonding, and metal binding abilities. As a result, they have become a key element in the photocontrol of liquid crystals, pharmacological agents, photodriven oscillators, and molecular spin switches. Our group is also developing a new, nature-inspired optical oxygen sensor for tumours whose quantitative readout is the isomerization rate of an azopyridine photoswitch unit. However, detailed studies on the isomerization kinetics of azopyridines and their protonated forms are still lacking. Not only would such studies extend their application to biological contexts, but protonation can also serve as a tool to significantly modulate the photoisomerization process and has even been shown to abolish it entirely. Moreover, there is a conspicuous lack of literature on the photoisomerization of azopyridines in chlorinated solvents where adventitious protonation can occur.
In this work, irradiation of 4-phenylazopyridine (AzPy) in chlorinated solvent with 365 nm light produced significant bathochromic shifts in the π-π* and n-π* absorption bands rather than the expected spectral changes associated with trans-cis photoisomerization. In addition, there was a significant acceleration of the cis-trans back-isomerization rate, which was attributed to protonation of AzPy at the pyridine nitrogen due to HCl production from UV-mediated photodecomposition of the solvent. Density functional theory calculations demonstrated that, by weakening the electronic structure of the azo bond, protonation significantly reduced the activation barrier for cis-trans isomerization, corresponding to a 9-fold acceleration in the isomerization rate. Remarkably, protonation also shut down intersystem crossing between singlet and triplet potential energy surfaces along the isomerization reaction coordinate.
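For context, and assuming for illustration an unchanged pre-exponential factor, a 9-fold rate acceleration at room temperature corresponds to a modest barrier reduction: $k_2/k_1 = e^{\Delta E_a / k_B T} = 9$ gives $\Delta E_a = k_B T \ln 9 \approx 0.056\ \text{eV} \approx 5.4\ \text{kJ/mol}$ at $T = 298$ K.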
Proton therapy uses an external beam of protons to destroy cancerous tissue while reducing damage to healthy tissue. Of particular interest is the recent concept of proton FLASH therapy, where ultra-high dose rates (> 40 Gy/s) are delivered for under one second, with improved sparing of healthy tissue compared to conventional dose rates. The FLASH effect and the influence of beam properties and biological characteristics are not yet fully understood, hence, a sensitive dosimeter with high spatial resolution and in-situ relative dose information for FLASH is needed to bring it into the clinic. Optical fibers (OF) are gaining traction as dosimetry detectors in radiotherapy, including proton therapy, due to their superior spatial resolution, linear dose dependence, independence of dose rate, real-time response, and independence from electromagnetic fields and temperature fluctuations within the range of realistic clinical conditions.
At TRIUMF, characterizations of OF for proton FLASH dosimetry are ongoing. As beam availability at the Proton Therapy Research Centre is limited, we are now exploring experiments at the TR13, TRIUMF's 13 MeV cyclotron, which is used to produce medical isotopes and where the beam is more regularly available. To characterize a fiber's light yield and radiation hardness, a fiber holder customized for the TR13 is needed. The fiber holder was designed based on Monte Carlo simulations in FLUKA as well as temperature calculations using in-house data.
Three different fiber holders were tested in simulations. Two designs were discarded because of energy deposition inhomogeneity in the fiber and other considerations. The third fiber holder showed promising results regarding beam deposition, heat transfer calculations, and radiation activation limitations.
The current fiber holder design can hold silica fibers up to a diameter of 350 µm and withstand irradiations of up to 2 µA beam current. This holder will allow systematic evaluation of OF for potential use with proton FLASH.
The MOLLER experiment is a >$40M USD experiment expected to run in 2026, with a large Canadian contribution to both the spectrometer and detector systems. The experiment utilizes parity violation in the weak interaction, measuring the asymmetry in the scattering of longitudinally polarized electrons between the positive and negative helicity states. The electrons scatter from electrons in liquid hydrogen, are collimated, and are bent through the spectrometer system to the main detector array, which comprises 224 integrating quartz detectors. In addition, there is a set of tracking detectors to study backgrounds and determine the acceptance. In fact, the whole accelerator is part of the experiment, with beam position and charge monitors throughout the beamline serving to study helicity-correlated backgrounds. In this talk I will describe the goals of the MOLLER experiment and its design, and provide a status update, in particular on the spectrometer and detector systems.
The conventional picture of the hadron, in which partons play the dominant role, predicts a separation of short-distance (hard) and long-distance (soft) physics, known as 'factorization'. It has been proven that for certain processes, at sufficiently high $Q^2$, the reaction amplitude factorizes into a hard part, representing the interaction of the incident virtual photon probe with the parton, and a soft part, representing the response of the nucleon to this interaction. One class of such processes is Deep Exclusive Meson Production (DEMP), which provide access to a novel class of hadron structure observables known as Generalized Parton Distributions (GPDs). Unifying the concepts of parton distributions and of hadronic form factors, GPDs correlate different parton configurations in the hadron at the quantum mechanical level, and contain a wealth of new information about how partons make up hadrons. However, access to such GPD information requires that the 'factorization regime' has been reached kinematically, and this can be tested only experimentally. I will summarize prior and planned tests of the validity of GPD factorization in DEMP reactions, such as exclusive pion and kaon production, using the Jefferson Lab Hall C apparatus.
Measurements of several rare eta and eta′ decay channels will be carried out at the Jefferson Lab Eta Factory (JEF). JEF will commence in fall 2024 using an upgraded GlueX detector in Hall D. The combination of highly-boosted eta/eta′ production, recoil proton detection, and a new fine-granularity high-resolution lead-tungstate insert in the GlueX forward calorimeter makes JEF unique among experiments worldwide. JEF will search for new sub-GeV gauge bosons in portals coupling the SM sector to the dark sector, will provide constraints on C-violating/P-conserving reactions, and will allow precision tests of low-energy QCD. Details on the hardware upgrade and simulations will be presented.
Measurements of the neutron electric dipole moment (EDM) place severe constraints on new sources of CP violation beyond the standard model.
The TRIUMF UltraCold Advanced Neutron (TUCAN) EDM experiment aims to improve the measurement of the neutron EDM by a factor of 10 compared to the world's best measurement. The experiment must be conducted in a magnetically quiet environment. A magnetically shielded room (MSR) has been prepared at TRIUMF to house the experiment. The MSR was designed to provide a quasi-static magnetic shielding factor of at least 50,000, which would be sufficient to meet the requirements of the EDM experiment. Measurements have shown that the shielding factor goal was not met. Several additional measurements were taken in order to understand this result. In communication with the MSR vendor, we have designed a new insert for the MSR, which is expected to restore its capabilities. In this presentation I will review the situation with the TUCAN MSR, how we discovered its performance issues, and our progress on fixing the problem.
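For reference, the quasi-static shielding factor quoted above is conventionally defined as the ratio of the field applied outside the shield to the residual field measured inside, $S = B_{\text{outside}} / B_{\text{inside}}$, so the design goal corresponds to $S \geq 5 \times 10^4$ at quasi-static frequencies.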
The Lassonde School of Engineering at York University launched the k2i (kindergarten to industry) academy in June 2020 with a mission to create an ecosystem of diverse partners committed to dismantling systemic barriers to opportunities for underrepresented students in STEM. The k2i academy is a key component of the Lassonde School of Engineering Equity, Diversity, and Inclusion Action Plan. In this talk, Lisa Cole, Director of Programming at k2i academy, will share how inclusive design approaches are currently being used to create programs that question systemic barriers, innovate viable solutions, and build alongside K-12 sector partners.
There is increasing demand for measurements of atmospheric properties as the climate continues to change at an unprecedented rate. Remote sensing allows us to acquire information about our atmosphere from the ground and from space by detecting reflected or emitted radiation. I will present initial results of a comparison using simulated space-based measurements from the HAWC ALI satellite instrument with ground-based measurements from a network of micro-pulse lidars, MPLCAN.
The Aerosol Limb Imager (ALI) is a part of the High-altitude Aerosol, Water, and Clouds (HAWC) satellite, a Canadian mission which will help fill a critical gap in our understanding of the role of aerosol, water vapour, and clouds in climate forcing. ALI will retrieve aerosol extinction and particle size in the troposphere and stratosphere.
The Canadian Micro-Pulse Lidar Network (MPLCAN) is a network consisting of five micro-pulse lidars (MPLs) across eastern and northern Canada. The MPLs can detect particulates produced from wildfire smoke, volcanic ash, and anthropogenic pollutants by collecting backscattered light. They can also differentiate between water and ice in clouds by measuring the polarization state of the backscatter signal.
Coincident measurements between the MPLCAN and ALI instruments have great potential to validate the ALI measurements, and to extend their horizontal coverage. However, the ALI retrieved quantities are not directly comparable to the MPL backscatter measurements, so assumptions must be made about the constituents and optical properties of the atmosphere to compare them. The ALI retrieved quantities were converted to an MPL backscatter measurement for comparison using two methods. First, Mie scattering theory was used based on the ALI retrievals of aerosol particle size to calculate the backscatter coefficient. The second method assumed a lidar ratio, the ratio of backscatter to extinction, appropriate for background stratospheric aerosols. The ALI-derived backscatter coefficient from both methods yielded similar results. Preliminary comparisons between both simulated and actual MPL measurements and the converted ALI retrieval show promising agreement. Future work will aim to model ALI passing over multiple MPLs for realistic HAWC satellite tracks to simulate wildfire smoke events.
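A minimal sketch of the second conversion method described above, assuming a representative (not mission-adopted) lidar ratio for background stratospheric aerosol:

    # An ALI-retrieved extinction coefficient is converted to a backscatter
    # coefficient by assuming a lidar ratio; 50 sr is a representative
    # assumption for background stratospheric aerosol, and the extinction
    # value is illustrative.
    extinction = 1.2e-5                  # aerosol extinction, 1/m
    lidar_ratio = 50.0                   # extinction-to-backscatter ratio, sr
    backscatter = extinction / lidar_ratio
    print(f"backscatter = {backscatter:.2e} 1/(m sr)")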
Resistance spot welding employs the Joule heating effect to form a localized molten pool between two or more metal sheets, which upon solidification forms a solid bond. This process is widely used in automotive and other industrial sectors due to its low cost and ease of automation. Quality assurance of such joints is primarily done using offline inspection with a multi-element ultrasonic transducer to allow for 2D measurements of the weld size to occur. Due to the high number of spot welds in automotive applications, averaging about 5000 welds per car, this inspection is performed only on critical welds, or periodically on select samples.
The novel in-process inspection approach, which monitors the weld during welding, currently employs a single-element ultrasound transducer built into the welding electrode. A series of pulses is used to form a time-evolution signature from which the weld size is estimated based on the penetration of the weld into the sheet. For this reason, adoption has been hindered in applications where the physical diameter of the welding zone is required by safety standards.
To overcome this, current techniques in the field such as multi-element matrix and phased array transducers have been explored. Although both offer the possibility of diameter measurement, the increased size of the transducer requires a significantly larger welding electrode and makes integration difficult. Phased array also employs electronic focusing, increasing both the complexity and cost of the system by an order of magnitude.
To allow imaging to occur, a radical alternative was required. Using a series of point-like sources, we propose a novel approach implementing a built-in lens cut into the welding electrode; as a result, 2D imaging of the welding process can be performed using a transducer that is a fraction of the size of even single-element solutions. After theoretical and numerical validation, a prototype was fabricated for experimental study.
The primary drawback of this technique stems from the drastically smaller transducer size, which reduces the signal by approximately five orders of magnitude.
This talk covers the current results and state of development, future approaches to overcome implementation challenges, and the potential for new advanced solutions based around this innovative approach.
Geophysical methods and soil test analysis have been used to study soil properties in the farm of the Centre for Entrepreneurial Studies (CES), Delta State University, Abraka, Nigeria. Vertical electrical sounding (VES), borehole geophysics, electrical resistivity tomography (ERT), and geochemical methods were used for the study. Seven VES stations were occupied along five ERT measurement traverses. Soil samples were collected close to the VES stations for soil testing and grain size analysis to corroborate the VES and ERT results. The topsoil results obtained from the VES are in agreement with the ERT and borehole log results, ranging from fine-grained silty topsoil to sandy clay. The low resistivity of the topsoil is a result of the partial decomposition of plants and animals forming organic matter; it ranges from 168-790 Ωm with an average value of 494 Ωm and an average depth of 2.3 m. This depth covers the upper root zone of some significant crops, and indicates a high amount of moisture and mineral nutrients and a fair degree of stoniness to aid adequate rooting of the crops. The observed topsoil is also high in porosity and water retention, which are favourable factors for the yield of tuber and stem plants. The soil test results gave pH: 6.13-7.16, organic matter: 6.48-8.66 %, nitrogen: 65.72-78.21 %, phosphorus: 53.32-67.43 %, copper: 14.16-22.61 mg/kg, nickel: 1.16-3.11 mg/kg, lead: 4.00-8.84 mg/kg, arsenic: 0.08-0.1 mg/kg, iron: 96.33-151.63 mg/kg. These recorded concentrations are below the WHO standards for crop production.
The Cosmological Advanced Survey Telescope for Optical and uv Research (CASTOR) is a proposed Canadian Space Agency (CSA) mission that would image the skies at ultraviolet (UV) and blue-optical wavelengths simultaneously. Operating close to its diffraction limit, the 1-m-diameter CASTOR telescope is designed with a spatial resolution similar to the Hubble Space Telescope (HST), but with a field of view about one hundred times larger. The exciting science enabled by the CASTOR suite of instruments and the planned legacy surveys encompasses small bodies in the Solar System, exoplanet atmospheres, cosmic explosions, supermassive black holes, galaxy evolution, and cosmology. In addition, this survey mapping capability would add UV coverage to wide-field surveys planned for the Euclid and Roman telescopes and enhance the science return on these missions. With a CSA-funded phase 0 study already complete, the CASTOR science case and engineering design are on track for a launch in 2030 pending continued funding.
We use atomic force microscopy-force spectroscopy (AFM-FS) to measure the morphology and mechanical properties of cross-linked polyethylene (PEX-a) pipe. PEX-a pipe is being increasingly used to replace metal pipe for water transport and heating applications, and it is important to understand ageing, degradation and failure mechanisms to ensure long-term reliability. AFM-FS measurements on the PEX-a pipe surfaces and across the pipe wall thickness allow us to quantify changes in the morphology and mechanical properties from high resolution maps of parameters such as stiffness, modulus, and adhesion. Measurements performed on pipes subjected to different processing and accelerated ageing conditions generate a substantial amount of data. To classify and correlate these images and the associated properties, we have used machine learning techniques such as k-means clustering, decision trees, support vector machines, and neural networks, revealing distinctive changes in the morphology and mechanical properties with ageing. Our machine learning approach to the analysis of the large body of AFM-FS data complements our deep generative modeling of infrared images of the same pipes [1], providing additional insight into the complex phenomena of ageing and degradation.
[1] M. Grossutti et al., ACS Appl. Mater. Interfaces 15, 22532 (2023).
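As a sketch of the k-means step described above, the snippet below clusters per-pixel mechanical feature vectors into regions; the property maps, sizes, and cluster count are synthetic placeholders, not our AFM-FS data.

    import numpy as np
    from sklearn.cluster import KMeans

    # Every AFM-FS pixel becomes a feature vector of mechanical properties,
    # and k-means groups the pixels into morphologically distinct regions.
    rng = np.random.default_rng(0)
    h, w = 128, 128
    stiffness = rng.normal(1.0, 0.2, (h, w))
    modulus = rng.normal(2.0, 0.5, (h, w))
    adhesion = rng.normal(0.5, 0.1, (h, w))

    features = np.stack([stiffness, modulus, adhesion], axis=-1).reshape(-1, 3)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
    segmentation = labels.reshape(h, w)      # cluster map aligned with the image
    print(np.bincount(labels))               # pixels assigned to each cluster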
The development of nanotechnology has brought a great opportunity to study the linear and nonlinear optical properties of plasmonic nanohybrids made of metallic nanoparticles and quantum emitters. Rayleigh scattering is a nonlinear scattering mechanism arising from the elastic scattering of electromagnetic radiation from bound electrons in atoms or molecules after they have been excited to virtual states far from resonances. A theory for stimulated Rayleigh scattering (SRS) has been developed for metallic nanohybrids composed of an ensemble of metallic nanoparticles and quantum dots (QDs). The intensity of the output stimulated Rayleigh scattered light is obtained using the coupled-mode formalism of Maxwell's equations and evaluated by the density matrix method. An analytical expression for the SRS intensity is calculated in the presence of surface plasmon polaritons (SPPs) and dipole-dipole interactions (DDIs). We have compared this theory with experimental data for a nanohybrid doped with an ensemble of Ag nanoparticles and rhodamine 6G dye, finding good agreement between experiment and theory. We have also predicted an enhancement of the SRS intensity due to the extra scattering mechanisms of the SPP and DDI polaritons with QDs. It was also found that at low values of the DDI coupling the SRS intensity spectrum contains two peaks; however, when the DDI coupling is increased there is only one peak in the SRS spectrum. These findings can be very useful. For example, the analytical expressions can be valuable for experimental scientists and engineers, who can use them to compare with their experiments and to make new types of plasmonic devices. The enhancement in the SRS intensity can be used to fabricate SRS nanosensors. Similarly, our finding that the SRS spectrum switches from two peaks to one as the DDI coupling increases can be used to fabricate SRS nanoswitches, where the two-peak state can be regarded as the ON position and the one-peak state as the OFF position.
Metal additive manufacturing has emerged as a pivotal innovation in modern manufacturing technologies, characterized by its exceptional capability to fabricate complex geometries. The process depends on a critical phase change phenomenon, in which metals transition between solid and liquid states under the intense heat of lasers. Accurate simulations of these phase changes are essential for enhancing the precision and reliability of metal additive manufacturing processes, thereby expanding the range of producible designs. However, the challenge lies in the detailed modeling of particle responses to thermal variations. This entails an understanding of melting dynamics: how particles transition from solid to liquid upon reaching their melting points, their interactions and fusion during this transformation, and the resultant changes in properties such as viscosity and flow. In response, this study introduces a Discrete Element Method (DEM) approach for simulating particle dynamics and phase changes in metal additive manufacturing. By modeling metal powder as a cluster of interconnected smaller particles, this approach simplifies the simulation of melting and solidification. It combines particle dynamics and phase change simulations into a single framework, offering computational efficiency and adaptability to various materials and manufacturing conditions. As a result, it presents a practical alternative to more complex methods such as Computational Fluid Dynamics (CFD) and facilitates rapid prototyping and optimization in metal additive manufacturing.
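A toy sketch of the bonded-particle idea (not the authors' implementation): sub-particles are joined by bonds that are removed once both endpoints exceed the melting temperature, letting the cluster flow. All values are arbitrary, and there is no mechanics, heat conduction, or re-solidification here.

    import numpy as np

    # A powder grain as a chain of sub-particles joined by solid bonds.
    n = 20
    T_melt = 1700.0                              # K, roughly steel-like
    temperature = np.full(n, 300.0)
    bonds = {(i, i + 1) for i in range(n - 1)}   # intact solid bonds

    for step in range(100):
        temperature += 25.0                      # crude uniform laser heating
        molten = temperature > T_melt
        # remove a bond once both of its endpoints are molten
        bonds = {(i, j) for (i, j) in bonds if not (molten[i] and molten[j])}

    print(f"remaining solid bonds: {len(bonds)}")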
The field of domain wall electronics is part of a broad effort to engineer novel electronic functionalities in complex oxides via nanoscale inhomogeneities. Conducting ferroelectric domain walls offer the possibility of writeable electronics, for which the conduction channels may be manipulated by external fields or strains. In this talk, I discuss a simple problem, namely how the shape of a conducting domain wall changes with the density of free electrons on the domain wall. I show that the competition between electrostatic forces and domain wall surface tension naturally leads to a zigzag domain wall morphology.
Silicon nitride (SiN) stands out as a promising material for the fabrication and design of integrated photonic devices applicable to precision spectroscopy, telecommunications, and quantum optical communication. Notably, SiN demonstrates low losses, high nonlinearities, and compatibility with existing CMOS technology. We will report on our lab's optimized process, guiding quantum devices from the fabrication stage to optical characterization.
Our methodology employs low-pressure chemical vapor deposition to generate stoichiometric silicon nitride. Notably, removing the backside of the nitride from the wafer significantly impacts achieving nominal values for the refractive index [1]. Understanding how the index changes with wafer and fabrication processing proves critical for predicting correct geometries and the associated group velocities required for realizing novel quantum technologies. The quantified propagation loss of our devices is measured at 1.2 dB/cm, with coupling losses at 2 dB/facet, aligning with the current state-of-the-art.
Furthermore, we have conducted device modeling and theoretical simulations to predict device performance. We employed the Lugiato-Lefever equation, solving it using the split-step Fourier method [2]. Guided by our theoretical predictions, we initiated the fabrication of new resonators for optical frequency combs and solitons, subsequently moving these newly fabricated devices to the lab for characterization.
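A minimal sketch of a split-step Fourier integrator for the normalized Lugiato-Lefever equation is shown below; the detuning, dispersion, and pump values are illustrative placeholders rather than parameters of our devices.

    import numpy as np

    # Normalized Lugiato-Lefever equation,
    #   dpsi/dt = -(1 + i*delta)*psi + i*|psi|^2*psi - i*(beta/2)*d2psi/dtheta2 + F,
    # integrated with a symmetric split-step Fourier scheme.
    N, dt, steps = 256, 1e-3, 20000
    delta, beta, F = 2.0, -0.02, 1.5            # detuning, dispersion, pump
    theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
    k = 2*np.pi*np.fft.fftfreq(N, d=2*np.pi/N)  # azimuthal mode numbers
    psi = 0.5 + 0.01*np.exp(-theta**2)          # small localized perturbation

    L_half = np.exp((-(1 + 1j*delta) + 1j*(beta/2)*k**2)*dt/2)  # linear half-step
    for _ in range(steps):
        psi = np.fft.ifft(L_half*np.fft.fft(psi))
        psi = psi*np.exp(1j*np.abs(psi)**2*dt) + F*dt   # Kerr phase + Euler pump
        psi = np.fft.ifft(L_half*np.fft.fft(psi))

    print(f"max intracavity intensity |psi|^2 = {np.abs(psi).max()**2:.2f}")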
In conclusion, I will discuss how our progress in developing these novel devices can be applied to exciting applications [3].
[1] A. M. Tareki et al., IEEE Photonics Journal 15, 1-7 (2023)
[2] T. Hansson et al., Optics Communications 312, 134-136 (2014)
[3] M. A. Guidry et al., Nat. Photon. 16, 52-58 (2022)
Future quantum networks have significant implications for the secure transfer of sensitive information. A key component in enabling longer transmission distances in these networks is an efficient and reliable quantum memory (QM) device. QM devices can enable the storage of quantum optical light and will be a vital component of quantum repeater nodes and precise quantum sensors. We will present signal-to-noise ratio (SNR) and bit-error rate (BER) performance metrics for a unique, dual-rail QM system housed in a deployable module.
Our setup utilizes a rubidium vapor cell operating at near room temperature under the conditions of electromagnetically induced transparency [1]. This effect allows optical light states to be coherently mapped into and out of a warm atomic ensemble. A dual-rail configuration is employed which permits the storage of arbitrary polarization qubits. We will report the capabilities of our memory as a device in visible light communication and its SNR and BER performance under various operating conditions such as memory lifetime and optical storage efficiency [2].
Furthermore, we will present the capability of this system for an on-off keying communication scheme by analyzing differential signaling between the rails. This is, to our knowledge, the first demonstration of an optical dual-rail memory utilized for this type of communication scheme.
Demonstrations utilizing these novel QM systems in established communication protocols will be key for quantum networks and the future quantum internet.
[1] M. Namazi et al., Phys. Rev. Appl. 8, 034023 (2017)
[2] J. De Bruycker et al., 12th International Symposium on Communication Systems, Networks and Digital Signal Processing, pp. 1-5 (2020)
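To make the differential on-off keying readout described above concrete, here is a toy numpy sketch; the signal and noise levels are arbitrary and do not reflect the measured performance of the memory.

    import numpy as np

    # On-off keying with a differential dual-rail decision: each bit is decided
    # by which rail carries more retrieved light.
    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, 10000)
    rail_a = 1.0*bits + rng.normal(0, 0.4, bits.size)        # carries the '1's
    rail_b = 1.0*(1 - bits) + rng.normal(0, 0.4, bits.size)  # carries the '0's
    decoded = (rail_a - rail_b > 0).astype(int)              # differential decision
    print(f"BER = {np.mean(decoded != bits):.4f}")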
A robust, reliable and field-deployable quantum memory device will be necessary for long-distance quantum communication and the future quantum internet [1]. An attractive implementation meeting these requirements is a warm vapour system operating under the conditions of Electromagnetically Induced Transparency, a technique capable of storing and retrieving quantum optical light states [2]. Our study investigates the temperature dependence of the storage lifetime for the D1 transition in Rb87 vapour. Rubidium is chosen for its favorable operational temperature and resonant wavelengths that are readily attainable from commercial light sources.
We employ a rack-mountable optical memory setup containing isotopically pure Rb87 vapour cells. Using spectroscopic techniques for temperature calibration [3], we explore a range of operating temperatures. Employing optical pulses of ~500 ns duration, we achieved storage decay lifetimes as long as 175 μs, which is a promising benchmark for this type of system. The measured storage lifetimes provide insight into the decoherence mechanisms that can affect optical memory performance. Lower operating temperatures can yield increased coherence times due to reduced atomic motion, but they also tend to lead to a decrease in memory efficiency due to lower optical depths.
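A sketch of the lifetime extraction step, fitting an exponential decay to retrieval-efficiency data; the data points below are synthetic, generated around the 175 μs scale purely for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    # Fit an exponential decay to retrieval efficiency vs storage time.
    def decay(t, eta0, tau):
        return eta0*np.exp(-t/tau)

    t = np.linspace(0, 400, 20)                        # storage time, us
    rng = np.random.default_rng(0)
    eta = decay(t, 0.30, 175.0) + rng.normal(0, 0.005, t.size)

    (eta0, tau), _ = curve_fit(decay, t, eta, p0=(0.3, 100.0))
    print(f"fitted storage lifetime tau = {tau:.0f} us")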
These lifetimes demonstrate the potential for field-deployable systems in long-distance quantum communication schemes. Our results also underscore the importance of temperature control in quantum memory systems and offer practical insights for utilizing quantum architecture in both classical and quantum regimes in new and exciting applications.
[1] Mehdi Namazi et al., Phys. Rev. Appl. 18, 044058 (2022)
[2] Mehdi Namazi et al., Phys. Rev. Appl. 8, 034023 (2017)
[3] Li-Chung Ha et al., Phys. Rev. A 103, 022826 (2021)
The impact of ion dynamics in the sheath of argon DC plasma discharges is analysed. We show that, at moderate pressures where the ion mean free path is of the order of the sheath width (10-150 Pa), the spatial variations of the ion temperature have a strong impact on the sheath formation process, especially on the density profiles of the plasma species and the mean velocity of the ions impacting the cathode. To demonstrate these findings, we compare simulation data of DC argon discharges obtained from a particle-in-cell 1D3V (one dimension in space and three dimensions in velocity) kinetic model with those of one-dimensional self-consistent fluid models. The simulations show that ion collisions with neutral atoms must be included in the fluid model to accurately simulate the discharge, especially in the sheath region, and that a self-consistent calculation of the ion temperature profile is necessary in the whole simulation domain. In particular, in the cathode sheath, where there is a large potential fall, the ion temperature can be several orders of magnitude larger than the background gas temperature despite the relatively large ion-neutral collision frequency in the considered pressure range. The kinetic simulations also show that ion-neutral collisions are responsible for a progressive spreading of the ion velocities in the directions perpendicular to the electric field in the cathode sheath.
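For illustration, a minimal 1D3V particle push with isotropizing ion-neutral collisions (a toy sketch in arbitrary units with a fixed field, no boundaries, and no self-consistent field solve, unlike the PIC model used in this work) shows how collisions feed energy into the perpendicular velocity components:

    import numpy as np

    # Ions are accelerated along x by a constant sheath-like field; occasional
    # speed-conserving isotropic collisions redistribute velocity into vy, vz.
    rng = np.random.default_rng(0)
    n, dt, qm, nu = 10000, 1e-3, 1.0, 0.05    # ions, step, charge/mass, coll. freq.
    x = rng.uniform(0.0, 1.0, n)
    v = np.zeros((n, 3))                      # (vx, vy, vz)

    for _ in range(1000):
        v[:, 0] += qm*(-1.0)*dt               # constant E_x (toy)
        x += v[:, 0]*dt
        hit = rng.random(n) < nu*dt           # which ions collide this step
        m = hit.sum()
        speed = np.linalg.norm(v[hit], axis=1)
        mu = 2*rng.random(m) - 1              # isotropic post-collision direction
        phi = 2*np.pi*rng.random(m)
        s = np.sqrt(1 - mu**2)
        v[hit] = speed[:, None]*np.stack([mu, s*np.cos(phi), s*np.sin(phi)], axis=1)

    print(f"var(vy)/var(vx) = {v[:, 1].var()/v[:, 0].var():.3f}")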
Non-equilibrium plasmas at atmospheric pressure are often characterized by optical emission spectroscopy. Despite the simplicity of recording optical emission spectra in plasmas, the determination of spatially resolved plasma properties (e.g. electron temperature) in an efficient way is very challenging.
Hyperspectral imaging is a spectroscopic technique that combines optical emission spectroscopy with 2D optical imaging to simultaneously generate spectral and spatial mappings of optical emission. Using this technique, images are acquired over a wide range of wavelengths with narrow bandwidths, and a 2D spatial mapping of the spectral variation is generated within a reasonable time. Each pixel of the image ends up containing spectral information, and collectively, the pixels form a hyperspectral cube that comprises both spatial and spectral information.
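Schematically, the resulting data structure is a three-dimensional array with two spatial axes and one spectral axis; the dimensions below are placeholders.

    import numpy as np

    # Illustrative layout of a hyperspectral cube.
    ny, nx, n_lambda = 256, 256, 400
    cube = np.zeros((ny, nx, n_lambda))
    spectrum = cube[120, 80, :]     # full emission spectrum at one pixel
    image = cube[:, :, 210]         # 2D spatial map in one wavelength band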
In this presentation, we show spatially resolved optical images of a microwave argon plasma jet expanding into ambient air, recorded over a wide range of wavelengths using a hyperspectral imaging system based on a tunable Bragg-grating imager coupled to a scientific Complementary Metal-Oxide-Semiconductor camera. The working principles of the system are detailed, along with the necessary post-processing steps. Further analysis of the spatial-spectral data, including the Abel transform used to determine radially resolved 2D mappings, is also presented.
Overall, we will show that the proposed approach provides unprecedented cartographies of key plasma parameters, such as argon and oxygen line emission intensities, argon metastable number densities, and argon excitation temperatures.
Considering that all these plasma parameters were obtained from measurements performed in a reasonable time, Bragg-grating-based hyperspectral imaging constitutes an advantageous plasma diagnostic technique for detailed analysis of microwave plasma jets used in several applications.
A low-$\beta$ plasma is characterized by a dominance of magnetic energy over internal (kinetic) energy, where the magnetic pressure ($B^2/2\mu_0$) surpasses the kinetic pressure ($p$), confining the plasma within magnetic fields. Under specific conditions, low-$\beta$ plasmas adhere to Alfvén's theorem, wherein magnetic field lines remain 'frozen' within the plasma and move with it. These plasmas are commonly associated with magnetic confinement fusion reactors, star atmospheres, and plasma-based space propulsion technologies.
This talk aims to present findings from plasma acceleration simulations conducted using magnetohydrodynamics (MHD) and Particle-In-Cell (PIC) codes. The study will review Weber-Davis solar wind acceleration, following Parker's theoretical framework. Furthermore, various plasma acceleration modes have been studied, including the critical points responsible for accelerating solar winds from subsonic to supersonic velocities. Transitioning from solar winds to magnetic nozzle scenarios involves minor adjustments, leading to a convergent-divergent magnetic field configuration that converts plasma's thermal energy into directed kinetic energy.
Both PIC and MHD simulations are analyzed and compared to understand plasma acceleration modes, with a focus on torsional Alfvén waves, pressure-induced acceleration, and centrifugal confinement.
Ionospheric turbulence is studied extensively with satellite and rocket instrumentation and with ground-based radars. There are two distinct regimes: one concerns the E region below 130 km altitude, and the other the F region above 150 km. The E region is often subjected to intense Hall currents, which lead to various instabilities dominated by a modified two-stream instability. F region instabilities grow more slowly and cover much greater scales. In recent years we have come to understand that the dominant structures evolve in such a way that their electric field is reduced compared to the ambient electric field, such that they match threshold electric field conditions for which the growth rate is next to nil. These structures also heat the electrons, sometimes to a point where the heating rate exceeds the local classical Joule heating rate. Exceptions to the rules have also been found with narrow radar spectra, where the Doppler shift of the structures actually matches expectations from linear growth rate theory owing to the peculiar directions at which said structures are generated. With modern radars we can now localize decameter turbulence in relation to optical images of the aurora borealis, and we find that the structures are parallel to auroral arcs but not inside them, indicating stronger electric fields on the edges of the aurora. For the F region, we often observe far larger structures generated by slowly growing instabilities like the generalized Rayleigh-Taylor instability. In the equatorial region where such structures are generated, we find that structures up to 70 km in size decay at an ambipolar diffusion rate associated with much smaller 500 m structures, and conclude that the culprit is mode-coupling down to sizes for which classical diffusion is fast enough to offer a sink of wave energy. At higher latitudes we systematically observe steepening spectra, but only when and where the plasma is connected to a large E region plasma density produced either by solar illumination or by energetic auroral particles.
We propose to investigate the breakdown of superfluidity in strongly correlated Li Fermi superfluids in a new trap geometry that allows for a long coherence time: a homogeneous box with one periodic boundary condition. We will achieve this by trapping on the surface of a cylinder and introducing flexible barriers to superfluid flow. We report progress toward this new trap and prospects for future experiments.
We revisit and expand upon previous results (1) related to He$^{2+}$-Ne$_2$ collisions to analyse electron-removal processes resulting in dimer fragmentation. The standard independent-electron multinomial analysis of single- and multi-electron transitions is compared to a Slater-determinant-based analysis that accounts for the Pauli principle. For a projectile travelling parallel to the dimer axis, we account for electron capture by the projectile from the first atom it interacts with in the dimer. The results indicate strong agreement between the two analyses and confirm our previous prediction of a strong Interatomic Coulombic Decay (ICD) signal at low energies (~10 keV/amu).
For a He$^+$ projectile there is a smaller total ICD cross-section, but no relevant competing process in the Ne$^+$-Ne$^+$ fragmentation channel. Measuring the kinetic energy release spectrum would reveal a clear ICD signal.
(1) T. Kirchner, J. Phys. B 54, 205201 (2021)
The development of Photonic Crystal Fibers in the 1990s has led to considerable research on Supercontinuum generation, in which nonlinear effects play a major role. The majority of simulation work done to model the nonlinear Raman effect has used the Generalized Nonlinear Schrödinger Equation (GNLSE), which is computationally efficient but lacks accuracy in broadband modelling due to its reliance on the Slowly Varying Envelope Approximation. Ultra-broadband spectra have, however, been modelled using other equations, such as the Forward Maxwell Equation (FME), which makes minimal approximations, and an equation developed by Silva, Weigand and Crespo (SWCE), another computationally efficient model used to simulate Cascaded Four-Wave Mixing.
Nonlinear media have also been employed for a recent amplification method called Kerr Instability Amplification (KIA). However, the only simulations done to test KIA so far have used the Forward Maxwell Equation. In this work, we simulated both of these effects using all three equations and compared them. We find that they all perform similarly in modelling the Raman effect, but the GNLSE exhibits noticeably lower amplification in KIA simulations. The SWCE shows similar results to the FME while being substantially more efficient. We expect that understanding how these equations compare in simulating these nonlinear effects will prove useful to the photonics community.
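As background, equations in the GNLSE family are commonly integrated with a split-step Fourier scheme. The following is a minimal sketch of that scheme for the basic NLSE, with no Raman or self-steepening terms and with arbitrary illustrative parameters; it is not the simulation code used in this work:

```python
import numpy as np

# Split-step Fourier integration of the basic NLSE:
#   dA/dz = -i*(beta2/2) d^2A/dT^2 + i*gamma*|A|^2 A
# (sign conventions vary; illustrative parameters only)
N = 2**12
T = 20e-12                                   # time window (s)
dt = T / N
t = (np.arange(N) - N // 2) * dt
omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)  # angular frequency grid

beta2 = -20e-27                              # GVD (s^2/m), anomalous
gamma = 1.3e-3                               # nonlinear coefficient (1/(W m))
L, nz = 100.0, 2000                          # fibre length (m), steps
dz = L / nz

A = 1.0 / np.cosh(t / 1e-12)                 # sech input pulse, 1 W peak

half_disp = np.exp(0.5j * beta2 * omega**2 * dz / 2)
for _ in range(nz):
    A = np.fft.ifft(half_disp * np.fft.fft(A))    # half dispersion step
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)   # full nonlinear step
    A = np.fft.ifft(half_disp * np.fft.fft(A))    # half dispersion step

spectrum = np.abs(np.fft.fftshift(np.fft.fft(A)))**2
```

The FME and SWCE propagate quantities closer to the full field rather than a slowly varying envelope, but the same operator-splitting idea carries over.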
A quantum internet of connected nodes requires the ability to send single photons across vast distances, something not possible with current fiber-optic technology. A solution to this is the utilization of quantum repeater nodes, which rely on a trustworthy quantum memory (QM) device. We will present a deployable quantum memory system that utilizes Electromagnetically Induced Transparency for optical storage in a warm atomic vapour. We have characterized the storage lifetime, signal-to-noise ratio (SNR), and bit error rate (BER) of the D1 transition manifold of isotopically pure $^{87}$Rb.
Using optical pulses of 500 ns duration, we obtained a storage lifetime of 175 µs. These lifetimes highlight the potential of our portable quantum memory system for long-distance quantum communication schemes. Further, our QM system has a dual-rail configuration that allows the storage of arbitrary polarization qubits. The dual-rail system allows us to quantify the SNR of two spatially distinct channels and to characterize the memory performance for on-off keying through the use of a polarization differential.
Our poster will provide an overview of this novel system and highlight the capability of deployable QM systems in long-distance communications and their possible future applications.
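As an aside on methodology, a storage lifetime like the 175 µs quoted above is typically extracted by fitting retrieval efficiency versus storage time to an exponential decay. A minimal sketch with synthetic data, not the experiment's dataset:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy extraction of a 1/e storage lifetime from retrieval efficiency
# vs. storage time; the data below are synthetic, not measured.
def decay(t, eta0, tau):
    return eta0 * np.exp(-t / tau)

t_us = np.array([10., 25., 50., 100., 150., 200.])           # storage time (us)
eta = 0.30 * np.exp(-t_us / 175.0)                           # fake efficiencies
eta += np.random.default_rng(0).normal(0.0, 0.005, t_us.size)

(eta0, tau), _ = curve_fit(decay, t_us, eta, p0=(0.3, 100.0))
print(f"fitted storage lifetime: {tau:.0f} us")              # ~175 us
```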
Variational calculations readily produce high-precision energies and wave functions for the ground state, but the accuracy typically deteriorates rapidly with increasing principal quantum number n. The current limit is n = 10 [1,2]. We will report the results of new variational calculations based on the use of triple basis sets in Hylleraas coordinates. The basis sets are "tripled" in that each combination of powers $i, j, k$ in basis functions of the form $r_1^i r_2^j r_{12}^k \exp(-\alpha r_1 - \beta r_2)$ is repeated three times with different nonlinear parameters $\alpha$ and $\beta$ that are separately optimized to span different distance scales. Results will be reported for the S- and P-states up to n = 24, including a comparison with high-precision measurements for n = 24 [3].
[1] G. W. F. Drake and Z.-C. Yan, Phys. Rev. A 46, 2378 (1992).
[2] D. T. Aznabaev, A. K. Bekbaev, and V. I. Korobov, Phys. Rev. A 98, 012510 (2018).
[3] G. Clausen et al., Phys. Rev. Lett. 127, 093001 (2021).
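To make the "tripling" concrete, here is a schematic sketch of how such a basis is organized; the $(\alpha, \beta)$ values below are placeholders, whereas in the actual calculations they are optimized variationally for each distance scale:

```python
from itertools import product

# Schematic "tripled" Hylleraas basis: every power combination (i, j, k)
# in r1^i * r2^j * r12^k * exp(-alpha*r1 - beta*r2) appears three times,
# once per (alpha, beta) set representing a different distance scale.
OMEGA = 6                                      # keep terms with i+j+k <= OMEGA
scales = [(2.0, 2.0), (1.0, 0.5), (0.5, 0.1)]  # hypothetical (alpha, beta)

basis = [(i, j, k, a, b)
         for i, j, k in product(range(OMEGA + 1), repeat=3)
         if i + j + k <= OMEGA
         for a, b in scales]

print(len(basis), "basis functions")           # 3x the single-set count
```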
Blood disorders, such as low-iron anemia, affect almost one-third of Canadians. Symptoms range from extreme fatigue to shortness of breath. Given these consequences, early detection is paramount. A common diagnostic test for anemia is a red blood cell count, but this test must be done at a lab and does not have an immediate turnaround time. High costs and the limited availability of equipment and doctors in poorer countries make such a test a luxury. Our proposed research aims to address these challenges by creating an affordable, rapid, reliable, sensitive, and specific point-of-care device for quantifying hemoglobin (Hb) levels in real time using a photodetector. The significance of this research lies in its potential to revolutionize the diagnosis of Hb disorders by leveraging photodetector technology.
In this study, we use lab-built dye-sensitized solar cells and characterize them for their current response at a fixed voltage. To determine the Hb levels in blood, we converted the photodetector's transmission response into quantifiable current readings based on Hb concentration. This process included the development of a calibration curve of Hb concentration vs. current at a set voltage. From preliminary responses, we found a linear relationship between the current and the concentration of hemoglobin present in glass. Hb exhibits a distinctive optical absorption spectrum, which can be distinguished and measured using a photodetector. However, the device itself is not specific to Hb detection; further studies will therefore involve fabricating a test strip that adsorbs only Hb, enabling the quantification of Hb concentration in blood.
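The calibration step described here amounts to a linear fit of measured current against known Hb concentration, which can then be inverted for unknown samples. A minimal sketch with made-up numbers, not the reported data:

```python
import numpy as np

# Toy linear calibration: photocurrent (uA) vs. known Hb standards (g/L).
conc = np.array([0.0, 30.0, 60.0, 90.0, 120.0])      # Hb standards (g/L)
current = np.array([48.1, 41.9, 36.2, 30.3, 24.0])   # measured current (uA)

slope, intercept = np.polyfit(conc, current, 1)      # least-squares line

def hb_from_current(i_ua):
    """Invert the calibration line to estimate Hb concentration."""
    return (i_ua - intercept) / slope

print(f"estimated Hb: {hb_from_current(33.0):.1f} g/L")
```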
Canada plans to use a deep geological repository (DGR) system consisting of corrosion-resistant used fuel containers (UFCs) and other barriers to store spent nuclear fuel safely. Concerns arise regarding potential fuel exposure to groundwater, which would lead to fuel oxidation and dissolution due to water radiolysis driven by residual fuel radioactivity. Radionuclides within spent fuel, primarily located in UO$_2$ grains, can be released at rates governed by the fuel corrosion rate. Unlike the β- and γ-radiation dose rates, the α-radiation dose rate will remain high for extended periods, making the α-radiolysis of water the primary source of oxidants.
This study aims to explore α-particle interactions with fuel surfaces to determine their impact on UO$_2$ dissolution rates. The methodology is to conduct in-situ α-irradiation-electrochemistry experiments using the Rutherford backscattering beamline on the Western Tandetron accelerator, with sealed radiation sources providing a constant flux of high-energy α-particles. As direct use of UO$_2$ fuel pellets was impractical for this experimental setup, uranium oxide thin films were grown on metallic foil substrates via electrodeposition, using an aqueous electrolyte containing uranyl nitrate. Films were grown using current densities of 5-30 mA/cm$^2$, pH 7.5 to 8.5, and a temperature of 76 ± 1 °C.
To ensure the composition of the deposited films matched that of used fuel, a detailed characterization of the films was performed. The films showed a cauliflower-like morphology in SEM analysis, with the presence of uranium and oxygen confirmed through EDX. RBS measurements indicated film thicknesses in the 1-5 μm range. XRD showed that as-deposited films were amorphous, turning into polycrystalline UO$_2$ films after annealing at 600 °C in 10$^{-6}$ Torr H$_2$. Raman analysis detected U$_4$O$_9$ and U$_3$O$_8$ phases in the as-deposited films, while UO$_2$ phases emerged in the annealed samples' spectra. Further characterization of the films, as well as preparation for the in-situ α-irradiation-electrochemistry experiments, is currently underway.
Formation of microporous structures on a polymer surface leads to improved surface properties such as self-cleaning, anti-fogging, and antibacterial characteristics, as well as strengthened adhesion with metals. Femtosecond laser-induced microporous structures (fs-LIMS) are microscale features created using laser technology for subsequent metal deposition. However, their quality is heavily influenced by complex interactions between various laser processing parameters and material properties. Presently, the selection of appropriate laser parameters relies largely on the operator's experience and requires laborious experimentation. To achieve a more efficient, rapid, and cognitively automated process, an integrated machine learning methodology is introduced for determining the optimal process conditions for fs-LIMS. The methodology commences with feature extraction from images captured by scanning electron microscopy (SEM) using a convolutional neural network (CNN). Subsequently, dimensionality reduction techniques such as principal component analysis (PCA), multidimensional scaling (MDS), and t-distributed stochastic neighbor embedding (t-SNE) are employed to explore various analytical approaches. The k-means clustering method is then used to automatically classify the main characteristics (extracted via the various dimensionality reduction methods) of fs-LIMS into categories representing high, moderate, and low quality. Among the dimensionality reduction methods, PCA proves most effective, achieving a peak accuracy of 95.97% in a three-dimensional PCA model. Finally, based on the images labeled by PCA and k-means clustering, support vector machine (SVM), artificial neural network (ANN), and random forest (RF) algorithms are applied to predict the laser-processed outcomes. The results reveal that SVM attains the highest accuracy, at 92%. This study introduces a novel approach for identifying the optimal laser process conditions to create laser-induced microscale porous structures.
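The pipeline described above (CNN features, then PCA, then k-means labels, then a supervised classifier) can be sketched in a few lines of scikit-learn. The random array below merely stands in for the CNN-derived SEM features, and none of the numbers reproduce the study's results:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
features = rng.normal(size=(300, 512))   # placeholder CNN feature vectors

# Dimensionality reduction to a 3-component PCA space.
reduced = PCA(n_components=3).fit_transform(features)

# Unsupervised quality labels (high / moderate / low) via k-means.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)

# Supervised prediction of the quality label from the PCA features.
X_tr, X_te, y_tr, y_te = train_test_split(reduced, labels, random_state=0)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"held-out accuracy: {svm.score(X_te, y_te):.2f}")
```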
The titanium oxide surface is responsible for many of the properties associated with the metal because it creates a hard, uniform, and thermodynamically stable protective coating over metallic titanium. Because of the characteristics of the oxide film, titanium has found uses in biomedical implants, aerospace engineering, industrial piping for corrosive environments, and other areas where high strength and low weight are required. Our project is aimed at understanding the atomistic mechanisms of TiO2 formation, including oxidation rates and the role of anodization potential on Ti oxide layer structure and morphology, using Rutherford backscattering spectrometry (RBS) for elemental depth profiling during oxide growth, complemented by other surface-sensitive techniques.
Our research involves using a specially designed in-situ cell with an ion-permeable silicon nitride window to provide a barrier between the ultra-high vacuum (UHV) required to perform RBS and the electrolyte solution required for electrochemical analysis and anodization. The thin silicon nitride window is coated with titanium and exposed to an electrolyte solution; RBS measurements are taken as the titanium metal is anodized to titanium oxide, providing direct information about the oxide growth mechanism. In-situ RBS results show a significant increase in the oxidation rate of titanium compared to equivalent ex-situ measurements, as well as spontaneous TiO2 film growth, without applied potential, in the presence of high-energy He+ particles interacting with the electrolyte solution. Additionally, a significant change is observed between benchtop electrochemical impedance spectroscopy experiments and those performed under high-energy He+ flux. Direct and indirect alpha-radiation exposure measurements are performed to determine the enhanced titanium oxide growth rate generated via radiation and radiolysis effects. The quantification of these effects allows for a reliable comparison of in-situ RBS anodization experiments with ex-situ benchtop anodization experiments.
Many fundamental scientific processes and engineering designs are affected by the presence of hydrogen, e.g. hydrogen embrittlement. Understanding fundamental issues in these materials and devices requires quantifying the amount of hydrogen present, and a method with high sensitivity is critical for improving current hydrogen analysis techniques. To this end, a new method called medium energy elastic recoil detection analysis (ME-ERDA) is adapted from two existing techniques, elastic recoil detection analysis (ERDA) and medium energy ion scattering (MEIS). ME-ERDA successfully detects hydrogen at surfaces and interfaces with a resolution of ~10 Å. An important aspect of the analysis is quantifying the amount of hydrogen in a material, a process which requires a calibration standard with a large, known amount of hydrogen. Hydrogen analysis methods will be improved by synthesizing calibration standards made of thin metal hydrides, particularly titanium hydride (TiHx), and quantifying the amount of hydrogen in the standards. This will be accomplished by depositing titanium on a Si (001) wafer via magnetron sputter deposition (Western Nanofab). The metal hydride will be formed using two methods: 1) annealing in a hydrogenated environment, and 2) galvanostatic polarization. Hydrogen depth profiles have been obtained using ERDA (Western Tandetron Accelerator Lab), secondary ion mass spectrometry (SIMS) (Surface Science Western), and ME-ERDA, with an emphasis on improving the resolution of ME-ERDA by adjusting the detector setup. To gain insight into hydrogen sensitivity and depth resolution for these techniques, a comparative analysis will be made between ME-ERDA, ERDA, and SIMS. This newly developed ME-ERDA technique and the establishment of hydrogen standards hold significant importance for future engineering applications requiring hydrogen depth profiling, as well as for advancing our fundamental understanding of hydrogen-related processes.
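For orientation, the kinematics underlying ERDA-type hydrogen detection is the textbook recoil relation (general background, not specific to this work): a projectile of mass $M_1$ and energy $E_0$ transfers to a recoiling target atom of mass $M_2$, detected at recoil angle $\phi$, the energy

$$E_r = \frac{4 M_1 M_2}{(M_1 + M_2)^2} \, E_0 \cos^2\phi,$$

which is what allows light recoils such as hydrogen to be separated in energy from scattered projectiles.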
We introduce a novel concept of relational and discrete cyclic timekeeping for application in a quantum clock design. Taking inspiration from ancient timekeeping systems, we challenge the conventional use of continuous time by exploring a temporal space definable by finite Euclidean 1D geometry bound by discrete, event-driven zero time points. In contrast to abstract continuous and infinitesimal time, our proposed quantum clock synchronizes the start/stop cycle with events in physical reality, offering a potential avenue to address challenges in quantum computing and discrete event simulations. Our approach is based on a temporal space that is bound by a physical limit in time, where time can be precisely defined as zero [t = 0]. This temporal limit aligns with Planck's limits and the Mohist definition of an "atomic," representing an indivisible line. For instance, superposition phenomena occur precisely at t = Øt = 0, independent of space. In contrast to infinitesimal intervals proposed beyond our dimensional reality, our definition of temporal space is confined to our observable universe, relevant to normal matter. The concept highlights the contrast with relativistic modeling, emphasizing Rt relationalism's capability to separate space from time, offering a distinct perspective on temporal metrics within our observable reality.
Natural and artificial impulsive sources in the atmosphere can generate infrasound, or very low frequency (f<20 Hz) acoustic waves, that can travel over long distances with minimal attenuation. Traditionally confined to ground-based sensors, the domain of infrasound sensing has expanded in recent years to include airborne platforms (e.g., balloons). Unlike other sensing modalities that might have geographic (e.g., inaccessible regions), time-of-day (e.g., optical) or other limitations, infrasound can be utilized continuously (day and night) on a global scale. Volcanoes, lightning, chemical explosions, re-entry vehicles, space debris, and bolides are among the diverse sources producing infrasound phenomena. Among these, bolides present a particularly intriguing scientific challenge due to their varying velocities, entry angles, and physical properties. Theoretically, bolide infrasound signatures should carry information about the source (e.g., velocity, altitude, mass), but the dynamic changes in the atmosphere that occur on temporal scales of minutes to hours might lead to loss of that information. Therefore, to fully utilize infrasound for the characterization of bolides and similar sources, it is essential to have both detailed event ground truth and accurate atmospheric specifications. This information serves as the foundation for improving and validating models, with the ultimate goal of utilizing infrasound signatures alone to infer characteristics of the source. In this context, a succinct overview of bolide infrasound will be provided, complemented by notable examples, to elucidate its utility in atmospheric studies.
SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525
High-end microwave systems rely heavily on oscillators with minimal phase noise. This work introduces a novel method to decrease phase noise by employing a gain-driven polariton platform. Through coherent-coupling-induced mode hybridization, the frequency distribution around the carrier signal is effectively suppressed.
The approach to achieving minimal phase noise will be demonstrated using three prototypes. The first prototype demonstrates the phase noise reduction mechanism (more than 25 dB of reduction). The second prototype, optimized to operate at a fixed frequency of 3.5 GHz, exhibits remarkable phase noise levels of -131 dBc/Hz and -133 dBc/Hz at 10 kHz and 100 kHz offset frequencies, respectively. The third prototype offers a tuning range from 2.1 to 2.7 GHz.
The research work merges gain-embedded cavity technology with YIG oscillator technology using cavity magnonics. The integration results in improved spectral purity, leveraging the synergy between the two mature technologies.
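For context (a standard definition, not taken from the abstract), the single-sideband phase noise quoted in dBc/Hz is

$$\mathcal{L}(f) = 10 \log_{10}\!\left(\frac{P_{\mathrm{SSB}}(f,\, 1\,\mathrm{Hz})}{P_{\mathrm{carrier}}}\right),$$

i.e., the noise power in a 1 Hz bandwidth at offset $f$ from the carrier, relative to the carrier power.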
Erbium is one element in the globally recognized class of critical minerals, the rare earth elements (REEs). It is an essential component in various clean energy and modern technology applications, from nuclear control rods to infrared optics. Growing demand for these high-tech applications, alongside geopolitical supply chain risks, underscores the critical status of REEs. To address this, it is of interest to advance resource development through all available means, including both mining and recycling. To develop and maintain responsible resource management strategies, it is crucial to be able to reliably identify and quantify rare-earth-containing materials and to have a comprehensive understanding of their properties.
This work presents an effort to advance current methodologies surrounding the analysis and characterization of rare-earth-containing materials, with a focus on erbium. Using several analytical techniques, such as X-ray Photoelectron Spectroscopy (XPS), Secondary Ion Mass Spectrometry (SIMS), and Rutherford Backscattering Spectroscopy (RBS), we are developing robust characterization procedures for various erbium-containing materials. By identifying subtle binding energy shifts and structural variances in the complex XPS signals of erbium compounds, we are developing novel and practical standard curve-fitting procedures. These fitting procedures will serve as reference data to allow for the future identification and Er content quantification of these compounds in unknown erbium-bearing materials. We are also exploring the fabrication of element-specific SIMS standards through ion implantation, as more representative standards will allow for more accurate quantification of those elements in materials. Using both Al-Kα and high-energy Ag-Lα XPS sources in conjunction with elemental mapping via Energy Dispersive X-ray spectroscopy, we have also identified several light REEs residing in interstitial grain boundaries between barite and calcite mineral grains within bastnaesite ore. Collectively, these techniques provide a strong foundation for our understanding of the composition, electronic structure, and surface chemistry of erbium-containing materials. These advancements are critical for optimizing extraction and recycling processes by increasing processing yield and efficiency and by reducing waste.
Despite its outstanding electronic properties, silicon has limited light emission capabilities due to its indirect bandgap. However, Si quantum structures (Si-QSs) exhibit light emission through quantum confinement. In this project, we investigate the co-implantation of silicon and germanium to create SiGe quantum dots (QDs). The relative concentration of Ge has a direct influence on the optical properties, since the bandgap depends on it. Silicon ions at 40 keV were implanted into a 1 μm thermally grown $\mathrm{SiO}_2$ film on a Si (001) substrate to achieve a peak concentration of 17.5 at. % relative to the matrix. The chosen energy placed the implanted peak 50 nm below the surface. Samples were subsequently implanted with 55 keV $\mathrm{Ge}^+$ to peak concentrations of 0.5, 1.0, 2.0, 4.0, and 7.5 at. %, and thermally annealed to promote cluster growth and crystallization. The Ge implantation energy was chosen to place the Ge ion range at the same position as the Si ion range. For a second set of samples, $\mathrm{Ge}^+$ implantation was done after the $1100 \, ^\circ\mathrm{C}$ annealing necessary for Si QD growth; we thereby also studied the influence of annealing order on the properties of the samples. Structural properties were studied with Raman spectroscopy, and we observed a Ge-Si peak at $405 \, \mathrm{cm}^{-1}$, indicating the formation of Si-Ge bonds, only for the second set of samples with 7.5 peak Ge at. %. The optical properties of these SiGe QDs were studied with photoluminescence (PL) in the visible and near-infrared, with emissions around 800 nm and 1000 nm for both sets. PL intensity decreased in both sets of samples with increasing Ge content, and the samples with no annealing between implants exhibited more intense PL. The PL peak at 1000 nm shifts to a lower wavelength with higher Ge at. %, which provides evidence of Ge incorporation into Si QDs in both sets of samples. Finally, the emission was investigated using time-resolved photoluminescence (TR-PL), which showed that the lifetime decreases as the Ge concentration increases for both sets of samples.
Composition and optical properties of ion beam fabricated SiGeSn layers in Si (001)
A.W. Henry a, C.U. Ekeruche a, P.J. Simpson b, L.V. Goncharova a
a Department of Physics and Astronomy, University of Western Ontario, London, Ontario, Canada, N6A 3K7
b Department of Computer Science, Mathematics, Physics, and Statistics, University of British Columbia, Okanagan Campus, Kelowna, British Columbia, V1V 1V7
SiGeSn compounds, a unique class of semiconductors with the ability to engineer both the lattice parameter and band structure, have been investigated for their potential in the monolithic integration of electronic and photonic devices. These materials have demonstrated potential in diverse applications, including lasing, thin-film waveguide fabrication, high electron mobility transistors, and fully depleted MOSFETs. The study focused on the optical and electronic properties of a 200-400 nm SiGeSn layer in a Si (001) substrate. Various characterisation techniques, including Spectroscopic Ellipsometry (SE), Channeling Rutherford Backscattering Spectroscopy (c-RBS), Positron Annihilation Spectroscopy (PAS), and Scanning Electron Microscopy (SEM) with Energy Dispersive X-Ray Analysis (EDX), were employed. The RBS elemental depth distribution of SiGeSn was characterised, revealing successful implantation of Ge and Sn to their intended doses 5-80 nm below the surface, as well as different Ge and Sn distributions at various annealing temperatures and times. SE modelling, based on RBS compositional data, was conducted to investigate observed Ψ, Δ plot features. The models indicated an average implanted volume thickness of ~63 nm and increased near-IR absorption compared to crystalline Si. Growth defects were identified and quantified via c-RBS; the data showed increased substitutionality of Ge and Sn in annealed samples. This research underscores the promise of SiGeSn alloys for cost-effective and CMOS-compatible optoelectronic devices.
Transition Metal Dichalcogenides (TMDs) are layered semiconducting materials of the form MX$_2$, where M represents a transition metal atom and X represents a chalcogen. In the 1T structural phase, the chalcogens provide an octahedral environment for each metal atom. We propose a quantum loop model to explain the nature of bonding in these materials. We focus on metal atoms from group VI of the periodic table (e.g., MoS$_2$, MoSe$_2$, WS$_2$, WSe$_2$) which have two valence electrons in their $d$ orbitals. These electrons reside in t$_{2g}$ orbitals that point towards the six nearest neighbors on the underlying triangular lattice. We argue that these form covalent bonds that connect together to form loops. Loops can be formed in a large number of ways, leading to a resonating valence bond picture. We numerically enumerate all allowed loop configurations for small sizes of systems. We then construct a minimal effective Hamiltonian with local ‘potential energy’ and ‘kinetic energy’ terms. The kinetic energy term reflects processes where neighboring loops are cut and merged to form new loops, or a single loop changes shape. The potential energy term is due to the repulsion of proximate bonds. We construct a phase diagram, finding two prominent stripe-like phases. One of these closely resembles the 1T' structure, which is a well-known stripe-like distortion of the 1T phase. We discuss further tests of these ideas, e.g., in impurity-induced textures.
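The enumeration step can be illustrated with a brute-force toy version (a minimal sketch, not the authors' code): on a small triangular-lattice patch, count edge subsets in which every site touches exactly two bonds, the degree-2 constraint that defines a fully packed loop configuration:

```python
from itertools import product

# Count fully packed loop configurations (every vertex of degree 2)
# on a small open triangular-lattice patch; brute force, toy scale only.
L = 3
sites = [(x, y) for y in range(L) for x in range(L)]
idx = {s: i for i, s in enumerate(sites)}

edges = []
for (x, y) in sites:
    for dx, dy in [(1, 0), (0, 1), (1, 1)]:   # triangular-lattice bonds
        n = (x + dx, y + dy)
        if n in idx:
            edges.append((idx[(x, y)], idx[n]))

count = 0
for choice in product([0, 1], repeat=len(edges)):
    deg = [0] * len(sites)
    for keep, (a, b) in zip(choice, edges):
        if keep:
            deg[a] += 1
            deg[b] += 1
    count += all(d == 2 for d in deg)
print(count, "loop configurations on a", L, "x", L, "patch")
```

Realistic system sizes require transfer-matrix or Monte Carlo methods rather than exhaustive enumeration.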
In 2010, Sau et al. proposed that topological superconductivity hosting Majorana fermions can be realized in a semiconductor quantum well coupled to an $s$-wave superconductor and a ferromagnetic insulator. In the same year, Alicea proposed a simpler architecture for detecting Majorana fermions by applying an in-plane magnetic field to a (110)-grown semiconductor coupled only to an $s$-wave superconductor. Here we propose an alternative setup, wherein superconductivity is induced by proximity in a tilted Dirac material with a variable tilt parameter, in order to explore whether the system can be driven into a topological superconducting state. Success in creating topological superconductors would open these systems up as a uniquely flexible platform for topological quantum computation.
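For concreteness, a minimal tilted Dirac cone can be parametrized as (standard background, not necessarily the talk's specific model)

$$H(\mathbf{k}) = \hbar v_t k_x \sigma_0 + \hbar v_F \left(k_x \sigma_x + k_y \sigma_y\right),$$

where $\sigma_0$ is the identity, $v_F$ is the Fermi velocity, and the tilt parameter $t = v_t / v_F$ distinguishes type-I ($|t| < 1$) from over-tilted type-II ($|t| > 1$) cones; varying $t$ is the knob referred to above.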
We present an open-source API and software package called SymPhas for defining and simulating phase-field models, supporting up to three dimensions and an arbitrary number of fields. SymPhas is the first of its kind to offer complete flexibility in the user specification of phase-field models from the phase-field dynamical equations or free energy, allowing a wide range of models to be studied with the same software platform. This is accomplished by implementing a novel symbolic algebra library with a rich feature set that supports user-defined mathematical expressions with minimal constraints on expression format or grammar. The symbolic algebra library uses C++ template meta-programming, meaning that each expression is represented as a C++ type. Consequently, symbolic expressions are "static" and formulated at compile time, including all rules and simplifications that are applied. This approach dramatically reduces application runtime, particularly for complex models, since branching is entirely eliminated from the symbolic evaluation step. Performance is further augmented via parallelization with OpenMP and the C++ standard library. SymPhas has been used to simulate a number of well-known phase-field models, most of which are available as examples [1], and to generate large-scale training and test data for a machine learning algorithm [2].
[1] S. A. Silber and M. Karttunen, SymPhas: General Purpose Software for Phase-Field, Phase-Field Crystal, and Reaction-Diffusion Simulations, Adv. Theory Simul. 5, 2100351 (2021).
[2] E. Kiyani, S. Silber, M. Kooshkbaghi, and M. Karttunen, Machine-learning-based data-driven discovery of nonlinear phase-field dynamics, Phys. Rev. E 106, 065303 (2022).
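SymPhas itself is C++, but the class of models it specifies symbolically can be illustrated with a minimal explicit finite-difference integration of Allen-Cahn (Model A) dynamics; this sketch is independent of the SymPhas API:

```python
import numpy as np

# Explicit Euler integration of Allen-Cahn (Model A) dynamics,
#   dpsi/dt = psi - psi**3 + laplacian(psi),
# on a periodic 2D grid; parameters chosen for stability (dt < dx^2/4).
N, dx, dt, steps = 128, 1.0, 0.1, 2000
rng = np.random.default_rng(1)
psi = 0.1 * rng.standard_normal((N, N))   # small random initial condition

def laplacian(f):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

for _ in range(steps):
    psi += dt * (psi - psi**3 + laplacian(psi))

print("order-parameter range:", float(psi.min()), float(psi.max()))
```

In SymPhas the equivalent model would instead be declared through its dynamical equation or free energy, with the symbolic algebra layer generating the update at compile time.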
High-harmonic generation (HHG) and attosecond-scale physics are important areas of current research, combining aspects of optical, atomic, molecular, and condensed matter physics. In the past decade, the study of HHG has been extended from atomic gases to solids. HHG in solids does not follow the behaviour of atomic-gas HHG due to the added complexity of bulk inter-atomic interactions, and this makes HHG in solids particularly suited to exploring properties such as the electronic band structure and band spacing. While higher laser intensity allows for a higher-order HHG cutoff, the application of such high energies can also lead to heating of, or damage to, the sample through processes whereby the induced electron excitations thermalize with the lattice and induce lattice disruption or structural change. This thermal damage is potentially a limiting factor in experiment, and therefore means of controlling it are of great practical interest. Here we present an initial study of the heating process following HHG in a solid-state scenario. We consider a simple two-level model exhibiting HHG via direct simulation of the time-dependent Schrödinger equation, through which we determine how the energy deposited by the high-intensity pulse heats the sample, and in turn the eventual thermalization of the excited electronic states. We explore the features of this model with varying pulse parameters (i.e., envelope, intensity, duration) to test the sensitivity of thermalization to the characteristics of the stimulation pulse. Finally, we discuss how these results may apply to more detailed models including the full electronic band structure.
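A toy version of such a two-level simulation (an illustrative sketch with arbitrary parameters, not the study's actual model or code) can be set up as follows; the spectrum of the induced dipole exhibits the harmonic response discussed above:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-level TDSE, H(t) = (omega0/2) sigma_z + E(t) sigma_x (hbar = 1),
# driven by a Gaussian-envelope pulse; arbitrary illustrative parameters.
omega0, omegaL = 1.0, 0.2        # level splitting, laser frequency
E0, tau, t0 = 0.15, 80.0, 200.0  # field amplitude, envelope width, centre

def field(t):
    return E0 * np.exp(-((t - t0) / tau) ** 2) * np.cos(omegaL * t)

def rhs(t, c):
    H = np.array([[omega0 / 2, field(t)], [field(t), -omega0 / 2]])
    return -1j * (H @ c)

t = np.linspace(0.0, 400.0, 8192)
sol = solve_ivp(rhs, (t[0], t[-1]), np.array([0, 1], dtype=complex),
                t_eval=t, max_step=0.1)     # start in the ground state
c0, c1 = sol.y
dipole = 2.0 * np.real(np.conj(c0) * c1)    # <sigma_x>(t)

spec = np.abs(np.fft.rfft(dipole)) ** 2
freq = np.fft.rfftfreq(t.size, d=t[1] - t[0]) * 2.0 * np.pi
i = 5 + np.argmax(spec[5:])                 # skip low-frequency bins
print("dominant dipole response at", freq[i] / omegaL, "x the drive")
```

Tracking the populations after the pulse gives the deposited energy whose subsequent thermalization is the subject of the study.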
Biomolecular self-assembly lies at the very heart of the function of living cells, where it organizes individual components into functional biological machines. The macromolecular sub-units typically correspond to proteins, whose shapes have been optimized over millions of years of evolution to ensure a proper functionality of the self-assembled structures. However, in pathological cases, proteins fail to achieve the optimal folding, which often leads to complex ill-fitting shapes. This produces geometrical incompatibility, which leads to frustrated interactions between the sub-units. Surprisingly, despite a huge variability in protein structure, such misfolded units tend to robustly self-assemble into aggregates with well-defined morphologies. Interestingly, these structures display a clear preference for slimmer topologies, such as fiber aggregates. This emergent principle of dimensionality reduction suggests that the aggregation of i