Welcome to the CAP2022 Indico site. This site is being used for abstract submission and congress scheduling. The Congress 2022 Schedule has now been finalized. Click on Timetable in the left menu for details. The abstract system is now closed. CONGRESS REGISTRATION is now open for in-person and virtual attendees. Access the registration system at https://crm.cap.ca.
COVID-19 has plagued the globe. Mathematical models have been used to inform public health decision makers in various global regions. In Canada, non-pharmaceutical interventions and vaccination programming were informed by modelling forecasts. In this talk we will review the course of COVID-19 in Canada. We will then introduce mathematical models that have been used during the pandemic. Mathematical models of immunity will be presented, which quantify immunity protection by Canadian region.
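As a minimal illustration of the compartmental models in this class (a generic textbook example, not a model from the talk itself), an SIRS system with immunity waning at rate $\omega$ reads
$$\dot{S} = -\beta \frac{SI}{N} + \omega R, \qquad \dot{I} = \beta \frac{SI}{N} - \gamma I, \qquad \dot{R} = \gamma I - \omega R,$$
where $\beta$ is the transmission rate and $\gamma$ the recovery rate; regional immunity protection can then be tracked through $R(t)/N$.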
My research group focuses on the development and application of molecular models, liquid state theory, molecular simulation, and machine learning for studying soft macromolecular materials. In this lecture I will share examples of how we develop appropriate molecular models and use them with computational methods to better understand and predict the effects of polymer design on the resulting macromolecular material structure and thermodynamics. I will also share experimental work from our collaborators that helps us validate our models and computational methods, as well as confirm our computational predictions.
Astronomical and cosmological observations strongly suggest that most of the matter in our Universe is non-luminous and made of an unknown substance called dark matter. Yet it currently remains invisible and directly undetectable on Earth, which makes it one of the greatest mysteries in particle physics. Even if its direct detection continues to escape the scientific community in our time, dark matter remains a fundamental concept that would explain how our Universe was formed and offer a unique chance to discover physics beyond the Standard Model.
Many worldwide experiments are actively searching for dark matter to understand its properties. After presenting how we can detect it directly, I will give an overview of cutting-edge technologies used by particle physicists focusing on the challenges we are currently facing and the need for innovative tools to improve the sensitivity of measurements at low energies.
Cryogenic detectors offer excellent resolution and sensitivity for low-mass dark matter searches but require testing in a well-shielded, low-background environment for complete characterization. The Cryogenic Underground TEst (CUTE) facility is located 2 km underground at SNOLAB near Sudbury, ON. CUTE has served as a well-shielded, low-background site for the testing, characterization and optimization of Super Cryogenic Dark Matter Search (SuperCDMS) detectors since 2019. The low background at the facility, combined with the low threshold of the new SuperCDMS detectors, leaves the door open for competitive dark matter searches at CUTE. This talk will present an overview of the CUTE facility, a progress report on detector testing and measurements done to date, and exciting plans for the upcoming testing of the first SuperCDMS detector tower at CUTE.
DEAP-3600 is a single-phase dark matter experiment searching for direct-detection elastic nuclear scatters of the dark matter candidate Weakly Interacting Massive Particles (WIMPs) in 3279 kg of liquid argon. The DEAP detector has recorded more than 3 years of physics data and, in addition to the direct search for dark matter, the collaboration is also working to extend the sensitivity of the detector by looking for an annual modulation of the signal. The absolute stability of the detector and the detailed understanding of the detector systematics over the time of data collection allow the analysis of event rates in the detector data, which also complements many other interesting physics analyses, such as a precise measurement of the lifetime of the $^{39}$Ar isotope. In this talk, the stability of the DEAP-3600 detector will be presented, along with some preliminary measurements for the $^{39}$Ar lifetime analysis and the modulation analysis.
The SuperCDMS collaboration uses cryogenic silicon and germanium detectors to directly search for dark matter. Dark matter particles in the mass range of 1-10 GeV/$c^2$ interacting via nuclear recoils would deposit energies below 1 keV. Such interactions produce both phonons and electron-hole pairs. The number of electron-hole pairs produced per unit energy deposited in an electron recoil, called the ionization yield, is a critical quantity for reconstructing the recoil energy and properly modeling the dark matter signal. However, the ionization yield has not been well-characterized for sub-keV nuclear recoils. IMPACT is a neutron scattering measurement campaign that aims to measure the ionization yield in Si and Ge down to 100 eV recoil energies. This talk will describe the first data taking campaign at the Triangle Universities Nuclear Laboratory using a Si detector and present the results obtained from the data.
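For context, a common benchmark for the nuclear-recoil ionization yield (a standard parametrization, not necessarily the model used in this analysis) is the Lindhard form
$$Y(E_r) = \frac{k\,g(\epsilon)}{1+k\,g(\epsilon)}, \qquad g(\epsilon) = 3\epsilon^{0.15} + 0.7\epsilon^{0.6} + \epsilon, \qquad \epsilon = 11.5\,Z^{-7/3}\,E_r[\mathrm{keV}],$$
with $k \simeq 0.133\,Z^{2/3}A^{-1/2}$ ($\approx 0.15$ for Si); deviations from this form at sub-keV recoil energies are precisely what measurements like IMPACT aim to quantify.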
The Super Cryogenic Dark Matter Search (SuperCDMS) Collaboration uses cryogenic semiconductor detectors to look for evidence of dark matter interactions with ordinary matter. The current generation is under construction at SNOLAB, and will use two target materials (silicon and germanium) and two detector types (HV and iZIP) to probe low mass dark matter.
For potential future upgrades, SuperCDMS is exploring possibilities in both reducing known background contributions and improving detector performance. Multiple detector optimization scenarios have been modelled, with various detector sizes and sensor configurations, to enhance detector resolution and background discrimination ability.
This talk will describe sensitivity projections for such future upgrades. Forecasts for nucleon-coupled dark matter (5 MeV/c$^2$ - 5 GeV/c$^2$), dark photon-coupled light dark matter (1 - 100 MeV/c$^2$), and dark photons and axion-like-particles (1 - 100 eV/c$^2$) will be shown.
Gravitational solitons are globally stationary, geodesically complete spacetimes with positive energy. These event-horizonless geometries do not exist in the electrovacuum by the classic Lichnerowicz Theorem. However, gravitational solitons exist when there are non-Abelian gauge fields in higher dimensions. In this talk, I will present a new class of supersymmetric asymptotically globally Anti-de Sitter gravitational solitons. I will then show that they contain evanescent ergosurfaces, a timelike hypersurface where the timelike Killing vector field becomes null. The presence of this hypersurface strongly suggests nonlinear instability due to the stable trapping phenomena. I will present an analytical argument for the derivation of this nonlinear instability. This is joint work with Dr. Hari K. Kunduri.
The vacuum instability in the presence of a static electric field that creates charged pairs is termed Schwinger pair creation. The classical field theory of Schwinger pair creation can be described using an effective Schr\"{o}dinger equation with an inverted harmonic oscillator (IHO) Hamiltonian, which exhibits fall to infinity [1]. In this talk we demonstrate that the classical field theory of Schwinger pair creation has a hidden scale invariance described by the quantum mechanics of an attractive inverse square potential in the canonically rotated $(Q,P)$ coordinates of the inverted harmonic oscillator. The quantum mechanics of the inverse square potential is well known for the problem of fall to the centre and the associated ambiguities in the boundary condition. The physics of inverse square potentials appears in various problems, including pair creation in the presence of an event horizon [3] and black hole decay, the optics of Maxwell's fisheye lenses [4], and the coherence of sunlight on the Earth [5]. We use point particle effective field theory (PPEFT) to derive the boundary condition which describes pair creation. This leads to the addition of an inevitable Dirac delta function with a complex coupling to the inverse square potential, describing the physics of the source, which runs in the sense of the renormalization group. The complex coupling gives rise to a loss or gain of probability at the centre, which is physically due to charged pairs being produced in Schwinger pair production.
References:
[1] R. Brout, S. Massar, R. Parentani, and P. Spindel, Physics Reports 260, 329 (1995).
[2] N. Balazs and A. Voros, Annals of Physics 199, 123 (1990).
[3] K. Srinivasan and T. Padmanabhan, Physical Review D 60, 024007 (1999).
[4] U. Leonhardt, New Journal of Physics 11, 093040 (2009).
[5] S. Sundaram and P. K. Panigrahi, Optics Letters 41(18), 4222-4224 (2016).
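To make the canonical rotation concrete, a short computation from the definitions (our sketch, with $\hbar = 1$):
$$H_{\mathrm{IHO}} = \tfrac{1}{2}\left(p^2 - x^2\right), \qquad Q = \frac{x+p}{\sqrt{2}}, \quad P = \frac{p-x}{\sqrt{2}} \;\Rightarrow\; [Q,P] = i, \quad H_{\mathrm{IHO}} = \tfrac{1}{2}\left(QP + PQ\right).$$
In the $Q$ representation ($P = -i\,\partial_Q$) the eigenfunctions are power laws, $\psi_E(Q) \propto |Q|^{-1/2 + iE}$, the same near-origin behaviour $r^{-1/2 \pm i\nu}$ as the critical attractive inverse square potential, which is why a boundary condition at $Q=0$ (here supplied by the PPEFT delta-function source) must be specified.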
Antisymmetric tensor fields are an unavoidable prediction from string theory that adds to the theory's set of unique signatures. After a brief review of the emergence of antisymmetric tensor fields and of other possible string signatures, we will focus on the possible implications of antisymmetric tensor fields for particle physics and dark matter research.
I report on the results of the first analysis of a time-and-space localised quantum system crossing the horizon of a (3+1)-dimensional black hole. We analyse numerically the transitions in an Unruh-DeWitt detector, coupled linearly to a massless scalar field, in radial infall toward a (3+1)-dimensional Schwarzschild black hole. In the Hartle-Hawking and Unruh states, the transition probability attains a small local extremum near the horizon-crossing and is then moderately enhanced on approaching the singularity. In the Boulware state, the transition probability drops on approaching the horizon.
In the context of string theory, there are conjectures about which effective field theories (EFTs) are consistent with quantum gravity. Early-time cosmology involves gravity as well as quantum field theory. It has been speculated that some EFTs which seem consistent with quantum gravity are nevertheless not in the landscape of string theory. An important model arising from these conjectures will be discussed in my presentation.
Magnetic fields, if present in the primordial plasma prior to last scattering, would induce baryon inhomogeneities and speed up the recombination process. As a consequence, the sound horizon at last scattering would be smaller, which would help relieve the Hubble tension. Intriguingly, the strength of the magnetic field required to alleviate the Hubble tension happens to be of the right order of magnitude to also explain the observed magnetic fields in galaxies, clusters of galaxies and the intergalactic space. I will review this proposal and provide an update on its status in the context of the latest data.
The Canada-France-Hawaii Telescope (CFHT) Large Area U-band Deep Survey (CLAUDS) produces images to a median depth of $U=27.1$ AB. These U-band images are the deepest ever assembled over such a large area. The catalogue resulting from this survey contains a little more than 10,000,000 objects. Our goal is to identify white dwarfs in the CLAUDS deep fields and to study their physical properties and spatial distribution in the Milky Way. Given the size of the catalogue, we conduct our search via machine learning, using TensorFlow, an end-to-end open-source machine-learning platform, to perform a binary classification with deep learning methods. After filtering the white-dwarf candidates to limit contamination by other objects such as main-sequence stars, we find over 600 white dwarfs. We then determine the physical properties of the white dwarfs, such as surface temperature, distance modulus and age, using cooling models. We then fit for the thin- and thick-disc scale heights of the white-dwarf space distribution, and we derive the white-dwarf luminosity function. Thanks to the properties of the CLAUDS fields, we provide one of the deepest catalogues of Galactic white dwarfs.
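As a schematic of the classification step (a minimal sketch, not the CLAUDS pipeline; the colour features, network size, and threshold are illustrative assumptions):

```python
# Minimal sketch: a TensorFlow binary classifier separating white dwarfs
# from other point sources using photometric colours (assumed features).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Placeholder data: rows = objects, columns = colours such as u-g, g-r, r-i, i-z.
X = rng.normal(size=(10_000, 4)).astype("float32")
y = rng.integers(0, 2, size=10_000).astype("float32")  # 1 = white dwarf, 0 = other

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(white dwarf)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(X, y, validation_split=0.2, epochs=10, batch_size=256)

# Objects above a probability threshold are kept as candidates for vetting.
p_wd = model.predict(X).ravel()
candidates = np.where(p_wd > 0.9)[0]
```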
The accelerating expansion of the universe has been widely studied beyond the standard $\mathrm{\Lambda}$-cold dark matter model ($\mathrm{\Lambda CDM}$) through modified gravity and dynamical dark energy models. Such modifications of the laws of gravity at large scales usually require a new degree of freedom beyond the $\mathrm{\Lambda CDM}$ cosmology. In this work, we utilize the scalar-tensor theories of gravity to study models of scalar field dark energy non-minimally coupled to matter. We focus our study on a symmetron model, which is one of the modified gravity models with a screening mechanism, and provide a detailed analysis to investigate the evolution of the universe via an exact solution of the field equations and within the quasi-static approximation (QSA). We consider two scenarios where in one case the scalar field is only coupled to dark matter and in the other it couples to all of the matter. We identify the range of the symmetron model parameters for which the QSA is valid.
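For reference, the standard symmetron ingredients (quoted in a common convention; the talk's notation may differ) are a $\mathbb{Z}_2$-symmetric potential and a quadratic matter coupling,
$$V(\phi) = -\tfrac{1}{2}\mu^2\phi^2 + \tfrac{1}{4}\lambda\phi^4, \qquad A(\phi) = 1 + \frac{\phi^2}{2M^2},$$
giving the density-dependent effective potential
$$V_{\mathrm{eff}}(\phi) = \tfrac{1}{2}\left(\frac{\rho}{M^2} - \mu^2\right)\phi^2 + \tfrac{1}{4}\lambda\phi^4,$$
so the symmetry is restored, and the fifth force screened, wherever $\rho > \mu^2 M^2$.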
Alternative gravity theories have been extensively explored beyond general relativity to study the modified growth of cosmological perturbations; among them, the scalar-tensor theory with a single scalar field coupled to all of the matter is the most conventional. MGCAMB, a public code for studying modifications to the growth of structure, has been used to study the cosmological predictions of different types of such modified gravity theories. In this work, we extend MGCAMB to include models with a scalar field coupled only to dark matter. We then identify the characteristic observational signatures that could distinguish between the all-matter and dark-matter-only coupled scalar fields.
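MGCAMB's modified-growth parametrization (as in the public code) closes the perturbation equations with two functions $\mu(a,k)$ and $\gamma(a,k)$:
$$k^2\Psi = -4\pi G\,a^2\,\mu(a,k)\,\rho\Delta, \qquad \frac{\Phi}{\Psi} = \gamma(a,k),$$
with $\mu = \gamma = 1$ recovering general relativity; a dark-matter-only coupling modifies these relations for the dark matter component alone.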
Low oxygen tension in tumour tissue has long been recognized as an indicator of poor outcomes and, independently, as an obstacle to effective treatment with radiation and chemotherapy drugs. Consequently, the search for non-invasive imaging techniques to guide diagnosis and monitor treatment has been ongoing. Dynamic contrast-enhanced MRI has seen the most widespread use but visualizes only a related, rather indirect measure of blood supply. More recently, attempts to measure oxygen saturation in tissue using MRI have been deployed with varying success. One vexing issue in TOLD (T1) and BOLD (T2*) experiments has been the many confounding influences on contrast.
We present a method of imaging the presence of oxygen more directly in the tissue of tumour models using a dynamic oxygen-enhanced MRI technique in the presence of a repeated oxygen gas challenge. Since many factors influence the T1-weighted signal intensity over the course of minutes, we use independent component analysis to separate the response to increased oxygen in the tumour microenvironment.
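As a schematic of the separation step (a minimal sketch, not our actual analysis code; component count and data shapes are assumptions):

```python
# Separate the oxygen-challenge response from other slow drifts in a
# dynamic T1-weighted series of shape (n_timepoints, n_voxels).
import numpy as np
from sklearn.decomposition import FastICA

def oxygen_component(series, challenge, n_components=5):
    """Return the independent component best correlated with the gas paradigm."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(series)          # (n_timepoints, n_components)
    # Pick the component whose time course tracks the O2 on/off challenge.
    corr = [abs(np.corrcoef(sources[:, i], challenge)[0, 1])
            for i in range(n_components)]
    best = int(np.argmax(corr))
    weights = ica.mixing_[:, best]               # voxel map of the O2 response
    return sources[:, best], weights

# Example with synthetic data: 120 time points, 500 voxels, 3 gas cycles.
t = np.arange(120)
challenge = (np.sin(2 * np.pi * t / 40) > 0).astype(float)
series = np.random.default_rng(1).normal(size=(120, 500))
series[:, :50] += 2 * challenge[:, None]         # voxels responding to oxygen
course, vox_map = oxygen_component(series, challenge)
```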
We have now tested our technique in a range of tumour models and compared it to a ground truth of hypoxia status using pimonidazole staining on histology slices.
A remaining question of interest is the underlying cause of oxygen-mediated T1 changes: To what degree are there oxygen-modulated perfusion changes or true variations in the amount of tissue-dissolved, available oxygen? To elucidate this we are now also embarking on a simultaneous, dynamic measurement of T2*.
Magnetic resonance imaging (MRI) is widely used as a non-invasive diagnostic technique to visualize the internal structure of biological systems. Quantitative analysis of the T1 magnetic resonance (MR) relaxation time could reveal microscopic properties and has significance in the study of biological tissues such as the brain, heart, and tumors. A multicomponent model, with a continuous relaxation spectrum, requires exponential analysis which is an intrinsically ill-posed problem. Traditional methods to determine multicomponent T1 spectra require high quality data and are computationally intensive. With magnitude data, an additional phase correction pre-processing step is required which may lead to large errors with few input data points. A large number of data points and high signal-to-noise ratio (SNR) result in long acquisition times. Extending our previous work using neural networks for exponential analysis, artificial neural networks (ANNs) have been trained to generate the multicomponent T1 distribution spectra with as few as 8 input data points and reduced SNR.
Deep learning with ANNs is a technique for solving complex nonlinear problems. The performance of the optimized ANNs was evaluated across a large parameter range and compared to traditional methods. In addition to superior computation speed, a higher accuracy was achieved. No phase correction or user-defined parameters were required. This improved performance, with a significantly reduced number of input data points, will enable faster multicomponent relaxation experiments.
The proposed method for exponential analysis is not restricted to magnetic resonance. It is readily applicable in other areas with exponential analysis and can be extended to higher dimensional spectra. It can also be adapted to solve other ill-posed problems.
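To illustrate the flavour of the approach (a toy sketch under assumed details: an 8-point inversion-recovery schedule, a binned spectrum, and simulated training data; not the trained network described above):

```python
# Toy ANN mapping a few inversion-recovery samples to a discretized T1 spectrum.
import numpy as np
import tensorflow as tf

n_in, n_bins = 8, 64
TI = np.logspace(-2, 1, n_in)            # inversion times (s), assumed schedule
T1_grid = np.logspace(-2, 1, n_bins)     # spectrum bins (s)

def simulate(n):
    """Random two-component spectra and their noisy 8-point IR signals."""
    rng = np.random.default_rng(0)
    spectra = np.zeros((n, n_bins))
    idx = rng.integers(0, n_bins, size=(n, 2))
    amp = rng.uniform(0.2, 1.0, size=(n, 2))
    for k in range(2):
        spectra[np.arange(n), idx[:, k]] += amp[:, k]
    spectra /= spectra.sum(axis=1, keepdims=True)
    kernel = 1 - 2 * np.exp(-TI[None, :] / T1_grid[:, None])   # (n_bins, n_in)
    signals = spectra @ kernel + rng.normal(0, 0.02, size=(n, n_in))
    return signals.astype("float32"), spectra.astype("float32")

X, Y = simulate(50_000)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_in,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(n_bins, activation="softmax"),  # normalized spectrum
])
model.compile(optimizer="adam", loss="kl_divergence")
model.fit(X, Y, epochs=5, batch_size=512, validation_split=0.1)
```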
Introduction: Recently, accelerated imaging using Compressed Sensing (CS) and fitting to the Stretched-Exponential Model (SEM) has been shown to significantly improve the SNR of MRI images without increasing scan duration [1]: k-space is undersampled according to high acceleration factors (AF) and averaged together using a specific averaging pattern. A density decay curve can then be fitted and reconstructed using the SEM combined with CS [2]. Reconstruction artefacts can be minimized or removed using a convolutional neural network [3].
Method: 1H MR was performed on a resolution phantom at a low-field (0.074 T) MRI scanner using a home-built RF coil. Using FGRE, nine 2D undersampled k-spaces were acquired for three AFs (7, 10, 14); these were averaged for every unique combination of images without overlap, resulting in 14 k-spaces in total (2 combinations for 4 averages, etc.). Nine fully-sampled 2D human lung images were acquired at 3.0 T using inhaled hyperpolarized 129Xe (35%); these were averaged using the previously described pattern and retroactively undersampled for 3 Cartesian sampling schemes (FGRE, x-Centric [4], and 8-sector FE Sectoral [5]). The SNR attenuation is assumed to represent a decrease of the resonant-isotope density in the phantom after diluting it with the non-resonant isotope. For both phantom and lung images, the resulting signal decay (density) curve was fitted using the Abascal method [2]. A 3-stage U-Net was developed to generate artefact masks (segmentation) and applied to phantom data to remove artefacts.
Results: The reconstructed human lung images saw 4-5x higher SNR (>21 for all sampling schemes) compared to the original non-averaged images (SNR = 6). FE Sectoral featured fewer artefacts than FGRE and x-Centric.
Conclusion: In all cases, this technique resulted in 4-5x higher SNR without increasing scan duration; although only a third of a typical 129Xe dose was used, the human lung images still saw large SNR gains. The artefact removal neural network was able to remove reconstruction artefacts from AF=7 phantom images, but suffered at higher AFs. These improvements in SNR permit the use of a smaller xenon dose, significantly reducing scan costs.
References:
[1] Perron et al., ISMRM (2021). [2] Abascal et al., IEEE Trans. Med. Imaging (2018). [3] Lee et al., MRM (2017). [4] Ouriadov et al., MRM (2017). [5] Khrapitchev et al., JMR (2006).
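For flavour, the SEM fitting step might look like the following sketch (the decay axis and parameter values are illustrative assumptions; see ref. [2] for the actual CS+SEM reconstruction):

```python
# Fit a stretched-exponential model to a measured signal decay curve.
import numpy as np
from scipy.optimize import curve_fit

def sem(n_avg, s0, tau, alpha):
    """Stretched-exponential decay versus (schematic) decay index n_avg."""
    return s0 * np.exp(-(n_avg / tau) ** alpha)

n_avg = np.arange(1.0, 10.0)                         # assumed decay axis
signal = 100 * np.exp(-(n_avg / 6.0) ** 0.8) \
         + np.random.default_rng(0).normal(0, 1, 9)  # toy noisy data

popt, pcov = curve_fit(sem, n_avg, signal, p0=(signal[0], 5.0, 1.0),
                       bounds=([0, 0, 0], [np.inf, np.inf, 2.0]))
s0, tau, alpha = popt                                # fitted SEM parameters
```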
Magnetic Resonance Imaging (MRI) is a non-invasive imaging modality which provides excellent soft tissue contrast. An MR echo signal can be generated by an excitation and a refocusing radiofrequency (RF) pulse, where spatial encoding is achieved by applying magnetic field gradients that create signal phase evolution at different spatial locations. A train of echoes can be generated with multiple refocusing RF pulses to acquire images more rapidly. However, non-ideal refocusing RF pulses result in multiple coherence pathways in the echo signals, leading to image artifacts.
The Rapid Acquisition with Relaxation Enhancement (RARE) method uses crusher gradients to remove the unwanted coherence pathways, with balanced imaging gradients within each echo interval employed for a net zero phase evolution due to spatial encoding gradients. The high-amplitude gradient pulses limit the echo spacing, which affects the MRI image contrast, resolution and signal-to-noise ratio. High levels of gradient switching can reduce image quality, create acoustic noise, and cause peripheral nerve stimulation. In this work, we propose to employ RF phase cycling to eliminate the coherence pathway artifacts and reduce the magnetic field gradient duty cycle. The phase cycling schemes were determined through an optimization procedure. The method was applied to both 2D and 3D imaging sequences and compared to conventional balanced RARE sequences.
The PREX-II and CREX experiments at Jefferson Lab have completed measurements of the parity-violating elastic electron scattering asymmetry from $^{208}$Pb and $^{48}$Ca targets. These asymmetries are sensitive to the weak charge radius of the nuclei, and thus to the RMS radius of the neutron distribution. In neutron-rich nuclei such as $^{208}$Pb and $^{48}$Ca, the neutrons extend to larger radii than the protons, forming the neutron skin. Evaluation of the neutron skin in $^{48}$Ca provides an important benchmark for nuclear theory, while that of $^{208}$Pb provides meaningful constraints on the density dependence of the symmetry energy in neutron-rich nuclear matter, a parameter of the nuclear equation of state. A brief discussion of the experimental techniques, analysis, and results of the experiments will be presented, as well as our understanding of their impact on nuclear matter systems, from nuclear structure to neutron stars.
*We acknowledge the support of the U.S. Department of Energy Office of Science, Office of Nuclear Physics, the National Science Foundation, and NSERC (Canada).
Nuclei away from the line of stability have been found to demonstrate behavior that is inconsistent with the traditional magic numbers of the spherical shell model. This has led to the concept of the evolution of nuclear shell structure in exotic nuclei, and the neutron-rich Ca isotopes are a key testing ground of these theories; there have been conflicting results from various experiments as to the true nature of a sub-shell closure for neutron-rich nuclei around $^{52}$Ca. In November of 2019, an experiment was performed at the ISAC facility of TRIUMF; $^{52}$K, $^{53}$K, and $^{54}$K were delivered to the GRIFFIN gamma-ray spectrometer paired with the SCEPTAR and the ZDS ancillary detectors for beta-tagging, as well as DESCANT for neutron-tagging. Using this powerful combination of detectors, we combine the results to construct level schemes for the isotopes populated in the beta-decay. Preliminary results from the analysis will be presented and discussed in the context of an N=32 shell closure in neutron-rich nuclei.
Heavy element synthesis within stellar bodies typically manifests in explosive environments such as neutron star mergers. However, at the low temperature and high density conditions of a neutron star crust, degenerate neutrons provide alternate synthesis pathways compared to conventional systems. In this work, we study the effect of this degeneracy on neutron capture rates by several rp-process ashes and neutron-rich nuclei within accreting neutron stars. We consider strongly interacting asymmetric nuclear matter and its effect on the neutron chemical potential and therefore on the capture rates. We then investigate variations in the nuclear physics input which constructs the absorption cross section, and their effects on the reaction rate in the context of degenerate neutron capture. Finally, we propose an analytic approximation for highly degenerate neutron capture rates. Our results may help interpret the abundance evolution of rp-process ashes.
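Schematically (our notation, not necessarily that of the talk), degeneracy enters through the neutron energy distribution in the thermally averaged capture rate:
$$\langle\sigma v\rangle = \frac{\int_0^\infty \sigma(E)\,v(E)\,\sqrt{E}\,f(E)\,dE}{\int_0^\infty \sqrt{E}\,f(E)\,dE}, \qquad f(E) = \frac{1}{e^{(E-\mu_n)/k_BT}+1},$$
so that for $\mu_n \gg k_BT$ the rate is controlled by neutrons near the Fermi energy rather than by a Maxwell-Boltzmann thermal tail.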
In recent years, attention has been brought to intermediate neutron capture processes, which operate at rates and environmental neutron densities between those of the s-process and r-process, and whose full contribution to abundances is not yet characterized. Operating at neutron densities of 10$^{13}$ - 10$^{20}$ neutrons/cm$^{3}$, the i-process and n-process have been shown in sensitivity studies to take reaction pathways through experimentally accessible neutron-rich nuclei, providing an opportunity to better characterize the neutron capture rates that define these processes and their resultant abundances.
In this contribution we will review the $\beta$-Oslo analysis of the notable n-process isotope $^{91}$Sr, taken with the SuN total absorption spectrometer at the NSCL in 2018. By simultaneously measuring both $\gamma$-ray energies and excitation energies in this experimental setting, a coincidence matrix was produced to perform the Oslo analysis, providing experimental information on the nuclear level density and the $\gamma$-ray strength function, two critical components for determining the neutron capture cross section. Since the neutron capture rates are historically unconstrained by experimental work, this provides an opportunity to further reduce these uncertainties, better characterizing the contribution of such nuclei to these exotic capture processes.
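The core of the (β-)Oslo method is the factorization of the primary γ-ray matrix into a level density and a transmission coefficient,
$$P(E_\gamma, E_x) \propto \rho(E_x - E_\gamma)\,\mathcal{T}(E_\gamma),$$
where, for dipole radiation, $\mathcal{T}(E_\gamma) = 2\pi E_\gamma^3 f(E_\gamma)$ relates the transmission coefficient to the γ-ray strength function $f$; the extracted $\rho$ and $f$ then feed a Hauser-Feshbach calculation of the $(n,\gamma)$ cross section.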
Although the shell model forms the backbone of our understanding of nuclear structure, the breakdown of traditional magic numbers far from stability gives insight into the nature of the underlying nuclear interactions and acts as a tool to test existing models. Islands of inversion (IoI) in the nuclear landscape are characterized by the presence of deformed multi-particle multi-hole (npnh) ground states instead of the 0$\it{p}$0$\it{h}$ configurations predicted by spherical mean-field calculations. In the N=40 region, the relatively large energy gap separating the pf shell from the neutron g$_{9/2}$ orbital points towards a strong sub-shell closure at N=40, which has been supported by the observation of a high-lying 2$^{+}$ state and low $\it{B}$($\it{E}$2) value in $^{68}$Ni (Z=28) [1]. However, systematics of E(2$^{+}$) and $\it{B}$($\it{E}$2) values have indicated a sudden increase in collectivity below Z=28 when approaching N=40, evidenced in the rapid drop of E(2$^{+}$) in Fe (Z=26) and Cr (Z=24) isotopes [2,3]. This increase in collectivity around N=40 and Z<28 is thought to be due to the neutron occupation of intruder states from a higher shell, similar to the island of inversion around N=20 [4,5]. Recent studies also suggest the occurrence of a new IoI at N=50 and a proposed merging of the N=40 and N=50 IoIs, equivalent to the one observed between N=20 and N=28 [6,7]. Detailed spectroscopic information of the Fe, Co, and Ni isotopes will be crucial to understand the structure of nuclei near and inside the N=40 IoI and map the bridge between N=40 and N=50. To this end, an experiment was performed at TRIUMF-ISAC using the GRIFFIN spectrometer that utilized the β and βn decay of $^{68}$Mn to populate excited states in $^{67,68}$Fe, $^{67,68}$Co and $^{67,68}$Ni. Preliminary results from the analysis which includes an expanded $^{68}$Fe level scheme will be presented and discussed.
[1] O. Sorlin et al. PRL (2002)
[2] S. Naimi et al. PRC (2012)
[3] M. Hannawald et al. PRL (1999)
[4] S. M. Lenzi et al. PRC (2010)
[5] Y. Tsunoda et al. PRC (2014)
[6] C. Santamaria et al. PRL (2015)
[7] E. Caurier, F. Nowacki, and A. Poves. PRC (2014)
A high-precision half-life measurement was performed for the radioactive isotope $^{26}$Na at the Isotope Separator and Accelerator (ISAC) rare-isotope beam facility at TRIUMF in Vancouver. This is the first experimental test of the high-efficiency Gamma-Ray Infrastructure for Fundamental Investigations of Nuclei (GRIFFIN) spectrometer for performing high-precision (±0.05% or better) half-life measurements [1]. Following the implantation of the samples at the centre of the GRIFFIN spectrometer, a γ-ray counting measurement was performed by detecting 1809-keV γ-rays in the $^{26}$Mg daughter. In this talk, I will discuss new results for the half-life obtained from gating on the 1809-keV γ-ray photopeak that include corrections for pile-up and deadtime losses. The results obtained from these techniques will be compared to a previous high-precision measurement of the $^{26}$Na half-life that employed direct β counting [2].
KEYWORDS: radioactive isotope, half-life, deadtime, pile-up
References
1. Garnsworthy, A. B., Svensson, C. E., Bowry, M., Dunlop, R., MacLean, A. D., Olaizola, B., ... & Zidar, T. (2019). The GRIFFIN facility for Decay-Spectroscopy studies at TRIUMF-ISAC. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 918, 9-29.
2. Grinyer, G. F., Svensson, C. E., Andreoiu, C., Andreyev, A. N., Austin, R. A. E., Ball, G. C., ... & Zganjar, E. F. (2005). High precision measurements of $^{26}$Na $\beta^-$ decay. Physical Review C, 71(4), 044309.
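As a schematic of the counting analysis (a minimal sketch with a simple non-paralyzable dead-time model; GRIFFIN's actual pile-up and dead-time corrections are more sophisticated):

```python
# Fit an exponential decay to gamma-gated count rates after a
# non-paralyzable dead-time correction.
import numpy as np
from scipy.optimize import curve_fit

TAU_DT = 5e-6          # assumed dead time per event (s)
DT_BIN = 0.05          # counting-bin width (s)

def decay(t, r0, half_life, bkg):
    """Gamma-gated count rate: exponential decay plus flat background."""
    return r0 * np.exp(-np.log(2) * t / half_life) + bkg

t = np.arange(0.0, 12.0, DT_BIN)                    # ~11 half-lives of 26Na
true_rate = decay(t, 5e4, 1.07, 50.0)               # 26Na T1/2 ~ 1.07 s
meas_rate = true_rate / (1.0 + true_rate * TAU_DT)  # non-paralyzable loss
counts = np.random.default_rng(2).poisson(meas_rate * DT_BIN)

# Dead-time correction: true rate n = m / (1 - m * tau).
m = counts / DT_BIN
rate = m / (1.0 - m * TAU_DT)
popt, pcov = curve_fit(decay, t, rate, p0=(4e4, 1.0, 10.0),
                       sigma=np.sqrt(np.maximum(counts, 1)) / DT_BIN)
half_life, half_life_err = popt[1], np.sqrt(pcov[1, 1])
```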
Given the potential impact of quantum computing on many neighboring disciplines of science and engineering, it is key that a diverse talent pool is developed for the future. We have opted to start education and exposure at the K-12 level, with a particular focus on underserved demographics, in order to support students’ confidence and passion for STEM. In this talk, I will share how we adapted to fully online education, our most successful methods of teaching technical topics to diverse youth, and how we intend to expand our reach to students, teachers and the general public.
How do we prepare physics students to grapple with vital questions regarding the complex relationships between physics and society? This winter semester, I facilitated for the first time the ethics requirement for physics specialists at the University of Toronto. I will share my course design considerations, lessons learned, and useful resources, and invite discussion of the educational implications and responsibilities of placing physics in its societal context.
The Innovation, Diversity, Exploration, and Advancement in STEM (IDEAS) Initiative has recently been launched at Queen’s University. The IDEAS Initiative takes a multi-generational approach to diversity and outreach, coordinating historically under-represented individuals within STEM to foster an interest in the natural sciences among Canadian youth. The IDEAS Initiative is a major EDII and outreach arm of both the Arthur B. McDonald Canadian Astroparticle Physics Research Institute and the Queen’s University Department of Physics, Engineering Physics & Astronomy.
The IDEAS Initiative has run an extensive suite of outreach programmes following this methodology since 2019. Volunteer scientists lead experiments, projects, and confidence-building activities both in person and online to encourage the development of self-identity and a sense of belonging in STEM among the participants. In parallel, the IDEAS Initiative provides opportunities for volunteers to participate in teaching and outreach training workshops as a cornerstone of a network-building objective, which is then utilized to maximize impact. The pedagogy, performance, and future prospects of the IDEAS Initiative will be discussed.
During the Winter 2022 semester we have begun the first pilot of STEM for Everyone, a research-based intervention designed to promote the Physics Identity of women and other gender minorities in high school physics classes. For the past 15 years, the median proportion of female physics students across Ontario has remained constant at <35%. This imposes a hard cap on any potential progress to reduce gender inequities for undergraduate programs and beyond. This is why STEM for Everyone has been designed to target high school students, looking to address the long-standing gender gap in high school physics.
We will present preliminary results from our pilot project partnering with the Toronto District School Board and select independent schools across Ontario. This includes attitudinal surveys to measure change in Physics Identity, focus groups, and an analysis of enrollment statistics. We will also discuss planned improvements to the program based on student and teacher feedback as we look towards expanding during the 2022-2023 academic year.
TBD
The Canadian Light Source has been running its 250 MeV electron linac, which dates from the 1960s, as the injector for the booster synchrotron since 2002. As part of a renewal program, the CLS will be installing a new linac with an RF frequency synchronised to the booster ring. The project is expected to take three years, concluding in 2025, and to lead to improved performance, especially for the recently achieved constant-brightness top-up mode. An overview of the requirements and plans for the new linac will be provided.
Superconducting Radio-frequency (SRF) technology using niobium accelerating cavities enables high-performance and efficient acceleration for modern accelerator projects. These projects deliver accelerators that serve different fields of science, dominated by sub-atomic physics, photon science and applications. Global R&D in SRF science and technology focuses on: 1) reducing RF losses, 2) increasing accelerating gradients, and 3) developing new materials beyond niobium. In Canada, TRIUMF and CLS both utilize SRF technology in their on-site accelerators. TRIUMF has had an active SRF R&D program since 2000 to support the development of in-house accelerators, global collaborations, and student-focused fundamental studies. Recent developments, together with the University of Victoria, centre on the study of coaxial cavities for hadron acceleration as well as the characterization of SRF materials using a variety of materials-science techniques. In particular, a new beamline has just been commissioned at the beta-NMR facility at TRIUMF for depth-resolved studies of the Meissner state at high parallel fields. The talk will give an overview of SRF science and technology in Canada.
Compact Accelerator-based Neutron Sources (CANS) offer the possibility of an intense source of pulsed neutrons with a capital cost significantly lower than spallation sources. In an effort to close the neutron gap in Canada, a prototype Canadian compact accelerator-based neutron source (PC-CANS) is proposed for installation at the University of Windsor. The PC-CANS is envisaged to serve two neutron science instruments, a boron neutron capture therapy (BNCT) station and a beamline for fluorine-18 radioisotope production for positron emission tomography (PET). To serve these diverse applications, a linear accelerator (linac) solution is selected that will provide 10 MeV protons with a peak current of 10 mA within a 5% duty cycle. The accelerator is based on an RFQ and DTL with a post-DTL pulsed kicker system to simultaneously deliver macro-pulses to each end-station. The neutron production targets for both neutron science and BNCT will be of beryllium and engineered to handle the high beam power density. Conceptual studies of the accelerator and benchmarking studies of neutron production and moderation in FLUKA and MCNP in support of the target-moderator-reflector (TMR) design will be presented.
The Canadian Light Source (CLS) has recently created a new Electron Source Laboratory (ESL). This lab is a cut-off section of the linac hall/tunnel that was constructed in the late 1950s to host the Saskatchewan Accelerator Laboratory's experimental nuclear physics program. The rebuilt area has new shielding and a separate entrance that allows it to be used independently of the existing linac, and it can operate an accelerator of up to 100 MeV. The laboratory will be used to prepare an operational spare electron gun for the existing 250 MeV linac. In addition, there are plans to develop RF electron sources for a future branch line to inject into the linac and for possible short-pulse production. This paper will give an overview of the ESL space and the first electron guns that are planned to be installed and characterized.
The linear accelerator for electrons at TRIUMF is one of the main drivers of its Advanced Rare Isotope Laboratory (ARIEL) project. The electron linac was designed and built for this purpose and is in its final commissioning stage as a 10 mA, 30 MeV CW superconducting machine. ARIEL will expand TRIUMF’s ability to produce rare isotope beams, but the potential of the linac extends far beyond this one purpose. Taking advantage of its design, the e-linac is ready to serve its first experiments, which will run at different energies and on different time scales. The FLASH experiment, running at the first acceleration stage, will investigate the influence of short but highly intense radiation pulses for cancer therapy. The DarkLight experiment, running at the second acceleration stage, will investigate the possible existence of a dark fifth-force carrier, which could explain the Atomki anomaly.
Understanding and controlling the properties of 2D materials to our advantage can be contemplated with the development of experimental tools to probe and manipulate electrons and their interactions at the atomic scale. In this talk, I will present scanning tunnelling microscopy and spectroscopy experiments aimed at: elucidating the nature of atomic-scale defects in 2D materials [1], visualizing moiré patterns between crystals with different symmetries [2] and imaging surface and edge states in a magnetic topological system. Moreover, I will discuss how we leverage our expertise in probing and engineering electronic states at surfaces of 2D materials to further the development of graphene-based gas sensors [3] and gated quantum dot circuits based on 2D semiconductors [4].
[1] Plumadore et al., PRB, (2020)
[2] Plumadore et al., Journal of Applied Physics, (2020)
[3] Park et al., ACS Applied Materials & Interfaces (2021)
[4] Boddison-Chouinard, Appl. Phys. Lett., (2021)
The growing interest in nanostructured materials stems from their remarkable properties, such as high conductivity, heat transfer, mechanical and chemical stability, and emerging quantum properties, arising from reduced dimensionality. These exceptional properties have made graphene, the only 2D material in nature, the focus of significant academic research over the past two decades. However, the lack of an electronic bandgap limits its use in electronic applications. This limitation has motivated interdisciplinary research at the intersection of condensed-matter physics, physical chemistry, and materials science to identify ways to design and create candidate nanomaterials with engineered bandgap and electron-spin sites for quantum processors.
Our research focuses on the surface-confined reactions to design molecular-based low-dimensional nanomaterials whose electronic properties can be tailored by their structural design, morphology, dimension, size, building blocks, and the chemical nature of the bonds which hold them together. We present various surface reactions for creating 1D and 2D polymers, metal-organic networks, and organometallic structures on noble metal single crystal surfaces. To identify their morphology and chemical nature, we employ scanning tunnelling microscopy and non-contact atomic force microscopy, and other surface characterization techniques, such as X-ray photoelectron spectroscopy, complemented with density functional theory calculations.
Our research benefits from an interdisciplinary approach for the rational design of electronic structures, known as band-structure engineering. The electronic properties of 1D and 2D nanomaterials can be tailored for smaller and faster transistors, or for quantum processors in carbon-based nanoelectronics.
The doping of conjugated polymers and molecules forming the material class of organic semiconductors (OSCs) is routinely performed to tune their electric properties and electronic structure to meet application specific demands. P-doping is done by adding molecular electron acceptors to initiate charge transfer with the OSC host. The efficiency of this process is found to depend subtly on the degree of charge transfer, the dopant strength and molecular shape, the OSC conjugation length, and the OSC structure upon doping. I will provide an overview of the current understanding of the various phenomena associated with the p-doping of OSCs and discuss parameters that govern the degree of charge transfer (fractional versus integer), focusing on oligothiophenes of chain lengths towards the polymer limit.
Both natural ecosystems and biochemical reaction networks involve populations of heterogeneous agents whose cooperative and competitive interactions lead to a rich dynamics of species' abundances, albeit at vastly different scales. The maintenance of diversity in large ecosystems is a longstanding puzzle, towards which recent progress has been made by the derivation of dynamical mean-field theories of random models. In particular, it has recently been shown that these random models have a chaotic phase in which abundances display wild fluctuations. When modest spatial structure is included, these fluctuations are stabilized and diversity is maintained. If and how these phenomena have parallels in biochemical reaction networks is currently unknown, but is of obvious interest since life requires cooperation among a large number of molecular species, and the origin of life is hotly debated. To connect these phenomena, in this work we find a reaction network whose large-scale behavior precisely recovers the random Lotka-Volterra model considered recently. This clarifies the assumptions necessary to obtain a reduced large-scale description, and shows how the noise must be approximated to recover the previous mean-field theories. Then, we show how local detailed balance and the positivity of reaction rates, which are key physical requirements of chemical reaction networks, provide obstructions towards the construction of an associated dynamical mean-field theory of biochemical reaction networks. We outline prospects and challenges for the future, and argue for a synthetic approach to a physical theory of the origin of life.
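As a sketch of the random Lotka-Volterra model referred to above (standard form; the parameter values are illustrative assumptions):

```python
# Random generalized Lotka-Volterra dynamics for S interacting species.
import numpy as np
from scipy.integrate import solve_ivp

S, mu, sigma = 200, -0.5, 0.8        # species count; mean/std of interactions
rng = np.random.default_rng(3)
A = mu / S + sigma / np.sqrt(S) * rng.normal(size=(S, S))
np.fill_diagonal(A, 0.0)

def glv(t, x):
    # dx_i/dt = x_i * (1 - x_i + sum_j A_ij x_j)
    return x * (1.0 - x + A @ x)

x0 = rng.uniform(0.1, 1.0, size=S)
sol = solve_ivp(glv, (0.0, 500.0), x0, max_step=0.5)
# Increasing sigma drives the community from a unique stable equilibrium
# toward the fluctuating (chaotic) phase, where abundances sol.y vary wildly.
```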
Biological systems need to react to stimuli over a broad spectrum of timescales. If and how this ability can emerge without external fine-tuning is a puzzle. This problem has been considered in discrete Markovian systems, where results from random matrix theory could be leveraged. Indeed, generic large transition matrices are governed by universal results, which predict the absence of long timescales unless fine-tuned. Previous work has considered an ensemble of transition matrices and motivated a temperature-like variable that controls the dynamic range of matrix elements. Findings were applied to fMRI data from 820 human subjects scanned at wakeful rest. The data was quantitatively understood in terms of the random model, and brain activity was shown to lie close to a phase transition when engaged in unconstrained, task-free cognition – supporting the brain criticality hypothesis in this context. In this work, the model is advanced in order to discuss the effect of matrix asymmetry, which controls entropy production, on the previous results. We introduce a new parameter that controls the asymmetry of these discrete Markovian systems and show that, when varied over an appropriate scale, this factor is able to collapse Shannon entropy measures. This collapse indicates that structure emerges over a dynamic range of both temperature and asymmetry, and allows a phase diagram of temperature in discrete Markovian systems to be produced.
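To sketch the ensemble described above (our construction of the ensemble details is an assumption):

```python
# Random transition-matrix ensemble with a temperature-like parameter
# controlling the dynamic range of matrix elements.
import numpy as np

def random_markov(n, temperature, rng):
    """Column-stochastic matrix with elements ~ exp(-E_ij / temperature)."""
    E = rng.normal(size=(n, n))              # random 'barrier' energies
    W = np.exp(-E / temperature)             # small T -> wide dynamic range
    return W / W.sum(axis=0, keepdims=True)  # normalize columns

rng = np.random.default_rng(4)
T = random_markov(200, temperature=0.5, rng=rng)

# Relaxation timescales follow from the subleading eigenvalues:
lam = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
timescales = -1.0 / np.log(lam[1:])          # lam[0] = 1 is the steady state
```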
Some of the most intriguing thermodynamic phases in nature involve an interplay between multiple types of degrees of freedom. However, multiple types of degrees of freedom are also a common feature of design problems in distributed systems that can be cast in terms of complex networks. Here, we show that generic network models of design exhibit an intricate interplay between configurational and conformational entropy. This interplay produces behaviours in non-matter systems that have direct analogues in conventional condensed matter. We give concrete illustrations of these behaviours using a model drawn from naval architecture, but our results have implications for distributed systems more generally. Our framework provides new tools for describing how competing degrees of freedom shape the space of design choices in complex systems.
In this paper, we study the dynamical properties of the exciton-polaron in the microtubule (MT), using a unitary transformation and an approximate diagonalization technique. An analytical model of exciton-polaron dynamics in microtubules is presented, from which the ground state energy, mobility, and entropy of the exciton-polaron are derived as functions of the microtubule’s parameters. Numerical results show that, depending on the three vibrational modes (protofilament, helix, antihelix) in MTs, the exciton-polaron energy is anisotropic: it is concentrated on the protofilament rather than the helix, and is absent on the antihelix. Varying the protofilament vibrations while fixing the helix vibrations, the exciton-polaron moves between the 1st and 2nd protofilaments, while varying both vibrations induces mobility of the quasiparticle between the 1st and 15th protofilaments. This result points out the importance of helix vibrations for the dynamics of quasiparticles. The mobility of the exciton-polaron and the entropy of the system are strongly influenced by the vibrations through the protofilament and helix, while the effect of vibrations through the antihelix is negligible. The entropy of the system behaves similarly to the mobility, confirming that the quasiparticle moves faster along the protofilament than along the helix.
Fascinating finger-like patterns are observed at the edge of Pseudomonas aeruginosa bacteria colonies that grow at the effectively two-dimensional interface between agar and glass. We study this pattern formation phenomenon by simulating a dynamical self-consistent field theory. The twitching bacteria are modelled as self-propelled rods pushing against the agar-glass adhesion force, represented as a bath of passive particles. We show that a perturbation to a flat interface between uniform agar and bacteria, which are aligned perpendicular to the interface, is unstable. Fingers emerge from the interface as regular regions of dense, polar-aligned rods that move along the finger axis. By introducing a random spatial variation into our model for the strength of the agar adhesion with the glass, we are able to produce more realistic irregular finger patterns, similar to those observed in experiment. We discuss the impact of various model parameters on the finger properties and propose an interpretation for some of the trends seen in these properties as the agar concentration is varied in experiment.
The Large Hadron Collider, located at CERN outside Geneva, Switzerland, uses proton-proton collisions to produce a wide range of particles. W and Z bosons, the mediators of the fundamental weak force, are among the particles that can be produced in proton-proton collisions and can be used to give a more complete understanding of the Standard Model. One of the ways they can decay is into detectable leptons, such as electrons, which can be measured with the ATLAS (A Toroidal LHC ApparatuS) detector. The Drell-Yan process is the production of W/Z bosons in proton-proton interactions with leptonic final states. Its differential cross-section expresses the probability for this process to occur as a function of the kinematic variables of the W/Z bosons and their decay products. It can be separated into eight spin-related ratios, known as the Drell-Yan angular coefficients. The coefficients are coupled to trigonometric polynomials which contain information about the detected leptons. Using the property that these polynomials are orthogonal to each other, it is possible to isolate each coefficient.
All eight of the coefficients for the Z boson have been measured, while only two of these coefficients for the W boson have been measured with limited precision. One reason for this difference is that there is added difficulty for the W boson case as it requires reconstructing the neutrino which goes undetected. This talk will cover my research towards measuring these coefficients for the W boson with special low pileup data sets, which aid in reconstructing the neutrino. This measurement gives both a unique result for many of the coefficients as well as it helps reduce the uncertainty for other measurements like the mass of the W boson.
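For reference, the decomposition referred to above (the standard Collins-Soper-frame expansion) is
$$\frac{d\sigma}{d\cos\theta\,d\phi} \propto (1+\cos^2\theta) + A_0\,\tfrac{1}{2}(1-3\cos^2\theta) + A_1 \sin 2\theta\cos\phi + A_2\,\tfrac{1}{2}\sin^2\theta\cos 2\phi + A_3 \sin\theta\cos\phi + A_4 \cos\theta + A_5 \sin^2\theta\sin 2\phi + A_6 \sin 2\theta\sin\phi + A_7 \sin\theta\sin\phi,$$
and since these harmonic polynomials are mutually orthogonal when integrated over $\cos\theta$ and $\phi$, each $A_i$ can be isolated as a suitably weighted moment of the measured lepton angular distribution.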
With only 0.5% of the full projected $50\,\textrm{ab}^{-1}$ dataset, the Belle II detector is already a competitive high luminosity environment in which to study $B$ decays with missing energy. At a centre of mass energy of the $\Upsilon(4S)$ resonance, Belle II is a $B$ factory, producing approximately $1.1\times10^9$ $B\bar{B}$ pairs per $\textrm{ab}^{-1}$. Precise knowledge of one fully reconstructed $B$ meson through the hadronic Full Event Interpretation (FEI) tagging algorithm provides strong constraints for any signal decay studied using the other $B$ meson in the $B\bar{B}$ pair. In this talk, recent measurements of the signal decay $B\rightarrow D^{(*)}\ell \nu$ will be examined alongside the prospects of the $R(D)$ and $R(D^{*})$ measurements, in which Belle II anticipates a result of unprecedented precision with as little as $5\,\mbox{ab}^{-1}$ of data, and a sensitivity that could exhibit indirect New Physics effects.
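For reference, the lepton-flavour-universality ratios targeted here are defined as
$$R(D^{(*)}) = \frac{\mathcal{B}(B \to D^{(*)}\tau\nu_\tau)}{\mathcal{B}(B \to D^{(*)}\ell\nu_\ell)}, \qquad \ell = e,\ \mu,$$
ratios in which many experimental and theoretical uncertainties cancel, making deviations from the Standard Model prediction a clean probe of New Physics.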
The Measurement of a Lepton Lepton Electroweak Reaction (MOLLER) experiment at Jefferson Lab will search for new dynamics beyond the Standard Model at low (~100 MeV) and high (multi-TeV) energy scales. MOLLER will measure the parity-violating asymmetry ($A_{PV}$) in the scattering of longitudinally polarized electrons from unpolarized target electrons to an accuracy of 2.4% using an 11 GeV beam in Hall A at the Thomas Jefferson National Accelerator Facility. To achieve the expected precision, experimental corrections to the measured asymmetries are required to account for background processes characterized by fractional dilution factors and background asymmetries. Pion dilution factors and asymmetries make significant contributions to the experimental corrections and will be measured in a dedicated pion detector system. The University of Manitoba has been designing, developing, and constructing the pion detector system for the MOLLER experiment. The Geant4 simulation toolkit is used to determine the optimal geometry and position of the pion detector system to maximize the signal from pions.
To improve the understanding of uncertainties introduced by experimental corrections, a Bayesian analysis method is investigated to complement the commonly used frequentist methods for background corrections in parity-violating electron scattering experiments. We anticipate that this will allow for a better assessment of the uncertainties in the corrections. This talk will review the MOLLER experiment and the optimization process for the pion detector system. Also, the idea of using the Bayesian method for the experimental corrections will be introduced.
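Schematically, such background corrections take the standard form used widely in parity-violating electron scattering (our notation):
$$A_{\mathrm{corr}} = \frac{A_{\mathrm{meas}}/P - \sum_i f_i A_i}{1 - \sum_i f_i},$$
where $P$ is the beam polarization and $f_i$, $A_i$ are the dilution factor and asymmetry of background process $i$ (here, pions); the Bayesian analysis would propagate the uncertainties on each $f_i$ and $A_i$ into the posterior for $A_{\mathrm{corr}}$.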
Conventional matter consists of mesons, made of a quark and an antiquark, or baryons, made of three quarks. However, the Standard Model of particle physics does not forbid particles consisting of more than three quarks. This analysis focuses on the search for possible exotic hadronic states using strange particles, the neutral kaon ($K_s^0$) and the lambda baryon ($\Lambda^0$ or $\bar{\Lambda}^{0}$), with the ATLAS Run 2 data. Bump-searching techniques will be performed on the invariant $K_s^0 K_s^0$, $K_s^0 \Lambda^0$ and $\Lambda^{0} \Lambda^0$ mass spectra to look for possible multiquark states. A summary of the ongoing analysis, including the background studies, will be presented in the talk.
The Digital Hadronic Calorimeter (DHCAL) and the Silicon-Tungsten Electromagnetic Calorimeter (Si-W ECAL) are both CALICE prototypes originally meant for the International Linear Collider (ILC) experiments. The analysis of the combined response to different particles will be presented. The data was obtained from test runs at Fermilab in 2011. The linearity, energy and spatial resolution results will be shown, as well as the calibration and alignment of the detectors. Both DHCAL and Si-W ECAL are fine-layered high-granularity detectors with 1 cm × 1 cm pixel sizes, which allows for much-improved tracking and particle identification, and thus for the application of modern particle-flow algorithms.
The ATLAS collaboration has been preparing for the imminent Run-3 data-taking period during the long shutdown of the LHC. In this presentation, the upgrades to the detector which have been installed and commissioned during this time will be presented. Aside from these hardware and software improvements, the collaboration has been exploiting the wealth of Run-2 data in order to perform a wide range of measurement and searches. Highlights from these physics analyses will also be presented.
The ATLAS-TPX network is a network of 15 pixelated detectors based on Timepix ASICs, installed in the ATLAS cavern to measure the radiation field composition and luminosity during Run 2, in the framework of a collaboration between the University of Montreal and the IEAP of the Czech Technical University in Prague. The Timepix silicon detectors are two-layered and equipped with neutron converters (lithium fluoride and polyethylene). Thanks to the two operation modes available in Timepix ASICs, i.e. Time over Threshold (ToT) and Time of Arrival (ToA), each detector in the network is capable of measuring luminosity with 5 different algorithms, namely a cluster counting algorithm, a hit counting algorithm, a total deposited energy algorithm, a thermal neutron counting algorithm and a MIP (minimum ionizing particle) counting algorithm. In addition to measuring the number of proton-proton collisions at the interaction point, the finely segmented detectors (55 µm pitch) allow high-quality track reconstruction, which helps to identify particle types. The Timepix detector network provides about 150 relative luminosity measurements for each ATLAS run. These measurements are then further analyzed to select the measurements with the highest precision.
The different algorithms developed for luminosity measurement were tested by comparing their integrated luminosity measurements with those of other ATLAS luminosity detectors. Most algorithms show good agreement with the other ATLAS luminometers, while some show slight disagreements, which opened the door to crucial studies such as track-overlap corrections and activation measurements in the ATLAS cavern. Each algorithm comes with its statistical and systematic uncertainties. We have conducted long-term stability studies with the ATLAS-TPX network for the complete Run 2. We propose to present results from the different luminosity measurement algorithms and long-term stability studies for the years 2016, 2017 and 2018 with the ATLAS-TPX network.
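As an illustration of the cluster-counting idea (a schematic sketch, not the ATLAS-TPX code; frame size, occupancy, and exposure are assumptions):

```python
# Cluster-counting luminosity estimate on a Timepix-like frame (256x256 hit
# map): relative luminosity ~ clusters per unit exposure time.
import numpy as np
from scipy import ndimage

def count_clusters(frame):
    """Count connected groups of hit pixels (8-connectivity)."""
    structure = np.ones((3, 3), dtype=int)       # diagonal neighbours count
    _, n_clusters = ndimage.label(frame > 0, structure=structure)
    return n_clusters

rng = np.random.default_rng(5)
frame = (rng.random((256, 256)) < 0.001).astype(int)   # sparse toy hit map
acq_time = 1.0                                          # frame exposure (s)
relative_lumi = count_clusters(frame) / acq_time
```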
The inclusive top-quark pair ($t\bar{t}$) production cross-section was measured in proton-proton collisions at a center-of-mass energy of 5 TeV with 257 pb$^{-1}$ of data collected by the ATLAS detector. The $t\bar{t}$ cross-section measurement at a lower center-of-mass energy helps to further constrain the gluon Parton Distribution Function (PDF) at high Bjorken $x$. The cross-section is first measured individually in the dilepton and single-lepton channels of the $t\bar{t}$ decay before being combined. The dilepton channel uses a “cut-and-count” approach, whereas the single-lepton measurement utilizes a Boosted Decision Tree (BDT) trained on Monte Carlo to separate signal from background. The output distribution of the BDT is then fit to data in a profile-likelihood fit, making the single-lepton result the most precise single measurement of the $t\bar{t}$ cross-section. The combination improves this measurement by an additional 10%. The results are used to further constrain PDFs at 5 TeV center-of-mass energy.
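As a conceptual sketch of the BDT step (scikit-learn stands in for the actual ATLAS tools; the input variables are placeholders):

```python
# Train a BDT on Monte Carlo, then histogram its output as the template
# that would be fed to a profile-likelihood fit to data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(6)
# Toy MC: kinematic features for signal (ttbar) and background events.
X_sig = rng.normal(loc=0.5, size=(5000, 5))
X_bkg = rng.normal(loc=-0.5, size=(5000, 5))
X = np.vstack([X_sig, X_bkg])
y = np.r_[np.ones(5000), np.zeros(5000)]

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X, y)

# The histogram of BDT scores is the fit template.
scores = bdt.predict_proba(X)[:, 1]
template, edges = np.histogram(scores, bins=20, range=(0, 1))
```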
A search is made for a vector-like $T$ quark decaying into a Higgs boson and a top quark in 13 TeV proton-proton collisions using the ATLAS detector at the Large Hadron Collider with a data sample corresponding to an integrated luminosity of 139 fb$^{-1}$.
The all-hadronic decay modes $H \rightarrow b\bar{b}$ and $t \rightarrow bW \rightarrow bq\bar{q}'$ are reconstructed as large-radius jets and identified using tagging algorithms.
Improvements in background estimation and signal discrimination, together with a larger data sample, contribute to an improvement in sensitivity over previous all-hadronic searches.
No significant excess is observed above the background, so limits are set on the production cross-section of a singlet $T$ quark at 95% confidence level, depending on the mass, $m_{T}$, and coupling, $\kappa_{T}$, of the vector-like $T$ quark to Standard Model particles.
This search targets a mass range between 1.0 and 2.3 TeV and coupling values between 0.1 and 1.6, expanding the phase space of previous searches.
In the considered mass range, the upper limit on the allowed coupling values increases with $m_{T}$ from a minimum of 0.35 for $1.07 < m_{T} < 1.4$ TeV up to 1.6 for $m_{T} = 2.3$ TeV.
Measurements of the production cross-section of a Z boson decaying to muons or electrons in association with at least one energetic jet (Z+jet) are presented. The Z+jets events are further separated into a topology corresponding to the collinear emission of an on-shell Z boson from a high-energy jet, i.e. the radiation of a Z boson from a quark such that their angular separation is small. The measurements are performed in proton-proton collisions at a center-of-mass energy of 13 TeV, using data corresponding to an integrated luminosity of 139 fb$^{-1}$ collected by the ATLAS experiment at the CERN Large Hadron Collider. The fiducial cross-sections are compared to state-of-the-art Monte Carlo predictions, allowing for a detailed study of the mechanism of Z boson production at next-to-next-to-leading order within the framework described by the Standard Model of particle physics.
How event horizons evolve and ultimately combine during a black hole merger has been understood for five decades. The theory appears in Hawking and Ellis (1972), and modern numerical simulations have confirmed those early insights. That text also included some speculation about how apparent horizons merge but left the end stages unresolved: it was known that, once the black holes get close enough, a common horizon forms outside the two initial horizons and that those horizons persist for some time inside. However, their ultimate fate was neither predicted by theory nor resolved by numerical simulations, which always lost track of the initial horizons during the final approach.
In just the last few years this has changed, as new techniques have been introduced to locate and track marginally outer trapped surfaces (MOTSs), a generalization of apparent horizons. These have revealed an intricate picture in which, as the merger progresses, a froth of MOTSs pair-create, evolve, and annihilate deep inside the known horizons. Most of these MOTSs are self-intersecting and, though the picture is complex, the MOTS stability operator imposes strict rules on the possible behaviours. In particular, it is now known that the initial horizons are annihilated in encounters with members of this previously unsuspected family of objects.
As attention has focused on these exotic MOTSs, it has become clear that they are present not only during mergers but also lurk inside most stationary solutions (including an infinite number in Schwarzschild). In this talk I will review recent studies of exotic MOTSs and consider what they tell us about mergers as well as the geometry of spacetime inside all black holes.
The advent of gravitational wave detectors has facilitated a constant stream of black hole merger observations. Despite this, black hole mergers are not fully understood: the details of how the two apparent horizons become one remain unclear due to the non-linear nature of the merger process. Recent numerical work has shown the appearance of self-intersecting marginally outer-trapped surfaces (MOTSs) during the black hole merger [Pook-Kolb et al., arXiv:1903.05626]. Subsequent papers have found similarly behaving MOTSs in a simpler, static scenario, that of a Schwarzschild black hole, where a seemingly infinite number of self-intersecting MOTSs were found [Booth et al., arXiv:2005.05350]. This talk introduces new phenomena that occur in the presence of an inner horizon. For Reissner-Nordström and Gauss-Bonnet black holes, we find that the maximum number of self-intersections becomes finite, with the MOTS parameter space depending strongly on the interior structure of the black hole and in particular on the stability of the inner horizon [Hennigar et al., arXiv:2111.09373].
Marginally outer trapped surfaces (MOTSs), closed surfaces of vanishing outward null expansion, provide a useful tool for studying the local and global dynamics of black holes. They can be used both to locate black hole boundaries and to study their internal geometry. Understanding the evolution of these objects can play an important role in understanding realistic black hole dynamics; in particular, their complex behaviour has recently been studied in black hole mergers. In this talk, I summarize a method that can be used to identify axisymmetric MOTSs with arbitrarily complicated geometries in arbitrary axisymmetric spacetimes. Using this method, I find new MOTSs in dynamical Lemaitre-Tolman-Bondi spacetimes, focusing on the case of a large dust shell falling into an existing black hole. I will present the evolution of the many MOTSs (both standard and exotic) that can be observed during this process.
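For concreteness, the defining condition used in these abstracts can be written explicitly: a MOTS is a closed surface on which the expansion of the outgoing null normal $\ell^a$ vanishes,

$$\Theta_{(\ell)} \equiv q^{ab}\nabla_{a}\ell_{b} = 0,$$

where $q^{ab}$ is the induced metric on the surface.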
Higher curvature gravity theories have long been known to have a variety of black hole solutions that differ from the standard cases in general relativity. A common feature amongst these solutions is that their horizons have constant curvature. We have recently obtained a class of black hole solutions in Lovelock gravity that do not have constant curvature horizons. We find that negative mass solutions are possible even in spacetimes with positive cosmological constant. We present simple formulas that provide a lower bound on the black hole mass and discuss the implications of these solutions.
Relativistic quantum metrology is a framework that not only accounts for both relativistic and quantum effects when performing measurements and estimations, but further improves upon classical estimation protocols by exploiting quantum relativistic properties of a given system.
Here I present results of the first investigation of the Fisher information associated with a black hole. I review recent work in relativistic quantum metrology that examined Fisher information for estimating thermal parameters in (3+1)-dimensional de Sitter and Anti-de Sitter (AdS) spacetimes. Treating Unruh-DeWitt detectors coupled to a massless scalar field as probes in an open quantum systems framework, I extend these recent results to (2+1)-dimensional AdS and black hole spacetimes. While the results for AdS are analogous to those in one higher dimension, we observe new non-linear results arising from the BTZ mass.
The ground state wave function and energy of a quantum system with a given Hamiltonian may be approximated using perturbation theory or the variational method. Both methods have limitations: the former requires that the Hamiltonian perturbation be small enough for the series to converge, while the latter is only as good as the choice of functions used in the expansion, ultimately providing only an approximate ground state whose mean energy is an upper bound to the ground state energy.
The iterative method of Gradient Descent (GD) applied to the energy expectation functional of the wave function can overcome the limitations of the aforementioned methods. Applying GD in an infinite-dimensional space is achievable by careful bookkeeping of only the non-zero components of the state vector in the chosen basis of expansion and those matrix elements of the Hamiltonian in that basis required to calculate the next iteration. For a Hamiltonian with a sufficiently sparse matrix representation in the chosen basis, the calculation is numerically tractable. Unsurprisingly, however, the GD method applied in infinite dimensions suffers from the same convergence problems that it suffers from in finite-dimensional space.
In finite dimensional problems the Conjugate Gradient (CG) method overcomes the GD convergence limitations using improved search directions. CG will be formulated in infinite dimensions for a quantum system with a time-independent Hamiltonian. Polak-Ribière and Fletcher-Reeves versions of CG will be implemented. The method will be used to find energy eigenstates and eigenvalues using three functionals of the wave function, one based on energy expectation, one on its variance, and a third utilizing a Lagrange multiplier. Several simple quantum systems will illustrate the method.
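As a minimal illustration of the gradient-descent starting point (a toy finite-dimensional truncation, not the infinite-dimensional bookkeeping described above), one can descend the energy expectation functional $E[\psi] = \langle\psi|H|\psi\rangle / \langle\psi|\psi\rangle$, whose gradient at unit norm is proportional to $(H - E)\psi$:

import numpy as np

# Toy Hamiltonian: truncated harmonic oscillator, H|n> = (n + 1/2)|n>,
# plus a small off-diagonal perturbation so the ground state is
# nontrivial in this basis (illustrative only).
N = 50
H = np.diag(np.arange(N) + 0.5)
H += 0.1 * (np.diag(np.sqrt(np.arange(1, N)), 1)
            + np.diag(np.sqrt(np.arange(1, N)), -1))

psi = np.ones(N) / np.sqrt(N)  # arbitrary normalized starting vector
eta = 0.01                     # step size (must resolve the spectrum width)

for _ in range(2000):
    E = psi @ H @ psi                 # energy expectation <psi|H|psi>
    grad = 2.0 * (H @ psi - E * psi)  # functional gradient at unit norm
    psi = psi - eta * grad
    psi /= np.linalg.norm(psi)        # re-normalize after each step

print("GD estimate:       ", psi @ H @ psi)
print("exact ground state:", np.linalg.eigvalsh(H)[0])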
The dynamics of a quantum system in contact with some external surroundings (a `reservoir') is complex. The total system-reservoir evolution is governed by the (global) Schrödinger equation, but the reduced system dynamical equations are not of that form. If the reservoir is vast and has a short correlation time (little memory), then a markovian approximation is known to be valid, and the approximate system dynamics is the solution of the famous markovian master equation. The derivation of the master equation is based on initial states in which the system and the reservoir are uncorrelated (of product form). In many situations, however, such initial conditions are not reasonable. In this talk, we address the case of (classically or quantum-mechanically) correlated initial states and ask: is the markovian approximation still valid? We show that the answer is YES for a standard class of open system models, where a small system is coupled to a reservoir (quantum field) of thermal oscillators.
The talk is based on the work https://arxiv.org/abs/2107.02515
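For reference, the markovian master equation in question takes the standard (Lindblad) form for the reduced density matrix $\rho_S$ of the system,

$$\dot{\rho}_S = -\frac{i}{\hbar}[H_S, \rho_S] + \sum_{k} \gamma_k \left( L_k \rho_S L_k^{\dagger} - \tfrac{1}{2}\{ L_k^{\dagger} L_k, \rho_S \} \right),$$

with jump operators $L_k$ and rates $\gamma_k$ determined by the system-reservoir coupling.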
G protein coupled receptors (GPCRs) form a large family of more than 800 transmembrane proteins that serve as signal transducers between extracellular ligands, such as hormones or medicinal drugs, and intracellular mediators, such as G proteins and arrestins. Recent evidence suggests that, as opposed to the classical two-state model of a single unit switching from an inactive to an active state upon ligand binding, GPCRs exist in a dynamic equilibrium between monomers, dimers and higher oligomers, and monomeric receptors exhibit a high degree of intrinsic structural flexibility.
Applying a slew of single-molecule fluorescence (SMF) methods, we identified and characterized oligomers of the muscarinic M2 receptor and of the attendant Gi protein in vitro, demonstrated their presence in live cells and examined their dynamic, ligand-dependent nature as well as their involvement in cell signalling. Single particle tracking data of receptors and G proteins in live cells shows a broad range of diffusion behaviours, pointing to significant contributions from non-random regimes, in particular for the M2 receptors. Using nanodisc-reconstituted samples of a different GPCR from the same family, the adenosine A2A receptor, we have recently measured nanosecond-to-microsecond conformational dynamics in the receptor. The dynamics was recorded at an intracellular site near the interaction region with the G protein, and it appears to be fine-tuned allosterically by the ligand binding at an extracellular site.
Our results point to a new paradigm for GPCRs functioning as an ensemble of multiple, interchanging active and inactive states, in which different ligands shift not only their populations (conformational selection), but also their intrinsic flexibility (dynamics selection).
The ubiquity of the nickel recovery slag deposited in the environment of the Sudbury, Ontario basin gives merit to the study of the impact this foreign material could potentially have on wildlife in the area. In this work, the effects of ingesting this largely metallic grit source on the bone health of domestic pigeons (Columba livia domestica) were measured. This was accomplished by controlling the diets of two groups of birds: one was given an exclusively limestone grit source, the second exclusively slag. After one year on this controlled diet, the subjects were euthanized and their tibiotarsi harvested for testing. Tests performed include breaking strength, Young's modulus, cortical thickness, density, bone mineral density, and mass spectrometry with a focus on iron and calcium concentrations. Additionally, conventional micrographs and scanning electron micrographs with accompanying energy-dispersive spectrometry were collected. Our analysis of the results is consistent with degraded bone physiology in the slag-fed group.
The vertebrate inner ear achieves high sensitivity and selectivity via active sensors known as hair cells. Hair cells use metabolic energy to generate force to improve their functionality, resulting in self-induced vibrations that can manifest as faint sounds akin to whispers – otoacoustic emissions (OAEs). OAEs are detectable in the ear canal using a sensitive microphone and can arise spontaneously (SOAEs) or be evoked by external tones (eOAEs). Even though OAEs are used extensively for clinical purposes, the underlying mechanics and active dynamics that govern their production, particularly the collective interactions of hair cells, are not well understood. Here we focus on the green anole lizard (Anolis carolinensis) to study the biophysical processes associated with active hearing. Despite simpler morphology relative to the mammalian cochlea, anole lizards show sensitivity and selectivity comparable to many mammals. We present two sets of preliminary OAE results from anole lizards. First, we characterize how changing the level of an external tone at a fixed frequency affects SOAE activity. We observe a broadening frequency range of suppression of the SOAE with increasing tone level, suggesting an entrainment effect where activity effectively synchronizes to the stimulus. Based upon the assumptions of a simple model for tonotopy (i.e., how frequency maps to different spatial locations along the sensory epithelium of the inner ear), we attempt to characterize the spatial extent of the entrainment region. This could place important constraints on hair cell coupling in theoretical models of the inner ear. Second, we examine intermodulation distortions arising from the presentation of two tones at nearby frequencies. Characterizing these distortions reveals features of the underlying nonlinearities, a key facet in helping constrain mathematical models of the inner ear. Ultimately, these data combined with modeling will help elucidate how the collective behaviour emerges more generally from active, spatially-distributed biomechanical systems.
The healthy ear not only detects incident sound, but also generates and emits it as well. These sounds, known as otoacoustic emissions (OAEs), can arise spontaneously (SOAEs) and thus provide salient evidence that there is an active (metabolic-based) process taking place at the level of the inner ear. Such a process appears to enhance the sensitivity and frequency selectivity of hearing. However, a detailed understanding of the underlying mechanisms of OAE generation remains unclear. Our work focuses on the inner ear of a lizard, developing a theoretical model to characterize OAE generation. Broadly, the model consists of an array of active oscillators, each of which describes an individual hair cell with its own unique characteristic frequency. They are coupled together in varying fashions (e.g., nearest-neighbor via visco-elastic elements; globally via a rigid/resonant substrate). We aim to use the model to elucidate how collective dynamics emerge from the system as a whole, as well as to constrain the model (e.g., is the coupling required to get some effect actually physiologically reasonable?). Several general features have thus far emerged. First, coupling allows elements to synchronize into groups, where they share a common (self-sustained) oscillation frequency. Such an effect can explain some qualitative aspects of SOAE features (e.g., presence of spectral peaks), but fails to explain others (e.g., width of said peaks). Second, we explore how variations in coupling might lead to "amplitude death", where the active oscillators collectively become quiescent. This phenomenon could lead to improved sensitivity and selectivity, as well as explain the observation that not all ears emit SOAEs.
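A minimal numerical sketch of this kind of model (illustrative parameters only, not the values used in the study): an array of Hopf oscillators with a tonotopic gradient of characteristic frequencies and nearest-neighbour coupling, in which neighbouring elements can pull one another into synchronized clusters.

import numpy as np

# N Hopf oscillators z_j with characteristic frequencies w_j:
# dz_j/dt = (mu + i w_j) z_j - |z_j|^2 z_j + k (z_{j+1} + z_{j-1} - 2 z_j)
N, mu, k, dt, steps = 20, 1.0, 0.5, 0.01, 20000
w = np.linspace(2.0, 4.0, N)  # tonotopic frequency gradient
rng = np.random.default_rng(1)
z = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

for _ in range(steps):  # forward Euler integration
    lap = np.roll(z, 1) + np.roll(z, -1) - 2 * z
    lap[0] = z[1] - z[0]      # open (non-periodic) boundaries
    lap[-1] = z[-2] - z[-1]
    z_prev = z
    z = z + dt * ((mu + 1j * w) * z - np.abs(z) ** 2 * z + k * lap)

# Instantaneous frequencies; plateaus of nearly equal values indicate
# groups of elements that have synchronized to a common frequency.
inst_freq = np.angle(z / z_prev) / dt
print(np.round(inst_freq, 2))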
Membranes are an essential building block of cells, and their biophysical properties impact cellular metabolism and function. Synthetic lipid membranes are widely used as model systems to understand properties of their much more complex biological counterparts. However, the accuracy of this approximation remains an open question. Advancements in sample preparation and instrumentation now allow the study of the structure of native biological membranes with unprecedented resolution [1]. We isolated the cytoplasmic membrane from human red blood cells (RBCs) and measured its bending modulus κ using Neutron Spin Echo (NSE) spectrometry and X-ray diffuse scattering (XDS). Despite their high cholesterol content of 50 mol%, we find surprisingly small bending rigidities between 2 and 6 kBT [2], much smaller than literature values for most single-component lipid bilayers. We speculate that this extreme softness results from the presence of highly unsaturated lipids in biological membranes. We also show that the bending rigidity increases significantly during blood storage due to an increased fraction of liquid-ordered membrane domains as a function of storage time. This effect potentially explains the observed organ dysfunction and increased mortality in patients who received older blood bags [3].
RBCs are ideal for pharmaceutical applications as they provide access to numerous targets in the human body and superior biocompatibility over synthetic particles. We developed protocols to functionalize RBC membranes to form hybrid membranes [4] that can contain different types of synthetic lipids and proteins. Erythro-VLPs (virus-like particles) were designed by embedding the SARS-CoV-2 spike protein into RBC hybrid liposomes that function as a COVID-19 vaccine [5].
[1] S. Himbert et al., Scientific Reports 7, 39661 (2017).
[2] S. Himbert et al., The Bending Rigidity of Red Blood Cell Membranes, submitted.
[3] S. Himbert et al., PLoS ONE 16 (11), e0259267.
[4] S. Himbert et al., Advanced Biosystems, 1900185.
[5] S. Himbert et al., Erythro-VLPs: anchoring SARS-CoV-2 spike proteins in erythrocyte liposomes, accepted for publication in PLoS ONE.
In order to search for physics beyond the Standard Model at the precision frontier, it is sometimes essential to account for next-to-next-to-leading order (NNLO) corrections in theoretical calculations. Using the covariant approach, we calculated the full electroweak leptonic tensor up to quadratic (one-loop squared) NNLO (α³) order, which can be used for processes like e−p and μ−p scattering, relevant to the MOLLER (background studies) and MUSE experiments, respectively. In the covariant approach, we apply unitarity cuts to the Feynman diagrams and separate them into leptonic and hadronic currents; after squaring the matrix element, we obtain the differential cross-section up to NNLO.
In this presentation, I will briefly review the covariant approach and present our latest results for the quadratic QED and electroweak corrections to e−p and μ−p scattering processes.
Recent global analyses of Fermi decays, and the corresponding $V_{ud}$ determination, reveal a statistical discrepancy with the well-established SM expectation of Cabibbo-Kobayashi-Maskawa (CKM) matrix unitarity. Theoretical confirmation of the discrepancy would point to a deficiency within the SM weak sector. Extracting $V_{ud}$ from experiment requires the calculation of several theoretical corrections to the Fermi transition values. In fact, the development of the novel dispersion relation framework (DRF) for evaluating the nucleon $\gamma W$-box contribution to the electroweak radiative corrections (EWRC) is at the centre of the recent tension with unitarity. What remains is to calculate the two nuclear structure dependent corrections: (i) $\delta_C$, the isospin symmetry breaking correction, and (ii) $\delta_{NS}$, the EWRC representing evaluation of the $\gamma W$-box on a nucleus. These corrections are calculable within the ab initio no-core shell model (NCSM), which describes nuclei as systems of nucleons experiencing inter-nucleonic forces derived from the underlying symmetries of Quantum Chromodynamics (QCD). As we have explored calculations of $\delta_C$ in the past, it is a natural next step to calculate $\delta_{NS}$ in the same approach, providing a consistent evaluation of both nuclear structure dependent corrections to Fermi transitions. Preliminary evaluations of $\delta_{NS}$ have already been made using the DRF; however, while the DRF can capture various contributions to $\delta_{NS}$, it cannot include effects from low-lying nuclear states. These contributions require a true many-body treatment and can be computed directly in the NCSM using the Lanczos continued fractions method. Hence, by studying Fermi transitions in light nuclei, e.g. the $^{10}\text{C} \rightarrow {}^{10}\text{B}$ and $^{14}\text{O} \rightarrow {}^{14}\text{N}$ beta transitions, we may perform a hybrid calculation of $\delta_{NS}$ utilizing the ab initio NCSM and the novel DRF. We aim to present a preliminary calculation of $\delta_{NS}$ for the $^{10}\text{C} \rightarrow {}^{10}\text{B}$ transition.
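For context, the two corrections named above enter the corrected $\mathcal{F}t$ values for superallowed Fermi transitions through the standard relation

$$\mathcal{F}t = ft\,(1+\delta_R')(1+\delta_{NS}-\delta_C),$$

where $\delta_R'$ is the transition-dependent radiative correction; $V_{ud}$ is then extracted and tested against first-row CKM unitarity, $|V_{ud}|^2 + |V_{us}|^2 + |V_{ub}|^2 = 1$.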
A novel Machine Learning architecture has been recently developed combining cutting-edge conditional generative models with clustering algorithms. This model relies on information from one reference class and can be deployed for different applications in nuclear and particle physics, e.g., one-class classification, data quality control, and anomaly detection.
The flexibility of the architecture also allows an extension to multiple categories. We explore its use for neutron identification in the Barrel Calorimeter at GlueX, along with an anomaly-detection method for Beyond-Standard-Model physics at the Large Hadron Collider.
The MOLLER experiment, in preparation at Jefferson Laboratory in the United States, aims to constrain physics beyond the Standard Model using parity-violating Møller scattering at 11 GeV. The parity-violating asymmetry between the cross-sections for right- and left-handed helicity beam electrons scattered from the atomic electrons in a liquid hydrogen target is expected to be 35.6 ppb, and MOLLER aims for 0.73 ppb precision. The measured asymmetry will be used to determine the weak charge of the electron to a fractional accuracy of 2.3%. Among the most challenging aspects of the experiment will be the detection of this small asymmetry in the detector signal. Consequently, it is very important to decrease the noise of the detector electronics as much as possible, which requires many iterations of simulation, prototyping, and testing of detector systems. This lengthy process is also necessary to fully understand and characterize the electronics for the data analysis at the end of the experiment. This talk will focus on recent developments of the integrating detector electronics chain for the MOLLER main detector system, in particular the results of recent beam tests and plans for future design modifications.
We examine the nuclear reactions 7Li(p,γ)8Be and 7Li(p,e+e−)8Be from an ab initio perspective. Using chiral nucleon-nucleon and three-nucleon forces as input, the no-core shell model with continuum technique allows us to obtain an accurate description of both 8Be bound states and p+7Li scattering states. We investigate scattering, transfer, and capture reactions with 8Be as the composite state and compare the cross-sections to data. The energy freed up by capture is enough to produce electron-positron pairs, and the angular distribution of these pairs will differ if the intermediate particle is beyond the standard model, for example an axion or an axial vector boson, rather than the photon. Computing the standard model background and comparing experimental data with new decay modes is necessary to support or rule out new physics in the ATOMKI anomaly (which posits the existence of a new boson with a mass of 17 MeV).
Supported by the NSERC grants No. SAPIN-2016-00033 and No. PGSD3-535536-2019.
TRIUMF receives federal funding via a contribution agreement with the National Research Council of Canada.
Experiment S1758 aims to explore the charge dependence of the strong nuclear interaction by probing $^{55}$Ni and $^{55}$Co near the doubly magic $^{56}$Ni. This will be achieved by impinging beams of radioactive $^{20}$Na and stable $^{20}$Ne upon $^{40}$Ca targets to produce $^{55}$Ni and $^{55}$Co, respectively. Charged particles and $\gamma$-rays will be detected by combining the TRIUMF-ISAC Gamma-Ray Escape Suppressed Spectrometer (TIGRESS), the TIGRESS Integrated Plunger (TIP) and the CsI Ball; used in unison, this trio allows for a higher degree of sensitivity. Data analysis will involve transition-rate reconstruction using the Doppler-Shift Attenuation Method (DSAM), Doppler-shift lineshape profile extraction from Monte Carlo simulations via the GEANT4 framework, and lifetime extraction by minimizing a $\chi^2$ goodness-of-fit between the measured and simulated lineshapes. The results will paint a clearer picture of the charge dependence of the strong nuclear interaction.
ALPHA - antihydrogen spectroscopy, Tim Friesen, Assistant Professor, University of Calgary
Caustics are singularities arising from natural focusing and are well known in optics, but they also occur in any system that has waves, including water waves and quantum waves. Caustics take on universal shapes that are described by catastrophe theory and dominate wave patterns. My group has been extending these ideas to quantum fields, such as those found in the sine-Gordon and Bose-Hubbard models. Our physical motivation is to describe the dynamics of Bose-Einstein condensates (BECs) following a sudden quench, including the cases of two and three independent BECs that are suddenly coupled together.
Our theoretical simulations [1] of the dynamics of these low-dimensional many-body systems following the quench show that caustics form in Fock space over time, and this seems to be a generic phenomenon. Furthermore, the caustics are singular in the mean-field theory but are regulated and adopt universal interference patterns in the full many-body theory. These caustics represent a form of universal quantum many-body dynamics associated with singularities in the underlying classical dynamics.
[1] Caustics in quantum many-body dynamics, W. Kirkby, Y. Yee, K. Shi and D.H.J. O’Dell, Phys. Rev. Research 4, 013105 (2022).
We numerically study the quantum dynamics of a bosonic Josephson junction (a Bose-Einstein condensate in a double-well potential) in the context of periodic driving of the tunnel coupling. In particular we examine how caustics, which can dominate the Fock space wavefunction following a sudden quench of the undriven system, are affected as the kicking strength is increased. In the limit of weak tunnelling and low number imbalance, the system maps onto the kicked rotor (an archetype of chaotic dynamics). By varying the strength of the kick quasi-randomly, we are able to realize a regime of "branched flow", a paradigm of wave behaviour in random media relevant to electron flow in conducting materials, radiowave propagation through the interstellar medium, and tsunamis in the ocean.
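The kicked-rotor limit mentioned above is governed, classically and stroboscopically, by the Chirikov standard map, whose chaos grows with the kicking strength $K$:

$$p_{n+1} = p_n + K\sin\theta_n, \qquad \theta_{n+1} = \theta_n + p_{n+1}.$$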
Quantum dots embedded within photonic nanowires can act as highly efficient single-photon generators. Integrating such sources on-chip offers enhanced stability and miniaturization; both of which are important in many applications involving quantum information processing. We employ a "pick and place" technique to transfer nanowires to on-chip waveguides where each nanowire contains a single quantum dot emitter. This approach provides for efficient coupling of the quantum light generated in an InP photonic nanowire to a SiN-based photonic integrated circuit. We have previously demonstrated that such devices can efficiently generate single photons on chip. Here we study the potential for generating indistinguishable photons from such sources. We demonstrate post-selected two-photon interference visibilities of up to 70% between sequential photons emitted from the same quantum dot. These findings show that the proposed approach offers a viable route for the integration of a stable source of indistinguishable photons on chip.
Complex spherical packing phases, namely the Frank-Kasper (FK) phases, have been discovered in various soft matter systems such as block copolymers and surfactant solutions. A generic and simple model for the formation of spherical packing phases in these systems comprises hard spheres with short-range attraction and long-range repulsion (SALR). In SALR systems, the short-range attraction promotes the formation of colloidal clusters, while the long-range repulsion prevents the clusters from growing indefinitely. The resultant finite-sized clusters can pack onto a crystal lattice, forming a cluster crystal. It is anticipated that the ability of the clusters to change their volume and shape could enable the formation of stable complex spherical packing phases. In the current work, the formation of the FK σ and A15 phases in a system of hard spheres with SALR interactions is studied using density functional theory. A set of phase diagrams for different SALR potentials is constructed, showing that the stability of the σ and A15 phases is highly sensitive to the potential. The key factor stabilizing the FK phases is also discussed. Our results provide a first step in understanding the universality of the existence of complex spherical packing phases in a broader range of soft matter systems.
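One common concrete choice for an SALR interaction outside the hard core is a double-Yukawa form; the sketch below (illustrative parameter values, not those used in the study) shows the generic shape of the potential.

import numpy as np

def salr_potential(r, eps_a=1.0, z_a=4.0, eps_r=0.2, z_r=1.0, sigma=1.0):
    # Hard core for r < sigma, then short-range Yukawa attraction
    # (depth eps_a, inverse range z_a) plus long-range Yukawa repulsion
    # (strength eps_r, inverse range z_r). Parameters are illustrative.
    r = np.asarray(r, dtype=float)
    tail = (-eps_a * np.exp(-z_a * (r - sigma)) / r
            + eps_r * np.exp(-z_r * (r - sigma)) / r)
    return np.where(r < sigma, np.inf, tail)

r = np.linspace(1.0, 5.0, 9)
print(np.round(salr_potential(r), 4))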
Recent experiments have elucidated the physical properties of kinetoplasts, which are chain-mail-like structures found in the mitochondria of trypanosome parasites, formed from catenated DNA rings. Inspired by these studies, we use Monte Carlo simulations to examine the behavior of two-dimensional networks (``membranes'') of linked rings. For simplicity, we consider only identical rings that are circular and rigid and that form networks with a regular linking structure. We find that the scaling of the eigenvalues of the shape tensor with membrane size is consistent with the behavior of the flat phase observed in self-avoiding covalent membranes. Increasing ring thickness tends to swell the membrane. Remarkably, unlike covalent membranes, the linked-ring membranes tend to form concave structures with an intrinsic curvature of entropic origin associated with local excluded-volume interactions. The degree of concavity increases with increasing ring thickness and is also affected by the type of linking network. The relevance of the properties of linked-ring model membranes to those observed in kinetoplasts is discussed.
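The shape-tensor analysis mentioned above amounts to diagonalizing the gyration tensor of the bead coordinates; a minimal sketch (random coordinates standing in for actual simulation output):

import numpy as np

def shape_tensor_eigenvalues(coords):
    # Gyration (shape) tensor S_ab = <(r_a - <r_a>)(r_b - <r_b>)>;
    # the scaling of its eigenvalues with system size distinguishes
    # flat from crumpled membrane phases.
    dr = coords - coords.mean(axis=0)
    S = dr.T @ dr / len(coords)
    return np.linalg.eigvalsh(S)  # ascending order

# Stand-in for linked-ring membrane bead positions: a noisy flat sheet.
rng = np.random.default_rng(3)
xy = rng.uniform(0.0, 10.0, size=(500, 2))
zz = 0.1 * rng.standard_normal((500, 1))
print(shape_tensor_eigenvalues(np.hstack([xy, zz])))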
Our laboratory has recently reported a technique for preparing stable glass films of polymers through physical vapor deposition (PVD), along with the exceptional properties of these materials. The technique is in principle applicable to a wide range of polymers, and it has been demonstrated for polystyrene and poly(methyl methacrylate). Stable glasses are known to have higher density and enhanced kinetic stability compared to ordinary glasses, but less is known about their surface dynamics. We use AFM to probe the surface response of vapor-deposited polystyrene stable glasses to the perturbation provided by gold nanoparticles placed on the free surface. The surface response of stable glasses and ordinary glasses (prepared by rejuvenating vapor-deposited glass) shows that they have quantitatively similar surface dynamics. By varying the temperature of relaxation, we quantify the dependence of surface dynamics on temperature.
Ionic microgels are soft, permeable, colloidal particles, made of crosslinked polymer networks, that ionize and swell in a good solvent. Their sensitive response to changes in environmental conditions, e.g., temperature and pH, and their capacity to encapsulate drug or dye molecules, have spawned applications of microgels to drug delivery, biosensing, and filtration. Swelling of these soft colloids involves a balance of electrostatic and gel contributions to the single-particle osmotic pressure. The electrostatic contribution depends on distributions of mobile microions and fixed charge. Working within the cell model and Poisson-Boltzmann theory, we derive the electrostatic contribution to the osmotic pressure from the free energy functional and the gel contribution from the pressure tensor. By varying the free energy with respect to microgel size, we also derive exact statistical mechanical relations for the electrostatic osmotic pressure for models of planar, cylindrical, and spherical microgels with fixed charge uniformly spread over their surface or volume. To validate these relations, we solve the Poisson-Boltzmann equation and compute microion densities and osmotic pressures [1, 2]. We show that microgel swelling depends on the nonuniform electrostatic pressure profile inside the particles and on the distribution of fixed charge. Finally, we discuss implications for interpreting experiments.
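For reference, in one common reduced form (sign and unit conventions vary), the cell-model Poisson-Boltzmann equation for a monovalent salt reads

$$\nabla^{2}\phi(\mathbf{r}) = \kappa^{2}\sinh\phi(\mathbf{r}) - 4\pi\lambda_{B}\, n_{f}(\mathbf{r}),$$

where $\phi$ is the electrostatic potential in units of $k_BT/e$, $\kappa$ is the Debye screening constant, $\lambda_B$ is the Bjerrum length, and $n_f$ is the fixed-charge number density of the microgel.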
[1] A. R. Denton and M. O. Alziyadi, J. Chem. Phys. 151, 074903 (2019).
[2] M. O. Alziyadi and A. R. Denton, J. Chem. Phys. 155, 214904 (2021).
ARD was supported by the National Science Foundation (Grant No. DMR-1928073).
MOA acknowledges support of Shaqra University.
Phytoglycogen (PG) is a natural polysaccharide produced in the form of compact, 44 nm diameter nanoparticles in the kernels of sweet corn. Its highly branched, dendritic structure and soft, compressible nature lead to interesting and useful properties that make the particles ideal as unique additives in personal care, nutrition, and biomedical formulations. These applications are particularly dependent on the softness of PG, which can be controlled through chemical modifications. We consider the effect of acid hydrolysis on the softness of PG by characterizing the fragility of acid-hydrolyzed PG glasses: as acid-hydrolyzed PG particles are dispersed in water at packing densities approaching their soft colloidal glass transition, the dependence of the zero-shear viscosity on effective volume fraction changes abruptly from behaviour well-described by the Vogel-Fulcher-Tammann equation to more Arrhenius-like behaviour. This result is consistent with stronger glass behaviour for acid-hydrolyzed PG relative to native PG, suggesting that acid hydrolysis makes the particles softer.
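One common way of writing the two limiting behaviours for soft colloids, in terms of the effective volume fraction $\zeta$, is the Vogel-Fulcher-Tammann form

$$\eta(\zeta) = \eta_{0}\exp\!\left(\frac{D\,\zeta}{\zeta_{0}-\zeta}\right),$$

versus the Arrhenius-like form $\eta \propto \exp(A\zeta)$ characteristic of strong glass formers; here $D$ acts as a fragility parameter and $\zeta_0$ locates the apparent divergence.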
Phytoglycogen (PG) is a glucose-based polymer that is naturally produced by sweet corn in the form of compact nanoparticles with an underlying dendritic architecture. Their deformability and porous structure, combined with their non-toxicity and digestibility, make them ideal for applications in personal care, nutrition and biomedicine. PG nanoparticles can be modified using chemical procedures such as acid hydrolysis, which reduces both the size and density of the particles. We used atomic force microscopy (AFM) force spectroscopy to collect high-resolution maps of the Young's modulus E of acid-hydrolyzed PG nanoparticles in water, and we compare these results to those obtained on native PG nanoparticles [1]. Acid hydrolysis produced distinctive changes to the particle morphology and significant decreases in E. These measurements highlight the tunability of the physical properties of PG nanoparticles using simple chemical modifications.
Quantum confinement in two-dimensional (2D) transition metal dichalcogenides (TMDs) offers the opportunity to create unique quantum states that can be practical for quantum technologies. The interplay between charge carrier spin and valley, as well as the possibility to address their quantum states electrically and optically, makes 2D TMDs an emerging platform for the development of quantum devices.
In this talk, we present the fabrication of a fully encapsulated monolayer tungsten diselenide (WSe2) based device in which we realize gate-controlled hole quantum dots. We demonstrate how our device architecture allows us to identify and control the quantum dots formed in the local minima of electrostatic potential fluctuations in the WSe2 using gates. Coulomb blockade peaks and diamonds are observed, which allow us to extract information about the dot diameter and its charging energy. Furthermore, we demonstrate how the transport through the channel formed by two gates is sensitive to the occupation of a nearby quantum dot, and we show how this channel can be tuned into either the charge-detection or the Coulomb blockade regime. Finally, we present a new device architecture which exhibits quantized conductance plateaus over a channel length of 600 nm at a temperature of 4 K. Quantized conductance over such a long channel provides an opportunity to incorporate gate-defined quantum dot circuits without the nuisance of inhomogeneity within the channel.
Atomically thin materials, or two-dimensional (2D) materials, confine electrons at the ultimate thickness, giving rise to electrical and optical properties that can enable new quantum devices. Developing these devices requires large-area and high-quality monolayers. A limitation thus far has been that samples made by mechanical exfoliation techniques (thinning down crystals made of weakly bonded layers) are restricted in size to only a few microns. Bottom-up growth methods yielding large-area monolayers, such as chemical vapour deposition (CVD), however, produce lower sample quality. It is therefore desirable to develop methods that yield large-area monolayers while preserving the high quality of the crystal.
In this presentation, an exfoliation method based on a 150 nm Au film, used successfully to disassemble bulk van der Waals crystals, is presented. Specifically, micron-size crystals of transition metal dichalcogenides such as MoS2, WSe2, and WS2 deposited on the surface of Si/SiO2 wafers are demonstrated. We discuss how different parameters of the process influence the flatness and size of the exfoliated films, and how the process can be implemented to create millimeter-size monolayers. To determine the quality of the atomically thin layers obtained with this method, optical and electrical characterization were performed and compared to measurements of mechanically exfoliated flakes and CVD-grown films. This method opens the possibility of producing high-quality macroscopic monolayers that can be used for high-quality devices.
Acknowledgment: This work was performed with support from a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant (No. RGPIN-2016-06717).
Vertically stacking two-dimensional (2D) materials allows for the fabrication of heterostructures with properties not present in their constituent layers, presenting an opportunity to study new quantum phenomena. In twisted bilayers of hexagonal 2D materials, the formation of a moiré pattern can lead to electron confinement and flat bands. In bilayers of hexagonal transition metal dichalcogenides, moiré patterns have been observed at twist angles within approximately three degrees of parallel alignment. At smaller twist angles, in-plane relaxation of the moiré pattern produces a network of stacking order domains bound by domain walls consisting of shear solitons. In these systems, this deformation has been observed to result in ferroelectricity. Additionally, topological edge states have been predicted to exist at the domain walls. In this work, we use scanning tunneling microscopy (STM) and spectroscopy (STS) to study domain networks in mechanically assembled WS$_2$ homobilayers. We report a technique for fabricating rotationally controlled homobilayers with sufficiently clean interfaces for STM measurement. Using STM, we observe triangular stacking order domains. In spectroscopic measurements, the domains show variation in the local density of states. These results are discussed in light of the anti-symmetric ferroelectricity predicted in these materials.
We describe the electronic and optical properties of a MoSe2/WSe2 type-II heterostructure using an ab initio based tight-binding (TB) approximation and the Bethe-Salpeter equation (BSE) [1]. We start by determining the electronic structure of MoSe2/WSe2 from first principles, obtaining type-II band alignment and conduction band minima at the Q points. We then analyze the Kohn-Sham wavefunctions to identify the leading layer and spin contributions. Next, we construct a minimal TB model for the MoSe2/WSe2 heterostructure, which allows us to understand orbital contributions to the Bloch states and to study the effect of the wavefunctions on the excitonic spectrum. We solve the BSE accurately and determine the exciton fine structure arising from the type-II spin-split band arrangement [2] and topological moments, considering the A/B, spin-bright/dark, and intra-/interlayer exciton series using a simplified Rytova-Keldysh non-local screening theory. Finally, we analyse the effect of the moiré potential and compare it with a fully tight-binding approach to excitons in twisted heterostructures.
[1] M. Bieniek, L. Szulakowska, and P. Hawrylak, Band nesting and exciton spectrum in monolayer MoS2, Physical Review B 101, 125423 (2020)
[2] K. Sadecka, Inter- and Intralayer Excitonic Spectrum of MoSe2/WSe2 Heterostructure, Acta Physica Polonica, to be published (2022)
With current CMOS technologies approaching their performance limits, nanoscale atomic electronics are poised to provide the next generation of devices and a continuation of Moore's law. Several promising beyond-CMOS platforms, such as dangling bond (DB) circuitry on hydrogen-passivated silicon, require precise knowledge of the location of charges within fabricated atomic structures. In the past, atomic force microscopy (AFM) measurements have been used to determine the charge population of dangling-bond structures, though these measurements are often cumbersome. Here, we employ a quicker, minimally perturbative scanning tunneling microscope charge-sensing scheme to measure the charge of atomic dangling-bond wires and compare the results with AFM data. Two DB wires were sequentially lengthened to form a continuous wire near a sensor DB. I-V spectroscopy over the sensor reveals spectral shifts corresponding to the addition of nearby charge, with single-electron sensitivity. The results show a reduction of charge when the wires are joined and agree with standard AFM-based techniques, which predict dangling-bond wires to be ionic chains.
Vertical stacking of atomically thin materials offers a large platform for realizing novel properties enabled by proximity effects and moiré patterns. Here, a van der Waals heterostructure consisting of monolayer graphene on in-plane anisotropic layered semiconductor ReS$_2$ is prepared using dry-transfer technique. Locally resolved topographic images reveal a striped superpattern originating in the interlayer interactions between graphene's hexagonal structure and the triclinic, low in-plane symmetry of ReS$_2$. Scanning tunneling spectroscopy at low temperature is used to characterize the modulation of the local density of states by this moiré pattern. These results shed light on the complex interface phenomena between van der Waals materials with different lattice symmetries.
We propose and study a multi-orbital lattice extension of the Sachdev-Ye-Kitaev model of a non-Fermi liquid. Using numerical calculations in the large-N limit, we discuss the phase diagram, thermodynamics, and spectral properties of this model, which features a first-order thermal transition into a nematic insulator and a continuous thermal transition into a nematic metal phase, arising from orbital polarization of an isotropic strange metal. We explore the transport properties of this model, including its resistive anisotropy and elastoresistivity, across the phase diagram. Our work offers a useful perspective on nematic phases, phase transitions, and transport in a correlated multi-orbital system.
In the past few years, several experiments have demonstrated that cation-substituted SrTiO3 can simultaneously sustain both metallicity and ferroelectricity; however, little is known about how the metallicity influences the ferroelectric state. In thin films, for example, nonmetallic ferroelectrics tend to break up into nanoscale Kittel domains of opposite polarization to alleviate large internal electric fields. In this talk, I will show through a mix of numerical simulations and heuristic arguments that the selective screening of these fields by a free electron gas fundamentally alters the structure of the ferroelectric. In particular, I will show that, as the two-dimensional electron density $n_{2D}$ increases, there is a smooth crossover from Kittel domains to a head-to-head domain wall configuration, and that the head-to-head domain wall is energetically preferable when $e\,n_{2D} \gtrsim P_0$, where $e$ is the electron charge and $P_0$ the polarization amplitude within the domains.
In crystal systems with competing, incongruous, anti-ferromagnetic exchange interactions, geometric frustration is found and often leads to the suppression of long-range magnetic order. On the other hand, in Yb-based systems where the Kondo interaction between local $4f$ and conduction electrons is dominant, hybridization between these also results in the suppression of long-range magnetic order. When the Kondo interaction is strong enough, physical hybridization between the $4f$ and conduction electrons occurs, resulting in a quantum-mechanically degenerate electronic ground state, a so-called intermediate valence (IV) state. $\mathrm{YbB_4}$ is a rare system where both mechanisms are plausible explanations for the lack of magnetic order down to at least 0.34 K [1]. $\mathrm{YbB_4}$ crystallizes into a tetragonal crystal structure (space group $P4/mbm$) that can be mapped to the well-known geometrically frustrated Shastry-Sutherland lattice within the ab plane [2]. $\mathrm{YbB_4}$ has also been proposed as a Kondo-dominated system residing in the IV regime, but this has to date lacked direct confirmation by spectroscopic means [3,4]. We study the existence of an IV state in $\mathrm{YbB_4}$ using resonant X-ray emission spectroscopy at the Yb $L_{\alpha_1}$ transition and track the temperature dependence of the Yb valence from 12 to 300 K. We confirm that $\mathrm{YbB_4}$ exists in an IV state at all temperatures and observe that the Yb valence increases gradually from $v = 2.61 \pm 0.01$ at 12 K to $v = 2.67 \pm 0.01$ at 300 K. We compare the temperature scaling of the valence with other Yb-based Kondo lattices and find that $\mathrm{YbB_4}$ and other systems within the IV regime do not obey the universal temperature scale of valence change, $T_v$, observed in weakly mixed-valent Kondo lattices [5]. We find that in the case of IV systems, $T_v$ also does not appear to be linked to the Kondo temperature $T_K$ of the system.
[1] J. Etourneau et al., Journal of the Less-Common Metals 67, 531 (1979).
[2] D. Okuyama et al., Journal of the Physical Society of Japan 74, 2434 (2005).
[3] J. Y. Kim et al., Journal of Applied Physics 101, 09D501 (2007).
[4] A. S. Panfilov et al., Low Temperature Physics 41, 193 (2015).
[5] K. Kummer et al., Nature Communications 9, 2011 (2018).
We describe here the effects of broken sublattice symmetry and the emergence of a phase transition in triangular artificial graphene quantum dots with zigzag edges. The system consists of a structured lateral gate confining two-dimensional electrons in a quantum well into artificial minima arranged in a hexagonal lattice. The sublattice symmetry breaking is generated by forming an artificial triangular graphene quantum dot with zigzag edges. The resulting Hamiltonian generates a tunable ratio of tunneling to electron-electron interaction strength and a degree of sublattice symmetry, with control over shape. Using a combination of tight-binding, Hartree-Fock, and configuration interaction methods, we show that the ground state transitions from a metallic to an antiferromagnetic phase as the distance between sites or the depth of the confining potential is changed. At the single-particle level these triangular dots contain a macroscopically degenerate shell at the Fermi level. The shell persists at the mean-field (Hartree-Fock) level for weak interactions (metallic phase) but disappears for strong interactions (antiferromagnetic phase). We determine the effects of electron-electron interactions on the ground state, the total spin, and the excitation spectrum as a function of filling away from half-filling. We find that the half-filled, charge-neutral system leads to a fully spin-polarized state in both the metallic and antiferromagnetic regimes, in accordance with Lieb's theorem. In both regimes a relatively large gap separates the spin-polarized ground state from the first excited many-body state at half-filling of the degenerate shell, but upon adding or removing an electron this gap drops dramatically, and alternate total-spin states emerge with energies nearly degenerate with the spin-polarized ground state.
Molecular Dynamics (MD) is a commonly used technique for simulating the evolution of atomic structures and complex materials. MD based on classical force fields can treat large systems over relatively long time scales. Since the accuracy of MD depends on the quality of the underlying force fields, and since complex chemical reactions driven by electronic interactions arise in many situations, an important research direction is to advance the method of Ab Initio Molecular Dynamics (AIMD), based on self-consistent Kohn-Sham density functional theory (KS-DFT), to larger length and time scales.
In this work, we present an accelerated AIMD whose power comes from two approaches. First, the AIMD is based on our real-space KS-DFT method RESCU [1], which can efficiently solve supercells containing many thousands of atoms. Second, we leverage Gaussian Process Regression (GPR) to efficiently extrapolate forces by interpolating between KS-DFT calculations from previous timesteps in the AIMD simulation. By extrapolating forces via GPR when possible, and calculating forces via KS-DFT only when necessary, novel reactive dynamics on increasingly large timescales can be studied using modest computational resources. The accelerated AIMD is applied to simulate Solid Electrolyte Interphase (SEI) formation in a 2590-atom system consisting of an interface between a lithium slab and a liquid organic electrolyte, on time scales of a picosecond or more, where important chemical reactions at the solid/liquid interface are identified.
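A schematic of the GPR force-extrapolation step (with scikit-learn standing in for the production implementation, and a synthetic one-dimensional descriptor in place of real atomic environments): the predictive uncertainty decides whether the cheap GPR estimate can be used or a fresh KS-DFT evaluation is required.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def dft_force(x):
    # Stand-in for an expensive KS-DFT force evaluation (toy 1D model).
    return -np.sin(x)

# Training data: forces from 'DFT' at previously visited configurations.
X_train = np.linspace(0.0, 3.0, 8).reshape(-1, 1)
y_train = dft_force(X_train).ravel()

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-8)
gpr.fit(X_train, y_train)

# New configuration encountered during the MD run:
x_new = np.array([[1.7]])
f_pred, f_std = gpr.predict(x_new, return_std=True)

threshold = 0.05  # illustrative uncertainty tolerance
if f_std[0] < threshold:
    force = f_pred[0]               # cheap: use the GPR extrapolation
else:
    force = dft_force(x_new)[0, 0]  # expensive: fall back to KS-DFT and
                                    # add the point to the training set
print(force)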
This talk will discuss my research in data science and astrophysics. We will investigate our Milky Way Galaxy and have a short discussion of how data science can be used to detect faint and sparse objects such as the dwarf satellites and streams that helped form the galaxy we live in. When trying to make detections of these mysterious stars, the advent of greater cloud computing capability means the sky really is no longer the limit (for programming or Astronomy)! We will also cover data science applications for smaller telescopes like the 1m at York University and how small telescopes can support some of these 'big data' science endeavours.
We report on the self-evaluation tools used in Canadian STEM outreach activities, as described by representatives of English-language NSERC PromoScience programs. The approaches to evaluation are categorized along several axes: output vs. outcome, quantitative vs. qualitative, metrics vs. surveys, and general vs. specific. While qualitative answers are useful for informing changes to an event or program in the short term, quantitative answers may be useful for analysis as data are collected over time. In general, programs tend to favour low-cost methods (i.e., simple metrics recording, brief post-event surveys), and few programs make an effort to measure their long-term impacts (i.e., track actual outcomes, not just potential outcomes). Thus, this study is better able to demonstrate which tools are common, as a potential proxy for what is effective, than to demonstrate directly which tools are effective. Directions for future work are discussed.
Graduate student teaching assistants (GTAs) fill many roles in undergraduate education: grading exams and assignments, facilitating laboratory sessions, and leading tutorials, among others. Since GTAs have a high degree of interaction with students in each of these roles, their understanding of educational practices is critical to improving student understanding of course material. Improving GTAs’ teaching strategies can also be important for their research projects, future collaborations, and professions outside of academic research, but is often overlooked in training programs. As part of a department-wide community of practice focused on applying inquiry-guided learning (IGL) strategies in undergraduate physics courses, a new GTA training program was created and deployed in Fall 2021. In contrast to the previous training which focused on presenting the mechanics of properly executing GTA duties, the new training emphasizes applying IGL teaching strategies such as leading questions and scaffolding instruction through group discussions and examples tailored for physics courses. This format has been shown to improve GTA effectiveness from both the student and GTA perspective and was informed by a pilot IGL learning community of eight physics GTAs in the previous semester. After completing the new training program, feedback collected from graduate students showed they appreciated a focus on IGL and found that the group discussion format allowed them to learn strategies specific to physics courses from senior GTAs. The inspiration, format, and outcomes of this new GTA training program will be discussed, with a focus on how GTAs can be introduced to new pedagogical frameworks for their benefit, as well as the benefit of undergraduate students and professors.
Introductory physics courses are required at McMaster for students in three different streams: physical sciences, engineering, and life sciences. While students in the engineering stream are required to take the physics course for their stream, most students in the Faculty of Science can choose between taking physics for the life sciences or for the physical sciences, where both options meet all upper-year physics requirements. Examining students' self-evaluations of their preparedness and motivations provides insight into their experiences, preferences, and reasons for choosing their stream of physics.
In this study, online surveys were distributed to students in all three streams of introductory physics. End-of-term surveys were collected in Dec. 2021 (N=182) and April 2022, and an entry survey was collected in Sept. 2021 (N=239). From these results, we examined students' study habits to see if there are trends across streams, genders, or other demographic groups that may influence course performance. The results show that most students emphasize retrieval practices such as practice problems and practice tests in their studying, with some differences between demographics and streams. Additionally, students were asked to rate their preparedness and its change throughout the semester. Interestingly, preparedness in the life sciences stream follows a unique trend because the cohort contains students with varying high school physics backgrounds. Preparedness is also compared to students' predictions of their final grades and their comfort with the mathematics taught in the course, to look for any trends between these factors. Finally, the motivation of students in the life sciences and physical sciences streams is examined to see what influences them when choosing their stream of physics. For these students, external recommendations and previous high school physics experiences are prominent factors in their decision.
These results provide insight into the background of students and factors that influence their performance and enjoyment of introductory physics courses at McMaster. We can utilize these results as a tool for improving the performance and experience of students taking these courses.
TBD
As a university proud of the education we offer on our campus, we often speak about the “Magic of MIT”. Each of you has an analogous phrase. But what do we mean? So much of the magic of the (MIT) on-campus university experience lies in the unscripted in-person engagement that happens among our community members, whether it be students working together on problems or projects, or students and instructors engaging in seminars, discussions, solving problems, lab classes, research, …. Why, then, have MIT faculty put so much energy into building MOOCs? Standard answers include reach (bringing MIT to the world), reputation, and impact, within a field or more broadly. But among the physics faculty who have developed MOOCs, these motivations come second to using MOOCs to enhance the learning experience of our on-campus students. I’ll describe some of the ways in which physics instructors at MIT are using MOOCs, or elements thereof, to deliver some of the more scripted parts of our teaching, so as to create more time and space for the active, engaging, interactive components from which the magic originates.
The existence of S-wave neutron superfluidity in the inner crust of neutron stars is well established and it affects the thermal properties and the cooling of the stars. In this talk, I will present a detailed ab initio study of the S-wave pairing gap and the equation of state of superfluid neutron matter. These calculations were carried out using the auxiliary field diffusion Monte Carlo method for finite systems and the results were extrapolated to the thermodynamic limit. I will also discuss how we quantify the error of this extrapolation using phenomenology, such as the symmetry-restored BCS theory of superconductivity. These results can be used in calculations of thermal properties of neutron stars and they can be probed in cold atom experiments utilizing the universality of the unitary Fermi gas.
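For context, the pairing gap in such finite-system calculations is commonly extracted from the odd-even staggering of ground-state energies; in schematic form (the precise estimator used in this work may differ),
$\Delta(N) \approx E(N) - \frac{1}{2}\left[E(N-1) + E(N+1)\right]$, for odd $N$,
where $E(N)$ is the ground-state energy of $N$ neutrons at fixed density, with the extrapolation to the thermodynamic limit performed on top of such finite-$N$ results.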
The incompleteness of the Standard Model demands new physical models, and one of the most tested approaches is perturbative Quantum Field Theory (QFT), in which observables are calculated from a given Lagrangian. It is well known that, at a given order of perturbation theory, matrix elements can be calculated using Feynman calculus. Existing Mathematica packages such as FeynArts and FormCalc help create these diagrams from a pre-programmed model file and can perform a wide variety of calculations for the Standard Model. This presentation will give a short overview of a new Wolfram Mathematica package, FeynArtsHelper, intended to help create such model files for FeynArts from an arbitrary given Lagrangian. As a result, the package can be employed for models beyond the Standard Model and produce results up to one-loop order.
Elusive neutrinos are a window into the interior of compact objects, potentially unveiling the behavior of phenomena such as neutron star mergers, core-collapse supernovae, and the synthesis of elements. As standalone detections or in the context of multi-messenger signals, neutrinos offer opportunities to understand our Universe in unprecedented ways. Interpreting neutrino observations relies on models of neutrino emission and of their interaction with highly dense matter. In this talk, I shall discuss neutrino emission from collapsars and neutron-star mergers, and the possibility of overcoming challenges in nuclear models through their detection.
Investigating neutrino flavor oscillations under the influence of curved spacetime is more involved when the mass eigenstates of the superposition (out of which each neutrino flavor is made) are taken to be wave packets. The subtleties behind applying the wave packet formalism to neutrino flavor oscillations in curved spacetimes, as opposed to the plane wave formalism, will be discussed. Applications to various spacetime metrics from GR and from modified gravity models are included. I will then present, separately, the problem of neutrino flavor oscillations within conformal coupling models, both in the plane wave formalism and in the wave packet formalism.
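As a flat-spacetime point of reference (the curved-spacetime generalization being the subject of the talk), the two-flavor wave-packet treatment replaces the plane-wave oscillation probability with a decohering form; schematically, neglecting localization terms,
$P_{\alpha\to\beta}(L) \simeq \sin^2 2\theta\,\frac{1}{2}\left[1 - \cos\left(\frac{\Delta m^2 L}{2E}\right) e^{-(L/L_{\rm coh})^2}\right]$, with $L_{\rm coh} \sim \frac{4\sqrt{2}\,E^2 \sigma_x}{|\Delta m^2|}$,
so that oscillations wash out beyond a coherence length $L_{\rm coh}$ set by the wave-packet width $\sigma_x$.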
The light-front wavefunction of mesons is the product of the transverse and longitudinal modes. Holographic QCD leads to a Schr\"{o}dinger-like equation for the transverse mode. We show that, when the longitudinal mode is obtained from the 't Hooft equation, the resulting wavefunction predicts remarkably well the meson spectroscopic data.
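For reference, the two equations referred to above take the following standard, schematic (soft-wall) forms, with conventions varying in the literature: the holographic light-front Schr\"{o}dinger equation for the transverse mode $\phi(\zeta)$,
$\left(-\frac{d^2}{d\zeta^2} - \frac{1-4L^2}{4\zeta^2} + \kappa^4\zeta^2 + 2\kappa^2(J-1)\right)\phi(\zeta) = M_\perp^2\,\phi(\zeta)$,
and the 't Hooft equation for the longitudinal mode $\psi(x)$ in the quark momentum fraction $x$,
$M_\parallel^2\,\psi(x) = \left(\frac{m_1^2}{x} + \frac{m_2^2}{1-x}\right)\psi(x) + \frac{g^2}{\pi}\,\mathcal{P}\int_0^1 dy\,\frac{\psi(x)-\psi(y)}{(x-y)^2}$.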
The discovery of topological phases of matter has revolutionized our understanding of condensed matter. Recently, the idea of emulating these phases in synthetic materials, e.g. cold atoms in optical lattices or photons in dielectric nanostructures, has proven to be an extremely powerful approach for exploring topological physics beyond what is physically reachable in the solid state. This includes the development of new functionalities like topological lasers, but also more fundamental aspects including the discovery of exotic phases involving drive, dissipation, disorder or synthetic dimensions. In this talk, I will present recent work we have carried out on a new type of synthetic topological matter involving polaritons, a hybrid light-matter quasiparticle with unique properties inherited from its dual nature.
In the presence of certain symmetries, three-dimensional Dirac semimetals can harbor not only surface Fermi arcs, but also surface Dirac cones. Motivated by the experimental observation of rotation-symmetry-protected Dirac semimetal states in iron-based superconductors, we investigate the potential intrinsic topological phases in a $C_{4z}$-rotational invariant superconducting Dirac semimetal with $s_{\pm}$-wave pairing. When the normal state harbors only surface Fermi arcs on the side surfaces, we find that an interesting gapped superconducting state with a quartet of Bogoliubov-Dirac cones on each side surface can be realized, even though the first-order topology of its bulk is trivial. When the normal state simultaneously harbors surface Fermi arcs and surface Dirac cones, we find that a second-order time-reversal invariant topological superconductor with helical Majorana hinge states can be realized. The criteria for these two distinct topological phases have a simple geometric interpretation in terms of three characteristic surfaces in momentum space. By reducing the bulk material to a thin film normal to the axis of rotation symmetry, we further find that a two-dimensional first-order time-reversal invariant topological superconductor can be realized if the inversion symmetry is broken by applying a gate voltage. Our work reveals that diverse topological superconducting phases and types of Majorana modes can be realized in superconducting Dirac semimetals.
We consider the transport of Majorana zero modes across a 1D topological superconductor by applying local gate voltages across sections of the superconductor. This “piano key” method allows sections of the superconductor to switch between the trivial and topological phases, thereby facilitating the motion of a Majorana zero mode. As a single section, or piano key, undergoes a phase transition, it is possible for the ground state to experience excitations, especially near criticality. The excitation probability has been studied for a large piano key in Ref. [1], which casts the problem in terms of a simple Landau-Zener transition. In our work, we consider the excitation probability when a Majorana zero mode is transported using a series of smaller piano keys. We calculate the excitation probability numerically by simulating a sequence of piano keys. Furthermore, we demonstrate an analytical calculation of the excitation probability and compare this to the numerical results.
[1] B. Bauer et al., SciPost Phys. 5, 004 (2018)
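For orientation, the Landau-Zener treatment invoked in Ref. [1] gives the familiar excitation probability for a single key swept through criticality; schematically (conventions for the gap and sweep rate vary),
$P_{\rm ex} \approx \exp\left(-\frac{2\pi\Delta_{\min}^2}{\hbar v}\right)$,
where $2\Delta_{\min}$ is the minimum gap at the critical point and $v$ is the rate of change of the diabatic energy splitting set by the gate-voltage ramp.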
The phase diagram of the kagome metal family AV$_3$Sb$_5$ (A = Cs, Rb, K) features both superconductivity and charge density wave (CDW) instabilities, which have generated tremendous recent attention. Nonetheless, significant questions regarding the nature of the CDW states remain. In particular, the temperature evolution and demise of the CDW state have not been extensively studied, and little is known about the co-existence of the CDW with superconductivity at low temperatures. We report an x-ray scattering study of CsV$_3$Sb$_5$ over a broad range of temperatures from 300 K to $\sim$ 2 K, below the onset of its superconductivity at $\textit{T}_\text{c}\sim$ 2.9 K. Order parameter measurements of the $2\times2\times2$ CDW structure show an unusual and extended linear temperature dependence with onset at $T^*$ $\sim$ 160 K, much higher than the susceptibility anomaly associated with CDW order at $\textit{T}_\text{CDW}=94$ K. This implies that strong CDW fluctuations exist up to $\sim2\times\textit{T}_\text{CDW}$. The CDW order parameter is observed to be constant from $T=16$ K to 2 K, implying that the CDW and superconducting order co-exist below $\textit{T}_\text{c}$ and that, at ambient pressure, any possible competition between the two order parameters is manifested at temperatures well below $\textit{T}_\text{c}$, if at all. Anomalies in the temperature dependence of the lattice parameters coincide with $\textit{T}_\text{CDW}$ for $\textit{c}(T)$ and with $T^*$ for $a(T)$.
CsV$_3$Sb$_5$ is a member of the recently discovered class of kagome superconductors AV$_3$Sb$_5$ (A = K, Rb, Cs), which provide a rich environment to study topological band structure and charge density wave (CDW) order in an ideal vanadium kagome lattice. In this work we performed muon spin rotation/relaxation ($\mu$SR) measurements on high-quality single crystal samples in the normal and superconducting states. We find no evidence of broken time-reversal symmetry associated with the superconducting state. Our measurements of the magnetic field penetration depth are well described by a two-gap model. Measurements of the normal state reveal changes to the internal field distribution at the muon site below approximately 60 K and 30 K, indicating several changes in the electrodynamics of CsV$_3$Sb$_5$ in addition to its charge density wave and superconducting order.
In recent years, the drug discovery industry has seen a steady decline in productivity due partially to the difficulty in rationally designing and exploring novel search spaces. At the same time, cutting-edge deep learning techniques offer the possibility of rapidly enhancing design of novel biomolecules, but suffer from a lack of interpretability. By employing physics-based techniques, including multi-scale molecular dynamics and manifold theory, my lab group hopes to engage with these problems to perform physics-based design of novel search spaces for therapeutics. In this talk, I will discuss our initial attempts to design an interpretable search space for (i) small molecule antibiotics, and (ii) short peptides. We focus on assessing search space quality and on the characterization through molecular dynamics of potential initial design points. I will also discuss our future work in creating an integrated and transferable platform for search space design.
Carbon nanotube field-effect biosensors (CNT-bioFETs) are ultraminiaturized devices that can be used to measure single-molecule kinetics of biomolecules. They monitor time scales ranging from a few microseconds to several minutes, as demonstrated for nucleic acid folding and enzyme function. The sensitivity of CNT-bioFETs originates from the interplay between the nanotube’s conductance, which is monitored by the device, and the electrostatic potential generated by the biomolecule under investigation, which is localized on the nanotube. Yet, the origin of this electrostatic gating of the carbon nanotube by a single biomolecule is not well understood at the molecular scale.
To bridge this gap, we employ molecular dynamics (MD) and Hamiltonian replica exchange (HREX) simulations to unveil (1) the interactions between the biomolecule and the nanotube to which it is attached and (2) the electrostatic potential on the nanotube as the state of the biomolecule changes. We address these questions by considering three prototypical cases: the function of the Lysozyme protein, the hybridization of a 10-nt DNA sequence, and the folding of a DNA G-quadruplex, all of which were previously characterized using CNT-bioFETs.
Our simulations show that this protein and these DNAs interact differently with the nanotube to which they are attached. Consequently, the electrostatic potential (ESP) on the nanotube is very sensitive to the type and state of the biomolecule. When compared to experiment, the ESP distributions for the with-ligand and without-ligand states of the Lysozyme protein are in line with the two-level conductance measured by CNT-bioFETs. For the DNAs, however, the ESP distributions for the folded and unfolded states do not agree with the measured two-level conductance. To agree, the DNA strand should not interact with the nanotube, which is not what our simulations suggest. This apparent conflict could arise from the impact, on highly charged systems such as DNA, of the external electric field imposed by the gate electrode in CNT-bioFETs, as supported by our recent simulations.
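As a rough, hypothetical illustration of the quantity being tracked (not the authors' simulation pipeline; the uniform-dielectric screening and all names and parameters below are assumptions), the ESP induced on the nanotube by a biomolecule's partial charges can be estimated with a screened Coulomb sum:

```python
import numpy as np

# Coulomb constant in kcal*mol^-1*Angstrom*e^-2 (common MD units)
KE = 332.0636

def esp_on_nanotube(charges, positions, tube_points, eps_r=78.5):
    """Screened-Coulomb estimate of the electrostatic potential (ESP)
    induced by a set of partial charges at points along a nanotube.

    charges     : (N,) partial charges in units of e
    positions   : (N, 3) charge coordinates in Angstrom
    tube_points : (M, 3) sample points on the nanotube surface
    eps_r       : uniform relative permittivity (crude solvent model)
    """
    # Pairwise distances between tube sample points and charges
    d = np.linalg.norm(tube_points[:, None, :] - positions[None, :, :], axis=-1)
    # Sum q_i / (eps_r * r_i) at every tube sample point
    return KE * (charges[None, :] / (eps_r * d)).sum(axis=1)

# Toy usage: one +1e charge held 10 A above the middle of a 20 A tube axis
tube = np.stack([np.linspace(-10, 10, 50), np.zeros(50), np.zeros(50)], axis=1)
phi = esp_on_nanotube(np.array([1.0]), np.array([[0.0, 0.0, 10.0]]), tube)
print(phi.mean())  # mean ESP along the tube, in kcal/(mol*e)
```

In a production setting the uniform dielectric would be replaced by explicit solvent or a Poisson-Boltzmann treatment; the sketch only shows why the ESP depends on the conformational state of the attached biomolecule.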
In response to the retirement of the NRU Reactor at Chalk River Laboratories, a group of Canadian neutron scatterers, cancer clinicians and researchers, and accelerator physicists is carrying out an initial design of a prototype Canadian compact accelerator-based neutron source (PC-CANS). The PC-CANS will help mitigate the challenges of maintaining and expanding the scientific resources needed for research using neutron beams by Canadian researchers and companies. Here, we will provide an update on the high-level design and on strategies to secure resources to realize the PC-CANS in Canada. Our approach is staged, with the first stage offering a medium neutron-flux, linac-based source for neutron scattering on the next generation of materials. The first stage will serve as a prototype for a second stage: a higher brightness, higher cost facility that could be viewed as a national centre for neutron applications.
Gravitational-wave and multi-messenger astronomy shed light on the astrophysics of black holes and neutron stars and also allow for unique probes of fundamental physics. I will discuss recent results on how the mergers of neutron stars and the death of massive, rotating stars give rise to the formation of heavy elements in the universe. In particular, I will discuss recent results at the interface of numerical relativity, neutrino physics, and nuclear astrophysics, and highlight how multi-messenger astronomy may lead to the answer of a 70-year-old fundamental question in physics: how does the Universe create the heaviest elements?
In astrophysically realistic black holes (for instance, binary black hole mergers), the surface of most obvious interest is the event horizon. However, this surface is often computationally difficult to locate due to its global definition. Instead, it is useful to turn to quasi-local characterizations of black hole boundaries, such as Marginally Outer Trapped Surfaces (MOTS), which have the benefit of being defined on a single time slice of a spacetime, and the outermost of which is also (generally) the apparent horizon. My talk will describe work, the subject of my master’s thesis, which seeks to understand MOTS in the interior of five-dimensional black holes; in particular, I will focus on our results in studying the rotating case (Myers-Perry). Similar to the four-dimensional Schwarzschild case studied by my collaborators, and the five-dimensional static case I presented last year at CAP, we find self-intersecting MOTS, and in doing so provide further support for the claim that self-intersecting behaviour is rather generic. I will conclude by discussing new oscillating MOTS-like surfaces, first seen in this study of 5D rotating black holes, and since reproduced for other types of rotating black holes in other dimensions.
Black holes evaporate through Hawking radiation, but without a full quantum treatment of gravity the endpoint of this process is not yet entirely understood. For example, it has been suggested that information that enters a black hole is irreversibly lost after it evaporates, an apparent contradiction with quantum mechanics. Studying the behaviour of information in black hole evaporation in effective models of gravity may provide insight into theories of quantum gravity. Of particular interest are non-singular black holes, since quantum theories of gravity are expected to resolve the singularities that are pervasive in general relativity.
Two dimensional theories of gravity are useful as toy models for studying black hole dynamics. This talk will discuss a generalized model of collapsing and evaporating black holes incorporating backreaction in 2D dilaton gravity, including non-singular black holes. A numerical code that solves generic systems on the full spacetime is presented.
Two particle detectors locally interacting with a quantum field can be correlated, even if they are spacelike separated, due to pre-existing field correlations. Such an extraction protocol is called entanglement harvesting. Less well-studied is extraction of more general correlations, as parametrized by mutual information (the total classical and quantum correlations). We investigate the mutual information harvested by two pointlike particle detectors (or qubits) from a massless scalar field in a black hole spacetime. We consider the (2+1)-dimensional BTZ black hole, placing the detectors at different separations from and angles around the black hole. We compute the mutual information for these various settings. In conjunction with previous studies of harvested entanglement for this case, we obtain a more complete picture of the structure of scalar field vacuum correlations in the vicinity of a black hole.
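For reference, the mutual information between the detectors A and B is built from the von Neumann entropies of the reduced states,
$I(A{:}B) = S(\rho_A) + S(\rho_B) - S(\rho_{AB})$, with $S(\rho) = -\mathrm{Tr}(\rho\ln\rho)$,
and vanishes exactly when the joint state factorizes, $\rho_{AB} = \rho_A \otimes \rho_B$, making it a measure of the total (classical plus quantum) correlations harvested by the detectors.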
The goal of this presentation is to create awareness of the activities of C14, the Physics Education Commission of the International Union of Pure and Applied Physics (IUPAP). The IUPAP is a unique international physics organization, the only one founded and run solely by the physics community. Its members are nominated by physics communities in countries representing different regions around the world. The broad mandate of the International Commission on Physics Education (ICPE) is to “promote the exchange of information and views among the members of the international scientific community in the general fields of Physics Education”. The complete mission statement can be found at https://iupap.org/who-we-are/internal-organization/commissions/c14-physics-education/#mission-mandate. One of the core activities of the Commission is the organization of the International Conference on Physics Education (ICPE), often in partnership with other international or regional physics/science education societies. The Commission awards a Physics Education Medal every year and produces the ICPE Newsletter. IUPAP celebrates its centennial anniversary in 2022-2023, with Canada being one of the founding members of IUPAP. The Commissions of IUPAP (including C14) will promote the educational and scientific activities of IUPAP around the world.
Physicists study subjects such as quantum mechanics and relativity that capture the popular imagination, yet physics departments often struggle to recruit and retain enough students to satisfy cost-conscious administrations. The Effective Practices for Physics Programs (EP3) Initiative of the American Physical Society and the American Association of Physics Teachers has tapped the expertise and experience of over 250 members of the physics community to create a Guide to help physics programs face challenges and enact change. In this workshop we’ll review some of the lessons Canadian physics departments can take from the EP3 Guide in order to help themselves build vibrant and growing undergraduate physics programs.
We, teachers, curriculum designers and educators, should encourage students’ intellectual engagement and motivation and consider progressive and effective factors in their learning (Wentzel & Watkinz, 2016; Anderman & Dawson, 2011; Csikszentmihalyi, 1990, 1996, 1997; Shernoff et al., 2003). According to Dewey’s theory of learning (1916), learners need to become active participants in their own learning processes, and individuals’ direct personal experiences in activities have a significant role in learning outcomes (Dewey, 1916). Thus, students must be provided with moments and opportunities through the teacher’s teaching (both curriculum and pedagogy) that respond to and fulfill these constructive and progressive experiences in learning. Here, I raise a question: how can we support and provide learners with constructive and active learning opportunities and approaches for learning quantum mechanics? For teaching and learning concepts as complicated and abstract as quantum mechanics, not only students but also teachers, particularly science teachers without a physics background, require simplified and visual instructional resources, such as guided activities accompanied by simulations (McKagan et al., 2008; Zollman et al., 2002; Baily & Finkelstein, 2009; Yulianti et al., 2021; Faletič & Kranjc, 2021). The effectiveness of classical simulations, visual instructional resources, and simulation-based inquiry learning (de Jong, 2011; Mayer & Alexander, 2016; Day & Goldstone, 2009) in quantum mechanics education is now well documented (McKagan et al., 2008; Zollman et al., 2002; Faletič & Kranjc, 2021; Baily & Finkelstein, 2009; Yulianti et al., 2021). For instance, the wave-particle behaviour of light and quantum objects is not something that can be easily imagined and conceived by students from the actual experiment itself (Olsen, 2002; Duit et al., 2014; Müller & Wiesner, 2002). The results of my studies, practices, and observations from a science program for adolescents (designed and developed by myself in British Columbia, Canada) support these arguments. One of the reasons that adolescents could successfully progress from wave principles to quantum mechanics is the effectiveness of the PhET simulations in both the curricular resources and the pedagogical approaches used for students’ physics learning (Yulianti et al., 2021; Faletič & Kranjc, 2021; Baily & Finkelstein, 2009; McKagan et al., 2008; Zollman et al., 2002). In brief, these approaches are strongly recommended for teaching the fundamentals of quantum physics and for guiding, engaging, and encouraging adolescents in learning quantum mechanics.
TBD
McMaster University is home to a unique suite of facilities in a Canadian university environment, welcoming researchers from across Canada and abroad. In addition to Canada’s most powerful nuclear research reactor, the McMaster Nuclear Reactor, McMaster University hosts six particle accelerators which enable experimental programs in non-invasive assessment of biological composition; effects of radiation on biological systems; production of radioisotopes; and imaging of materials in support of nuclear power generation aging management. Accelerator configurations are flexible depending on experimental requirements, and within the scope of regulatory requirements. A brief history, the current state and ongoing projects, and future plans of the accelerator facilities are presented.
Compact Accelerator-based Neutron Sources (CANS) offer the possibility of an intense source of pulsed neutrons with a capital cost significantly lower than spallation sources. A prototype Canadian compact accelerator-based neutron source (PC-CANS) is proposed for installation at the University of Windsor. The PC-CANS is envisaged to serve two neutron science instruments, a boron neutron capture therapy (BNCT) station, and a beamline for fluorine-18 radioisotope production for positron emission tomography (PET). To serve these diverse applications, a linear accelerator solution is selected that will provide 10 MeV protons with a peak current of 10 mA at a 5% duty cycle. The accelerator is based on an RFQ and DTL with a post-DTL pulsed kicker system to simultaneously deliver macro-pulses to each end-station. Several choices of linac technology are being considered and a comparison of the choices will be presented.
Accelerator Mass Spectrometry (AMS) provides high sensitivity measurements (typically at or below 1 part in 10$^{12}$) for rare, long-lived radioisotopes. These high sensitivities are achieved when isobars, nuclides of other elements with the same mass number as the isotope of interest, are eliminated. AMS laboratories use established techniques to suppress interfering isobars of some light isotopes. However, smaller, low energy (≤ 3 MV) AMS systems are unable to separate abundant isobars of many isotopes, and even larger accelerators are unable to separate the interfering isobars of some heavier isotopes.
The Isobar Separator for Anions (ISA) is a radiofrequency quadrupole (RFQ) reaction cell that provides selective isobar suppression in the low energy system of the accelerator beamline. The ISA accepts a 20-35 keV mass analyzed beam from the ion source and reduces the energy to a level that the reaction cell can accept, using a DC deceleration cone. The reaction cell is filled with an inert cooling gas, to further lower the ion energy, and a reaction gas to preferentially react with the interfering isobar. RFQ segments along the length of the cell create a potential well which confines the traversing ions. DC offset voltages on these segments maintain a controlled ion velocity through the cell. The beam is then reaccelerated before exiting the ISA chamber. The ISA has been integrated into a second injection line of the 3 MV tandem accelerator at the A. E. Lalonde AMS Laboratory, University of Ottawa.
The ISA-AMS system has facilitated the measurement of chlorine-36, which is typically not achievable with smaller accelerators due to the interference of its abundant isobar, sulfur-36. The cooling gas was selected experimentally based on chlorine beam transmission through the ISA. Using nitrogen dioxide as the reaction gas, a seven-order-of-magnitude suppression of sulfur relative to chlorine has been observed. A chlorine-36/chlorine abundance sensitivity of $\sim1\times10^{-14}$ was achieved by combining the sulfur suppression from the ISA with the degree of dE/dx separation in the detector offered by the 3 MV AMS system. The linearity and stability of the system have been tested over a range of chlorine-36/chlorine ratios using a diluted NIST chlorine-36 standard.
The Nab collaboration aims to make the world’s most precise measurement, improving on previous results by about a factor of 10, of the electron-neutrino angular correlation parameter “a” and the Fierz interference term “b” in cold neutron beta decay. Along with the neutron lifetime, this will allow the testing of various extensions to the Standard Model and will help home in on a correct theory describing what makes up our world. Nab is a 4 m tall asymmetric time-of-flight spectrometer with custom 100 mm$^2$, 127-pixel Si detectors on either end. Nab is currently in its commissioning phase at the Spallation Neutron Source at Oak Ridge National Laboratory in the USA and will collect physics data from 2022 to 2025. The Canadian Nab group is responsible for testing the novel large-area Si detectors used in the experiment and has built a steerable 30 keV proton accelerator at the University of Manitoba for this purpose. This talk will motivate and provide an overall status of the Nab experiment and present the 30 keV proton source at the University of Manitoba together with recent detector testing results.
A non-zero electric dipole moment (EDM) of the free neutron violates CP symmetry. Searching for this elusive quantity can thus reveal information about the matter-antimatter asymmetry of our Universe. The TUCAN collaboration intends to improve the current upper limit on the neutron EDM by one order of magnitude and push into the low $10^{-27}$ $e$cm sensitivity regime. During a neutron EDM measurement, electric and magnetic fields are applied, and the spin precession of polarized neutrons is observed.
To achieve competitive sensitivity, it is crucial to have precise control over the magnetic field and accurate knowledge of its properties, such as stability and homogeneity, since these properties affect both the systematic and statistical precision of a neutron EDM measurement.
In this presentation I want to introduce ongoing development work at a magnetics laboratory at TRIUMF. We are working on several prototype setups of magnetic field and magnetometry infrastructure, such as a small-scale magnetic shield, a magnetization detection device, a non-magnetic robot to create field maps, and others. I will discuss how these activities inform the detailed design of the next generation TUCAN neutron EDM spectrometer.
Discovering a nonzero neutron electric dipole moment (nEDM) would provide some of the tightest constraints on extensions to the Standard Model that attempt to explain the mechanisms underlying \textit{CP}-violation. The objective of the TUCAN (TRIUMF UltraCold Advanced Neutron) collaboration is to search for a permanent EDM of the free neutron, $d_n$, with a sensitivity of $\sigma(d_n) \leq 10^{-27} e$cm. The typical experimental method to measure the nEDM uses polarized ultracold neutrons (UCN) and employs the Ramsey method of separated oscillatory fields. Because of their slow movement, measurement of the spin precession frequency of UCN requires electric and magnetic fields that are very homogeneous in space and stable in time over the experimental volume. A large multi-layer magnetically shielded room (MSR) isolates the main precession magnetic field, produced by an internal coil, from environmental magnetic fields. In the nEDM measurement, many possible sources of systematic error can manifest as a false EDM signal. Historically, the dominant systematic errors have come from magnetic field inhomogeneities, which also reduce the statistical precision of the experiment. Providing a picture of the magnetic field environment within the experiment helps control the system's homogeneity. This presentation will discuss the simulation of mapping the magnetic field inside the MSR to extract quantities relevant to the compensation of systematic effects in the experiment.
The TUCAN EDM experiment aims to measure the neutron electric dipole moment (EDM) to a precision of $1\times 10^{-27}~e$cm. The experiment is a precise relative measurement of the spin precession frequency of ultracold neutrons stored in a bottle, placed in homogeneous magnetic and electric fields. The magnetic field is shielded from external influences by conducting the experiment in a magnetically shielded room (MSR). A main precession field of $B_0=1~\mu$T is produced by an internal coil. Magnetic field inhomogeneity in the coil/MSR system will cause the neutron spins to dephase as they precess, reducing the statistical precision of the experiment. Controlling the homogeneity is also important for suppressing false EDM signals. A system of square shim coils, mounted on the surface of a cube surrounding the experiment, is being developed to make adjustments to the field. This presentation will discuss quantitatively the magnetic homogeneity requirements and demonstrate the ability of the shim coil design to meet them.
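To illustrate the type of calculation that enters such a coil design (a minimal sketch under simplifying assumptions, not the actual design code; all names and parameters below are hypothetical), the field of a single square shim coil can be computed with a discretized Biot-Savart sum:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def biot_savart_loop(vertices, current, point, n_seg=200):
    """Magnetic field (T) at `point` from a closed polygonal coil.

    vertices : (V, 3) corners of the loop in metres, traversed in order
    current  : coil current in amperes
    point    : (3,) field evaluation point
    n_seg    : subdivisions per edge for the discretized Biot-Savart sum
    """
    B = np.zeros(3)
    closed = np.vstack([vertices, vertices[:1]])
    for a, b in zip(closed[:-1], closed[1:]):
        # Midpoints and current elements of each sub-segment of this edge
        t = (np.arange(n_seg) + 0.5) / n_seg
        mids = a + np.outer(t, b - a)
        dl = (b - a) / n_seg
        r = point - mids
        r3 = np.linalg.norm(r, axis=1) ** 3
        # Sum dB = mu0 I / (4 pi) * dl x r / |r|^3 over sub-segments
        B += MU0 * current / (4 * np.pi) * np.cross(np.tile(dl, (n_seg, 1)), r).T.dot(1.0 / r3)
    return B

# Toy check: a 1 m square coil carrying 1 A, field at its centre
square = np.array([[-0.5, -0.5, 0], [0.5, -0.5, 0], [0.5, 0.5, 0], [-0.5, 0.5, 0]])
print(biot_savart_loop(square, 1.0, np.zeros(3)))  # ~ (0, 0, 1.13e-6) T
```

Superposing such loops with individually optimized currents is the generic way a shim system trims residual gradients; the actual TUCAN optimization is, of course, more involved.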
The matter-antimatter asymmetry in the universe is one of the core physics questions that remains unsolved in the modern era. While there have been attempts to delve into the cause of this mystery, none has yet provided a comprehensive solution. One possible explanation is linked to the combined violation of charge-conjugation (C) and parity (P) symmetry, an example of which would be the presence of a permanent electric dipole moment (EDM) of a fundamental particle or system. MIRACLS is an experiment in development at CERN and TRIUMF to study molecules with unprecedented sensitivity for EDM searches.
Performing laser spectroscopy on atoms and even molecules is not revolutionary, but two components set MIRACLS apart from previous searches. The first is its ambition to study radioactive molecules, which have recently been introduced as intriguing precision probes for new physics, including EDM searches. The second is its cryogenic Paul trap and Multi-Reflection Time-of-Flight (MR-ToF) device used for ion trapping. Confining the radioactive ionic species allows a much longer study period, greatly increasing the sensitivity and/or precision of the spectroscopy measurements.
The result of this new probing mechanism is superior sensitivity in a most intriguing line of research. The aforementioned techniques and the concept of the experiment will be discussed in further detail. A brief outlook on a dedicated precision laboratory for radioactive molecules at TRIUMF will be given.
The NEWS-G direct dark matter search experiment uses spherical proportional counters (SPC) with light noble gases to explore low WIMP masses. The first results obtained with an SPC prototype operated with neon gas at the Laboratoire Souterrain de Modane (LSM) have already set competitive constraints on low-mass WIMPs. The next phase of the experiment consists of a large 140 cm diameter SPC installed at SNOLAB, with a new sensor design and numerous improvements in detector performance and data quality. Before its installation at SNOLAB, the detector was commissioned with pure methane gas at the LSM, with a temporary water shield, offering a hydrogen-rich target and reduced backgrounds. After giving an overview of the several improvements to the detector, preliminary results of this campaign will be presented, including UV laser and Ar-37 calibrations that allowed for precision characterization of the detector’s energy response down to the single-ionization regime.
The DEAP-3600 experiment (Dark matter Experiment using Argon Pulseshape discrimination) at SNOLAB in Sudbury, Ontario is searching for dark matter via the elastic scattering of argon nuclei by dark matter particles as they traverse the detector. The detector uses 255 photomultiplier tubes (PMTs) viewing ~3300 kg of liquid argon in a spherical acrylic vessel. In addition to being sensitive to weakly interacting massive particles (WIMPs), DEAP-3600 is also sensitive to super-heavy dark matter candidates with masses up to the Planck scale. Sensitivity at such high masses is limited by the number density of dark matter rather than by the cross-section. DEAP-3600 has the largest cross-sectional area among all dark matter detectors, which enables it to reach Planck-scale masses.
In this talk, we present the search for these superheavy candidate particles in three years of data (using a blind analysis), looking for multiple-scatter signals. A dedicated search is carried out since this multi-scatter signal is entirely different from the standard WIMP signal (usually a single scatter). Regions of interest are defined and background estimates are presented. No signal events were observed, leading to direct detection constraints for dark matter masses between $8.3\times10^{6}$ and $1.2\times10^{19}$ GeV and dark matter-nucleon cross sections between $1\times10^{-23}$ and $2.4\times10^{-18}$ cm$^2$.
The DEAP-3600 experiment located at SNOLAB is a single phase liquid argon detector looking to confirm the existence of dark matter via direct detection. The energy signature of the dark matter may be examined with 255 photomultiplier tubes (PMTs) measuring the scintillation signal produced via nuclear recoils of argon nuclei by a dark matter particle. As a result, modelling background channels that produce nuclear recoils in the detector is critical in ensuring a well understood dark matter search region. In particular, understanding the scintillation signature of alpha particles in liquid argon will aid immensely in the development of a proper background model.
Alpha particles produce a reduced scintillation signal compared to electrons of the same energy, an effect known as “quenching”, which is in general energy dependent. In this talk, we will discuss progress on measurement of alpha particle quenching using Argon-1, a modular single phase liquid argon cryostat located at Carleton University, in Ottawa, Ontario.
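Quenching is conventionally quantified by the ratio of scintillation yields at equal deposited energy; schematically,
$Q_\alpha(E) = L_\alpha(E)\,/\,L_e(E)$,
where $L_\alpha$ and $L_e$ are the light yields for an alpha particle and an electron, respectively, each depositing energy $E$ (conventions for the reference particle vary between experiments).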
A bubble chamber using fluorocarbons or liquid noble gases is a competitive technology to detect the low-energy nuclear recoils produced by elastic scattering of weakly interacting massive particle (WIMP) dark matter. It consists of a pressure- and temperature-controlled vessel filled with a liquid in the superheated state. Bubble nucleation from the liquid to the vapor phase can only occur if the energy deposition is above a certain energy threshold, described by the “heat-spike” Seitz model. The nucleation efficiency of low-energy nuclear recoils in superheated liquids plays a crucial role in interpreting results from direct searches for WIMP dark matter. In this research, we used molecular dynamics simulations to study the bubble nucleation threshold, and we performed a Monte Carlo simulation using SRIM to obtain the nuclear recoil efficiency curve. The goal is to construct a physics model to explain the discrepancy observed between the experimental results and the current Seitz model. The preliminary results will be presented and compared with existing experimental data from bubble chamber detectors.
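In the Seitz picture, nucleation requires a localized deposition exceeding a threshold set by the critical bubble radius; in a commonly used schematic form (conventions vary between treatments),
$r_c = \frac{2\sigma}{P_b - P_l}$, $\quad E_T \approx 4\pi r_c^2\left(\sigma - T\frac{\partial\sigma}{\partial T}\right) + \frac{4}{3}\pi r_c^3\,\rho_b\,(h_b - h_l) - \frac{4}{3}\pi r_c^3\,(P_b - P_l)$,
where $\sigma$ is the surface tension, $P_b$ and $P_l$ the pressures inside the bubble and in the liquid, $\rho_b$ the vapour density, and $h_b - h_l$ the specific enthalpy difference between the phases.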
The Spherical Proportional Counter (SPC) is used in NEWS-G to search for low-mass Weakly Interacting Massive Particles (WIMPs). UV laser and Ar-37 calibration data were previously taken at the Laboratoire Souterrain de Modane (LSM) with a 1.35 m diameter SPC filled with pure CH4 gas. To verify our understanding of the detector behaviour and of the physics model we are using, a simulation of the SPC response to these two sets of calibration data is needed. The primary electrons originating from the same event drift toward the high-voltage sensor, and a current is induced by the motion of secondary ions drifting away from the sensor. How much diffusion a swarm of electrons undergoes is parametrized by the “rise time” of the integrated charge pulse. Both the rise times and the drift times of electrons can be affected by “space charges”: secondary ions created near the sensor that distort the overall electric field within the detector. The simulation results will be compared with the calibration data and the effect due to space charges will be discussed. Finally, I will discuss the implications of the simulation results for cut efficiencies and WIMP signal acceptance, used to further extract dark matter cross-section exclusion limits.
NEWS-G is a direct detection dark matter experiment specializing in low mass (sub ~1 GeV) WIMP (Weakly Interacting Massive Particles) searches. NEWS-G uses spherical proportional counters (SPCs), a type of gas-ionization detector capable of observing the signal from single electrons via a small (~1 mm radius) high-voltage anode sensor. While SPCs primarily use noble gases as their target medium, methane (CH4) is also a suitable gas due to its high concentration of hydrogen atoms, optimal for observing low mass WIMP interactions. 300 mbar of pure CH4 was used as the target medium during the 2019 measurement campaign at the Laboratoire Souterrain de Modane with “SNOGLOBE”, NEWS-G’s 140 cm SPC. However, a disadvantage of NEWS-G’s detectors is that there is currently no reliable way of monitoring the absolute concentrations of the gases inside, which is crucial for accurately determining the target mass. At the University of Alberta, we have been working on improving our gas sensing capabilities by developing a laser absorption spectroscopy (LAS) system designed to measure concentrations of CH4 in circulation with a 30 cm SPC. In this talk, I will outline the development and testing of this new LAS system for live CH4 monitoring and its use alongside NEWS-G’s radon trapping setup.
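The quantity underpinning such an LAS concentration measurement is the Beer-Lambert law (in a schematic, idealized form; line-shape and pressure-broadening corrections are omitted):
$I(\nu) = I_0(\nu)\, e^{-\sigma(\nu)\,n\,L}$,
where $I_0$ is the incident intensity, $L$ the optical path length, $\sigma(\nu)$ the absorption cross-section of the probed CH4 line, and $n$ the CH4 number density, so the measured absorbance yields the absolute gas concentration.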
The amplification of intense, ultrashort laser pulses nearly four decades ago revolutionized ultrafast and strong field physics, creating many active fields of research such as femtosecond and attosecond science, and laser-based surgeries. More recently, optical parametric amplifiers (OPAs) are driving the next generation of ultrafast and intense light sources because of their phase stability, wavelength tuneability, and high pulse contrast. However, the bandwidth of OPAs is limited by the phase matching of the crystal, increasing the pulse duration. In this talk, we theoretically and experimentally investigate the amplification of few-cycle pulses by exploiting the nonlinear index of refraction, which we refer to as Kerr instability amplification (KIA).
We find that there is a modification to the phase matching condition in KIA, which leads to the possibility of single-cycle pulse amplification. As in all nonlinear effects, phase matching plays a vital role in the efficiency of the process. In KIA, the frequency-dependent index of refraction, the nonlinear index of refraction, the pump intensity, and the transverse momentum of the signal all determine the phase matching. For example, in our simulations in magnesium oxide (MgO), when pumped at intensities $> 10^{13}$ W/cm$^2$ in the near-infrared (IR), the phase matching is optimized at 4$^\circ$ over an octave of spectrum, allowing for the amplification of 5 fs pulses. When pumped in the short-wave IR, we calculate multi-octave amplification from $1 - 6$ $\mu$m, well-suited for ultrafast strong-field physics experiments in condensed matter.
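The ingredient underlying KIA is the intensity-dependent (Kerr) refractive index; in its simplest schematic form,
$n(\omega, I) = n_0(\omega) + n_2 I$,
where $n_0(\omega)$ is the linear dispersion, $n_2$ the nonlinear index, and $I$ the pump intensity. The pump-induced index modulation enters the phase-matching condition together with the signal's transverse momentum, which is why the optimum amplification appears at a nonzero angle.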
We verify our simulations experimentally and find compression through amplification in the case of 100 fs pulses. Using a 100 fs Ti:Sapphire laser as the pump, we amplify pulses by nearly four orders of magnitude from the visible to the infrared in a 0.5 mm MgO crystal. The amplification of these longer pulses leads to spectral broadening, and when the pulses are measured with a frequency resolved optical gating (FROG) setup, we find that they are nearly transform limited. The experimental findings, such as the resulting dispersion, amplification, tuneability, and angle dependence, agree with our simulations.
Ultrashort femtosecond to attosecond laser pulses of electromagnetic radiation are an essential tool for measuring ultrafast phenomena. Such pulses, due to their short duration, can have high intensity and minimal heat transfer. When working with ultrashort pulses it is crucial to characterize them by determining their amplitude and phase. There are numerous methods to characterize these pulses; frequency resolved optical gating (FROG) is one widely used method. The FROG trace obtained through measurement of an ultrashort pulse is processed to obtain the phase and intensity of the pulse. Conventional processing methods generally require a full spectrogram and are iterative, taking several seconds to execute. A computationally efficient signal analysis method, based on convolutional neural networks, has been developed to provide fast ultrashort pulse characterization at low signal-to-noise ratio and without a full spectrogram.
Deep learning with neural networks is a technique for solving complex nonlinear problems. Convolutional neural networks were optimized to invert the FROG trace to obtain the pulse amplitude and phase. Simulations were used to train the network based on pulses of different widths passing through a dispersive medium with up to fourth-order dispersion. Additional noise was added to the phase, to increase the diversity of sample pulse shapes, and to the FROG trace, to improve robustness. The performance of this algorithm was evaluated on simulated FROG traces and compared to a conventional singular value decomposition method. The neural network was able to characterize pulses three orders of magnitude faster than the traditional method and does not require a full spectrogram to be sampled.
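As a hedged illustration of this approach (the architecture, sizes, and names below are hypothetical stand-ins, not the network described above), a convolutional network can be set up to map a FROG trace directly to sampled pulse amplitude and phase:

```python
import torch
import torch.nn as nn

class FrogNet(nn.Module):
    """Toy CNN mapping a 64x64 FROG trace to a 128-point pulse
    representation (64 amplitude samples + 64 phase samples)."""
    def __init__(self, n_out=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 256), nn.ReLU(), nn.Linear(256, n_out)
        )

    def forward(self, trace):  # trace: (batch, 1, 64, 64)
        return self.head(self.features(trace))

# Training would minimize, e.g., the MSE between predicted and simulated pulses:
model = FrogNet()
trace = torch.rand(8, 1, 64, 64)   # stand-in for simulated FROG traces
target = torch.rand(8, 128)        # stand-in for [amplitude | phase] labels
loss = nn.functional.mse_loss(model(trace), target)
loss.backward()
```

Once trained on simulated traces, a single forward pass replaces the iterative retrieval loop, which is where the speed-up reported above comes from.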
A new geometry, Focal Cone High Harmonic Generation (FCHHG), is presented in which the incoming cone of light is focused through a gas sheet, leading to a focusing beam of harmonic radiation. Using 100 TW to 1 PW laser pulses, high energy (microjoule to millijoule) high harmonic pulses should be achievable. Such a focusing geometry generates a converging cone of high harmonic radiation, producing a high intensity high harmonic hot spot (HHHS) at focus. An experimental investigation of this scheme was carried out at the Centro de Láseres Pulsados (CLPU) in Salamanca, Spain. We will present the initial findings of this study using a rectangular gas sheet target of argon gas generated by a puffed gas jet. The rectangular gas sheet is chosen to provide a region of uniform areal density over which the laser can interact. The interaction area is scaled to maintain the interaction intensity in the optimum range of $1-2\times10^{14}$ W cm$^{-2}$ for efficient harmonic generation, so as not to exceed the saturation intensity of argon. A number of diagnostics were employed to characterize the emission, including spatial imaging with an XUV CCD camera, quantitative XUV diode measurements, x-ray transmission grating measurements of the spectra, divergence measurements using patterned aperture plates, and spatial coherence measurements using knife-edge diffraction. The effect of a non-uniform gas region was also explored by scanning the laser beam away from the gas jet exit to regions where the gas jet expands and becomes more non-uniform. In all cases, the primary laser light was blocked using multiple layers of 800 nm thick aluminum foil, which led to significant attenuation of the high harmonic signal in the current experiments. The initial results will be presented and scaling to efficient high energy, high harmonic pulse sources will be discussed.
With the invention of lasers, the intensity of a light wave was increased by orders of magnitude over what had been achieved with a light bulb or sunlight. This much higher intensity led to new phenomena being observed, such as violet light coming out when red light went into the material. After Gérard Mourou and I developed chirped pulse amplification, also known as CPA, the intensity again increased by more than a factor of 1,000, and it once again made new types of interactions possible between light and matter. We developed a laser that could deliver short pulses of light that knocked the electrons off their atoms. This new understanding of laser-matter interactions led to the development of new machining techniques that are used in laser eye surgery or micromachining of glass used in cell phones.
Biological processes are stochastic reaction networks that operate far from thermodynamic equilibrium. Furthermore, even the best-known biological processes are not completely characterized in terms of mechanistic interactions between components. This combination makes analyzing noise properties challenging because small differences in rate functions or network topology can drastically affect stochastic dynamics in complex systems. Instead of ignoring or guessing unknown details we analyze classes of systems that share some features but are left to vary arbitrarily in all unknown features. Such an approach allows us to derive inequalities that can reveal general trade-offs in controlling noise in biological processes. I will present proven or conjectured bounds on stochastic fluctuations in systems that achieve robust steady states, systems with finite molecular lifetimes, and systems that attempt to suppress spontaneous fluctuations in specific components.
Intrinsically disordered proteins (IDPs) play critical roles in regulatory protein interactions, but detailed structural and dynamical characterization of their ensembles remains challenging, both in isolation and when they form dynamic ‘fuzzy’ complexes. Such is the case for mRNA cap-dependent translation initiation, which is regulated by the interaction of the predominantly folded eukaryotic initiation factor 4E (eIF4E) with the intrinsically disordered eIF4E binding proteins (4E-BPs) in a phosphorylation-dependent manner. Fluorescence spectroscopy provides crucial insights into the dimensions and dynamics of IDPs which inform the molecular basis of their function. Single-molecule Förster resonance energy transfer showed that the conformational changes of 4E-BP2 induced by binding to eIF4E are non-uniform along the sequence; while a central region containing both motifs that bind to eIF4E expands and becomes stiffer, the C-terminal region is less affected. Fluorescence anisotropy decay revealed a non-uniform segmental flexibility at different sites along the chain. Dynamic quenching of these fluorescent probes by intrinsic aromatic residues, measured via fluorescence correlation spectroscopy, reports on transient intra- and inter-molecular contacts on ns-s timescales. The chain rigidity around sites in the C-terminal region, far away from the two binding motifs, significantly increased upon binding to eIF4E, suggesting that this region is also involved in the highly dynamic 4E-BP2:eIF4E complex. Our time-resolved fluorescence data paint a sequence-level rigidity map of three states of 4E-BP2 differing in phosphorylation or binding status and distinguish regions that form contacts with eIF4E. We are now conducting single-molecule experiments aimed at resolving site-specific interactions and kinetics of the eIF4E:4E-BP2 complex. Our results constitute an important step towards a mechanistic understanding of the biological function of IDPs via integrative modelling.
We report the results of experimental investigations involving photobiomodulation (PBM) of living cells, tubulin, and microtubules in buffer solutions exposed to near-infrared (NIR) light emitted from an $810$ nm LED with a power density of $25$ mW/cm$^2$ pulsed at a frequency of $10$ Hz. In the first group of experiments, we measured changes in the alternating current (AC) ionic conductivity in the $50$ - $100$ kHz range of HeLa and U251 cancer cell lines as living cells, exposed to PBM for $60$ minutes, and observed increased resistance compared to the control experiments. In the second group of experiments we investigated the stability and polymerization of microtubules under exposure to PBM. The protein buffer solution used was a mixture of Britton-Robinson buffer (BRB80 aka PEM) and microtubule cushion buffer. Exposure of Taxol™-stabilized microtubules ($\sim 2$ $\mu$M tubulin) to the LED for $120$ minutes, resulted in gradual disassembly of microtubules observed in fluorescence microscopy images. These results were compared to controls where microtubules remained stable. In the third group of experiments we performed turbidity measurements (absorbance readings at $340$ nm) throughout the tubulin polymerization process to quantify the rate and amount of polymerization for exposed versus unexposed tubulin samples, using tubulin re-suspended to final concentrations of $\sim22.7$ $\mu$M and $\sim45.5$ $\mu$M in the same buffer solution as before. Compared to the unexposed control samples, absorbance measurement results demonstrated a slower rate and reduced overall amount of polymerization in the less concentrated tubulin samples exposed to PBM for $30$ minutes with the same parameters mentioned above. Paradoxically, the opposite effect was observed in the $45.5$ $\mu$M tubulin samples, demonstrating a remarkable increase in the polymerization rates and total polymer mass achieved after exposure to PBM. These results on the effects of PBM on living cells, tubulin, and microtubules are novel, further validating the modulating effects of PBM and contributing to designing more effective PBM parameters. Finally, potential consequences for the use of PBM in the context of neurodegenerative diseases are discussed.
Protein clustering occurs in living cells, often involving phase transitions, and can be an essential step in signal transduction. Protein-protein interaction strength and diffusion times dictate when and how clusters form, with diffusion times depending on the geometric properties of cell compartments. The evolution of protein cluster sizes, and any signals sent by the clusters, can be controlled by coarsening dynamics.
We investigate the effects of geometry on controlling phase behaviour of proteins on the endoplasmic reticulum (ER), a tubular network that spans much of the cell. The protein IRE1α resides on the surface of the ER and performs essential signaling as part of the Unfolded Protein Response, which is critical for the healthy function of the cell.
Using stochastic simulations, we explore how the geometry of the tubular surface of the ER, and of the network that the tubes form, affects the diffusion and the clustering of IRE1α proteins on the ER’s tubular surface. The simulation applies a kinetic Monte Carlo algorithm to represent the IRE1α proteins as a lattice gas on a single tube. We find that clustering substantially increases on tubes that are narrower than the typical ER tube diameter of 100 nm. Furthermore, the simulations yield IRE1α protein clustering at physiological IRE1α protein concentrations, estimated to be only 1-10 proteins per micrometre of tube length. We also explore the role of tube geometry in determining typical cluster formation times.
IRE1α signaling is integral to cell health and function, and its malfunction is tied to the development of neurodegenerative diseases and cancer. We aim to further our understanding of protein clustering on the ER to provide insight into geometric regulation of phase behaviour and cellular signaling.
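In the spirit of the lattice-gas simulations described above, a minimal Metropolis-style Monte Carlo sketch on a cylindrical (tube-surface) lattice might look as follows (all parameters and names are hypothetical; the actual study uses a kinetic Monte Carlo scheme with physical rates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Cylindrical lattice: n_theta sites around the tube circumference, n_z along its axis
n_theta, n_z = 16, 200
J, beta = 1.0, 1.0                           # nearest-neighbour attraction, inverse temperature
occ = rng.random((n_theta, n_z)) < 0.02      # dilute initial protein coverage

def neighbour_count(occ, i, j):
    """Occupied nearest neighbours; periodic around theta, closed ends in z."""
    n = int(occ[(i - 1) % n_theta, j]) + int(occ[(i + 1) % n_theta, j])
    if j > 0:
        n += int(occ[i, j - 1])
    if j < n_z - 1:
        n += int(occ[i, j + 1])
    return n

def hop_attempt(occ):
    """One Metropolis hop attempt of a random particle to a random adjacent empty site."""
    xs, ys = np.nonzero(occ)
    k = rng.integers(len(xs))
    i, j = int(xs[k]), int(ys[k])
    di, dj = ((-1, 0), (1, 0), (0, -1), (0, 1))[rng.integers(4)]
    i2, j2 = (i + di) % n_theta, j + dj
    if not (0 <= j2 < n_z) or occ[i2, j2]:
        return
    occ[i, j] = False                        # remove particle so bond counts exclude it
    dE = -J * (neighbour_count(occ, i2, j2) - neighbour_count(occ, i, j))
    if rng.random() < np.exp(-beta * max(dE, 0.0)):
        occ[i2, j2] = True                   # accept the hop
    else:
        occ[i, j] = True                     # reject: restore the particle

for _ in range(100_000):
    hop_attempt(occ)

# Crude clustering measure: average number of occupied neighbours per protein
bonds = sum(neighbour_count(occ, int(i), int(j)) for i, j in zip(*np.nonzero(occ)))
print("mean occupied neighbours per protein:", bonds / occ.sum())
```

Narrowing the tube (reducing n_theta at fixed protein density) is the kind of geometric change whose effect on clustering the study quantifies.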
The 120-residue 4E-BP2 (BP2) protein undergoes a transition from disordered to partially folded upon multi-site phosphorylation, reducing its binding affinity with eIF4E (4E) and thus regulating the initiation of translation in neuronal cells. Although BP2 is an attractive target of anticancer drugs, its disordered nature makes it challenging to model. An initial ensemble of BP2 conformers was generated using FastFloppyTail (FFT), a Rosetta-based program. The non-phosphorylated (NP) conformers were generated by applying the FFT algorithm to the entire 120-residue chain, while the 5-phosphorylated (5P) structures were produced by fixing the 18-62 folded domain and applying FFT sampling to the N- and C-terminal tails only.
To balance uncertainties in the biophysical experiments against those in the initial conformational ensemble, structural refinement was applied through the Bayesian Maximum Entropy (BME) method. The degree of reweighting is determined by optimizing agreement with various restraints on both local and nonlocal scales: Small-Angle X-ray Scattering (SAXS), Chemical Shifts (CS) and single-molecule Förster Resonance Energy Transfer (smFRET). Paramagnetic Relaxation Enhancement (PRE) data were withheld and used as validation criteria, external to the refinement process. By implementing differential weighting of restraints and mitigating overfitting, the resulting NP and 5P BP2 ensembles were found to be in good agreement with all available experimental data. Secondary structure analysis reveals local structure of biological relevance for both BP2 phosphoforms and the replication of the canonical 4E-binding helix in NP BP2. Applying clustering algorithms to partition the conformational landscape leads to distinct and significantly populated structural states that provide new insights into the extended dynamic interaction interface between 4E-BP2 and eIF4E.
A key challenge of systems biology is to translate cell heterogeneity data obtained from single-cell sequencing or flow cytometry experiments into causal and dynamic interactions. We show how static population snapshots of gene expression reporters can be used to infer causal and dynamic properties of gene regulatory networks without using perturbations. For instance, we derive correlation conditions that detect causal interactions and closed-loop feedback regulation in gene regulatory networks from snapshots of transcript levels. Furthermore, we show how oscillating transcription rates can be identified from the variability of co-regulated fluorescent proteins with unequal maturation times. Our approach exploits the fact that unequal fluorescent reporters effectively probe their upstream dynamics on separate time-scales, such that their correlations implicitly encode information about the temporal dynamics of their upstream regulation. Synthetic genetic circuits provide exciting opportunities to verify these co-variability conditions with well characterized engineered systems. Lastly, we report on ongoing experiments in which we quantitatively test our theory with variants of a synthetic oscillator, the Repressilator, in single cells using time-lapse microscopy and microfluidics.
Plasma provides unique processing conditions for the synthesis of nanocatalysts and for gas conversion, as well as for the direct coupling of renewable electricity with chemical processing. Catalysis unlocks efficient reaction pathways and enables the performance necessary for process industrialization. Combined, the two provide unique avenues for chemical process electrification, an essential vector of the sustainability transition. Over the last twenty years, our laboratory has developed elementary units and accompanying processes linking the green electron from the electrical outlet to the green chemicals and processes of the energy transition. The journey begins with electrical power supply and reactor design to achieve uniquely controlled plasma chemistry. I will describe our recent work on nanosecond-radiofrequency (RF) plasma sources for transient/repetitive plasma generation under the challenging conditions of reactive gas mixtures and pressures above atmospheric. I will describe how pulsed laser ablation combined with RF plasma functionalization is used to synthesize unique nanocatalysts with a reduced environmental footprint. Preliminary results on the dry reforming of methane and on ammonia synthesis will be presented. In the second part of the talk, I will outline the limits of state-of-the-art gas-conversion plasma reactor technologies and introduce promising opportunities enabled by topological design and recent advances in additive manufacturing. These novel approaches pave the way to plasma process intensification via reactor miniaturization, parallelization and integration.
Dielectric barrier discharges are an easy way to generate cold atmospheric-pressure plasmas. For millimeter-range gas gaps, a streamer breakdown generally occurs, resulting in filamentary discharges. These consist of many short-lived plasma channels randomly distributed across the gas gap, which can be a serious drawback, for example in surface-coating processes where a homogeneous and dense layer is required [1].
Nevertheless, the possibility of obtaining homogeneous discharges under similar conditions has long been established [2]. To do so, it is necessary to promote a Townsend breakdown by slowing down the ionization process; this can be done by supplying seed electrons before discharge ignition. These so-called pre-ionization mechanisms generally result from the previous discharges and are thus referred to as memory effects. They strongly depend on the operating conditions, such as the background gas or the dielectric material properties, and can occur both in the gas bulk and at the surface.
Discharges generated in different mixtures of N2 and O2 illustrate the diversity of these mechanisms well. In pure nitrogen, it is now well accepted that N2(A) metastable molecules play a significant role. As they diffuse towards the dielectric surface, they can be responsible for the release of trapped electrons from the surface. When a very small amount of O2 (up to 500 ppm) is added, the number of seed electrons dramatically increases, suggesting that a new mechanism arises. A possible explanation involves associative ionization reactions between N(2P) metastable atoms generated by N2(A) and O(3P) atoms [3,4]. At larger oxygen concentrations, the strong quenching of N2(A) dramatically reduces its lifetime. Under these conditions, it is very likely that surface processes such as spontaneous electron desorption are responsible for the pre-ionization of the gas.
During this presentation, a non-exhaustive overview of the different pre-ionization mechanisms will be provided. This understanding will then be used to identify the key conditions for operating homogeneous discharges in various gases, such as nitrogen, nitrogen with oxidizing admixtures, air, and CO2.
References
[1] https://doi.org/10.1002/ppap.201200029
[2] https://doi.org/10.1051/epjap/2009064
[3] https://doi.org/10.1088/1361-6463/ab7518
[4] https://doi.org/10.1088/1361-6463/aad472
Synthetic polymers are well known to be hydrophobic (non-wetting) in their natural state, due to their inherently low surface free energy, γ (in mN/m). Surface modification of polymers by exposure to cold plasma for enhanced wettability has been practiced on vast industrial scales (i) by atmospheric-pressure (AP) “corona” discharges since the 1940s, and (ii) by other cold plasma processes, either at AP or under partial vacuum, in more recent decades. In this process, polar functional groups, usually bearing O and/or N atoms, become covalently grafted to the outermost polymer surface.
A well-documented drawback of such grafting by (i) and (ii) above is known as “hydrophobic recovery” or “ageing”: the increased γ of a freshly treated polymer partially reverts to its initial low value (about 28 mN/m for polyethylene or polypropylene, from values above 50 mN/m immediately after treatment). The reason for this thermodynamically driven phenomenon is that polar moieties become buried up to tens of nm below the outer surface by macromolecular “reptation”, a motion that occurs at normal (non-cryogenic) temperatures.
This laboratory has for many years been modifying polymer surfaces by plasma to promote adhesion of living cells for biomedical applications. The solid polymers have been either (a) normal films, typically 50 µm thick, with smooth top surfaces; or (b) fibrous mats composed of > 90% void random networks of sub-micrometric electro-spun fibrils. We have evidence, based on the time dependence of water contact angle (WCA) measurements, that (b) may resist ageing more than (a). A possible reason is the much higher surface-to-volume ratio of (b), which favours near-surface cross-linking of polymer chains by ion bombardment and/or VUV irradiation. We present preliminary results based on WCA and surface-analytical (XPS) measurements.
Microwave plasmas are widely studied and have characteristics that make them unique: they can be generated at low and high pressures, they have relatively high densities of charged particles, and they can be generated in different cavity geometries. A new way to ignite microwave plasmas was recently developed using time reversal and a nanosecond pulsed generator. This method allows dynamic control of the plasma position and the study of plasmas on timescales rarely examined. The ignition of such plasmas has been investigated [1], but the time- and space-resolved plasma characteristics remain unexplored. Imaging measurements were performed at different pressures. It was found that in nominally pure argon plasmas, the space-integrated light-emission intensity rises sharply over a few tens of nanoseconds and then decays on timescales of hundreds of nanoseconds. In this work, optical emission spectroscopy of argon 4p-4s transitions coupled with collisional-radiative modeling [2] is used to examine the behavior of the electron temperature and excited-state populations during the ignition and extinction stages. For pressures between 1.5 and 4 Torr, even with maximum plasma dimensions in the centimeter range, radiation trapping is found to play a significant role in the analysis of argon line-emission intensities. In addition, the populations of argon 4s states and charged species also influence discharge ignition through a so-called memory effect between subsequent discharges.
Cold atmospheric plasma science is a continuously growing domain. Agriculture, material synthesis, medicine, air and surface decontamination, food processing, and many more: the application fields of this technology, omnipresent yet invisible to the broader society, seem limitless. Knowledge about plasma sources and the underlying physics is constantly improving with new designs and multidisciplinary applications. The ability of cold atmospheric-pressure plasma to generate reactive oxygen and nitrogen species (RONS) relevant to the most prominent applications, such as wound healing, pathogen inactivation, and methane reforming, originates from the electric-field characteristics of the plasma. It is thus of the utmost importance to have an efficient, sensitive, and high-resolution technique to determine the plasma electric field in time and space. The method of choice is electric field-induced second harmonic (E-FISH) generation, a by now well-established non-perturbative technique for measuring the amplitude and orientation of cold atmospheric plasma electric fields. It exploits the hyperpolarizability that appears in a gas subjected to an electric field: a laser probes the medium, and the optical second-harmonic signal is detected to determine the electric field in the gas. Although E-FISH allows tunable time resolution, limited only by the pulse duration of the laser used, which can reach the femtosecond scale, the technique presents some issues: spatial resolution along the beam axis is of the order of the interaction length of the beam and the plasma, and sensitivity with PMT detection only goes down to the order of 100 V/cm. Work on enhancing these two characteristics of E-FISH has been carried out by our team and collaborators. Using a femtosecond laser, novel approaches were developed and optimized. The results presented confirm the improvement of the electric-field detection technique and will deepen our knowledge of the spatio-temporal electric-field distribution of cold atmospheric plasmas.
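In the standard description of E-FISH, the second-harmonic intensity scales as

$$ I_{2\omega} \;\propto\; \left(\chi^{(3)}\, E_{\mathrm{ext}}\right)^{2} I_{\omega}^{\,2}, $$

so that, after calibration, the external field amplitude follows from $E_{\mathrm{ext}} \propto \sqrt{I_{2\omega}}/I_{\omega}$.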
I review the physics motivation for colliders that come after the LHC, including Higgs properties, the matter asymmetry of the Universe, and the hierarchy problem. The reach of several classes of proposed colliders (electron-positron, hadron, muon) in well-motivated scenarios is described.
Scheduled to begin data taking in 2029, the High-Luminosity Large Hadron Collider (HL-LHC) will be the pre-eminent energy frontier collider for the foreseeable future. Its unique dataset of unprecedented size will allow for a huge range of precision measurements and searches for new physics. This talk will outline the physics opportunities for the ATLAS detector in utilizing this dataset, highlighting in particular what we expect to learn about the Higgs boson and the mechanism of electroweak symmetry breaking. Challenges related to operating at extremely high pile-up will be discussed, as well as plans to exploit the capabilities of the upgraded ATLAS detector. Finally, the complementary and critical role that the HL-LHC plays in the landscape of future colliders will be described.
Scheduled to begin operation early in 2029, the High-Luminosity LHC (HL-LHC) will be the largest collider ever built. With an instantaneous luminosity ten times larger than that of the LHC, it will allow for an exciting physics program and for discoveries that could signal new physics beyond the Standard Model. These excellent possibilities come with major experimental and technological challenges. To address these, the ATLAS detector will undergo an extensive upgrade program known as the Phase-II detector upgrades. In this presentation, an overview of the detector upgrades will be provided, with emphasis on those with extensive Canadian participation. As an outlook, new technology opportunities for future collider applications and the opportunities for Canadian involvement will be briefly outlined.
The Standard Model is the most comprehensive present-day precision theory of particle interactions. Nonetheless, many key questions in subatomic physics and cosmology remain unanswered. The discovery of the Higgs boson at the Large Hadron Collider (LHC) has raised new questions. The International electron-positron Linear Collider (ILC) is ready to be deployed as the next high-energy world facility for particle physics. First, the ILC project status will be summarized. Then, a set of other potential lepton colliders, which could operate in the energy region from the Z boson mass to the TeV scale, will be presented. These colliders share the common goal of producing large samples of Higgs bosons, although they can also be operated to study other physics phenomena. Precision experiments at future colliders will be essential for unambiguously interpreting LHC physics discoveries. TeV-scale physics demands much better performance than previous or current collider detectors have achieved. The collider and detector challenges will be described, with a focus on specific tracking, calorimetry and accelerator R&D activities in Canada. The overview will also cover potential TRIUMF accelerator wire-corrector systems for the HL-LHC, depict ILC opportunities, and look at ways to nurture instrumentation for a new generation of particle detectors.
All are welcome to this session that will honour the life of Professor Werner Israel and his enormous contributions to theoretical physics.
Active-learning techniques are as useful for teaching graduate-level quantum field theory (QFT) as they are for introductory physics courses. This talk will describe the speaker's experience using these techniques in a QFT course. Students completed readings and online questions ahead of each class and spent class time working through problems that required them to practice the decisions and skills typical of a theoretical physicist. The instructor monitored these activities and regularly provided timely feedback to guide their thinking. Instructor-student interactions and student enthusiasm were similar to those encountered in one-on-one discussions with advanced graduate students. Course coverage was not compromised. The teaching techniques described here are well suited to other advanced courses.
This symposium is organized by the CAP's Director of Professional Affairs, Daniel Cluff, and Director of Private Sector Physics, Ian D'Souza, in collaboration with the Division of Applied Physics and Instrumentation (DAPI).
The Electron-Ion Collider (EIC) is a pioneering new particle accelerator that will be built on the current site of the Relativistic Heavy Ion Collider at Brookhaven National Laboratory. It will provide high energy collisions of polarized electrons with polarized protons and ions, allowing for experiments that probe the nature of strong interactions to unprecedented precision. The EIC Project has grown and evolved rapidly since the official launch by the U.S. Department of Energy in 2020. This talk will discuss the primary physics themes driving the EIC effort, the recent milestones achieved by the project and the efforts to establish two complementary detectors at adjacent interaction regions.
Understanding the properties of nuclear matter and its emergence through the underlying partonic structure and dynamics of quarks and gluons requires a new experimental facility in hadronic physics known as the Electron-Ion Collider (EIC). The EIC will address some of the most profound questions concerning the emergence of nuclear properties by precisely imaging gluons and quarks inside protons and nuclei: their distributions in space and momentum, their role in building the nucleon spin, and the properties of gluons in nuclei at high energies. In January 2020 the EIC received CD-0 approval and Brookhaven National Laboratory was selected as the site; in June 2021 the EIC Project was granted CD-1. This presentation will highlight the experimental program and the plans for two complementary general-purpose detectors to be built by the vibrant international EIC user community.
As part of large international collaborations, several Canadian universities are shaping the development of the Electron-Ion Collider, its experiments and their detector technologies. In this presentation I will give an overview of current and future Canadian activities from coast to coast, and present opportunities for researchers to join these efforts.
For living cells to maintain spatial organization and functional capacity, they must deliver certain proteins to particular organelles and distribute the proteins within the organelles. This talk will focus on the physics of protein localization in mitochondria, an organelle that forms dynamic spatial networks that can span much of the cell volume. I will describe how protein translation and cellular geometry combine to push localization of mRNA to mitochondria out of equilibrium. Small mRNA numbers cause the nature of mRNA association to mitochondria to impact the scale of protein concentration fluctuations within mitochondria, which can be smoothed out with the help of mitochondrial fusion and fission dynamics. From these mitochondrial dynamics emerge spatial networks, formed from extended and branched mitochondrial tubes, that facilitate protein transport. I will describe how spatial network characteristics control the diffusive search time to a target. Overall, diffusion, geometry, and nonequilibrium conditions can combine to regulate protein localization to mitochondria.
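As a toy illustration of how network topology affects diffusive search (generic graphs, not a mitochondrial reconstruction), one can compare the mean number of random-walk steps needed to reach a target node on an unbranched chain versus a network with extra connections:

```python
import networkx as nx
import numpy as np

# Random-walk search times on two toy network topologies (illustrative only):
# extra loops/branches, as created by fusion, change the search time to a target.
rng = np.random.default_rng(3)

def mean_search_steps(G, target, n_walks=2000, max_steps=100_000):
    nodes = list(G.nodes)
    steps = []
    for _ in range(n_walks):
        pos = nodes[rng.integers(len(nodes))]     # random starting node
        for s in range(max_steps):
            if pos == target:
                steps.append(s)
                break
            nbrs = list(G.neighbors(pos))
            pos = nbrs[rng.integers(len(nbrs))]   # unbiased hop
    return np.mean(steps)

chain = nx.path_graph(50)                                        # unbranched tube
looped = nx.connected_watts_strogatz_graph(50, 4, 0.2, seed=3)   # extra connections
print("chain :", mean_search_steps(chain, target=25))
print("looped:", mean_search_steps(looped, target=25))
```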
Living cells are divided into functional compartments called organelles. In eukaryotes, lipid membranes separate organelles from the cytoplasm such that each compartment maintains a distinct biochemical composition that is tailored to its function. In contrast, prokaryotes typically lack internal membranes and instead must use other mechanisms to spatially organize the cell. Using fluorescence imaging and single-molecule tracking, we show that E. coli RNA polymerase (RNAP) organizes into clusters through liquid-liquid phase separation (LLPS). RNAP clusters, or "condensates", increase cell survival during stress, and appear to regulate ribosome biogenesis in response to nutrient availability. Our results demonstrate that bacteria, like eukaryotic cells, use LLPS to generate membraneless organelles that spatially organize biochemical processes to optimize cell fitness in various environments.
Prion proteins can fold into different structures, where one fold (the prion form) self-propagates by converting normally folded copies of the protein into the prion form. In mammals, prions are the cause of untreatable neurodegenerative diseases such as Creutzfeldt-Jakob disease. Intriguingly, prion domains (often disordered sequences) are common in yeast but have also recently been found in bacteria and higher eukaryotes, where they act as a non-pathogenic bistable switch that propagates a functionally distinct cellular state. In bacteria, the prion can be propagated for hundreds of cell divisions but is stochastically lost, through an unknown mechanism, in a fraction of the population. It is also unknown whether these bacterial prion domains can adopt different prion folds (known as strains or variants) like their mammalian counterparts, and whether the presence of the prions has a general physiological impact on the cell. In this talk, we address these questions by following thousands of single cells propagating prions for dozens of cell divisions using a microfluidic device and quantitative time-lapse microscopy. We build a stochastic model of the chemical reaction kinetics to recapitulate the properties of the system. I will end by discussing how our findings can provide insights into the biological role of prions in bacteria and into the molecular mechanisms of prion propagation in other organisms.
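A minimal stochastic-kinetics sketch in the spirit of such models (a Gillespie simulation of a generic replication-dilution scheme; the rates and logistic cap are invented placeholders, not the model presented in the talk):

```python
import numpy as np

# Prion aggregates replicate (P -> 2P) and are diluted by growth and
# partitioning (P -> 0). With small aggregate numbers, the prion state is
# occasionally lost by chance, mimicking stochastic curing of a lineage.
rng = np.random.default_rng(4)
r, d = 1.0, 0.9          # replication and dilution rates per aggregate (assumed)
K = 50                   # soft cap: replication slows as P approaches K (assumed)

def run(p0=10, t_end=200.0):
    t, p = 0.0, p0
    while t < t_end and p > 0:
        a_rep = r * p * max(0.0, 1.0 - p / K)   # logistic-limited replication
        a_dil = d * p                           # dilution by cell growth
        a_tot = a_rep + a_dil
        t += rng.exponential(1.0 / a_tot)       # time to the next reaction
        p += 1 if rng.random() < a_rep / a_tot else -1
    return p

lost = sum(run() == 0 for _ in range(500))
print(f"fraction of lineages losing the prion: {lost / 500:.2f}")
```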
Protein diffusion plays a ubiquitous role in molecular signalling pathways. As our understanding of the organization and compartmentalization of the cellular environment progresses, we start to better appreciate the complexity of the diffusive transport of proteins, and the control this transport may exert in signalling. One striking example of a diffusion-controlled process is the search of transcription factors for their target genes inside cell nuclei. We have investigated the target search strategies of two specific transcription factors active in the early fly embryo, Bicoid and Capicua, using different fluorescence methods. We observe the existence of a slow fraction of these proteins, which we attribute to the formation of small mobile phase-separated molecular condensates. I will discuss here how condensate formation may help target search efficiency, and increase both the speed and the precision with which gene expression can be activated or repressed.
DNA topology-relaxing enzymes in the cell nucleus simplify DNA topology by allowing entangled duplex DNA strands to pass through one another, which is essential for many critical nuclear processes, including successful DNA segregation during cell division. Here, we carry out 1-point and 2-point microrheology on a model system of DNA at physiologically relevant concentrations, with and without enzyme activity of topoisomerase II. We find that the aggregate, incoherent effect of the enzyme activity creates randomly fluctuating forces, which drive diffusive-like, non-thermal motion. We combine these measurements of random motion with independent micromechanical measurements and show that the enzyme-driven fluctuations are quantitatively consistent with $1/f$ noise, far from what is expected for thermal motion, and of a completely different 'colour' from non-equilibrium fluctuations in the cytoplasm driven by processive cytoskeletal motors. Our measurements at different energy fluxes could shed light on the connection between the enzyme's maintenance of the system away from thermodynamic equilibrium and its simplification of topology over large length scales, so key to enhancing nuclear transport for many processes.
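A sketch of a typical noise-colour analysis (synthetic trajectory, illustrative only): estimate the power spectrum of a tracer coordinate and fit its log-log slope to distinguish thermal from actively driven fluctuations.

```python
import numpy as np

# Periodogram of a 1D trajectory; pure Brownian motion gives a position
# spectrum ~ 1/f^2, so the fitted exponent should come out near -2 here.
rng = np.random.default_rng(5)
dt = 0.01
x = np.cumsum(rng.normal(scale=np.sqrt(dt), size=2**16))  # stand-in trajectory
f = np.fft.rfftfreq(x.size, dt)[1:]
psd = (np.abs(np.fft.rfft(x - x.mean())[1:]) ** 2) * dt / x.size
sel = (f > 0.1) & (f < 10.0)                  # fit band away from the edges
slope = np.polyfit(np.log(f[sel]), np.log(psd[sel]), 1)[0]
print(f"spectral exponent: {slope:.2f}  (pure diffusion gives about -2)")
```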
Molecular motors are essential for powering directional motion at the cellular level, including transport and sorting of cargo, cell locomotion and division, and remodelling of the extracellular environment. Such molecular motors are made of proteins whose directed motion is coupled to the consumption of chemical free energy. Inspired by such biological machines, significant strides have been made to design and implement synthetic devices capable of directed motion on the nanoscale and the microscale. While these have been impressive achievements, directed motility of a synthetic protein-based motor has thus far not been demonstrated. In this talk I will present our synthesis and characterization of a novel protein-based microscale motor we dub the lawnmower. It comprises a spherical hub decorated with trypsin enzymes; its “burnt-bridge” motion is directed by cleavage of a peptide lawn, which promotes motion towards fresh substrate. We characterize the dynamics of the lawnmower on a 2D surface and in a 1D confined geometry via its mean-squared displacement and speed. The lawnmower is the first example of an autonomous protein-based synthetic motor purpose-built from nonmotor protein components. (Current paper draft: arXiv:2109.10293v2)
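A compact example of the mean-squared-displacement analysis used to classify such motion (synthetic random-walk data as a stand-in; the lawnmower trajectories themselves are not reproduced here):

```python
import numpy as np

# Time-averaged MSD of an (N, 2) trajectory, and its log-log scaling exponent.
def msd(track, max_lag=None):
    n = len(track)
    max_lag = max_lag or n // 4
    return np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag)])

rng = np.random.default_rng(6)
track = np.cumsum(rng.normal(size=(10_000, 2)), axis=0)   # stand-in random walk
m = msd(track)
lags = np.arange(1, len(m) + 1)
alpha = np.polyfit(np.log(lags), np.log(m), 1)[0]
print(f"MSD exponent alpha: {alpha:.2f} (1 = diffusive, >1 = superdiffusive)")
```

Directed, burnt-bridge motion would show up as an exponent above 1 at intermediate lag times.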
The classical view holds that proteins fold into a three-dimensional structure, or native state, which determines the biological function of the protein. According to energy landscape theory, folding proceeds on a rough, funnel-shaped, multidimensional free-energy surface to the native conformation. However, some proteins have recently been discovered to reversibly switch between two entirely different native states, forming exceptions to this rule. Do these so-called metamorphic proteins exhibit energy landscapes with multiple deep funnels corresponding to the different native states? We used an all-atom hybrid model with a potential energy function formed as a linear mixture of physics-based and structure-based potentials. As a case study, we focus on the C-terminal domain (CTD) of the transcription factor RfaH. The CTD undergoes a large-scale structural transition from an α-helical hairpin fold to a 5-stranded β-barrel fold upon dissociation from the N-terminal domain (NTD), which remains structurally stable. We show that our hybrid model reproduces the crucial thermodynamic behavior of the RfaH CTD, i.e., a switch in the global free-energy minimum from one fold to the other upon domain dissociation. Our model suggests that for the isolated CTD, the free-energy landscape has a single funnel corresponding to the β-barrel fold and no detectable funnel for the α-helical state. This behavior is consistent with NMR data on the isolated CTD and shows that a multi-funnel landscape cannot be assumed for metamorphic proteins.
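Schematically, such a hybrid potential interpolates linearly between the two energy terms (the symbols here are generic placeholders, not the specific functional forms used in the study):

$$ E_{\mathrm{hybrid}}(\mathbf{x}) \;=\; (1-\lambda)\,E_{\mathrm{physics}}(\mathbf{x}) \;+\; \lambda\,E_{\mathrm{structure}}(\mathbf{x}), \qquad 0 \le \lambda \le 1. $$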
Important insights into the signaling mechanisms of G Protein-Coupled Receptors (GPCRs) can be gained from their supramolecular assembly. Recent studies in our lab have shown that the M2 muscarinic receptor (M2R), as well as its cognate G protein (Gi), can be purified as oligomers, yet the size and dynamics of these oligomers, as well as their function, are not fully understood in vivo. We used single-molecule fluorescence techniques, such as single-particle tracking (SPT) and single-molecule photobleaching (smPB), to identify the oligomers of M2R in live HEK293 cells. The receptors were expressed with a HaloTag at their extracellular interface, allowing for labelling with HaloTag-ligand (HTL) fluorophores such as JF549-HTL. The movement of M2 receptors in the cell membrane is spatially and temporally heterogeneous, transitioning between normal and anomalous diffusion regimes. As controls, SPT measurements were performed on purely monomeric (CD86) and dimeric (CD28) membrane proteins. Intensity traces of immobile single receptor complexes in the membrane of fixed cells were analyzed using in-house smPB code based on change-point and Bayesian algorithms. The results show a distribution of multiple stepwise decreases and indicate that M2R-mediated signaling proceeds, at least in part, via oligomers of receptors and G proteins.
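For illustration, here is a naive change-point step counter of the kind used on photobleaching traces (binary segmentation by least squares; a simplified stand-in for the in-house change-point/Bayesian code, with invented noise parameters):

```python
import numpy as np

# Count discrete bleaching steps by recursively splitting a trace wherever a
# single change point reduces the squared error by more than a threshold.
def split_cost(y):
    """best single change point by least squares; returns (gain, index)"""
    n = len(y)
    total = np.sum((y - y.mean()) ** 2)
    best = (0.0, None)
    for i in range(5, n - 5):                 # keep segments >= 5 points
        cost = (np.sum((y[:i] - y[:i].mean()) ** 2)
                + np.sum((y[i:] - y[i:].mean()) ** 2))
        best = max(best, (total - cost, i), key=lambda t: t[0])
    return best

def count_steps(y, min_gain=50.0):
    gain, i = split_cost(y)
    if i is None or gain < min_gain:
        return 0
    return 1 + count_steps(y[:i], min_gain) + count_steps(y[i:], min_gain)

rng = np.random.default_rng(7)
levels = np.repeat([4, 3, 2, 1, 0], 200)      # synthetic trace: four bleach steps
trace = levels + rng.normal(scale=0.3, size=levels.size)
print("detected bleaching steps:", count_steps(trace))
```

The number of detected steps per complex is what feeds the oligomer-size distribution.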
The dependence of the mode-coupling instability threshold in two-dimensional complex plasma crystals on discharge power and gas pressure is studied [1]. It is shown that for a given microparticle suspension at a given discharge power there exist two threshold pressures. Above a specific pressure p_cryst, the monolayer is always in the crystal phase. Below a specific pressure p_MCI, the crystalline monolayer undergoes the mode-coupling instability and the monolayer is in the fluid phase. Between p_MCI and p_cryst, the monolayer remains in the fluid phase when the pressure is increased from below p_MCI until it reaches p_cryst, where it recrystallises, while it remains in the crystal phase when the pressure is decreased from above p_cryst until it reaches p_MCI. A simple self-consistent sheath model is used to calculate the rf sheath profile, the microparticle charges and the microparticle resonance frequency as functions of power and background argon pressure. Combined with calculations of the lattice modes, the main trends of p_MCI as a function of power and background argon pressure are recovered. The threshold of the mode-coupling instability in the crystalline phase is dominated by the crossing of the longitudinal in-plane lattice mode and the out-of-plane lattice mode induced by the change of the sheath profile. Ion wakes are shown to have a significant effect as well.
References
[1] L. Couëdel and V. Nosenko, "Stability of two-dimensional complex plasma monolayers in asymmetric capacitively-coupled radio-frequency discharges", Phys. Rev. E 105, 015210 (2022).
The inference of basic plasma parameters, such as density and temperature, is a century-old problem, starting with the seminal work of Tonks and Langmuir in the early twentieth century. Several theories have been developed to determine probe characteristics, that is, collected currents as a function of bias voltage, under diverse laboratory and, more recently, space plasma conditions. The advantage of the resulting analytic characteristics is that they enable relatively simple algorithms, making it possible to infer plasma parameters quickly with modest computing resources. On the downside, however, all theories rely on simplifying assumptions in order to make the solution of the probe characteristic problem analytically tractable. In actual plasma, these assumptions are typically not all satisfied, which results in errors and uncertainties that are often difficult to quantify. A solution to this predicament would be to use computer simulations to determine probe characteristics under more realistic conditions and, from there, infer the parameters of interest. A direct use of simulations is unfortunately not practical, because of the large computing resources (days to months on supercomputers, depending on the complexity of the system) required to determine a single characteristic. An alternative to the direct simulation approach is to use simulations to pre-compute probe, and more generally particle sensor, responses over a range of relevant plasma conditions, and to construct a solution library consisting of probe collected currents with, for example, corresponding plasma density, temperature, flow velocity, or ion effective mass. This synthetic solution library can in turn be used to train regression models from which inferences with quantifiable uncertainties can be made. Examples are presented where such models are constructed and applied to different in situ space plasma measurements.
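A sketch of the synthetic-library idea (using an idealized thermal-electron retardation characteristic rather than a kinetic simulation; the probe area, parameter ranges and noise level are assumed): generate I(V) sweeps over a grid of densities and temperatures, then train a regression model to invert measured currents into plasma parameters.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
e, me = 1.602e-19, 9.109e-31
A = 1e-4                                  # probe area, m^2 (assumed)
V = np.linspace(-5.0, 0.0, 32)            # bias sweep below plasma potential

def characteristic(n_e, T_eV):
    """thermal electron current in the retardation region (idealized)"""
    v_th = np.sqrt(8 * e * T_eV / (np.pi * me))   # mean electron speed
    I_e0 = 0.25 * e * n_e * v_th * A              # electron saturation current
    return I_e0 * np.exp(V / T_eV)                # Boltzmann retardation

n = rng.uniform(1e14, 1e16, 5000)         # densities, m^-3
T = rng.uniform(0.1, 5.0, 5000)           # temperatures, eV
X = np.array([characteristic(ni, Ti) for ni, Ti in zip(n, T)])
X *= rng.normal(1.0, 0.02, X.shape)       # synthetic measurement noise
y = np.column_stack([np.log10(n), T])
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100).fit(Xtr, ytr)
print("R^2 on held-out synthetic sweeps:", model.score(Xte, yte))
```

In the approach described in the talk, the library entries come from full kinetic simulations rather than this analytic stand-in, and ensemble spread provides the uncertainty estimates.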
During quiet times, Earth's ionosphere is characterized by relatively cool temperatures of 2,000 K (~0.2 eV) and less. However, the ionosphere can be highly disturbed in the presence of the aurora, which during active periods deposits hundreds of GW into the high-latitude atmosphere via the ionosphere. This energy comes from the magnetosphere in the form of charged-particle precipitation, Joule or frictional heating in the lower ionosphere, and wave-particle interactions at higher altitudes. The latter pathway can result in ion temperatures of the order of a million K, comparable to the temperature of the solar corona. While such extremes have been measured for many decades in the magnetosphere, until recently they were not reported below 500 km altitude, within the main ionosphere, presumably due to damping and dissipation caused by collisional interaction with the neutral atmosphere. However, high-time-resolution imaging of particle distribution functions made possible by the Swarm and ePOP satellite missions has in fact revealed the presence of extreme temperatures within the main ionosphere (Shen et al., 2018), typically in highly localized regions of the order of, or less than, 1 km wide, which are traversed in only a fraction of a second by a satellite in low Earth orbit. This talk will describe a new generation of particle instruments that has made possible the detection and characterization of these extreme regions, and their importance to geophysics and plasma physics.
Shen, Y., et al. (2018). Low-altitude ion heating, downflowing ions, and BBELF waves in the return current region. Journal of Geophysical Research: Space Physics, 123(4), 3087-3110.
Ten years have passed since the discovery of the Higgs boson in 2012 at the Large Hadron Collider (LHC). In that time, the properties of single Higgs production have been extensively probed and shown to be in astounding agreement with the Standard Model (SM), with no sign of new physics. However, owing to its significantly lower cross section, the pair production of the Higgs boson has yet to be observed and have its properties studied. Higgs pair production through the Higgs self-interaction is of particular interest, since it helps directly determine the shape of the Higgs potential, which in turn has profound theoretical consequences. For example, depending on the shape of the Higgs potential, the minimum in which the universe currently finds itself might not be the true minimum, and the universe could consequently transition via quantum tunnelling to the true minimum, resulting in a complete alteration of the universe and its physical laws. The shape of the Higgs potential also reveals a great deal about how it evolved from its form in the early universe to its form today, and about the possibility of electroweak baryogenesis happening in between, which could explain the matter-antimatter asymmetry we observe today. Projection studies of non-resonant Higgs boson pair production in the $b\bar b b \bar b$ final state with the ATLAS detector are presented here. Based on the Run 2 analysis, these studies are extrapolated to conditions expected at the High-Luminosity LHC (HL-LHC) and show a substantial improvement over previous results.
The MoEDAL experiment, deployed at IP8 on the LHC ring, was the first dedicated search experiment to take data at the LHC, in 2010. It was designed to search for Highly Ionizing Particle (HIP) avatars of new physics such as magnetic monopoles, dyons, Q-balls, multiply charged particles, massive slowly moving charged particles and long-lived massive charged SUSY particles. An upgrade to MoEDAL, the MoEDAL Apparatus for Penetrating Particles (MAPP), approved by CERN's Research Board, is now the LHC's newest detector. The MAPP detector, positioned in UA83, expands the physics reach of MoEDAL to include sensitivity to milli-charged particles with charge as low as $10^{-3}e$ (where $e$ is the electron charge) and, in conjunction with MoEDAL's trapping detector, to extremely long-lived charged particles. MAPP also has some sensitivity to long-lived neutral particles. We shall also briefly discuss the MAPP-2 upgrade to the MoEDAL-MAPP experiment, planned for the High-Luminosity LHC (HL-LHC) in the UGC1 gallery near IP8. This phase of the experiment is designed to maximize the MoEDAL-MAPP sensitivity to long-lived neutral messengers of physics beyond the Standard Model.
Although Long-Lived Particles (LLPs) are predicted in many models of physics beyond the Standard Model, general-purpose accelerator-based experiments are limited in their ability to directly detect them, as they typically decay outside the tracking acceptance of the detectors. While "missing energy" searches are possible, these are limited in scope by resolution effects and high background rates, particularly for the relatively light masses of LLPs favoured by many "dark sector" models. MATHUSLA is a dedicated LLP detector proposed for the HL-LHC, designed to directly detect the decays of LLPs across a broad range of masses and lifetimes. The detector is foreseen as a 100 m × 100 m × 25 m instrumented decay volume constructed on the surface, approximately 100 m from the CMS interaction point. Decays of LLPs within this volume are reconstructed and vertexed by tracking their decay products. In this presentation I will present the physics case for such an experiment, and discuss the ongoing detector development activities within Canada and internationally.
Upgrading the SuperKEKB e+e− collider with a polarized electron beam is under consideration, as it opens a new program of precision electroweak physics at the $\Upsilon(4S)$. This Chiral Belle physics program includes determining $\sin^2\theta_W$ via separate left-right asymmetry ($A_{LR}$) measurements in $e^+e^-$ annihilations to pairs of electrons, muons, taus, charm quarks and b-quarks using the Belle II detector. The precision that can be obtained matches that of the LEP/SLC world average and enables the probing of neutral-current couplings with unprecedented precision, in a manner sensitive to their running. At SuperKEKB, the measurements of the individual neutral-current vector couplings to b-quarks, c-quarks and muons in particular will be substantially more precise than the current world averages, and the current $3\sigma$ discrepancy between the SLC $A_{LR}$ measurements and the LEP $A_{FB}^b$ measurements of $\sin^2\theta_W^{eff}$ can be addressed. The program can also provide the highest-precision measurements of neutral-current universality ratios. In addition, a polarized electron beam enables measurements of tau lepton properties, including the tau g-2, with unrivaled precision. This presentation will cover the physics motivation and the status of the R&D necessary for the upgrades to achieve and measure the SuperKEKB e- beam polarization.
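For reference, the left-right asymmetry is built from cross sections measured with left- and right-handed electron beams,

$$ A_{LR} \;=\; \frac{1}{\langle P \rangle}\,\frac{\sigma_L - \sigma_R}{\sigma_L + \sigma_R}, $$

where $\langle P \rangle$ is the average beam polarization. Below the $Z$ pole the asymmetry arises from $\gamma$-$Z$ interference and is proportional to the product of the electron axial coupling and the final-state fermion's vector coupling $g_V^f$; since, at tree level, $g_V^e = -\tfrac{1}{2} + 2\sin^2\theta_W$, $A_{LR}$ provides direct sensitivity to the weak mixing angle and to the individual neutral-current vector couplings.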
Gauge theories are the basis of our understanding of how the elementary constituents of matter, such as quarks and gluons, interact, and they therefore form the backbone of the Standard Model of particle physics. Numerical simulations of gauge theories are key to understanding the physics of the Standard Model and have developed into a thriving and extremely successful field. There are, however, important problem classes that are plagued by sign problems and are therefore out of reach for current simulation methods, even at future supercomputing centers.
Quantum computers represent an enormous scientific opportunity to make inroads towards answering fundamental open questions that are insurmountable for current computing methods. But doing so requires developing the theoretical framework and concrete protocols that will allow quantum computers to simulate fundamental particles and their interactions. This talk will be devoted to recent developments in quantum computing that strive to develop quantum-enhanced simulation methods for simulating particle physics.
Time and causality are two of the most fundamental concepts in physics, and yet they remain ill-understood. In my research I use general relativity and quantum mechanics, two cornerstone theories of physics with great theoretical and experimental success, to investigate one of the most exciting and thought-provoking questions about time and causality: whether causality can be violated.
The two most commonly known manifestations of causality violation are faster-than-light (FTL) travel and time travel. In time travel, the traveler directly violates causality by traveling to their own past. In FTL travel, the traveler merely travels so fast that they can causally influence events they otherwise could not; as it turns out, FTL travel can often be used to facilitate time travel.
Can these concepts be transformed from science fiction into real science, even just in principle? The answer to this question is currently unknown, and this indicates a major deficiency in our understanding of the universe. A positive answer would revolutionize physics and require substantial rewriting of our existing theories. A negative answer would provide valuable insights into the inner workings of our theories, by figuring out the mechanisms by which our universe protects causality, as first conjectured by Stephen Hawking.
In this talk I will discuss the possibility of FTL travel and time travel within the established framework of general relativity and quantum mechanics, including recent progress made by myself and my students.
The Standard Model has been successful in describing phenomena that we observe from galactic down to subatomic scales. Nevertheless, it is not complete. The extreme weakness of gravity or the nature of Dark Matter are examples of puzzles that suggest the presence of new physics. Traditionally, we look for answers at colliders. In the last few years, we realized some of these answers may come from precision experiments that look for the tiny signals with which new physics may manifest itself. In this talk, I will review some of these ideas and the motivation behind them.
This talk will go over the different skills that physicists acquire during their undergraduate and graduate studies. An overview of different career paths will be given, as well as tips for networking. Finally, we will discuss salaries and how to prepare for interviews.
This talk will review the different skills that physicists acquire during their bachelor's, master's and doctoral studies. An overview of possible career paths will be given, along with tips for networking. Finally, we will discuss interview preparation and salaries.
The narrow beam divergence of an optical communication link results in a low probability of interference, improved privacy, and licence-free operation. It also allows significantly higher data rates than traditional RF communication, with lower power consumption. These advantages are of interest for satellite mission applications such as deep-space communication and Earth-observation satellites, which generate large data volumes, and are critical to LEO mega-constellation communication networks, which require hundreds of intersatellite links to support terrestrial communication for the general public on Earth. To support the mega-constellation business concepts, the satellite terminals must achieve demanding performance objectives despite aggressive targets for cost and production rates.
Honeywell has leveraged decades of experience in reliable space optics and mass production of space hardware to develop a low-cost optical communication terminal designed for manufacturability. Multiple iterations of our baseline terminal design have been built and tested, and we are now expanding into customized terminals for specific use cases. This presentation will describe the Honeywell baseline terminal and discuss some of the options needed for specific mission applications. It will also look at the development process and some of the key challenges for creating high performance optical instruments for use in space.
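A back-of-envelope geometric link budget illustrating why the narrow divergence matters (all numbers are generic assumptions, not Honeywell terminal specifications):

```python
import numpy as np

# Received power falls with the square of (divergence x range): the beam
# footprint grows linearly with range, and the receive aperture collects
# only its share of the footprint area.
P_tx = 1.0            # transmit power, W (assumed)
theta = 15e-6         # full-angle beam divergence, rad (assumed)
R = 2_000e3           # intersatellite range, m (assumed)
D_rx = 0.08           # receive aperture diameter, m (assumed)
eta = 0.5             # combined optical/pointing efficiency (assumed)

spot_d = theta * R                            # beam footprint diameter at range
P_rx = eta * P_tx * (D_rx / spot_d) ** 2      # aperture-to-footprint area ratio
print(f"footprint: {spot_d:.1f} m,"
      f" received power: {10 * np.log10(P_rx / 1e-3):.1f} dBm")
```

With microradian-class divergence the footprint at 2,000 km is tens of metres, whereas an RF beam would be spread over many kilometres, which is the source of the power and privacy advantages noted above.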
For the last 20-30 years, quantitative finance has been a prime destination for physicists moving away from academic careers and into modeling of an increasingly complex financial system. During the 2008 financial crisis, the role of physicists and mathematicians in risk management promised to be redefined: What could physicists do to prevent the next financial crisis? Did they cause the last one?
I will discuss some of the questions of interest to physicists entering a new career in finance and risk management at this time: What are some of the interesting questions coming up in finance in the next few years? How do I get started? What background do I need? Should I do data science? Will an MBA help? Will I have impact?
The Electron-Ion Collider will be a new discovery machine for unlocking the secrets of the "glue" that binds the building blocks of visible matter in the universe. The EIC will consist of two intersecting accelerators, one producing an intense beam of electrons (the Electron Storage Ring), the other a high-energy beam of protons or heavier atomic nuclei (the Hadron Storage Ring), which are steered into collisions of spin-polarized beams in the Interaction Region. The EIC design will make use of existing ion sources, a pre-accelerator chain, a superconducting-magnet ion storage ring, and other infrastructure of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. The Rapid Cycling Synchrotron will provide injection into the ESR while preserving beam polarization. The Strong Hadron Cooling system will preserve the emittances of the proton beam during collision runs. The EIC project has recently received Critical Decision 1 (CD-1) approval from DOE, and the project team is now working on the next milestone, CD-2. The EIC project will be delivered by a collaboration of domestic and international partners. In this talk, the status of the EIC accelerator will be reviewed.
An understanding of how the properties of matter originate from the fundamental constituents of QCD is the primary goal of nuclear physics and the motivation for a new facility, the Electron-Ion Collider (EIC). The EIC will be constructed at Brookhaven National Lab and will take advantage of the entire existing Relativistic Heavy Ion Collider (RHIC) facility, but requires challenging modifications and additions to provide unprecedented beam intensities while maintaining a high degree of polarization. The well-established beam parameters of the present RHIC facility are close to what is required for the highest performance of the EIC, with the exception of the hadron beam current, which must be tripled; this will be achieved by increasing the number of bunches. The addition of an electron storage ring (ESR) inside the present RHIC tunnel will provide polarized electron beams of up to 18 GeV for collisions with the polarized protons or heavy ions of RHIC. The EIC accelerator design must satisfy all the requirements of the science program while having acceptable technical risks, reasonable cost, and a clear path to achieving design performance after a ramp-up period.
Over the last decade, theoretical advances by Giorgio Parisi, Francesco Zamponi and coworkers have provided an exact solution to the glass problem in the limit of infinite spatial dimension. Interestingly, the dynamical arrest this work predicts is consistent with the mode-coupling theory of glasses, and the ensuing entropy crisis at the Kauzmann transition with the random first-order transition scenario. However, what survives of these features and what other processes contribute to the dynamics of three-dimensional glass formers remain largely open questions. In this talk, I present our recent advances toward a microscopic understanding of the finite-dimensional echo of these infinite-dimensional features, and of some of the activated processes that affect the dynamical slowdown of simple yet realistic glass formers.
Phase behavior of polymer blends (i.e., the miscibility or phase separation of the two or more polymer chemistries in the blend) can be tuned by incorporating functional groups that allow for favorable association between the polymers in the blend. In this talk, we will present our current work involving Polymer Reference Interaction Site Model (PRISM) theory and coarse-grained molecular dynamics simulations to predict the blend morphology (i.e., macrophase separated, disordered with concentration fluctuations, microphase separated) as a function of the placement and fraction of associating groups along the polymer chains at varying strengths of association. The features in the structure factors [S(k) vs. k] calculated using PRISM theory for varying polymer designs and association strengths are used to identify the morphologies within the phase diagram. For the disordered morphologies that exhibit concentration fluctuations, we calculate how the length scales of the concentration fluctuations change with the associating-group placement at a similar fraction of associating groups. Then, we use molecular simulations to visualize and quantify the molecular packing that explains the results obtained from PRISM theory. Using this combination of PRISM theory and molecular simulations, we are able to explore a large polymer design space with reduced computational cost and more reliable structure factors than would be possible with an approach involving only molecular simulations.
This work was funded by U.S. Department of Energy, Office of Science (DE-SC0017753)
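For context, here is a minimal numpy sketch of how an isotropic structure factor S(k) can be estimated from simulation coordinates (random ideal-gas coordinates as a stand-in; PRISM theory itself obtains S(k) from integral equations rather than from trajectories). In the blends discussed above, a growing low-k signal indicates macrophase separation, while a peak at finite k indicates microphase separation.

```python
import numpy as np

# Direct estimate of S(k) = <|sum_j exp(i k.r_j)|^2>/N, averaged over
# random k directions (ideal-gas coordinates give S ~ 1 at all k).
def structure_factor(pos, k_mags, n_dirs=64, seed=0):
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = pos @ dirs.T                       # (N, n_dirs): k_hat . r projections
    S = [np.mean(np.abs(np.exp(1j * k * proj).sum(axis=0)) ** 2) / len(pos)
         for k in k_mags]
    return np.array(S)

rng = np.random.default_rng(9)
pos = rng.uniform(0.0, 20.0, size=(4000, 3))  # stand-in simulation frame
k_mags = np.linspace(0.5, 5.0, 10)
print(np.round(structure_factor(pos, k_mags), 2))
```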
Supramolecular soft crystals are periodic structures formed by the hierarchical assembly of macromolecular constituents and occur in a broad variety of “soft matter” systems, from polymers and liquid crystals to biological matter. Often the building blocks consist of groups of molecules, termed “mesoatoms”; such collections are readily reconfigurable, individually and collectively, at the sub-unit-cell scale, coupling strongly to periodic symmetries at the supra-unit-cell scale. In this talk I describe structure formation in soft crystals deriving from the assembly of block copolymer (BCP) melts, a prototype for a broader class of supramolecular materials. While supramolecular crystals are observed to form crystal symmetries whose complexity rivals that of their hard atomic counterparts, rational frameworks for understanding and guiding these complex symmetries based on the properties of the molecular constituents lag far behind. I will describe theoretical models that map the thermodynamics of soft crystal formation in BCPs onto geometric models encoding two competing tendencies. On one hand, generically repulsive interactions favor minimal area of the inter-material dividing surface (IMDS) between unlike chemistries within mesoatomic domains. At the same time, the entropic cost of extending polymeric blocks to fill space evenly in these domains tends to favor uniformity in domain “thickness”. I will describe how assembly thermodynamics maps onto models that integrate generalizations of the foam (or ‘Kelvin’) problem and the quantizer problem, which seek, respectively, tessellations of space that minimize area and that minimize second moments of distance within cells.
I will discuss two applications of this geometric formulation of thermodynamic principles. First, I will briefly describe a model of complex crystals of “quasi-spherical” mesoatomic units that captures the thermodynamic competition between complex phases, including the Frank-Kasper phases, which have recently been observed in BCPs and a number of supramolecular systems [1]. Second, I will describe recent attempts to generalize the “mesoatomic picture” to BCP crystals of polycontinuous and inter-catenated network topologies. I will describe a basic framework for deconstructing these more complex domain topologies into elementary units whose non-convex shapes and packing may shed new light on the process of their formation. Additionally, I will describe how complex and non-uniform network domains motivate a generic picture for space filling in arbitrarily complex BCP domains, known as medial packing [2]. I will describe a (strong-stretching) theoretical model for medial packing in triply-periodic double-network crystal phases (e.g., the double gyroid and double diamond), whose predictions suggest this geometric principle may be key to resolving a long-standing problem in BCP assembly regarding their thermodynamic stability [3].
References:
1) A. Reddy, M. B. Buckley, A. Arora, F. S. Bates, K. D. Dorfman and G. M. Grason, "Stable Frank-Kasper phases of self-assembled, soft matter spheres", Proceedings of the National Academy of Sciences USA 115, 10233-10238 (2018).
2) A. Reddy, X. Feng, E. L. Thomas and G. M. Grason, "Block Copolymers Beneath the Surface: Measuring and Modeling Complex Morphology at the Subdomain Scale", Macromolecules 54, 9223-9257 (2021).
3) A. Reddy, M. S. Dimitriyev and G. M. Grason, "Medial packing and elastic asymmetry stabilize the double-gyroid in block copolymers", submitted, arXiv: 2112.06977 (2022).
We experimentally characterize the 1D-linear to 2D-zigzag structural transition for arrays of ions confined in a linear Paul trap and cooled to near their motional ground state. Raman sideband spectroscopy is used as a probe to reveal both the energy-level structure and the motional population distribution of the ion crystal near the critical point. The nature of the transition will be discussed, along with prospects for coherence assessment near the critical point and potential applications in in-situ sensing of electric-field noise.
Over the last half century, cancer has remained a major cause of death in Canada and worldwide. Although therapy-induced cure rates have gradually improved for some cancers and early detection has improved survival for others, cancer is still among the greatest healthcare burdens.
Radiotherapy has contributed to improvements in treatment through technological advances and refinements of dose fractionation and is currently responsible for approximately 80% of all non-surgical cancer cures. However, about half of patients treated with radiotherapy are not cured, creating a significant unmet need for continued improvement to therapeutic options.
Recent enthusiasm in the radiotherapy community surrounding the concept of FLASH radiotherapy, delivering large doses of radiation in a single fraction at ultra-high dose rates, is founded on the enormous potential impact of FLASH on radiotherapy cure rates and improved quality of life for patients.
The current interest in FLASH was catalyzed by recent publications reporting a significant increase in the therapeutic index compared with conventional radiotherapy. The key observation driving further research is that, in FLASH, normal tissue damage is reduced whilst tumour control is maintained, enhancing the therapeutic index. Obviously, if borne out in clinical trials, FLASH radiotherapy would be a momentous step forward in radiotherapy, providing opportunity for improvement in response, cure rates, access to treatment, treatment capacity and healthcare economics.
While the FLASH radiation concept has generated significant interest, progress is somewhat limited by the availability of suitable accelerator systems and the comparability of existing experimental data. Many groups are pursuing FLASH radiation research with a plethora of sources and models, with mixed results, making it difficult to interpret the precise conditions under which FLASH-mediated normal-tissue sparing occurs. Together with its partners, TRIUMF possesses unique expertise, technology and capabilities to conduct comprehensive and systematic studies of the FLASH phenomenon using protons, photons and electrons in a single biomedical reference environment. Dedicated infrastructure for generating FLASH-relevant dose rates has recently been commissioned or is under construction at TRIUMF. The key technical cornerstones of this campaign, as well as dosimetry and early biological results, will be presented.
Gold nanoparticles (AuNPs) have unique physical and optical properties that make them ideal for various medical uses such as biomedical imaging, photothermal therapy, and drug delivery. With higher concentrations used in cancer therapy, it is imperative to understand both the benefits and potential side effects of AuNPs. Several studies have quantified the toxicity of naked AuNPs. Still, it is unclear whether the trends in toxicity can be attributed to variations in the cell line, size, and shape of the AuNPs, or to the absolute gold nanoparticle mass taken up by the cell. We propose rapid and precise uptake quantification of trace levels of gold using total-reflection X-ray fluorescence (TXRF), complemented by a cell assay to measure short-term toxicity. By incubating MDA-MB-231 breast cancer cells with different sizes, concentrations, and shapes of naked AuNPs while measuring the total cellular uptake of gold, the correlation between these parameters is investigated. Following the incubation, cell toxicity is measured using flow cytometry to draw conclusions regarding toxicity trends. We trust that this work will provide insight into the safety of AuNP use in vitro, which could be extrapolated to the safe in vivo clinical use of AuNPs.
The 5/6 nephrectomy is a prevalent model in the analysis of chronic kidney disease. It often takes the form of a surgical resection of 5/6 of the renal mass in two distinct surgical procedures. The first step involves the resection of 2/3 of the renal mass of the left kidney. The second stage is a complete resection of the right kidney after one week. The initial 2/3 resection is critical to the success of the model overall and has a large impact on downstream data collection. With increased variability between procedures and operators comes an increase in phenotype variability, with a high discard rate of 36% and a corresponding waste of animals. We developed a software program, along with a fully supported hardware and firmware suite consisting of a high-resolution camera connected to a laptop or tablet. The software identifies the kidney in the image and overlays cut points on the camera image in real time for the surgeon, who then traces along those lines to complete the procedure. The augmented reality and image processing are implemented using a deep-learning approach. Through this research, we hope to significantly increase the precision and reproducibility of the surgery, increasing success rates and decreasing the number of animals that fail to meet the resection goal. The software and setup will be made publicly available and shared with research groups worldwide.
Objective: As 3-dimensional conformal radiotherapy is progressively replaced by intensity-modulated radiotherapy, the flattening filter (FF) can be removed from the medical linear accelerator (Linac). Although the flattening-filter-free (FFF) photon beam has advantages such as higher beam output and less head scatter in dose delivery, there is a dosimetric concern over the low-energy photons in the FFF beam. This study investigated the dosimetric changes that occur when the FF is removed from the Linac, in terms of skin, bone and mucosa doses, their dependence on beam angle, and the skin dose enhancement when a patient uses topical cream during radiotherapy.
Methods: Monte Carlo simulations using the EGSnrc-based code were carried out on various water and heterogeneous phantoms containing bone, air and water. The mean doses at the phantom surface, bone and mucosa were determined for various beam energies (6-10 MV) and beam angles (0-90 degrees), with and without the FF in the Linac. In addition, the photon energy distribution at the phantom surface and the mean photon energies at the bone and mucosa were determined.
Results: For the water phantom, the output of the FFF photon beam was found to be more than twice that of the FF beam. The dose at the phantom surface for the FFF photon beam was higher than for the FF beam, and the results varied with the beam obliquity. Moreover, a lower mean bone dose was found for the FFF photon beam compared to the FF beam, and the FFF beam contained more low-energy photons than the FF beam at the phantom surface. With topical cream applied to the phantom, the dependence of the dose enhancement on the cream thickness was found to be sensitive to the beam angle.
Conclusion: It is concluded that dosimetric changes occur when the FF is removed from the Linac. These changes are mainly due to the presence of low-energy photons in the FFF beam.
Objective: An AI chatbot was created for radiation safety training in radiotherapy. The Bot was intended for radiation staff, namely radiation oncologists, medical physicists and radiotherapists working in a cancer center, so that they could learn and refresh their radiation safety knowledge without attending the classroom session in the center. This is particularly important during the pandemic, when face-to-face communication between hospital staff should be kept to a minimum.
Methods: The Bot was created on the IBM Watson Assistant cloud platform. For human-like communication between the Bot and the user, machine learning features such as natural language processing, provided by the Intent tool in the Watson platform, were used to determine the specific intent of the user's input. The Bot contained fifteen radiation safety questions, which could be customized according to training needs and timed to fit the attention span of the end-user. For fine-tuning and commissioning, the Bot was pre-tested in various virtual meetings and conferences. Feedback from these tests was used to continuously update and upgrade the Bot.
Results: Using the Watson cloud platform, the Bot could be integrated into different channels such as Webchat, WhatsApp and Discord. The Bot was user friendly; it asked for the user's name and used it in further communication. When the user could not provide the expected response to a question, the Bot would provide guidance to help him/her reach the correct answer. Finally, the Bot would report the final results of the training and test to the user, and provide suggestions for further improvement.
Conclusion: A chatbot for radiation safety training in radiotherapy was created. The Bot can be accessed from any Internet-connected device to provide convenient and efficient knowledge transfer in radiation safety.
The Laser-Induced Breakdown Spectroscopy (LIBS) technique involves several fields of science, such as laser-matter interaction, plasma physics, atomic physics, plasma chemistry, spectroscopy, electro-optics, and signal processing. The LIBS plasma is transient, unlike an inductively coupled plasma, arc plasma, or glow discharge plasma, which are all stationary plasmas. This characteristic makes the LIBS technique suffer from poor sensitivity in comparison to other optical emission spectroscopy techniques. During the last three decades, extensive research has been carried out to improve LIBS sensitivity and performance through several approaches, such as double-pulse operation, combining LIBS with laser-induced fluorescence (LIF), and combining LIBS with microwaves, among other techniques. The LIBS-LIF combination is an emerging analytical tool that has the potential to analyse any kind of material rapidly and in situ, with little or no sample preparation. LIBS-LIF is therefore a good candidate to fulfil the need for real-time analysis of contaminant traces for environmental applications.
The LIBS-LIF approach uses a first conventional laser, operating at a fixed wavelength, to ablate the sample and generate the plasma. A second, tunable laser (such as an optical parametric oscillator (OPO)) then selectively excites species in the ablation plasma and thus enhances the emission of the spectral lines of interest. Different combinations of excitation lines as well as plasma generation conditions were studied to optimise the performance of LIBS-LIF for spectrochemical analysis, in our laboratory and elsewhere. In this presentation, we will discuss the most significant research contributions for improving quantitative analysis by LIBS-LIF in terms of sensitivity and accuracy for environmental, agriculture and mining applications. We will present some novel approaches aimed at improving the analytical figures of merit of LIBS-LIF. Finally, a viewpoint on LIBS, the LIBS-LIF combination and their future will be given and discussed.
Pulsed spark discharges in dielectric liquids have various applications such as precision machining, nanomaterial synthesis, and liquid depollution/reforming. Discharges in liquids produce highly reactive species, in addition to shock waves, heat, and radiation. Discharges at the interface of two immiscible liquids have been introduced recently and have shown great promise for fundamental as well as applied studies. For instance, due to the electric-field enhancement at the interface, it was possible to sustain discharges with a 100% probability of occurrence. These discharges dissociate the two liquids simultaneously, which opens the way to a novel plasma concept.
In this work, spark discharges are produced between two copper electrodes mounted parallel to the interface of two liquids: distilled water and heptane. The discharges were produced using pulsed high voltage with an amplitude of 20 kV and a pulse width of 500 ns, at low repetition rate (5 or 50 Hz). The voltage and current waveforms of each discharge were acquired and then automatically processed, using an algorithm, to determine their characteristics. For instance, we determined the temporal evolution of the probability of discharge occurrence as a function of electrode-interface distance. The results show that the highest probability is obtained when the electrodes are at the interface, because the electric field is intensified there. Moreover, the liquids changed color and became milky. This change is due to the production of an emulsion, i.e. droplets of heptane in water. The size distribution of the emulsion was determined using dynamic light scattering (DLS). The emulsion is produced by cavitation bubbles that oscillate (a series of explosion-implosion cycles) at the interface. DLS measurements showed that the spark-generated emulsion has a size distribution ranging from a few tens of nanometers to a few micrometers.
Large-volume, atmospheric-pressure non-thermal plasmas are desired for uniform plasma processing applications. Nanosecond (ns) pulsed plasma sources are effective at igniting and sustaining plasmas in atmospheric-pressure gases and gas mixtures. These pulses produce large quantities of excited species and highly reactive radicals that participate in the desired chemical reaction pathways. When the pulses are sufficiently separated in time, the power delivery of each pulse is essentially discrete, resulting in minimal memory effect: the rapid quenching of the electron and excited-species densities means the discharge faces re-ignition conditions at every pulse. This dynamic load impedance makes the efficiency of power delivery from the electrical mains to the plasma quite low. Conventional RF discharges, on the other hand, can provide high electrical-power-to-plasma-chemical-energy conversion efficiency, but sustaining a uniform discharge at atmospheric pressure is challenging: commercially available RF power supplies cannot reach the breakdown voltage thresholds required to ignite electrical discharges at atmospheric pressure in most gas mixtures and useful interelectrode gaps. We are particularly interested in a rather new approach combining a ns pulsed high-voltage source with a continuous RF source. The ns pulser causes gas breakdown and electrical discharge formation in the interelectrode gap while supplying a high density of energetic electrons to initiate energetic plasma chemistry. Between ns pulses, the sub-breakdown continuous RF field takes over and provides the typical RF processing characteristics, such as large diffuse volumes and moderate-energy plasma chemistry. Preliminary testing was performed in a parallel-plate geometry with argon as the plasma-forming gas at 1 atm. Preliminary results demonstrated the ability to produce a repetitive ns discharge and the formation of a uniform glow at sub-breakdown voltages between pulses. Gas mixtures containing increasing amounts of N$_2$ and H$_2$ are being introduced to study their effect on the plasma characteristics and on power delivery. Introducing molecular gases will give insight into the possibility of using this method of power delivery for reactive gas mixtures. We will report on the efficiency of power delivery as well as the general dynamics of the discharges.
A streamer discharge is a highly reactive and dynamic non-thermal plasma. It has been used in many applications, including environmental remediation, medicine, and material processing. Although the physics of streamer discharges in gaseous media is well understood, their interaction with solid and liquid dielectric surfaces remains under investigation, in particular when quantitative data are sought. Here, we investigate the propagation of a pulsed discharge at the surface of distilled water, in a pin-to-plate geometry and under various experimental conditions of gap distance and applied voltage. The former was adjusted between 10 and 1000 µm, while the latter was varied from 8 to 20 kV; the pulse width was 100 ns. The discharge was characterized electrically, using high-voltage and current probes, and optically, using a time-resolved imaging technique (an ICCD camera with a temporal resolution of 1 ns).
The results show that the discharge is ignited at the anode tip and propagates towards the water surface. Initially, it has a disk-like shape that evolves (after a few nanoseconds) into a ring. A few nanoseconds later, the ring breaks into dots that propagate on the water surface. Because of the stochastic nature of the process, a large number of discharges was recorded to address the influence of the applied voltage and the gap distance on the number of plasma dots produced, as well as on the injected charge. As expected, for a given applied voltage, the breakdown voltage increases with the gap distance. Moreover, the total injected charge decays linearly at a rate of ~8-9 nC per 200 µm increase in gap distance, while the number of dots decreases linearly at a rate of ~1 dot per 200 µm increase in gap distance. Based on the measured propagation velocity of the plasma dots and an estimate of the electric field in the medium, an average plasma-dot mobility of ~1.5 cm$^2$/(V·s) is obtained. From this value and the instantaneous measured propagation velocity, the temporal evolution of the charge per dot is determined. The observations reported here are of interest for fundamental studies as well as for applications where well-controlled charge transfer to surfaces is crucial.
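For clarity, the mobility estimate above follows the simple drift relation (our schematic of the extraction, with $E$ the estimated field in the medium and $v_{\mathrm{dot}}$ the measured dot velocity):

$$\mu = \frac{v_{\mathrm{dot}}}{E} \approx 1.5\ \mathrm{cm^2\,V^{-1}\,s^{-1}},$$

so, once $\mu$ is fixed, the instantaneous velocity measurements can be inverted to track the field experienced by each dot and hence the temporal evolution of the charge it carries.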
Disruption of the supply chain caused by the COVID-19 pandemic highlighted the need for increased local food production. Coupled with population growth, there is a steadily increasing demand for fresh local produce. While hydroponic growing allows year-round, environmentally controlled production, its humid environment brings undesirable side effects such as the proliferation of fungi. For example, Pythium and Phytophthora lower the yield of Boston lettuce production; to combat these pathogens, where chlorination and ozone have failed, we envisage water treatment with non-thermal air plasma. Plasma treatment also enables other beneficial reactions, such as nitrogen fixation, which helps reduce the need for commercial fertilizers.
Before tackling the larger-scale use of plasma in Quebec-based greenhouses, our team is first conducting the following laboratory-scale investigation. We study the growth of Lactuca sativa var. capitata in a batch-type hydroponic system, from seedling to full maturity. A comparison is made between plasma-treated and untreated contaminated water, with or without added nutrients, and positive (tap water + nutrients) and negative (tap water only) controls, to assess the impact of plasma treatment on plant growth and possible phytotoxicity. Plant growth indicators such as root length and foliage size are investigated; the evolution of the water and its nutrient content is monitored through pH, conductivity, and colorimetric assays for both NO$_2^-$ and H$_2$O$_2$, using Griess reagent and titanium sulfate stabilized with sodium azide, respectively. The electrical parameters of the plasma generation are correlated with the resulting chemical species in the water.
A polarized electron beam is being considered as an upgrade to the SuperKEKB accelerator, which would enable a new precision electroweak physics program at Belle II. Many of these electroweak tests are performed through experimental measurements of the left-right asymmetry, $A_{LR}$, where the expected level of precision at Belle II requires at least one-loop calculations from theory. We have tested the level of agreement of NLO calculations of $A_{LR}$ for Bhabha scattering against a Monte Carlo generation of the asymmetry with the new ReneSANCe generator. For future experimental measurements of $A_{LR}$, the expected limiting uncertainty is the average beam polarization. A new technique, tau polarimetry, has been shown to be capable of measuring the average beam polarization to better than half a percent. It has been implemented at the $B\kern-0.1em{\small A}\kern-0.1em B\kern-0.1em{\small A\kern-0.2em R}$ experiment, a precursor to Belle II, and the average beam polarization of its associated accelerator, PEP-II, was precisely measured. This presentation will describe the technique, including its systematic uncertainties, using the full $B\kern-0.1em{\small A}\kern-0.1em B\kern-0.1em{\small A\kern-0.2em R}$ $\Upsilon$(4S) dataset.
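As a reminder of why the polarization dominates the error budget, the standard definitions are (schematic, not specific to this analysis):

$$A_{LR} = \frac{\sigma_L - \sigma_R}{\sigma_L + \sigma_R}, \qquad A_{\mathrm{meas}} = \langle P \rangle\, A_{LR},$$

so any fractional uncertainty on the average beam polarization $\langle P \rangle$ propagates directly into a fractional uncertainty on the extracted $A_{LR}$.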
The nature of dark matter and its relationship to the Standard Model is one of the highest-priority open questions in particle physics today. Accelerator-based experiments are a powerful tool in the search for dark matter and for the new bosons that may mediate its interactions with the known particles. The DarkLight experiment will search for such a new boson with preferential couplings to leptons in an important, as-yet-uncovered mass range. DarkLight will be based at the TRIUMF e-linac, and the project includes planned upgrades to the accelerator that will both increase its energy and make it accessible to other future experiments.
The neutron itself is an ideal laboratory for testing various beyond-the-standard-model theories. Precise measurements of the neutron lifetime can shed light on light-element abundances in the universe, while searches for electric dipole moments (EDMs) could reveal the mechanisms that created the apparent matter-antimatter asymmetry of the universe. The key to these studies is long observation times and high neutron densities in experiments. The former is achieved by using very slow, ultracold neutrons (UCN) that can be studied and manipulated for hundreds of seconds; the latter is achieved by superthermal sources of ultracold neutrons.
At TRIUMF the TUCAN collaboration is combining a cyclotron-driven spallation neutron source with a liquid-deuterium moderator and superfluid-helium converter cooled down to around 1 K by a high-power helium-3 cryostat. The UCN are extracted near-horizontally into vacuum guides and transported to a room-temperature EDM experiment. A state-of-the-art magnetically shielded room and self-shielded coils provide a stable magnetic field environment essential for a precise measurement.
The presentation will introduce the key principles of source and experiment and provide a status update.
Most of the visible mass in the universe consists of quarks and gluons bound in protons and neutrons. Yet several big questions remain about some surprisingly basic properties of the protons and neutrons (or nucleons, collectively). How does the mass of the nucleon arise from the much lighter quarks and massless gluons? How does the spin ½ of the nucleon arise from the spin-½ quarks inside it? What are the emergent properties of dense gluon systems? To investigate these questions, the US Department of Energy is building the Electron-Ion Collider (EIC) at Brookhaven National Laboratory on Long Island, NY. Polarized electrons will be accelerated to 5-18 GeV and collide with polarized protons, light ions, or unpolarized heavy nuclei accelerated to 40-275 GeV. The expected peak luminosity will be as high as $10^{34}$ cm$^{-2}$s$^{-1}$ to allow for precision “nuclear femtography.” As part of large international collaborations, several Canadian universities are shaping the development of the EIC experiments and their detectors.
In the past decades, the standard model of cosmology, the inflationary lambda CDM model, has had remarkable success at predicting the observed structure of the universe over many scales of space and time. However, to this day, very little is known about the fundamental nature of its principal constituents: the inflationary field(s), dark matter, and dark energy. In the coming decade, new surveys and telescopes will provide an opportunity to probe these unknown components. These surveys will produce unprecedented volumes of data, the analysis of which can shed light on the equation of state of dark energy, the particle nature of dark matter, and the nature of the inflaton field. The analysis of this data using traditional methods, however, will be entirely impractical. In this talk, I will share recent advances in cosmological data analysis, specifically focusing on the development and the application of machine learning methods. I will show how these methods can allow us to overcome some of the most important computational challenges for the data analysis of the next generation of sky surveys and open a new window of discoveries for cosmology.
In this talk I will discuss relevant environmental effects (e.g., accretion disks, tidal gravitational fields from nearby objects) that influence the formation and dynamics of extreme-mass-ratio inspirals (EMRIs), which are important sources for space-borne gravitational wave detectors such as LISA. I will show that disk-assisted EMRIs may be more commonly seen by LISA. They can be distinguished from EMRIs formed through cluster multibody scattering by eccentricity measurements. The disk force and the tidal gravitational field from nearby objects may also leave observable imprints on the gravitational waveform of the EMRIs. With environmental effects properly accounted for, multi-messenger observations of EMRIs provide new opportunities for probing dark matter, primordial black holes and accretion flows at galactic centers.
Planets in our solar system can be divided into rocky terrestrials no larger than the Earth and gaseous giants no smaller than Neptune. Planets outside our solar system, on the other hand, look nothing like our own, with most of the detected exoplanets falling right in between the sizes of the Earth and Neptune. I will describe the underlying physics that drives the huge diversity in the observed exoplanet population and discuss how future missions will help us better understand the formation and evolution of solar and extrasolar planets.
Mining is at the fundamental base of the technologies needed to manage climate change, and Canada has recently recognised the importance of implementing a critical-metals strategy to secure the future. As we search for more metals we are going deeper, and at depths of 2000 m or more the current chilling systems are no longer efficient, or even capable of providing the needed cooling. Cryogenic liquids are an energy storage vector that can convert the heat of the mine to electricity, and they have the unique feature of being a pumped liquid; chilling can therefore be delivered to the zone where it is needed without having to chill the entire mine air supply. We will present results from our latest test in a real mine setting, which elevates the technology readiness level (TRL) from 5 to 7, and outline plans for a large-scale test in the next phase of development on the pathway to commercialisation. The presentation will outline the physical mechanisms of cryogenic chilling and energy storage, provide results of measurements taken during the real-time test, and include a short video of duration 4:20.
I will give an overview of selected topics where the EIC could give a substantial improvement to our current understanding of hadron structure.
The Electron-Ion Collider (EIC) will uniquely address questions about the origin of nucleon mass and spin and the properties of dense systems of gluons, and will offer opportunities to connect to neutrino physics, astrophysics, and fundamental symmetries at higher energies.
Canadian theorists are valued collaborators complementing experimental efforts worldwide; they are currently taking roles in EIC working groups and committees and offer a broad range of contributions, such as e+A gluon saturation, GPDs and TMDs, radiative corrections and lattice QCD.
The talk will briefly outline the related efforts and expertise of Canadian theory groups, and how the Canadian subatomic physics community is gathering to outline its vision for the next five years and beyond, placing Canadian contributions within a long-term international context.
Holographic light-front wave functions augmented with a dynamical spin structure are used to predict the electromagnetic form factors, as well as the decay constants and charge radii, of the pion and kaon.
High entropy oxides (HEOs) are a new class of disordered materials that show great promise for a range of applications due to their enhanced structural stability. The “entropy” in an HEO originates from the random mixture of five or more metal ions sharing a single crystal lattice. These phases can only form at high temperatures, when configurational entropy can overwhelm the enthalpy of formation of a conventional ordered phase. However, the actual degree of configurational disorder, its role in stabilizing the HEO phase, and its effect on other physical properties such as magnetism all remain open questions. To shed light on these questions, we have selected the spinel HEO (Mn,Fe,Cr,Co,Ni)$_3$O$_4$ as our model system. This material possesses two unique advantages over other HEOs: (i) the spinel structure has two distinct metal sites in its lattice, allowing us to directly probe entropic forces vs. site selectivity, and (ii) all five metal ions are magnetic, meaning that we can independently study the effects of disorder and magnetic dilution. Our study makes use of experimental probes with sensitivities that extend over many orders of magnitude in length scale, which is important for characterizing the true degree of randomness. In my talk, I will present our findings on the role of entropy in determining the structure of the spinel HEO and on the relationship between configurational disorder and magnetism.
Most of us are familiar with ferromagnetic and antiferromagnetic materials. Although in some cases quantum fluctuations can be strong in such systems, we would usually say that the ground state is ordered and described by a non-zero local order parameter. In such systems, the interaction between the quantum spins does not depend on the bond direction. Today, there is a growing class of magnetic materials in which the interactions are believed to be bond-dependent, in a way first imagined by Alexei Kitaev, thereby opening a way to realizing so-called topological phases. Bond-dependent interactions are strongly frustrating for the system and hinder conventional ordering. However, in these Kitaev materials other interactions are also often present, among them the well-known Heisenberg coupling and also off-diagonal Gamma (Γ) terms, giving rise to an unusually rich phase diagram. Even for the simplest models of Kitaev materials it is extremely difficult to arrive at a precise understanding of this complex phase diagram. Hence, in order to obtain accurate results it is often useful to restrict the analysis to low dimensions, and here we mainly discuss chains and two-leg ladders. Using advanced numerical techniques, it is possible for such models to determine the phase diagram with very high precision, including the effects of an applied magnetic field. An astonishing abundance of phases arises from the combination of frustration and applied field. In this talk I will focus on some of these phases that appear disordered, without any conventional local magnetic ordering, but where a hidden string order can be identified. Surprisingly, such string order was first suggested in the context of surface roughening.
Understanding the nature of quantum spin liquids (QSLs) is a holy grail of quantum condensed matter physics, with a broad range of implications for other research fields. Many materials, such as the kagome lattice Heisenberg antiferromagnet (KLHA) consisting of Cu$^{2+}$ ions with spin S=1/2 arranged in a corner-sharing triangle geometry, have been proposed as model systems for the QSL. However, they all suffer from various complications, such as a phase transition into a long-range ordered ground state (which should not take place in a true QSL). The few materials that do not undergo long-range order tend to have structural disorder. Recent research indicated that structural disorder often affects the properties of proximate QSL materials in a profound manner, making the interpretation of experimental findings non-trivial. Nuclear magnetic resonance (NMR) is a local probe, and in principle well suited for characterizing disorder effects in materials. In practice, the distribution of the NMR spin-lattice relaxation rate $1/T_1$ induced by disorder prevented proper data interpretation for decades. In this talk, we will explain how one can deduce the distribution function $P(1/T_1)$ of $1/T_1$ based on the inverse Laplace transform (ILT) of the nuclear magnetization recovery [1]. $P(1/T_1)$ provides rich information, such as the fraction of spin singlets in the KLHA [2].
[1] P.M. Singer et al., Phys. Rev. B 101, 174508 (2020).
[2] J. Wang, W. Yuan et al., Nature Physics 17, 1109-1113 (2021). DOI: 10.1038/s41567-021-01310-3
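To make the ILT idea concrete, here is a minimal sketch (our illustration, not the code of Ref. [1]) of recovering a distribution $P(1/T_1)$ from a multi-exponential magnetization recovery curve via Tikhonov-regularized non-negative least squares; an inversion-recovery kernel and the grid bounds are assumptions for the example:

```python
# Sketch: regularized inverse Laplace transform of a recovery curve.
# Kernel assumed: inversion recovery, M(t) = sum_j p_j * (1 - 2*exp(-R_j t)).
import numpy as np
from scipy.optimize import nnls

def ilt_p_of_r1(t, m, n_r=100, r_min=1e-2, r_max=1e4, lam=1e-1):
    """t: recovery times; m: normalized magnetization; lam: regularization."""
    r1 = np.logspace(np.log10(r_min), np.log10(r_max), n_r)   # 1/T1 grid
    kernel = 1.0 - 2.0 * np.exp(-np.outer(t, r1))             # recovery kernel
    # Tikhonov regularization: append lam*I rows so nnls also penalizes ||p||.
    a = np.vstack([kernel, lam * np.eye(n_r)])
    b = np.concatenate([m, np.zeros(n_r)])
    p, _ = nnls(a, b)
    return r1, p / p.sum()   # normalized distribution P(1/T1)
```

The regularization strength `lam` trades resolution against noise amplification and would in practice be chosen by cross-validation or an L-curve criterion.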
Approximately half of all cancer patients require radiation therapy at some point during the management of their disease. Radiation detectors are tools for the quantitative characterization of fields of ionizing radiation used for radiation therapy and are essential for their safe and effective use. The goal of dosimetry measurements is to quantify the amount of energy deposited in the body (dose). Therefore, a perfect detector would respond to radiation the same way as human tissue. However, most radiation detectors are not tissue equivalent, which poses a major challenge.
Organic electronics are attractive candidates for radiation detectors due to their highly customizable configurations, their ability to be made flexible, and the wide selection of materials (including tissue-equivalent ones) from which they can be fabricated. In this talk I will present our investigation of a novel detector, the stemless plastic scintillation detector (SPSD), which couples an organic photodiode to a plastic scintillator. Plastic scintillation detectors (PSDs) offer properties that are ideal for the measurement of small fields (high spatial resolution, tissue equivalence, real-time measurements, etc.). However, a limitation of PSDs is Cerenkov radiation (created in the optical fiber), which contaminates the signal and requires a correction. The SPSD eliminates the need for an optical fiber to carry the signal, which could allow it to retain the benefits of a PSD while removing this main drawback.
The development of this detector will be presented in four steps. First, an organic photodiode was operated as a direct radiation detector, exhibiting linearity with dose rate and output factors that agreed with commercial detectors. Second, a novel method for the correction of extraneous signal (Compton current) in the organic photodiode will be described. Third, an organic photodiode was coupled to an organic scintillator, creating a single-element SPSD. Several radiation dependencies of the SPSD were measured, including linearity with dose, instantaneous dose rate, energy dependence, and directional dependence; the measured dependencies were promising for employment as a radiation detector. Lastly, the culmination of this work was the fabrication of a 1D SPSD array. The array accurately measured small-field profiles and output factors.
Introduction: The eye’s optics change in those with type 1 diabetes mellitus. These known optical changes could impact both vision and the imaging of diabetes-related, sight-threatening changes to blood vessels. Here we investigate retinal image quality in those with diabetes and in healthy controls.
Methods: Using novel methods, retinal image quality was derived for 1200 healthy eyes, and for 46 participants with type 1 diabetes mellitus together with 47 age-matched controls. For each eye, a phase plate, generated from previously measured Zernike polynomials, was placed in an eye model in CODE V. Individual point spread functions (PSFs) and modulation transfer functions (MTFs) were generated. Image quality metrics were determined from the PSFs (diameter at 50% encircled energy (EE), Strehl ratio (SR), and FWHM depth resolution) and from the MTFs (area under the Hopkins ratio, AHR).
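For readers unfamiliar with these metrics, the following sketch (our illustration, not the study’s CODE V pipeline) shows how two of them could be computed from a sampled 2D PSF array:

```python
# Sketch: Strehl ratio and 50%-encircled-energy diameter from 2D PSFs.
import numpy as np

def strehl_ratio(psf, psf_diffraction_limited):
    """Peak of the aberrated PSF over the diffraction-limited peak,
    with both PSFs normalized to unit total energy."""
    p = psf / psf.sum()
    p0 = psf_diffraction_limited / psf_diffraction_limited.sum()
    return p.max() / p0.max()

def ee50_diameter(psf, pixel_size_um):
    """Diameter enclosing 50% of the PSF energy, centred on the centroid."""
    ny, nx = psf.shape
    y, x = np.indices((ny, nx))
    cy = (y * psf).sum() / psf.sum()
    cx = (x * psf).sum() / psf.sum()
    r = np.hypot(y - cy, x - cx).ravel()
    order = np.argsort(r)
    cum = np.cumsum(psf.ravel()[order]) / psf.sum()
    r50 = r[order][np.searchsorted(cum, 0.5)]
    return 2.0 * r50 * pixel_size_um
```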
Results: The expected decrease in image quality with age was seen in the larger healthy dataset but not in the age-matched healthy controls. Lens thickness increased significantly with age, with an additional effect of diabetes duration, in the age-matched controls and those with diabetes. In those with diabetes, for at least one metric, image quality worsened with an increase in lens thickness and with variables related to diabetes: lack of diabetes control (glycated hemoglobin, HbA1c) and diabetes duration. A semi-log fit to lens thickness and HbA1c gave the best multiple-variable fit of SR and AHR, global metrics of image quality, and good fits of depth and lateral resolution (EE and FWHM). Multiple-variable linear fits of the metrics of lateral and depth resolution (EE and FWHM) to HbA1c and diabetes duration gave the best fits.
Conclusions: Compared to healthy control eyes, image quality in the eyes of those with diabetes worsens with increasing lens thickness, diabetes duration and lack of diabetes control (HbA1c). Lens thickness increases with diabetes duration. Reduced image quality may explain poorer vision in those with diabetes and may affect the sensitivity of retinal screening for sight-threatening conditions. Extending this work could yield improved imaging instruments.
A quantitative, real-time, in vivo evaluation of the ionizing radiation delivered to patients during a radiotherapy procedure is critical to assure that patients receive treatments under rigorous quality control. Current dosimeters are not well suited for simple and direct measurements due to their atomic composition, which requires corrections to the dose distribution, and due to probe size limitations. We are developing a fibre optic probe dosimeter based on a radiochromic sensor for real-time in vivo dosimetry. The calibrated change in optical density of the radiochromic sensor is used to quantify the absorbed ionizing radiation. The radiation-sensitive material is composed of lithium 10,12-pentacosadiynoate (LiPCDA), which polymerizes upon exposure, resulting in an increased optical density. We have observed that monomers of LiPCDA have two distinct dose-sensitive crystalline forms, with polymerized optical absorption maxima at 635 nm (635-LiPCDA) and 674 nm (674-LiPCDA). We have characterized and compared the dose sensitivity and dose rate response of the two crystal morphologies, produced by adjusting the Li+ concentration, using a linear accelerator (LINAC). In dense tumours near sensitive organs, direct ionization by charged particles (hadron therapy) may alternatively be used as an effective treatment. We therefore investigate the dose response of both radiochromic LiPCDA crystal forms, comparing the dose response behaviour under X-ray vs. proton ionizing irradiation. This enables our dosimeter to expand its application to a broader variety of new radiotherapy methods. Radiochromic crystals were fabricated to produce both the 635 nm and 674 nm forms by adjusting the ratiometric concentration of Li+ to active material, and were exposed to 50-7000 cGy using either a clinical LINAC with a 6 MV X-ray beam (University Health Network) or a cyclotron producing a tunable 74 MeV proton beam (TRIUMF). Preliminary results from photon and proton irradiation show that 674-LiPCDA crystals are significantly less sensitive to dose but have a broader dynamic range. In conclusion, we demonstrate that radiochromic LiPCDA crystals can be preferentially grown to exhibit differing dose responses based on their crystal structure under photon irradiation, and that this dosimeter can be generalized to proton therapies (including FLASH).
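For reference, the readout quantity here is the standard optical density (our summary of the general principle; $f$ denotes a measured calibration curve rather than a specific functional form):

$$\mathrm{OD} = \log_{10}\!\left(\frac{I_0}{I}\right), \qquad D = f(\Delta\mathrm{OD}),$$

where $I_0$ and $I$ are the probe light intensities before and after transmission through the sensor, and $D$ is the absorbed dose inferred from the calibrated change $\Delta\mathrm{OD}$.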
Structural optical imaging within tissues is potentially useful for medical screening of various diseases, and particularly suitable for superficial and easily accessible oral cancers. Optical coherence tomography (OCT) is a non-invasive, low coherence imaging technique that allows micron-scale resolution for structural determinations within tissues. Intensity data from OCT images can be used to determine the optical properties of samples, such as the attenuation coefficient.
While a highly promising technique, the imaging depth of OCT is limited to only a few millimetres in most light-diffusing tissues. To overcome this limitation, we have examined the use of optical clearing agents with chemical penetration-enhancing techniques to increase the axial depth at which signals can be resolved. We examined the use of penetration enhancers on porcine tongue tissue, based on the time dependence of the clearing effect and the depth at which 50% of the signal intensity is lost through the tissue.
We have collected OCT data from a prospective study on lesions in recently excised human oral tissues biopsied for histopathology analysis, as well as from archived tissue samples embedded in paraffin. By modeling the OCT data using a form of the Beer-Lambert law, 2D attenuation coefficient maps were computed. We have studied the attenuation coefficients obtained from the intensity decay data of 250 excised human tissue samples from our prospective study, diagnosed through histopathological analysis as non-cancerous (e.g. hyperkeratosis) or squamous cell carcinoma. The calculated attenuation coefficients were then correlated with the histopathological diagnoses (from hyperplasia to cancer). Our results suggest it may be possible to use OCT as a fast and non-invasive oral cancer screening tool.
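A minimal sketch of the fitting step (our illustration under a single-scattering Beer-Lambert assumption, $I(z) = I_0 e^{-2\mu z}$, not the study’s exact model) for one A-scan:

```python
# Sketch: attenuation coefficient from OCT intensity decay via log-linear fit.
import numpy as np

def fit_attenuation(depth_mm, intensity):
    """Return mu (1/mm) and I0 from a least-squares fit of log intensity."""
    slope, intercept = np.polyfit(depth_mm, np.log(intensity), 1)
    mu = -slope / 2.0   # factor 2: round-trip path of backscattered light
    return mu, np.exp(intercept)

# A 2D attenuation map would follow by applying this fit in a sliding
# window along each A-scan of a B-scan.
```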
Magnetic Resonance Imaging (MRI) is a non-invasive medical imaging modality that provides excellent soft tissue contrast and resolution. MRI cell tracking effectively monitors cell migration in various immunotherapies, where cells are labelled with high-susceptibility iron oxide particles to create a negative contrast in the image. However, it is not possible to quantify the number of cells, as the number of particles within each cell can vary significantly. Quantitative analysis of the cell migration requires evaluating the number of particles within a cluster. Iron oxide microparticles are also being explored for hyperthermic treatments of cancer, where the thermal dose is set by the particle quantity.
The microparticle quantity correlates with the magnetic field distortion it produces. Severe field distortion leads to image artifacts in conventional MRI, making it very challenging to quantify the particles with such methods. These artifacts can be effectively removed by reducing the signal evolution time in pure phase encoding (PPE) MRI. The technique can accurately measure the magnetic field distortion around a particle cluster and quantify the particles. PPE methods were successfully employed to correlate iron microparticle cluster mass with the magnetic field distribution in vitro using a 1 T small-animal scanner. Excellent linearity and agreement with theory were observed.
The upcoming MOLLER (Measurement Of a Lepton Lepton Electroweak Reaction) experiment at Jefferson Lab will provide a precision measurement of the parity-violating asymmetry in polarized electron-electron scattering. This should yield the most precise measurement of the weak mixing angle at low energy, and would be sensitive to new physics contributions in the interference between the neutral current and electromagnetic amplitudes as small as 0.1% of the Fermi constant. This would provide discovery reach for new physics in flavor and CP-conserving processes at the multi-TeV scale.
Acceleration of particle beams by induced wakefields in plasmas is a possible route to pushing the energy frontier of experimental high energy physics by constructing compact machines with acceleration rates in excess of GV/m. The Advanced Wakefield Experiment (AWAKE), a plasma wakefield acceleration experiment driven by the 400 GeV proton beam from the CERN SPS synchrotron, is unique among plasma wakefield acceleration projects in its selection of protons as the driving particles. The efficiency and reach of energy transfer from 400 GeV protons to electrons confer a clear advantage over electron- or laser-driven alternatives. The AWAKE collaboration, including a team from Canada, was formed in 2013 as a proof-of-principle experiment and has already produced a wealth of results. Run 1 of the experiment yielded the discovery of self-modulation of the SPS proton bunch in plasmas and the acceleration of externally injected electrons to the GeV energy level. Starting in 2021, the experiment has proceeded with a decade-long Run 2 program. The goals of Run 2 are the stable acceleration of a quality electron beam with high gradients over long distances and proof of the scalability of the design principles to very high beam energies. This will allow the AWAKE collaboration to contemplate first applications of the experimental scheme to high-energy physics.
The Proton Improvement Plan II (PIP-II) project is an essential upgrade to Fermilab’s particle accelerator complex, enabling the world’s most intense neutrino beam for the international Long Baseline Neutrino Facility (LBNF)/Deep Underground Neutrino Experiment (DUNE) and a broad particle physics program for many decades to come. PIP-II will deliver 1.2 MW of proton beam power from the Main Injector, upgradeable to multi-MW capability, and will provide capabilities for continuous wave (CW) beam operation and multi-user delivery.
The central element of PIP-II is an 800 MeV linac, which comprises a room-temperature front end, up to 2.1 MeV, followed by an SRF section. The accelerator chain up to ~20 MeV has been constructed and commissioned at the PIP-II Injector Test facility. The SRF accelerator consists of five different types of cavities/cryomodules, including half-wave resonators, single-spoke and elliptical resonators, operating at state-of-the-art parameters.
PIP-II is the first U.S. accelerator project to be constructed with significant contributions from international partners, including India, Italy, France, the United Kingdom and Poland. DOE’s Argonne, Berkeley and Jefferson laboratories are also contributing key technologies. The project received CD-1 approval in July 2018, CD-2 in Dec 2020 and CD-3 start of construction in April 2022. The project will be completed in 2028.
The Kamiokande, Super-Kamiokande (Super-K) and SNO+ experiments have established large-scale water Cherenkov detectors as powerful tools for the study of neutrinos and the search for new physics processes. Operating since 2009, the T2K experiment has used an accelerator source of neutrinos to study neutrino oscillations with the Super-K detector. In 2020, the successor to T2K and Super-K, Hyper-Kamiokande (Hyper-K), was approved in Japan. Hyper-K will have a sensitive mass 8 times larger than Super-K, and receive a neutrino beam with 2.5 times the intensity of T2K. The unprecedented statistics collected at Hyper-K will allow for precision measurements of neutrino oscillations, including the most sensitive search for CP violation. Hyper-K will also have significantly improved sensitivity for nucleon decay searches, burst and diffuse supernova neutrino detection and dark matter searches, amongst a broad physics program. In this talk, I will review the status of the T2K experiment and the status and plans for the construction of the Hyper-K detector and experiment. I will highlight the Canadian contributions to the Hyper-K project, including contributions to the Intermediate Water Cherenkov Detector, photosensors, calibration systems, and data analysis techniques using machine learning.
Why an asymmetry exists between matter and antimatter is one of the great mysteries in understanding the evolution of the universe. The discovery of neutrino oscillations by the SNO and Super-Kamiokande experiments opened up an avenue to explore the differences between neutrinos and antineutrinos, potentially shedding light on this mystery. Teasing out this small difference and understanding complicated neutrino interactions will require the unprecedented levels of precision provided by a succeeding, next-generation water Cherenkov experiment called Hyper-Kamiokande. To help achieve this, I will present the R&D and implementation of a cross-disciplinary approach known as photogrammetry. This talk focuses on the hardware design for Hyper-Kamiokande and the analysis pipeline currently being applied to the Super-Kamiokande detector. Through this, we are able to take images of the detectors and aim to pinpoint the positions of their features to the sub-cm level, effectively reducing the systematic uncertainties in the modeling of the detectors due to geometrical distortions.
Hyperbolic lattices are a new form of synthetic quantum matter in which particles effectively hop on a discrete tiling of two-dimensional hyperbolic space, a non-Euclidean space of negative curvature. Hyperbolic tilings were studied by the British-Canadian geometer H.S.M. Coxeter and popularized through art by M.C. Escher. Recent experiments in circuit quantum electrodynamics and electric circuit networks have demonstrated the coherent propagation of wave-like excitations on hyperbolic lattices. While the familiar band theory of solids adequately describes wave propagation through periodic media in Euclidean space, it is not clear how concepts like crystal momentum and Bloch waves can be extended to hyperbolic space. In this talk, I will discuss a generalization of Bloch band theory for hyperbolic lattices and stress the intriguing connections it establishes between condensed matter physics, high-energy physics, number theory, and algebraic geometry.
Wolfgang Pauli called solid-state physics "the physics of dirt effects", and this name might appear well-deserved at first sight since transport properties are more often than not set by extrinsic properties, like impurities. In this talk, I will present solid-state systems in which electrons behave like a hydrodynamic fluid, and for which transport properties are instead set by intrinsic properties, like the viscosity. This new regime of transport opens the way for a “viscous electronics”, and provides a new angle to study how quantum mechanics can constrain and/or enrich hydrodynamics.
As living organisms age, they stochastically move through a high-dimensional "health space". Developing simple and predictive models that capture aging dynamics is challenging because the organism is not homogeneous: there are many thousands of distinct physiological attributes that could be measured. We pursue three strategies to simplify aging while embracing its complexity. First, we develop simple one-dimensional summary measures of health; these predict mortality surprisingly well, but not health trajectories. Second, we develop minimal models of networked health that still capture the heterogeneity of the data. These "generic network models" allow us to model how the heterogeneity of health affects aging, as well as the effects of disease. Finally, we use machine learning to identify natural coordinates for describing aging, and to identify simple interactions between health attributes.
Over many years of working in quantum optics research, I have always had an interest in the applications of quantum technologies, and in particular in their transition to the commercial domain. I will discuss my endeavours in two businesses that I co-founded, Universal Quantum Devices and QEYnet, and I will try to illustrate how keeping an open mind in the research laboratory can help identify business opportunities.
A short presentation of the proposed changes to the professional physicist designation. Physicists are abundantly qualified for many jobs that are reserved by acts of parliament, which prevent them from attaining these high-paying positions; in particular, physicists are working as engineers in increasing numbers. I will outline how the proposed changes will provide an increased level of credibility to the P.Phys., and a plan to provide a pathway to the P.Phys. for students, beginning after the completion of the second year of an approved program. This presentation/discussion is of particular interest to students and early-career physicists, as well as established physicists considering acquiring a P.Phys. or a career change.
The Electron-Ion Collider (EIC) is a cutting-edge accelerator experiment proposed to study the origin of mass and the nature of the "glue" that binds the building blocks of the visible matter in the universe. The experiment will be realized at Brookhaven National Laboratory approximately 10 years from now, with the detector design and R&D currently ongoing. Notably, the EIC can be one of the first facilities to leverage Artificial Intelligence (AI) during the design phase. Optimizing the design of its tracker is of crucial importance for the EIC Comprehensive Chromodynamics Experiment (ECCE), which proposed a detector design based on a 1.5 T solenoid. The optimization is an essential part of the R&D process, and ECCE includes in its structure a working group dedicated to AI-based applications for the EIC detector. In this talk I describe the implementation of an AI-assisted detector design using full simulations based on Geant4. Our approach deals with a complex optimization in a multidimensional design space, driven by multiple objectives that encode the detector performance while satisfying several mechanical constraints.
We describe our strategy for optimisation, discuss the exploration of different AI-based approaches, and illustrate the set of tools developed to interactively "navigate" the obtained Pareto front. Finally, we show the results of the AI-assisted design of the tracking system in ECCE.
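To make the central bookkeeping of such a multi-objective optimization concrete, here is a minimal sketch (our illustration, not the ECCE framework; the example objectives are hypothetical, and constraint handling and the evolutionary loop are omitted) of extracting the Pareto front from a set of candidate designs:

```python
# Sketch: Pareto-front extraction for candidate detector designs.
# Each design has a vector of objectives to be minimized
# (e.g. momentum resolution, material budget).
import numpy as np

def pareto_front(objectives):
    """objectives: (n_designs, n_objectives) array, all minimized.
    Returns indices of non-dominated designs."""
    n = len(objectives)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(objectives[j] <= objectives[i]) and
            np.any(objectives[j] < objectives[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep
```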
One of the most puzzling aspects of the standard model is that the overwhelming majority of the mass of hadronic systems arises from massless and nearly massless objects. How this occurs is poorly understood and remains a major open question of the standard model. Developing our understanding of hadronic mass generation mechanisms is one of the three key physics questions for the upcoming Electron-Ion Collider (EIC). From the little that we do understand, we know that mass generation is intricately connected to the internal structure of hadronic systems. Somewhat counterintuitively, it is some of the lightest hadronic objects, the charged pion and kaon, that may be able to fill in the missing piece of the puzzle. Advancing our understanding of the internal structure of these objects is crucial if we are to begin to untangle how this structure emerges from the dynamical nature of the interactions that govern it.
One potential window into the internal structure of the charged pion and kaon is their elastic electromagnetic form factors, $F_{\pi}(Q^{2})$ and $F_{K}(Q^{2})$. Electromagnetic form factors are fundamental quantities that describe the spatial distribution of partons within a hadron. Determining these form factors, as well as how they vary with $Q^{2}$, is an important step on the road to understanding the internal structure of these objects. The EIC opens up the possibility of studying $F_{\pi}(Q^{2})$ and $F_{K}(Q^{2})$ to very high $Q^{2}$. The $Q^{2}$ reach of these measurements extends deep into unexplored territory; these cutting-edge measurements could help disentangle the mass generation puzzle of QCD. In this talk, I will outline the opportunities and challenges of pion and kaon form factor measurements at the EIC. I will also present the latest projections for these measurements, which are based upon recent detector simulations.
The Electron-Ion Collider (EIC) is a future facility, which will be uniquely poised to address questions related to the origin of mass and spin of the nucleon and the emergent properties of dense systems of gluons.
The EIC Comprehensive Chromodynamics Experiment (ECCE) will build the detector for the EIC based on a 1.5 T solenoidal magnet. During its proposal, ECCE leveraged Artificial Intelligence (AI) to design the tracking detector subsystem, making ECCE one of the first large-scale experiments to use AI during its design phase.
In this talk, the ECCE tracking system will be presented, as well as the AI-assisted optimization process employed to optimize the dimensions and locations of the inner tracker elements. Details related to Multi-Objective Optimization (MOO) using an AI-based evolutionary algorithm will be shown.
Finally, we present the results of the various optimization phases for the ECCE tracker.
Geometrically frustrated magnets form a broad class of materials in which competing interactions lead to the partial or complete suppression of classical magnetic order. While short-range magnetic correlations exist in the absence of long-range order, these systems remain disordered and fluctuating, exploring a largely degenerate and complex energy landscape. Such a state is commonly called a spin liquid, as the behavior of the magnetic moments is analogous to that of particles in a liquid. Classical spin liquids are driven by thermal fluctuations and exhibit slow dynamics at low temperatures. In contrast, quantum fluctuations can generate long-range entanglement of the magnetic moments, a state of matter called a quantum spin liquid. The fundamentally quantum nature of this state attracts great interest, in particular because it is a playground for studying emergent gauge theories with fractionalized excitations and because of its potential relevance to quantum computing. Proving the existence of this quantum state experimentally, however, remains challenging.
Rare-earth-based pyrochlore magnets form a large family of geometrically frustrated magnets that exhibit effects analogous to the proton disorder in water ice. I will discuss our experimental work on two quantum spin liquid candidates on this pyrochlore lattice. Using a combination of experimental techniques, in particular neutron scattering, we have shown that Pr$_2$Hf$_2$O$_7$ and Ce$_2$Sn$_2$O$_7$ exhibit many key features expected of a quantum spin liquid state. Interestingly, the two systems are fundamentally different at the microscopic level. In Pr$_2$Hf$_2$O$_7$, while the magnetic dipoles are correlated, the quantum fluctuations are driven by interactions between the electric quadrupoles. In contrast, in Ce$_2$Sn$_2$O$_7$ it is the interactions between the magnetic dipoles that generate quantum fluctuations, while the magnetic octupoles are entangled. These two examples illustrate that multipolar degrees of freedom provide novel routes to quantum fluctuations and quantum spin liquids.
In conventional metals like aluminum or copper, the behaviour of electrons is well described by traditional methods of solid state physics. However, these methods cannot be used to study strongly correlated materials, in which the interactions between electrons are significant; it instead becomes important to take into account large classical and quantum fluctuations. This is the case in the electron-doped cuprates, in which electron-electron interactions lead to significant antiferromagnetic spin fluctuations. In this talk, I will explain the role spin fluctuations play in the physical properties of the electron-doped cuprates. I will then discuss our recent work on the interplay of spin fluctuations and disorder in a theoretical model of electrons on a two-dimensional lattice where the temperature, the interaction strength and the number of electrons can be varied. More specifically, we apply this model to the study of the electron-doped cuprates and show that disorder suppresses spin fluctuations.
Many two-dimensional physical systems, ranging from atomic-molecular condensates to low-dimensional superconductors and liquid crystal films, are described by coupled XY models. The interplay of topology and competing interactions in these XY systems drives new kinds of emergent behavior relevant in both quantum and classical settings. Such coupled U(1) systems further introduce rich physics, bringing topology into contact with fractionalization and deconfinement. In this talk, I will focus on the realization of these systems in a liquid crystal setting, where the theoretical description of 2D crystallization involves the binding of topological defects, accompanied by smooth thermodynamic transitions. However, isotropic thin films of the liquid crystal 54COOBC are found to solidify via an intermediate “mystery” phase associated with a sharp specific-heat anomaly.
I will show that this hidden-order phase can be understood as the relative ordering of the nematic and hexatic molecular degrees of freedom. This insight comes from the finite-temperature phase diagram of a minimalist hexatic-nematic XY model obtained through extensive large-scale Monte-Carlo simulations. A small region of composite three-state Potts order above the vortex binding transition is identified; this phase is characterized by relative hexatic-nematic ordering though both variables are disordered. I will show that the Potts order results from a confinement of fractional vortices into extended nematic defects and discuss the broader implications of fractional vortices and composite ordering in the wider class of coupled XY condensates, relevant to both soft and hard condensed matter fields.
Ultrafast science is a branch of photonics with far-reaching applications inside and outside the realm of physics. Ultrashort laser pulses on the order of femtoseconds (1 fs = $10^{-15}$ s) are widely used for ultrafast science. Many lasers can produce pulses on the order of 100 fs, with state-of-the-art, high-end lasers capable of producing pulses around 30 fs. However, many experiments require pulses around 10 fs or shorter. Few-femtosecond pulses are typically generated using spectral broadening via self-phase modulation, followed by dispersion compensation. The most common spectral broadening technique exploits the nonlinear interaction of intense pulses focused into gas-filled hollow-core fibres. More recently, multiple crystal plates have been used to broaden the spectrum while using a self-focusing relay to maintain the beam quality. We have researched substituting solids and gases with liquid alcohols. Using a series of 1 cm cuvettes filled with 1-decanol, we have compressed a pulse from 83.6 fs down to 31.3 fs, with a spectrum capable of supporting 25 fs pulses, all whilst avoiding filamentation. Liquids have proven useful due to the ease with which they can be set up and achieve broad spectra, as well as their ability to remain intact when exposed to high intensities. In contrast with gases, alcohols provide an inexpensive material for spectral broadening, offering a compact and easy-to-use setup unhindered by the length of hollow-core fibres. We have shown that alcohols provide a compact, inexpensive alternative to solids and gases for pulse compression that is not susceptible to permanent damage.
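The statement that a spectrum "supports" a given pulse duration refers to its transform limit. A minimal sketch of that calculation (our illustration, assuming a flat spectral phase and an evenly spaced frequency grid, not the authors' analysis code):

```python
# Sketch: transform-limited pulse duration from a measured spectrum.
# Assume flat spectral phase, inverse-FFT to the time domain,
# and read off the intensity FWHM.
import numpy as np

def transform_limit_fs(freq_thz, spectral_intensity):
    """freq_thz: evenly spaced frequency grid (THz); returns FWHM in fs."""
    field = np.sqrt(spectral_intensity)         # flat phase assumed
    e_t = np.fft.fftshift(np.fft.ifft(field))   # center pulse in window
    i_t = np.abs(e_t) ** 2
    # Time step: 1/(N*df) in ps for df in THz, converted to fs.
    dt_fs = 1e3 / (len(freq_thz) * (freq_thz[1] - freq_thz[0]))
    above = np.where(i_t >= 0.5 * i_t.max())[0]
    return (above[-1] - above[0]) * dt_fs       # approximate FWHM
```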
High-Harmonic Sidebands for Time-Resolved Spectroscopy
In the semi-classical picture of high-harmonic generation (HHG), a strong ($10^{18}$-$10^{20}$ W/m$^2$) laser field is applied to an atom, repeatedly accelerating one of its valence electrons into the continuum and back to the parent atom. Upon recollision with the parent atom, the electron emits photons at odd-integer harmonics of the driving field [1]. When a second, weaker field is applied to the system, the trajectory of the electron is perturbed, causing sidebands to appear in the harmonic spectrum that are characteristic of the perturbing field energy [2],[3]. Consequently, HHG can be used as a method of upconverting mid-infrared light for sensitive spectroscopy, as high harmonics of infrared (IR) light lie in the visible regime and beyond.
We perform numerical simulations of HHG in order to inform future experimental work that will use HHG cross-correlation to measure time-resolved fields in the mid-IR. HHG cross-correlation involves the mixing of the strong femtosecond driving field with a weak mid-IR field in a nonlinear material, leading to the sidebands that appear in the harmonic spectrum. The position of the sidebands in the spectrum indicates the frequency of the optical free-induction decay [2].
This is a particularly pragmatic approach to IR spectroscopy because it obviates the need for expensive detectors that must be cooled. Furthermore, the emitted bursts of radiation are on the order of attoseconds ($10^{-18}$ s) in duration, opening the possibility of ultrafast spectroscopy extending into the mid- and far-IR.
References
[1] P. B. Corkum, "Plasma perspective on strong field multiphoton ionization," Physical Review Letters 71, 1994–1997 (1993).
[2] T. J. Hammond et al., "Femtosecond time-domain observation of atmospheric absorption in the near-infrared spectrum," Physical Review A (2016).
[3] G. Vampa et al., "Linking high harmonics from gases and solids," Nature 522, 462–464 (2015).
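As an aside, the semi-classical trajectory picture of [1] can be reproduced numerically in a few lines: an electron born at rest at field phase $\omega t_0$ in $E(t) = E_0\cos(\omega t)$ is propagated classically until it returns to the ion, and scanning the birth phase recovers the well-known maximum return energy of $\sim 3.17\,U_p$, where $U_p = E_0^2/4\omega^2$ is the ponderomotive energy. This Python sketch uses arbitrary units and is purely illustrative of the trajectory analysis, not the simulation code used in this work.

    import numpy as np

    # "Simple man" model: electron born at rest at time t0 in E(t) = E0 cos(w t),
    # propagated classically until it returns to the parent ion.
    E0, w = 1.0, 1.0                      # arbitrary units
    Up = E0**2 / (4 * w**2)               # ponderomotive energy

    def x(t, t0):                         # position with x(t0) = 0, v(t0) = 0
        return (E0 / w**2) * (np.cos(w * t) - np.cos(w * t0)) \
             + (E0 / w) * np.sin(w * t0) * (t - t0)

    def v(t, t0):                         # velocity, from a(t) = -E0 cos(w t)
        return -(E0 / w) * (np.sin(w * t) - np.sin(w * t0))

    best = 0.0
    for t0 in np.linspace(0.0, np.pi / (2 * w), 1000):  # birth phases after the peak
        ts = np.linspace(t0 + 1e-6, t0 + 2 * np.pi / w, 4000)
        xs = x(ts, t0)
        cross = np.nonzero(np.sign(xs[:-1]) != np.sign(xs[1:]))[0]
        if cross.size:                    # first return to the ion (x = 0)
            best = max(best, 0.5 * v(ts[cross[0]], t0)**2)
    print(f"maximum return energy = {best / Up:.2f} Up (expected ~3.17 Up)")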
We consider a dilute gas of bosons in a slowly rotating toroidal trap, focusing on the two-mode regime consisting of a non-rotating mode and a rotating mode corresponding to a single vortex. With the help of the single-particle density matrix we track the presence of Bose-condensates in this system which can occur in one mode, both modes or superpositions of the two. We also compare an enhanced mean-field theory which uses the truncated Wigner approximation comprising multiple classical trajectories with a fully quantum many-body description. Following a sudden quench, we find quasi-periodic dynamics where the condensates oscillate between the modes and identify cusp-shaped structures in the wavefunction as quantum versions of elementary catastrophes.
Quantum Key Distribution (QKD) has reached a level of maturity sufficient for commercial implementation. However, to date, transmission distances remain limited by absorption losses. Satellite links have been proposed as a solution to scale up the distances of quantum communication networks. By using orbiting satellites as nodes between ground stations, the signal-to-noise ratio is improved, as most of the photons' propagation path is in empty space. Yet such satellite-based quantum links currently suffer from low photon count rates, due to atmospheric effects and the limitations of current entangled photon sources. Furthermore, while a practical quantum network necessitates connectability to multiple users, most QKD implementations so far are limited to two communicating parties. To address this issue, we investigate the use of a wavelength-multiplexing entangled photon source. In this work, we simulate the performance of such multi-channel operation and show that by using multiple wavelength channels one can improve the secure key rate linearly with the number of channels, by up to several orders of magnitude, whilst maintaining the same quantum bit error rate. Taking advantage of the inherent hyper-correlations produced by the entangled photon source, one can deterministically separate wavelength-correlated photon pairs into different detection channels. Hence, every pair of frequency channels can be considered an independent communication link. In doing so, we can not only circumvent the timing limitation of the photon detectors, which leads to an intrinsically increased key rate while maintaining the same signal-to-noise ratio, but also enable the interconnectability of a single satellite link with multiple user end-points on the ground. These results indicate the possibility of achieving a very high-brightness photon pair generation rate, suitable for satellite-based QKD, without saturating the detectors. Thus, this method offers the potential to scale up quantum communication distances and networkability.
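As a rough illustration of the scaling argument (not the simulation reported in this work), consider a detector that saturates at some maximum count rate: splitting a bright source across N wavelength-correlated channel pairs keeps each channel below saturation, so the aggregate rate grows roughly linearly with N until the source itself becomes the limit. All numbers in this Python sketch are invented placeholders.

    # Toy scaling estimate for wavelength-multiplexed entanglement-based QKD.
    # All numbers are invented placeholders, not mission parameters.
    pair_rate_total = 5e8     # photon pairs/s produced by the source
    link_efficiency = 1e-3    # combined transmission x detection efficiency
    detector_max = 2e5        # counts/s before dead-time saturation, per channel
    sift_factor = 0.5         # basis-sifting factor

    for n_channels in (1, 4, 16, 64):
        per_channel = pair_rate_total * link_efficiency / n_channels
        per_channel = min(per_channel, detector_max)   # dead-time cap
        sifted = n_channels * per_channel * sift_factor
        print(f"{n_channels:3d} channels -> sifted rate ~ {sifted:.2e} bit/s")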
Integrating single photon sources into existing telecommunication (telecom) networks is an ongoing challenge due to the wavelength mismatch between the photon sources and the telecom optical fibers. A solution is to develop frequency conversion devices that convert the optical frequency of the photon sources to the appropriate telecom frequencies. However, these solutions are difficult to implement and can require a large overhead in equipment and expertise to operate. Here, we demonstrate the direct distribution of near-infrared time-bin entangled photons along a telecom fiber for the purpose of quantum key distribution. The near-infrared entangled photon pairs of 785 nm and 832 nm wavelengths are generated by an entangled photon source. The 832 nm photons are sent to a local polarization analyzer. The 785 nm photons are coupled into a standard telecom single-mode fiber with lengths up to 3.1 km and measured using a field-widened unbalanced Mach–Zehnder interferometer. The results indicate that, despite the multi-mode nature of the telecom fibers for the 785 nm photons and the associated modal dispersion that the different modes experience when propagating through the fiber, strong quantum correlations can be recovered in both the zeroth-order mode and the higher-order modes. The direct use of near-infrared quantum sources with the already existing telecommunication infrastructure reduces the need for frequency conversion devices and is thus important for the development of the quantum internet.
As the demand for secure communication has grown in recent years, so has the need for robust implementations of quantum key distribution (QKD). Polarization encoding schemes suffer from phase drifts when encoded pulses pass through optical fibres, making the use of active phase compensation essential. These drifts arise from ambient temperature changes and mechanical stresses on the fibre, which are unavoidable, especially in applications where part of the source is exposed to outdoor temperatures or is connected to moving platforms. We propose a method of active phase monitoring that can be used with a phase compensation system for the quantum optical ground station, which aims to perform free-space polarization-based QKD with quantum satellites as part of the Quantum Encryption and Science Satellite (QEYSSat) mission. Rather than performing a complete tomography of the polarization states, we propose monitoring the polarization-encoded pulses using only one basis. Active phase corrections are applied using a PID control loop that takes the measurement results of the characterization system as input. Not only does this approach ensure accurate transmission of the polarization-encoded qubits, it also simplifies the requirements on the optical equipment, reducing the net cost while maintaining high performance.
The King plot technique widely used for isotopes of heavy atoms is extended to light heliumlike ions by taking second differences to eliminate large mass polarization corrections [1]. The effect of a hypothetical electron-neutron interaction propagated by light bosons is included, and a comprehensive survey of all second-King-plot transitions for all states of Li$^+$ up to $n = 10$ and $L = 7$ is presented in order to find the ones most sensitive to new physics due to light bosons. The sensitivity is found to be comparable to that for the recently studied case of Yb$^+$.
[1] G. W. F. Drake, Harvir S. Dhindsa, and Victor J. Marton, Phys. Rev. A 104, L060801 (2021).
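For orientation, the first-difference King relation that the second-difference construction generalizes can be written, in generic notation, as
\[
\tilde{\nu}_a^{AA'} \equiv \mu_{AA'}\,\delta\nu_a^{AA'},\qquad
\mu_{AA'} = \left(\frac{1}{M_A}-\frac{1}{M_{A'}}\right)^{-1},\qquad
\tilde{\nu}_a^{AA'} = K_{ab} + F_{ab}\,\tilde{\nu}_b^{AA'},
\]
so that a plot of $\tilde{\nu}_a$ against $\tilde{\nu}_b$ across isotope pairs is linear, and nonlinearities can signal new physics. Taking a further (second) difference cancels the leading mass polarization term common to both transitions, which is what extends the method to light two-electron ions; the precise second-difference form used here should be taken from [1].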
The optimization of quantum systems using Quantum Optimal Control Theory (QOCT) is very important in many fields, such as quantum information, photocatalysis, and atomic and molecular physics. The goal of QOCT is to optimize an external field shape such that it drives a quantum system to a target state. Applied to the real world, QOCT can be used to develop quantum gates in quantum computing or to achieve a particular state in atomic and molecular physics. Many numerical methods exist to determine the optimal external field shape when controlling quantum systems, one of them being Krotov's algorithm. Krotov's algorithm minimizes the optimization functional, which consists of the figure of merit and any constraints. In this research, I apply Krotov's algorithm to determine the optimal external field shape to achieve quantum control in a chain of trapped ions employing the Sørensen–Mølmer scheme. I numerically implement Krotov's algorithm and compare its performance to other methods.
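As a minimal illustration of the field-update step at the heart of Krotov-type methods, the following Python sketch optimizes a control field for population transfer in a two-level system, $H = H_0 + \varepsilon(t)\,\sigma_x$. It uses a simplified, gradient-style variant of the Krotov update, and the system, parameters, and target are toy placeholders, not the trapped-ion Sørensen–Mølmer problem studied in this work.

    import numpy as np
    from scipy.linalg import expm

    # Toy Krotov-style control: drive |0> to |1> in a two-level system.
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    H0 = 0.5 * sz                        # drift Hamiltonian (hbar = 1)
    psi0 = np.array([1, 0], dtype=complex)
    target = np.array([0, 1], dtype=complex)

    T, nt = 10.0, 500
    dt = T / nt
    lam = 5.0                            # step-size / penalty parameter
    eps = 0.1 * np.ones(nt)              # initial guess for the control field

    def propagate(field):
        """Forward-propagate psi0, storing the state at every time step."""
        states = [psi0]
        for k in range(nt):
            U = expm(-1j * (H0 + field[k] * sx) * dt)
            states.append(U @ states[-1])
        return states

    for it in range(100):
        psi = propagate(eps)
        # Backward-propagate the costate from the projection onto the target
        chi = [None] * (nt + 1)
        chi[nt] = target * np.vdot(target, psi[nt])
        for k in range(nt, 0, -1):
            U = expm(-1j * (H0 + eps[k - 1] * sx) * dt)
            chi[k - 1] = U.conj().T @ chi[k]
        # Gradient-style field update: d eps(t) ~ Im <chi(t)| dH/d eps |psi(t)>
        for k in range(nt):
            eps[k] += (1.0 / lam) * np.imag(np.vdot(chi[k], sx @ psi[k]))

    fidelity = abs(np.vdot(target, propagate(eps)[nt]))**2
    print(f"final fidelity: {fidelity:.4f}")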
Ultracold neutral atoms are an excellent test-bed for novel quantum control techniques due to their stability, and efficient coupling to fields in the radio, microwave, and optical regimes. Various control protocols which could be used in quantum information processing (QIP) may first be investigated in ultracold atoms to prove their efficacy before being generalized to other more established systems. In this spirit we present two different novel control protocols. First we demonstrate holonomic single-qubit gates, which are conventionally performed via the adiabatic evolution of a degenerate manifold of states through a path in parameter space; this yields a non-Abelian geometric phase which couples the states in a way that depends only on the path taken. In this study, we eliminate the explicit need for degeneracy through Floquet engineering, where the atomic spin Hamiltonian is periodically modulated in time. We characterize the non-Abelian character of the geometric phase through a gauge-invariant parameter, the Wilson loop. Next, we demonstrate a decomposition of SU(3) including a resonant dual-tone operator which synthesizes coupling between two disconnected qutrit levels. For many conventional systems where the third coupling is not possible this technique provides a potential workaround. A decomposition of SU(3) using this operator is tested against conventional methods by performing a Walsh-Hadamard gate and performing maximum likelihood tomography on the resulting states. In both protocols we demonstrate novel methods for precision quantum control essential in advancing QIP techniques which can be readily adapted to trapped ions, superconducting qubits, and other quantum computing platforms.
The quasi-2D Mott insulator Ca$_2$RuO$_4$ has a metal-to-insulator transition (MIT) controllable through temperature, pressure, epitaxial strain, and, curiously, electrical current. However, the mechanism by which the current induces the MIT has yet to be understood. We use angle-resolved photoemission spectroscopy (ARPES) with nanometer-scale resolution to compare the electronic band structures in equilibrium and out of equilibrium, i.e., without and with applied current, respectively. Preliminary results show a clear closure of the band gap and a more equal distribution of photoemission intensities.
Spin ice materials have many interesting properties, such as geometrical frustration, non-zero magnetic moment, and magnetic monopole excitations, making spin ice, and especially quantum spin ice, an active research area. At low enough temperatures, quantum fluctuations in spin ice materials can lead to a liquid-like state known as a quantum spin liquid. In this paper, we use group theory to block-diagonalize the Hamiltonian of a 16-site pyrochlore system and find the spin ice states. We start with the pure spin ice Hamiltonian and slowly tune it toward quantum spin ice by adding exchange constants as a perturbation. Finally, we plot the spin ice spectrum for different exchange constants at finite temperature.
It is increasingly urgent to protect the environment from various kinds of pollutants, in particular industrial pollutants. Wastewater treatment is one example of the efforts that are necessary for mankind to enjoy a sustainable future. Recently, the use of piezoelectric nanomaterials as catalysts for water purification has been reported. It has been demonstrated that the piezoelectric properties of nanomaterials in solution can be used for the degradation of organic pollutants when activated by ultrasonic waves. Under ultrasonic excitation, however, other physical phenomena also contribute to the degradation of organic pollutants. Tribocatalytic activity arises from friction between particles, which generates transient charges that cause the degradation of organic compounds. Moreover, at higher ultrasonic energies, cavitation bubbles can occur, whose collapse creates localized pockets of high temperature in excess of 4000 K and high pressure in excess of 1000 atm that decompose organic pollutants, a phenomenon called sonolysis. A general literature review shows that not enough attention has so far been devoted to discriminating between these various effects, in particular when studying pollutant degradation using piezocatalyst materials such as BaTiO$_3$ nanoparticles. In this study, we quantified the piezo-, tribo- and/or sonocatalytic activities of BaTiO$_3$ nanoparticles, comparing their catalytic activities to that of non-piezoelectric TiO$_2$ nanoparticles, which happen to have a similar surface termination. This comparison allows us to derive the contribution of the piezoelectric effect to the catalytic degradation reactions. BaTiO$_3$ and TiO$_2$ crystalline nanoparticles were characterized using X-ray diffraction and Raman spectroscopy. The degradation of methyl orange in water was then measured using either BaTiO$_3$ or TiO$_2$ as the catalyst. Comparing the results for BaTiO$_3$ and TiO$_2$ allows us to experimentally quantify the contribution of the piezoelectric effect to the catalytic activity of BaTiO$_3$ nanoparticles.
In recent years, physicists have discovered that the topological electronic structure of materials can have dramatic consequences for their properties. In a new variety of topological materials called Weyl semimetals, electrons behave as massless relativistic particles. These materials are in some sense a three-dimensional equivalent of graphene. Many interesting magneto-electric effects that could be applicable to quantum technologies have been predicted in Weyl semimetals and are still studied today. Theoretical work and preliminary experiments have demonstrated that it is possible to probe the topological nature of these materials by measuring the speed at which acoustic waves travel through the material. This technique probes the bulk of the sample and avoids certain errors associated with electrical conductivity measurements. In this project, we explore experimentally how the application of a magnetic field modifies the speed and absorption of sound in the Weyl semimetal NbP. We will show how applied magnetic fields have an anisotropic effect on the sound velocity and compare with previous results on the isostructural material TaAs. The sound velocity measurements also exhibit quantum oscillations that allow us to characterize the Fermi surface of the material. We have also carried out transport measurements on the same NbP material as a complementary measurement of quantum oscillations.
Recent advances in experiment and theory suggest that superfluid $^3$He under planar confinement may form a pair-density wave (PDW) whereby superfluid and crystalline orders coexist. While a natural candidate for this phase is a unidirectional stripe phase predicted by Vorontsov and Sauls in 2007, recent nuclear magnetic resonance measurements of the superfluid order parameter rather suggest a two-dimensional PDW with noncollinear wavevectors, of possibly square or hexagonal symmetry. In this work, we present a general mechanism by which a PDW with the symmetry of a triangular lattice can be stabilized, based on a superfluid generalization of Landau's theory of the liquid-solid transition. A soft-mode instability at finite wavevector within the translationally invariant planar-distorted B phase triggers a transition from uniform superfluid to PDW that is first order due to a cubic term generally present in the PDW free-energy functional. This cubic term also lifts the degeneracy of possible PDW states in favor of those for which wavevectors add to zero in triangles, which in two dimensions uniquely selects the triangular lattice.
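Schematically, the Landau structure invoked here can be written as an expansion in density-wave amplitudes $\psi_{\mathbf{Q}}$ at the soft wavevectors,
\[
F = \sum_{\mathbf{Q}} r\,|\psi_{\mathbf{Q}}|^2
+ w\!\!\sum_{\mathbf{Q}_1+\mathbf{Q}_2+\mathbf{Q}_3=0}\!\! \psi_{\mathbf{Q}_1}\psi_{\mathbf{Q}_2}\psi_{\mathbf{Q}_3} + \text{c.c.}
+ \mathcal{O}(\psi^4),
\]
where the cubic term, generically allowed, both drives the transition first order and selects triads of wavevectors summing to zero. This is the generic textbook form given for orientation only; the superfluid generalization used in this work couples these amplitudes to the underlying superfluid order parameter and should be taken from the paper itself.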
*P.S.Y. was supported by the Alberta Innovates Graduate Student Scholarship Program. R.B. was supported by Département de physique, Université de Montréal. J.M. was supported by NSERC Discovery Grants Nos. RGPIN-2014-4608, RGPIN-2020-06999, RGPAS-2020-00064; the CRC Program; CIFAR; a Government of Alberta MIF Grant; a Tri-Agency NFRF Grant (Exploration Stream); and the PIMS CRG program.
The quantum spin liquid is a novel magnetic ground state, characterized by quantum entanglement without long-range magnetic order. The kagome lattice Heisenberg antiferromagnet is a prime candidate for a quantum spin liquid, owing to highly frustrated spin-1/2 moments arranged on a corner-sharing triangle geometry. We report a $^{19}$F NMR investigation of a series of the “barlowite” kagome materials Zn$_{1-x}$Cu$_{3+x}$(OD)$_6$FBr with x ~ 0.05, 0.5, and 1, based on an inverse Laplace transform analysis of the spin-lattice relaxation rate $1/T_1$.
High-entropy oxides (HEOs) comprise an equimolar mixing of metal cations combined into a single-phase crystal structure. First synthesized in 2015 [1], HEOs have garnered much attention as candidates for high-efficiency batteries and heat shields [2, 3]. HEOs composed of four and five binary oxides have been previously investigated by infrared [4] and Raman spectroscopy [5] and lattice dynamical simulations. The IR spectra consist of a strong reststrahlen mode at $350~\textrm{cm}^{-1}$ and a much weaker mode at $150~\textrm{cm}^{-1}$ not predicted by group theory. The absence of spin-phonon splitting in the reststrahlen band below the Néel temperature ($T_N$), despite appearing in the parent oxides CoO and NiO, has been attributed to a high rate of static disorder scattering. The Raman spectra are composed of five peaks which have been assigned to $TO$, $LO$, $LO+TO$, and $2LO$ modes, as well as a two-magnon mode. Fits of the spectra to the Lorentz oscillator model revealed a temperature-dependent damping parameter which was ascribed to anharmonic effects. The phonon density of states will be simulated using GULP [6] in order to understand the IR and Raman spectra.
[1] Christina Rost et al., "Entropy-stabilized oxides," Nature Communications 6, 8485 (2015). doi: 10.1038/ncomms9485.
[2] Abhishek Sarkar et al., "High Entropy Oxides for Reversible Energy Storage," Nature Communications 9 (2018). doi: 10.1038/s41467-018-05774-5.
[3] Joshua Gild et al., "High-entropy fluorite oxides," Journal of the European Ceramic Society 38(10), 3578–3584 (2018). doi: 10.1016/j.jeurceramsoc.2018.04.010.
[4] Tahereh Afsharvosoughi and D. A. Crandles, "An infrared study of antiferromagnetic medium and high entropy rocksalt structure oxides," Journal of Applied Physics 130(18), 184103 (2021). doi: 10.1063/5.0070994.
[5] Tahereh Afsharvosoughi, "Structural, Magnetic and Vibrational Studies of Entropy Stabilized Oxides," Brock University, 2021.
[6] Julian D. Gale, "GULP: A computer program for the symmetry-adapted simulation of solids," J. Chem. Soc., Faraday Trans. 93(4), 629–637 (1997). doi: 10.1039/A606455H.
Neutron scattering is an invaluable tool for studying the bulk characteristics of condensed matter systems. With the shutdown of the National Research Universal (NRU) reactor at Chalk River in 2018, Canada lost its main source of neutrons for spectroscopy and diffraction experiments, forcing those working at Canadian institutions to look abroad. There is currently a national effort to rebuild and renew Canada's neutron scattering capabilities, and the centrepiece of this effort is "Building a Future for Canadian Neutron Scattering", a successful CFI project led by McMaster and a coalition of 17 Canadian universities. Over the next five years, this project will lead to a $24 million investment in neutron scattering facilities at the McMaster Nuclear Reactor (MNR) and the construction of three new beamlines: a high-resolution neutron powder diffractometer, a neutron reflectometer, and a neutron stress scanning diffractometer. A new small-angle neutron scattering facility (MacSANS) is also scheduled to begin operation in Summer 2022. In this poster, we will describe new instrument development projects on the McMaster Alignment Diffractometer (MAD), a general-purpose triple-axis spectrometer. This includes the commissioning of new sample environments, such as a 4 K-800 K cryofurnace, and the relocation of the detector from the C2 neutron powder diffractometer at Chalk River. The C2 detector is a gas-filled (BF$_3$) multiwire detector, with an array of 800 vertical wires covering an angular range of 80 degrees with 0.1-degree angular resolution. This new equipment will introduce exciting capabilities for neutron powder diffraction, low-temperature, and magnetic scattering experiments at the MNR.
Positron emission tomography (PET) is an excellent medical imaging technique in clinical applications such as brain disease detection and tumor diagnosis. To reflect the level of brain molecular metabolism accurately, larger amounts of radiotracer are sometimes needed, which can be problematic in terms of radiation dose. Improving the quality of PET images acquired with smaller amounts of radiotracer would therefore be beneficial. In this paper, we propose a denoising and enhancement method for low-dose PET images. Specifically, we use partial differential equations (PDEs) to denoise images and apply the contrast-limited adaptive histogram equalization method to enhance them. We designed the diffusion coefficient of the anisotropic diffusion model by adding variance and bilateral filtering, which better preserves details. In addition, we introduce an adaptive threshold method to adjust the diffusion coefficient and apply regularization terms to further protect the original details of the image. During enhancement, we fine-tune the denoised image using contrast-limited adaptive histogram equalization and adjust the image contrast. Experiments show that our algorithm removes much of the noise while maintaining both the global structure and the fine textures of the PET image. On diverse datasets, the proposed method outperforms other methods in both qualitative and quantitative comparisons.
Keywords: Positron emission tomography, PET, PDE, denoising, enhancement
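To make the denoising step concrete, here is a minimal Python sketch of the basic Perona-Malik anisotropic diffusion scheme that the proposed method builds on; the paper's actual diffusion coefficient additionally incorporates variance and bilateral filtering, adaptive thresholds, and regularization terms, none of which are reproduced here.

    import numpy as np

    def anisotropic_diffusion(img, n_iter=30, kappa=0.1, dt=0.2):
        """Basic Perona-Malik anisotropic diffusion (illustrative sketch only).
        Borders are treated as periodic via np.roll, adequate for a sketch."""
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # Nearest-neighbour finite differences in the four directions
            dn = np.roll(u, 1, 0) - u
            ds = np.roll(u, -1, 0) - u
            de = np.roll(u, 1, 1) - u
            dw = np.roll(u, -1, 1) - u
            # Edge-stopping coefficient g(|d|) = exp(-(|d|/kappa)^2)
            u += dt * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
        return u

    # Toy usage: denoise a synthetic noisy "phantom"
    rng = np.random.default_rng(1)
    phantom = np.zeros((64, 64)); phantom[20:44, 20:44] = 1.0
    noisy = phantom + 0.1 * rng.normal(size=phantom.shape)
    print("residual error std:", (anisotropic_diffusion(noisy) - phantom).std())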
Dynamic mechanical analysis (DMA) is an umbrella term for a variety of rheological experiments in which the response of a sample subjected to an oscillatory force is measured to characterize its dynamic properties. In this work, we present a method for DMA that employs simple magnetic resonance techniques and a small unilateral three-magnet array with an extended constant gradient to measure the velocity of a vibrating sample. By orienting the vibrations in the direction of the gradient, we use the motion-sensitized phase accumulation to determine the velocity. By implementing delays in the pulse sequence, we measure the phase at evenly spaced points in the vibration cycle, allowing for the acquisition of a complete velocity waveform. Using velocity waveforms, samples are characterized through differences in amplitude and phase, providing information on the magnitude of the dynamic modulus and the loss angle, respectively.
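Schematically, for a constant gradient $G$ along the vibration direction and a spin position $z(t) = z_0 + (v_0/\omega)\sin(\omega t + \varphi_0)$, the motion-sensitized phase accumulated over an encoding interval starting at delay $t_d$ is
\[
\phi(t_d) = \gamma G \int_{t_d}^{t_d+\tau} z(t)\,dt ,
\]
so stepping $t_d$ through the vibration cycle traces out a waveform whose amplitude and phase offset encode $v_0$ and $\varphi_0$. This is a simplified, generic expression for motion encoding in a constant gradient, given for orientation only, not the specific pulse-sequence analysis used in this work.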
Structure functions are employed in many optical scattering experiments for the determination of size distributions in small particles. Recently a similar method for magnetic resonance (MR) has been proposed, Dynamic Magnetic Resonance Scattering (DMRS) [1], which constructs structure functions from MR signal time series data. DMRS is useful in the characterization of sample dynamics, where it can be used to measure velocity of moving particles (coherent motion) or the diffusion coefficient of particles in a medium (random behaviour). DMRS has a number of potential advantages from an MR perspective: it examines particles below the minimum spatial resolution of instruments, and it largely cancels static signal contributions that occur in many samples. The method can be employed using a constant magnetic gradient, and the data can be acquired via basic MR sequences. Additionally, DMRS should be well-suited to studies of opaque media where optical methods, such as light scattering, fail. The original paper, though robust in characterising applicability, does not explore extreme cases for coherent motion such as high particle velocity. In this work, we explore the case of dispersed media (sprays) with velocities several orders of magnitude higher than the original paper, and we discuss fundamental restrictions of the method in terms of instrument parameters. The simulated structure function behaviour of expected velocities agrees with experimental data for velocities ~100 times faster than the original publication. Estimates of other variables of interest are discussed, as well as considerations for applicability to low-field and unilateral NMR instruments.
[1] Herold, Volker, Thomas Kampf, and Peter Michael Jakob. "Dynamic magnetic resonance scattering." Communications Physics 2.1 (2019): 1-10.
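As an illustration of the basic construction, the Python sketch below computes a second-order structure function $D(\tau) = \langle |S(t+\tau) - S(t)|^2\rangle$ from a toy MR signal time series; for coherent motion at velocity $v$ in a constant gradient $G$, the signal carries a phase factor $e^{i\gamma G v t}$ and $D(\tau)$ oscillates with period $2\pi/\gamma G v$. The signal model and parameters are invented for illustration and are not those of the experiments described here.

    import numpy as np

    def structure_function(signal, max_lag):
        """D(tau) = < |S(t + tau) - S(t)|^2 > for integer lags tau."""
        return np.array([np.mean(np.abs(signal[lag:] - signal[:-lag]) ** 2)
                         for lag in range(1, max_lag)])

    # Toy signal: coherent motion at velocity v in gradient G contributes an
    # oscillating phase term exp(i * gamma * G * v * t) on top of noise.
    n, dt = 4096, 1e-4                      # samples and dwell time [s]
    gamma_G_v = 2 * np.pi * 150.0           # effective gamma*G*v [rad/s]
    t = np.arange(n) * dt
    rng = np.random.default_rng(2)
    S = np.exp(1j * gamma_G_v * t) \
        + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

    D = structure_function(S, max_lag=200)
    # First minimum of D(tau) sits near one oscillation period, lag ~ 67 here
    print("first minimum of D(tau) near lag:", np.argmin(D[10:]) + 10)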
Title: A Monte Carlo simulation of the feasibility of detecting bone tungsten using X-ray fluorescence
Authors: Sajed Mcheik(1,*) and Ana Pejovic-Milic(1)
Affiliations: (1) Department of Physics, Ryerson University, Toronto, ON, Canada, M5B 2K3
E-mail address of the corresponding author: smcheik@ryerson.ca
An increasing number of studies are introducing the use of tungsten in medicine, in the form of sodium tungstate as an antidiabetic medicine [1], and tungsten nanoparticles as a contrast agent for CT scanning [2] or as enhancers of cancer therapy [3]. On the other hand, human exposure to tungsten could lead to adverse health effects, including tumour promotion, pulmonary dysfunction, or immune dysfunction [4]. Therefore, it is timely to develop a diagnostic tool to monitor medical exposure to tungsten.
To address this need, we propose developing a robust non-invasive technique to detect bone tungsten in vivo based on x-ray fluorescence (XRF). An HPGe detector along with homogeneous bone phantoms was modeled using the Monte Carlo software TOPAS, version 3.3. A cylindrical bone phantom composed of hydroxyapatite, Ca$_{10}$(PO$_4$)$_6$(OH)$_2$ [5] (2.7 cm diameter and 8 cm height), was modeled to simulate human tibia dimensions. The simulation generated XRF spectra using $10^9$ particles, which were then analyzed to decide on the best excitation source and geometry to optimize the detection limit.
The TOPAS simulation showed that Cd-109 is a potential excitation source to detect tungsten in the tibia, with a minimum detection limit of 0.3 ppm W/Ca for a 180-degree geometry.
References:
[1] Bertinat, R., Westermeier, F., Gatica, R., & Nualart, F. (2019). Sodium tungstate: Is it a safe option for a chronic disease setting, such as diabetes? Journal of Cellular Physiology, 234(1), 51-60.
[2] Jakhmola, A., Anton, N., Anton, H., Messaddeq, N., Hallouard, F., Klymchenko, A., Mely, Y., & Vandamme, T. F. (2014). Poly-ε-caprolactone tungsten oxide nanoparticles as a contrast agent for X-ray computed tomography. Biomaterials, 35(9), 2981-2986.
[3] Wang, R., Cao, Z., Wei, L., Bai, L., Wang, H., Zhou, S., ... Ma, Q. (2020). Barium tungstate nanoparticles to enhance radiation therapy against cancer. Nanomedicine, 28, 102230.
[4] Bolt, A. M., & Mann, K. K. (2016). Tungsten: An emerging toxicant, alone or in combination. Current Environmental Health Reports, 3(4), 405-415.
[5] Da Silva, E., Kirkham, B., Heyd, D. V., & Pejović-Milić, A. (2013). Pure hydroxyapatite phantoms for the calibration of in vivo x-ray fluorescence systems of bone lead and strontium quantification. Anal. Chem., 85, 9189-9195.
Nuclear-physics experiments probe nuclear structure, nucleosynthesis, and fundamental interactions, for which high-precision and accurate mass measurements are critical inputs. TRIUMF's Ion Trap for Atomic and Nuclear science (TITAN) facility employs the Measurement Penning Trap (MPET) to measure masses of exotic nuclei with precision and accuracy reaching $\sim 1\times10^{-10}$. To improve the resolving power and reduce the statistical uncertainty of a mass measurement, a higher charge state of the ions can be used. This and other benefits of charge breeding radionuclides, such as improved beam purification, can only be realized at TITAN, as it is the only facility in the world that combines radioactive ions, charge breeding, and a Penning trap. To fully leverage these advantages, MPET is undergoing an upgrade to a new cryogenic vacuum system compatible with ions in charge states above 20+. The status of the new cryogenic upgrade will be presented.
nEXO is a next generation detector to search for neutrinoless double-beta decay in Xe-136. This hypothetical decay violates lepton-number conservation, requiring the neutrino to be its own antiparticle and would imply the existence of physics beyond the Standard Model. As a potential upgrade to further improve nEXO’s sensitivity, the Ba-tagging technique is being developed to eliminate nearly all background events. The Ba-tagging scheme being pursued by Canadian institutions involves an extraction of Ba-136 ions from candidate Xe-136 double-beta decay events within the detector in a gas phase, and an identification of Ba ions using laser and mass spectroscopy. To study and optimize the Ba-tagging extraction and identification process, a well-characterized in-gas ion source is needed. To this end, our group at McGill is developing an in-gas laser ablation source. Currently, ion production and transport efficiency in noble gas as a function of gas pressure is being studied. The setup, analysis, and future plans of the in-gas laser ablation source will be presented.
The neutron dose responses of tissue-equivalent multi-element Thick Gas Electron Multiplier (THGEM) microdosimetric detectors have been computed by Monte Carlo simulations. The absence of wire electrodes in the THGEM immensely simplifies the construction of multi-element detectors. Three multi-element configurations of 7×3, 19×5, and 37×7 elements were used as representative detector geometries, and the microdosimetric response of each configuration was computed with the MCNP 6.2 code. The dimensions of the three configurations were kept such that each configuration occupies a cylindrical volume of 5 cm diameter by 5 cm length. The incident neutron energy was varied from 10 keV to 2 MeV. The angular response was studied for incident neutron beams at angles of $0^\circ$, $30^\circ$, $45^\circ$, $60^\circ$, and $90^\circ$. The simulated response showed good agreement with the evaluated fluence-to-kerma conversion coefficients in the neutron energy region from 10 keV to 100 keV, while discrepancies were observed in the region above 250 keV. The discrepancy was identified as being caused by the non-tissue-equivalent response of the THGEM. This under-response can be corrected by applying a correction factor. The angular response simulations showed an excellent, uniform response.
We present the optimization and characterization of a Si-plastic scintillator coincidence beta-ray spectrometer. The recent recommendation by the International Commission on Radiological Protection to lower the dose limit for the lens of the eye has posed new health physics requirements in the country. Beta-ray dosimetry is of great importance for nuclear industries, particularly during maintenance periods. Beta-ray spectral data are the most fundamental and vital information for accurate beta-ray dosimetry in the mixed beta-gamma fields often encountered during nuclear maintenance work. To this end, a Si-plastic scintillator coincidence beta-ray spectrometer has been developed. The spectrometer can collect pure beta-ray spectra by rejecting gamma-ray detection events through coincidence. The pulse height and arrival time of each detector signal were processed by a compact digital system and collected in list mode. A recent upgrade to the digital processor enabled the spectrometer to cover the entire beta energy range of interest. The responses of the spectrometer to beta and gamma rays were characterized by experiments and Monte Carlo simulations. Spectral measurements under mixed beta-gamma fields with various beta and gamma count rates, using $^{90}$Sr and $^{137}$Cs sources, were performed to evaluate the system performance. The coincidence beta spectrum was quite stable and consistent over most of the energy region as the gamma count rate was increased for a fixed beta field. Development of a real-time spectrum analysis method is currently underway.
Recently, it was observed that under moderate pressure (p > 10 Pa), nanoparticles can be created using sputtering magnetron discharges [1]. Although such plasma sources have been widely studied at low gas pressure (p < 0.1 Pa) in the context of industrial applications such as thin-film coating, there are only a few plasma models in the fairly high-pressure range where collisions between sputtered species and the background neutral particles favor the growth of nanoparticles. Such small "dust particles" have also been observed in tokamaks with graphite walls [2].
Magnetron discharges, in which the plasma density may reach $10^{18}$ m$^{-3}$ in the cathode region, could help us understand their formation in the coldest plasma regions of tokamaks. Experimental studies are in progress at the PIIM laboratory in Marseille, where magnetically confined plasmas are generated using a sputtering magnetron discharge. The feed gas is argon at 30 Pa and the magnetron cathode is made of tungsten.
In this context, a new and reliable numerical model is currently under development to investigate the transport of sputtered tungsten atoms in the discharge. Usually, cold plasma discharges are simulated using PIC-MC or kinetic models [3], but here we present a 2D axisymmetric fluid model. In particular, to resolve the sheaths, we developed a non-quasineutral drift-diffusion model of two fluids, ions and electrons.
The first two moments of the Boltzmann equation are solved for both species, with the energy equation solved only for the electrons. A second-order finite-difference scheme and a fourth-order Runge–Kutta method are used for the spatial and temporal discretization, respectively. The Poisson equation completes the model, and we use kinetic boundary conditions based on shifted and truncated velocity distribution functions [4]. Some results from the first numerical simulations, including the plasma potential and density profiles of the different species, are shown.
References
[1] L. Couëdel et al., AIP Conf. Proc., Proceedings of the 8th ICPDP, Prague, (2017).
[2] C. Arnas et al., Plasma Phys. Control. Fusion 52, 124007 (2010)
[3] G. J. M. Hagelaar et al., Journal of Applied Physics 93, 67 (2003)
[4] R. Sahu et al., Phys. Plasmas 27, 113505 (2020)
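For readers unfamiliar with the approach, the following Python sketch shows the structure of a non-quasineutral drift-diffusion model in a simplified 1D setting: drift-diffusion fluxes for electrons and ions, explicit continuity updates, and a finite-difference Poisson solve at each step. All coefficients and boundary values are illustrative placeholders; the actual model is 2D axisymmetric, carries an electron energy equation, uses fourth-order Runge-Kutta time stepping, and applies the kinetic boundary conditions of [4].

    import numpy as np

    # 1D non-quasineutral drift-diffusion sketch (illustrative parameters only)
    nx, L = 201, 0.02                     # grid points, gap length [m]
    dx = L / (nx - 1)
    e, eps0 = 1.602e-19, 8.854e-12
    mu_e, D_e = 30.0, 100.0               # electron mobility [m^2/Vs], diffusion [m^2/s]
    mu_i, D_i = 0.1, 5e-3                 # ion transport coefficients
    n0 = np.full(nx, 1e15)                # initial densities [m^-3]
    ne, ni = n0.copy(), n0.copy()
    dt = 1e-12

    def solve_poisson(ne, ni, V0=-300.0):
        """Finite-difference Poisson: d2V/dx2 = -e (ni - ne) / eps0, cathode at V0."""
        A = (np.diag(np.full(nx - 2, -2.0)) + np.diag(np.ones(nx - 3), 1)
             + np.diag(np.ones(nx - 3), -1))
        rhs = -e * (ni[1:-1] - ne[1:-1]) / eps0 * dx**2
        rhs[0] -= V0                      # Dirichlet: V(0) = V0, V(L) = 0
        V = np.zeros(nx); V[0] = V0
        V[1:-1] = np.linalg.solve(A, rhs)
        return V

    def flux(n, mu, D, E, sign):
        """Drift-diffusion flux at cell faces: sign*mu*n*E - D dn/dx."""
        n_f = 0.5 * (n[1:] + n[:-1])
        E_f = 0.5 * (E[1:] + E[:-1])
        return sign * mu * n_f * E_f - D * (n[1:] - n[:-1]) / dx

    # Wall densities are held fixed for simplicity; a real run needs many more
    # steps, sources (ionization), and proper kinetic boundary fluxes.
    for step in range(200):
        V = solve_poisson(ne, ni)
        E = -np.gradient(V, dx)
        Ge = flux(ne, mu_e, D_e, E, -1.0)  # electrons drift against E
        Gi = flux(ni, mu_i, D_i, E, +1.0)
        ne[1:-1] -= dt * (Ge[1:] - Ge[:-1]) / dx
        ni[1:-1] -= dt * (Gi[1:] - Gi[:-1]) / dx

    print("potential drop from cathode to midplane ~", V[0] - V[nx // 2], "V")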
The end-Hall ion source (EHIS) is a gridless device that combines a magnetic field B with an electric field E, in an E×B configuration, to generate and sustain a high-density plasma and to extract and accelerate a broad ion beam. The source can operate over a wide range of discharge voltages, such as 50–500 V, at discharge currents on the order of 1 A. In this work, we present an experimental investigation of a 1-inch EHIS produced by Plasmionique Inc. This source can operate in two different modes, namely: i) a low-voltage, high-current mode, typically 50 V – 1 A, suitable for ion beam cleaning applications, and ii) a moderate-to-high-voltage, high-current mode, typically [100, 500] V – 1 A, suitable for ion-beam-assisted deposition and ion beam sputtering applications. The experimental investigation focuses on the source's current–voltage characteristics, ion energy distribution function, beam divergence, and total beam current.
The aim of this work is to investigate the effectiveness of a novel disinfection method based on cold plasma treatment. A plasma setup was adapted for the treatment of aqueous solutions and has been employed for the purification of water contaminated with bacterial strains. The treated samples were prepared by adding Staphylococcus aureus bacteria to distilled water; the treatment was then carried out by submerging the plasma jet in the suspension volume. The plasma discharge in our setup was ignited using a controlled mixture of argon and oxygen.
The results of this study showed that full water decontamination can be attained after about 12 minutes of treatment under 1.5 slpm of argon gas flow containing 2.5±0.2% oxygen. In addition, the oxygen ratio in the mixture is found to be a key parameter for maintaining the decontamination potential: exceeding 130 ml/min of oxygen flow rate resulted in the return of bacterial activity. Adding oxygen to argon leads to the creation of highly reactive oxygen species (ROS) in the water; these species react with microorganism cells, destroying them and halting their reproduction. This study helped establish the ideal ranges of the key parameters that should be taken into account when igniting the plasma in order to attain full disinfection of water contaminated with harmful bacterial cells.
Electrical discharges in dielectric liquids are considered an efficient technique for nanoparticle synthesis and machining via controlled erosion of the electrodes. Recently, magnetic-field-assisted methods have shown great potential for enhancing plasma-electrode interactions. Investigating the influence of the magnetic field's intensity and orientation on the behavior of spark discharges is needed to understand these interactions, with the aim of improving the processes. In this study, spark discharges are produced in a pin-to-plate configuration using a nanosecond pulsed power supply in deionized water. The magnetic field is generated with NdFeB permanent magnets. A statistical study of the electrical characteristics (voltage, current) of discharges with and without magnetic field was conducted with W and Ni electrodes and with various inter-electrode distances. The data are processed to report the evolution of several characteristics of the discharges, such as the breakdown voltage, peak current, discharge delay, and injected charge. Also, the pin erosion rate and the distribution of the impacts on the plate electrode are determined.
Caustics are regions of high intensity created generically by the natural focusing of waves. Examples include optical rainbows, gravitational lensing, sonic booms, and even rogue waves. The intensity at a caustic is singular in the classical ray theory but can be smoothed out by taking into account wave interference effects. Caustics are universally described by the mathematical theory of singularities known as catastrophe theory: they can be categorized into classes of catastrophes, each class uniquely described by its own diffraction pattern. A more exotic form of wave singularity occurs near event horizons, which have analogues in classical hydrodynamics, where the flow speed exceeds the speed of sound, and also in quantum fluids such as Bose-Einstein condensates (BECs), where Hawking radiation can be simulated. In particular, waves near event horizons display logarithmic phase singularities which cannot be described by the known catastrophe classifications. We introduce a new idea: the logarithmic catastrophe, first studied in the context of aeroacoustic flows from jet engines. We will discuss the basic idea behind these logged catastrophes and their relation to analogue Hawking radiation. Additionally, we discuss two systems which appear to be categorized by logged catastrophes: undular tidal bores, and certain oscillatory integrals in radio astronomy.
Despite its success in explaining the large-scale evolution of the universe, standard big bang cosmology has many unsolved problems. For example, it cannot explain why the universe is homogeneous and flat to the degree of precision we observe today. Moreover, as one goes back to the time of the big bang, the universe’s energy density is expected to reach infinity, leading to an initial singularity. String theory is the leading candidate to resolve these problems, as it is expected to correctly describe gravity at high energies and unite all forces of nature under a single theory. Our poster presentation will describe ways that string theory can resolve the issues of standard big bang cosmology. We will explain a new and recently published string-inspired scenario (see arXiv:2107.11512) in which our universe emerges as a gas of strings described by a matrix model. In this model, the homogeneity problem is automatically resolved since the universe emerges in a thermal state, and the singularity problem is resolved by the non-commutative properties of the matrix model. In addition, we obtain an approximately scale-invariant spectrum of cosmological perturbations and a scale-invariant spectrum of gravitational waves, as one would expect from observations. Finally, we will go over other possible predictions of this new model which are currently the subject of our studies, namely that the dimensionality and flatness of our universe can be respectively explained by energy and entropy arguments.
In his seminal work, Bekenstein conjectured that quantum-gravitational black holes possess a discrete mass spectrum, due to quantum fluctuations of the horizon area. The existence of black holes with quantized mass implies the possibility of considering superposition states of a black hole with different masses. Here we construct a spacetime generated by a BTZ black hole in a superposition of masses, using the notion of nonlocal correlations and automorphic fields in curved spacetime. This allows us to couple a particle detector to the black hole mass superposition. We show that the detector's dynamics exhibits signatures of quantum-gravitational effects arising from the black hole mass superposition in support of and in extension to Bekenstein's original conjecture.
The classical black hole is one of the most extreme and scientifically rich products of classical general relativity. However, it has predictions which still leave some uncomfortable, these primarily being the nature of the event horizon and the mass singularity. This has led to the development of alternative black hole 'mimicking' models which correct for these singularities and retain the observed properties of black holes without requiring modifications to general relativity.
One of these mimickers is the 'gravastar': a dense spherical mass distribution constructed of a cold gravitational condensate, colloquially called dark matter, inside a thin perfect-fluid shell. The density of the gravastar varies, and the sizes for which it exhibits black hole properties are unknown. It has also been shown that such a stellar configuration can exist in thermodynamic equilibrium while correcting the information paradox. However, to replace the classical black hole as the end-product of gravitational collapse, as is currently accepted, an analysis of its dynamical stability is required. By perturbing the shell from gravitational equilibrium – as also occurs during mass accretion, binary coalescence, and other black hole events – its dynamical stability can be discussed. If such a body could reach harmonic behaviour around equilibrium without collapsing to a classical black hole, or alternatively leading to stellar explosion, then it would suitably describe black hole behaviour while correcting for their singularities.
In this work we sought exactly this. By thoroughly investigating the equations of motion of the thin shell, we determined the mass sequences for which a stable gravastar can exist as well as their dynamical stability to first-order perturbation theory. We found that although such a configuration does indeed have black-hole-mimicking equilibrium forms, they are dynamically unstable and thus not expected to exist in nature.
Potassium-40 ($^{40}$K) is a naturally-occurring, radioactive isotope of interest to rare-event searches as a challenging background. In particular, NaI scintillators contain $^{40}$K contamination which produces an irreducible $\sim$3 keV signal originating from this isotope's electron capture (EC) decays. In geochronology, the $\mathcal{O}(\mathrm{Gyr})$ lifetime of $^{40}$K is utilized in dating techniques. The direct-to-ground-state EC intensity ($I_\text{EC}$) of this radionuclide has never been measured, and theoretical predictions are highly variable ($I_\text{EC}\sim (0.064(19)$–$0.22(4))\%$). The poorly understood intensity of this branch may affect the interpretation or precision of experimental results, including those probing dark matter signals in the (2–6) keV region. The KDK ("potassium decay") experiment is carrying out the first measurement of this $I_\text{EC}$ branch, using a coincidence technique between a high-resolution silicon drift detector for $\mathcal{O}(\text{keV})$ X-rays and Auger electrons, and a high-efficiency ($\sim 98\%$) Modular Total Absorption Spectrometer (Oak Ridge National Laboratory) for $\mathcal{O}(\text{MeV})$ gammas, to differentiate ground- and excited-state EC decays of $^{40}$K. We report on the analysis of the main $^{40}$K result, and on a measurement of $^{65}$Zn decays used to test methods.
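Schematically, the coincidence technique constrains the branch through the ratio of anti-coincident to coincident low-energy events; assuming a gamma-tagging efficiency $\varepsilon$ for the de-excitation photon and neglecting backgrounds, one may write
\[
\rho \equiv \frac{N_{\text{anti-coinc}}}{N_{\text{coinc}}}
\approx \frac{I_\text{EC} + (1-\varepsilon)\,I^*_\text{EC}}{\varepsilon\, I^*_\text{EC}} ,
\]
where $I^*_\text{EC}$ is the intensity of EC decays to the excited state. This simplified relation is given for orientation only; the full KDK analysis includes backgrounds and additional corrections.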
At the Isotope Separator and Accelerator (ISAC) facility of TRIUMF, an Electron Cyclotron Resonance Ion Source is used to charge breed radioactive ion beams before injection into the linear accelerator for post-acceleration. The so-called Charge State Booster (CSB) has been used to charge breed radioactive isotopes ranging from potassium to erbium under the regime of single-frequency heating since its commissioning in 2010. To improve the overall performance of the CSB, a research campaign was launched in 2018 to conduct a systematic investigation of the source injection and extraction systems, alongside the corresponding beamlines, to further understand beam injection and formation from the booster. The well-known quadrupole scan technique was developed to measure the emittance of the beams from the CSB. To further improve the efficiency of the charge state booster, two-frequency heating is being implemented using a unique and unconventional single-waveguide method. The results of the systematic investigation of the source extraction system, the efficiency of single charge states, and the emittance of selected charge states in comparison to that of selected background ion species will be presented and discussed.
The Pacific-Ocean Neutrino Explorer is a proposed multi-cubic-kilometre neutrino telescope to be located off the coast of British Columbia, Canada. Two pathfinder missions, STRAW and STRAW-B, have been deployed to the Cascadia Basin site, which uses existing infrastructure maintained by Ocean Networks Canada (ONC). These missions were deployed in order to characterise the site. The first mission, STRings for Absorption length in Water (STRAW), was deployed specifically to investigate the absorption and scattering lengths and to qualify the site. This original architecture was not designed to look for atmospheric muons; however, their detection could be possible. My research focuses on configuring STRAW to trigger on atmospheric muons. This can serve as an experimental check on the muon rate 2.6 km underwater. In addition, it could potentially lay the groundwork for a full-scale neutrino trigger in the future P-ONE detector.
The proposed neutrino detector HALO-1kT will be used to detect neutrinos from core-collapse supernova events and will contribute to our understanding of the stars' explosion mechanism. Its detection method is based on neutrinos interacting with lead nuclei, which then emit neutrons that can be detected with helium counters. However, neutrino-lead cross sections at supernova energy scales are yet to be accurately measured. To help address this problem, a smaller-scale prototype detector called Mini-HALO will be placed at Oak Ridge National Laboratory, where a pulsed beam of neutrinos from the Spallation Neutron Source will interact with the lead in the detector, producing neutrons. The measured cross sections will then be used in HALO-1kT to constrain the number of neutrons expected from a supernova signal. In order to obtain highly accurate measurements, a muon veto system will be installed on Mini-HALO to veto events induced by cosmic muons interacting in the detector, which could otherwise be misidentified as signals from neutrino interactions. A suite of GEANT4 Monte Carlo simulations has been developed to study and optimize the geometry of the muon veto system. These simulations consist of PVT polymer-based scintillator panels surrounding the detector which generate optical photons when traversed by high-energy muons. Results from these simulations, such as the energy deposited in the scintillator panels, the multiplicity of neutrons produced in muon-lead interactions in the detector, and the detector dead-time, will be addressed, along with discussions of how these results can be used to veto the muon-induced signals in the detector.
A bubble chamber using fluorocarbons or liquid noble gases is a competitive technology to detect the low-energy nuclear recoils produced by elastic scattering of weakly interacting massive particle (WIMP) dark matter. It consists of a pressure- and temperature-controlled vessel filled with a liquid in a superheated state. Bubble nucleation from the liquid to the vapor phase can only occur if the energy deposition is above a certain energy threshold, described by the "heat-spike" Seitz model. The nucleation efficiency of low-energy nuclear recoils in superheated liquids plays a crucial role in interpreting results from direct searches for WIMP dark matter. In this research, we used molecular dynamics simulations to study the bubble nucleation threshold, and we performed a Monte Carlo simulation using SRIM to obtain the nuclear recoil efficiency curve. The goal is to construct a physics model to explain the discrepancy observed between experimental results and the current Seitz model. Preliminary results will be presented and compared with existing experimental data from bubble chamber detectors.
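For reference, in the standard Seitz ("heat-spike") picture the critical bubble radius and threshold energy are usually written as
\[
r_c = \frac{2\sigma}{p_b - p_l}, \qquad
E_T = 4\pi r_c^2\!\left(\sigma - T\frac{\partial \sigma}{\partial T}\right)
+ \frac{4\pi}{3} r_c^3 \rho_b h_b
- \frac{4\pi}{3} r_c^3 (p_b - p_l),
\]
with $\sigma$ the surface tension, $p_b$ and $p_l$ the bubble and liquid pressures, $\rho_b$ the vapor density, and $h_b$ the specific enthalpy of vaporization. This is the commonly used form of the threshold, quoted here for orientation; the precise variant adopted in this work may differ in detail.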
A Ring Imaging Cherenkov (RICH) detector allows the identification of charged particles through the measurement of the emission angle of the Cherenkov light produced by the passage of particles with speeds greater than the speed of light in the detector medium. An Aerogel Ring Imaging Cherenkov (ARICH) device uses aerogel material as the radiator medium to achieve a desirable index of refraction. EMPHATIC (Experiment to Measure the Production of Hadrons At a Test beam In Chicagoland) is a low-cost, table-top-sized hadron-production experiment located at the Fermilab Test Beam Facility (FTBF) that will measure hadron scattering and production cross sections relevant for neutrino flux predictions, such as those necessary for neutrino oscillation studies with the Hyper-K experiment. High-statistics data will be collected using a minimum-bias trigger, enabling measurements of all relevant cross sections. Particle identification will be done using silicon strip detectors, a time-of-flight (ToF) wall, and a lead glass calorimeter array in combination with the ARICH detector. The ARICH focuses on the identification of kaons, pions, and protons in a multitrack environment up to 8 GeV/c. In my presentation, I will discuss the simulations and mechanical studies for the implementation of optical mirrors in the ARICH system to increase the angular acceptance of the detector as a low-cost improvement.
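As a quick illustration of the particle-identification principle, the Cherenkov angle follows from $\cos\theta_c = 1/(n\beta)$; the Python sketch below evaluates it for pions, kaons, and protons at 8 GeV/c, using an assumed aerogel index of n = 1.04 (an illustrative value, not the EMPHATIC design parameter).

    import numpy as np

    # Cherenkov angle cos(theta_c) = 1 / (n * beta) in an aerogel radiator.
    n = 1.04                                                    # assumed index
    masses = {"pion": 0.1396, "kaon": 0.4937, "proton": 0.9383} # GeV/c^2
    p = 8.0                                                     # GeV/c

    for name, m in masses.items():
        beta = p / np.hypot(p, m)        # beta = p / sqrt(p^2 + m^2)
        if n * beta > 1:
            theta = np.degrees(np.arccos(1 / (n * beta)))
            print(f"{name:6s}: theta_c = {theta:.2f} deg")
        else:
            print(f"{name:6s}: below Cherenkov threshold")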
The SNO+ experiment is a multipurpose neutrino detector located 2 km underground at SNOLAB in Sudbury, Ontario. The primary goal of the experiment is to search for neutrinoless double beta $(0\nu\beta\beta)$ decay in liquid scintillator loaded with $^{130}$Te in a low-background environment. An observation of a $0\nu\beta\beta$ decay signal would demonstrate the Majorana nature of neutrinos. In order to resolve such a rare decay process, a precise optical calibration of the SNO+ detector is critical. This work presents a sensitive method of investigating the attenuation parameters in liquid scintillator by modelling the simulated radial light yield profiles of various internal background sources. The scintillator materials utilized in the SNO+ Monte Carlo (MC) simulation framework have been fine-tuned based on ex-situ measurements of the light yield and comparison to detector data.
Water Cherenkov (WC) neutrino detectors, such as Super-Kamiokande (Super-K), employ an outer detector (OD) volume to veto cosmic muons and other types of background, to provide passive shielding, and to identify events that are not contained in the inner detector (ID). The upcoming Hyper-Kamiokande (Hyper-K) experiment, a long-baseline neutrino facility to study oscillations and search for CP violation in the lepton sector, among other physics goals, will follow a similar OD and ID design for its far detector (FD) and for one of its planned near detectors, the Intermediate Water Cherenkov Detector (IWCD). The IWCD will be a sub-kiloton detector located at a distance of ~1 km from the J-PARC facility, which will be upgraded to deliver a 1.3 MW beam. Due to its shallow depth and smaller size, along with its exposure to the intense neutrino beam, it is expected that background rates and pile-up events in the IWCD will be higher than in the Hyper-K FD. This demands a sophisticated OD veto system to reduce misidentified pile-up events and to improve the reconstruction efficiency for signal events. The IWCD OD walls will be covered with reflective Tyvek material to improve light collection, while a blacksheet layer will optically isolate it from the ID. Building an intelligent veto system requires, among other things, an understanding of the photon distribution in the OD region for different configurations of the reflective Tyvek and the blacksheet. For this purpose, a dedicated Geant4-based simulation was developed to perform a detailed optical simulation of the OD for different optical configurations in order to infer an optimal OD design, wherein we collect enough photon statistics to reconstruct the OD events and, at the same time, keep the Cherenkov light localized to improve particle identification. The results of these optimization studies are presented here.
SNO+ is a multipurpose neutrino detector located 2 km underground which detects events inside its active liquid organic scintillator (linear alkylbenzene, LAB) medium. It is important to understand the radioactive backgrounds in the detector in detail in order to interpret any potential physics signals. $^{14}$C is a source of background events in the SNO+ detector and is observed homogeneously throughout the LAB. The detector is now completely filled with scintillator and the wavelength shifter (PPO). By probing the detector threshold and evaluating the $^{14}$C rate, we are able to search for any exotic physics interactions which may be present at low energies.
SNOLAB has a low-background gamma-ray counting facility to screen materials for use in the next generation of neutrinoless double beta decay and dark matter experiments. The low background is achieved through the 2 km underground depth, gamma-ray shielding from the lab environment, and radon reduction with a nitrogen purge gas. SNOLAB has acquired a new detector in collaboration with Health Canada's Comprehensive Nuclear-Test-Ban Treaty monitoring program. The detector will be used to further their high-sensitivity monitoring program. We will present the results of the initial commissioning and calibration of this detector.
Hyper-Kamiokande (HK) will be a next-generation neutrino detector. Following the successful T2K experiment, it will use a long-baseline neutrino beam to study neutrino oscillations and discover CP violation in the lepton sector, among other goals. To characterize the unoscillated neutrino beam, the upcoming Intermediate Water Cherenkov Detector (IWCD) will intercept the neutrino beam at different off-axis angles, using a multi-Photomultiplier Tube (mPMT) system to detect the Cherenkov light produced by charged particles resulting from neutrino interactions in the detector. However, the neutrino beam can also interact with the soil and water surrounding the IWCD, generating a background of penetrating particles, such as pions, photons, muons, and electrons, that may interfere with the desired neutrino-event detection. To reduce the effects of this background, a veto mechanism is required. At the bottom of the mPMT module, a scintillator plate will register a hit when traversed by a background particle, which, as part of a time-coincidence circuit with other detectors at the outer region of the IWCD, will help us veto undesired particles. In this presentation, I will describe the concept