A3D3 Undergraduate Summer Research Symposium
Friday 30 August 2024 - 13:00
13:00
Kilonova Posteriors for Estimating the Hubble Constant
Megan Averill
13:00 - 13:20
Room: https://ucsd.zoom.us/j/95846737086?pwd=Pwygoui0Ka9cUi0yW7RckjPSduOQll.1
Observations of gravitational waves emitted from compact binary mergers and their associated kilonovae show promise for estimating the Hubble Constant. We take data gathered from these kilonova events and feed them into different methods in order to calibrate a precise estimation technique. The Kernel Density Estimation (KDE) method is grounded in Bayes' Theorem: a kernel function is placed at each data point in the set, and these functions are summed to obtain the KDE. We then combine the KDEs using the method outlined at https://github.com/tsunhopang/KDE_multiply/tree/main.
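As a rough illustration of the combination step, the sketch below builds KDEs from toy one-dimensional H0 posterior samples and multiplies them on a grid, in the spirit of the KDE_multiply approach; the sample values, grid range, and variable names are assumptions for the example, not values from this work.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Toy H0 posterior samples for two hypothetical kilonova events
# (illustrative values only; real samples come from the event analyses).
rng = np.random.default_rng(0)
samples_event1 = rng.normal(70.0, 5.0, size=5000)
samples_event2 = rng.normal(72.0, 6.0, size=5000)

# Build a KDE per event: a Gaussian kernel is placed at every sample
# and the kernels are summed into a smooth posterior density.
kde1 = gaussian_kde(samples_event1)
kde2 = gaussian_kde(samples_event2)

# Combine the events by multiplying the KDEs on a common grid and
# renormalizing, in the spirit of the KDE_multiply approach.
grid = np.linspace(40.0, 100.0, 1000)
combined = kde1(grid) * kde2(grid)
combined /= combined.sum() * (grid[1] - grid[0])

# Report the peak of the combined posterior as a point estimate of H0.
print(f"Combined H0 estimate: {grid[np.argmax(combined)]:.1f} km/s/Mpc")
```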
13:20
FPGA Deployment of PointNET for Use in the KamLAND-Zen Experiment
Alex Migala
13:20 - 13:40
Room: https://ucsd.zoom.us/j/95846737086?pwd=Pwygoui0Ka9cUi0yW7RckjPSduOQll.1
While there are several experiments searching for neutrinoless double beta decay $(0\nu\beta\beta)$, a rare decay phenomenon, we focus our attention on the KamLAND-Zen experiment, a monolithic liquid xenon scintillator detector. Because the events that occur in the detector are observed only indirectly, each event must be reconstructed, which is a non-trivial process performed after all the data have been collected. This means that the experiment commits resources to storing data that are not relevant to the $0\nu\beta\beta$ analysis. To help solve this problem, we propose a toolchain that deploys a machine learning model, PointNET, onto a Field Programmable Gate Array (FPGA) so that fast inference can be performed at the time of data collection. This is the first time that hardware-algorithm co-design has been brought to $0\nu\beta\beta$ experiments.
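As a sketch of the model side only (not the actual KamLAND-Zen network or the FPGA toolchain itself), the snippet below shows the core PointNet idea: a shared per-point MLP followed by a symmetric max-pooling, which makes the classifier invariant to the ordering of detector hits. The feature count, hit count, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MiniPointNet(nn.Module):
    """Stripped-down PointNet-style classifier: a shared per-point MLP
    followed by a symmetric max-pooling over points, so the output does
    not depend on the ordering of the detector hits."""

    def __init__(self, n_features=4, n_classes=2):
        super().__init__()
        # Conv1d with kernel size 1 applies the same MLP to every point.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(n_features, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, points):
        # points: (batch, n_features, n_points), e.g. hit position, time,
        # and charge per sensor hit (the feature choice is illustrative).
        per_point = self.point_mlp(points)
        global_feat = per_point.max(dim=2).values  # order-invariant pooling
        return self.head(global_feat)

# Toy forward pass with random hits standing in for a detector event.
model = MiniPointNet()
logits = model(torch.randn(8, 4, 500))  # 8 events, 4 features, 500 hits each
```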
13:40
Improving Data Compression with Conditional Autoencoders
Mariel Peczak
13:40 - 14:00
Room: https://ucsd.zoom.us/j/95846737086?pwd=Pwygoui0Ka9cUi0yW7RckjPSduOQll.1
As a result of the need to improve searches for new particles and measure particle properties at CERN, the LHC is undergoing a high-luminosity upgrade which will provide a dataset ten times larger than the one currently available. To avoid complications in particle reconstruction caused by the increased number of simultaneous interactions (pileup) per collision, a radiation-hard high-granularity calorimeter (HGCal), which will measure the energy and position of particles with significantly improved precision, will be installed. This level of precision and the higher levels of pileup represent a significant increase in complexity and data rates, which must be reduced by several orders of magnitude in real time to be processed. We therefore explore the application of machine learning to optimize the data compression performed by the HGCal front-end electronics, developing a conditional autoencoder to compress data automatically before transmission.
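A minimal sketch of a conditional autoencoder in Keras is shown below; the input size, conditioning variables, and latent dimension are illustrative assumptions and do not reflect the actual HGCal front-end design.

```python
import numpy as np
from tensorflow.keras import layers, Model

# Illustrative dimensions: a vector of trigger-cell energies per wafer plus a
# small vector of conditioning variables (e.g., occupancy or detector region).
N_CELLS, N_COND, LATENT = 48, 2, 16

cells = layers.Input(shape=(N_CELLS,), name='cells')
cond = layers.Input(shape=(N_COND,), name='condition')

# Encoder: compress the wafer data, conditioned on the auxiliary variables.
x = layers.Concatenate()([cells, cond])
x = layers.Dense(64, activation='relu')(x)
latent = layers.Dense(LATENT, activation='relu', name='latent')(x)

# Decoder: reconstruct the wafer data from the latent code and the condition.
y = layers.Concatenate()([latent, cond])
y = layers.Dense(64, activation='relu')(y)
reconstructed = layers.Dense(N_CELLS, activation='relu')(y)

autoencoder = Model([cells, cond], reconstructed)
autoencoder.compile(optimizer='adam', loss='mse')

# Toy training call with random data standing in for simulated events.
X = np.random.rand(1024, N_CELLS).astype('float32')
C = np.random.rand(1024, N_COND).astype('float32')
autoencoder.fit([X, C], X, epochs=1, batch_size=128, verbose=0)
```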
14:00
Evaluation of Machine Learning Classifiers for Characterizing Gravitational Wave Events
Seiya Tsukamoto
14:00 - 14:20
Room: https://ucsd.zoom.us/j/95846737086?pwd=Pwygoui0Ka9cUi0yW7RckjPSduOQll.1
The detection of gravitational waves with the Laser Interferometer Gravitational-Wave Observatory (LIGO) has provided the tools to probe the furthest reaches of the universe. Rapid follow-up of compact binary coalescence (CBC) events and their electromagnetic counterparts is crucial for finding short-lived transients. After a gravitational wave (GW) detection, another particular challenge is finding a fast and efficient way to characterize events as astrophysical or terrestrial in origin. The mergers themselves provide many data products from low-latency CBC search pipelines which can aid in discerning whether or not a GW signal is real. We present an efficient, low-latency method of alert classification that incorporates Bayes factors into three machine learning classification algorithms: Random Forest (RF), K-Nearest Neighbors (KNN), and Neural Network (NN), using event data from the Mock Data Challenge (MDC). We report true positive rates of 0.82, 0.84, and 0.89 for the RF, KNN, and NN classifiers, respectively.
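The scikit-learn sketch below illustrates the classifier-comparison setup, with toy data standing in for the MDC features; the feature construction and hyperparameters are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Toy stand-in for event features (e.g., Bayes factors and other low-latency
# pipeline data products); label 1 = astrophysical, 0 = terrestrial.
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    'RF': RandomForestClassifier(n_estimators=200, random_state=0),
    'KNN': KNeighborsClassifier(n_neighbors=15),
    'NN': MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    # True positive rate = recall on the astrophysical (positive) class.
    tpr = recall_score(y_test, clf.predict(X_test))
    print(f'{name}: true positive rate = {tpr:.2f}')
```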
14:20
Episodic reinforcement learning for 0vbb signal discrimination
Sonata Simonaitis-boyd (UC San Diego)
14:20 - 14:40
Room: https://ucsd.zoom.us/j/95846737086?pwd=Pwygoui0Ka9cUi0yW7RckjPSduOQll.1
0vbb decay is a Beyond the Standard Model process that, if discovered, could prove the Majorana nature of neutrinos---that they are their own antiparticles. The Majorana Demonstrator (MJD) is one experiment searching for 0vbb decay using semiconductor detectors; however, the waveform data produced by the detectors are unlabelled, and producing ground-truth labels with traditional methods is an involved process. Fortunately, machine learning methods such as reinforcement learning (RL) can perform tasks on unlabelled data. I present an episodic RL algorithm implementing Randomized Return Decomposition for binary classification of detector events from the Majorana Demonstrator Data Release for AI/ML Applications. Under stringent masking of the MJD detector data, the RL-trained classifier slightly outperforms a standard supervised learning model trained under the same conditions, showing potential for further development and even future deployment as a first-stop classifier on other 0vbb decay experiments such as LEGEND.
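The sketch below illustrates the episodic framing and a simplified version of the Randomized Return Decomposition objective: only an end-of-episode return is observed, and a per-step reward model is fit by regressing that return onto scaled sums of predicted rewards over random subsets of steps. The network sizes, feature dimension, and toy data are assumptions, and the policy update itself (e.g., REINFORCE with the learned rewards) is omitted.

```python
import torch
import torch.nn as nn

N_FEATURES = 64  # illustrative length of a flattened waveform feature vector
policy = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Linear(64, 2))
reward_model = nn.Sequential(nn.Linear(N_FEATURES + 2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def run_episode(waveforms, labels):
    """Label every event in the episode; only the total number of correct
    labels (the episodic return) is revealed at the end of the episode."""
    dist = torch.distributions.Categorical(logits=policy(waveforms))
    actions = dist.sample()
    episodic_return = (actions == labels).float().sum()
    return actions, episodic_return

def rrd_loss(waveforms, actions, episodic_return, subset_size=8):
    """Simplified Randomized Return Decomposition: regress the episodic
    return onto the scaled sum of predicted per-step rewards over a
    randomly sampled subset of the episode's steps."""
    n = waveforms.shape[0]
    idx = torch.randperm(n)[:subset_size]
    onehot = nn.functional.one_hot(actions[idx], num_classes=2).float()
    pred = reward_model(torch.cat([waveforms[idx], onehot], dim=1)).sum()
    return ((n / subset_size) * pred - episodic_return) ** 2

# Toy episode with random data standing in for masked MJD waveforms.
waveforms = torch.randn(32, N_FEATURES)
labels = torch.randint(0, 2, (32,))
actions, ret = run_episode(waveforms, labels)
loss = rrd_loss(waveforms, actions, ret)
loss.backward()
opt.step()
opt.zero_grad()
# The policy would then be trained with the learned per-step rewards
# (e.g., via REINFORCE), which is omitted here for brevity.
```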
14:40
Neural Architecture Codesign for Physics Applications
Jason Weitz
Dmitri Demler
14:40 - 15:00
Room: https://ucsd.zoom.us/j/95846737086?pwd=Pwygoui0Ka9cUi0yW7RckjPSduOQll.1
We develop an automated pipeline to streamline neural architecture codesign for physics applications. Our method employs neural architecture search that incorporates hardware costs, leading to the discovery of more hardware-efficient neural architectures. The discovered models exceed baseline performance, and we achieve further speedups through model compression techniques such as quantization-aware training and neural network pruning. We synthesize the selected models into high-level synthesis (HLS) code for FPGA deployment with the hls4ml library. Additionally, our hierarchical search space provides greater flexibility in optimization and extends easily to other tasks and domains. We demonstrate the pipeline with two case studies: Bragg peak finding in materials science and jet classification in high energy physics.
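As an illustration of the back end of such a pipeline, the sketch below quantizes a small Keras model with QKeras and converts it to HLS code with hls4ml; the layer sizes, bit widths, and FPGA part number are placeholder assumptions rather than the searched architectures.

```python
import hls4ml
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Activation
from qkeras import QDense, QActivation, quantized_bits

# A small quantized model standing in for an architecture found by the search
# (layer sizes and bit widths here are illustrative, not the searched values).
model = Sequential([
    Input(shape=(16,)),
    QDense(32, kernel_quantizer=quantized_bits(6, 0, alpha=1),
           bias_quantizer=quantized_bits(6, 0, alpha=1)),
    QActivation('quantized_relu(6)'),
    QDense(5, kernel_quantizer=quantized_bits(6, 0, alpha=1),
           bias_quantizer=quantized_bits(6, 0, alpha=1)),
    Activation('softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
# ... quantization-aware training on the task dataset would happen here ...

# Convert the trained model to HLS code for FPGA deployment with hls4ml.
config = hls4ml.utils.config_from_keras_model(model, granularity='name')
hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir='hls_project',
    part='xcu250-figd2104-2L-e')
hls_model.compile()            # builds a bit-accurate C simulation for inference
# hls_model.build(csim=False)  # runs HLS synthesis (requires the vendor tools)
```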