-
15/10/2024, 09:00
-
Javier Mauricio Duarte (Univ. of California San Diego (US))15/10/2024, 09:10
-
Dylan Sheldon Rankin (University of Pennsylvania (US))15/10/2024, 09:45
-
Gautham Narayan (UIUC)15/10/2024, 10:20
-
Kieron Burke15/10/2024, 11:20
I will briefly outline the huge importance of density functional theory (DFT) calculations to modern materials design (and to chemistry and warm dense matter, etc.). I will then discuss the impact of machine learning on the field, especially the rise of machine-learned potentials. I will briefly mention my own work in using ML to improve DFT. -
Daniel Ratner (SLAC)15/10/2024, 11:55
-
David Gleich (Purdue)15/10/2024, 12:30
It is now standard practice across science to use models that have been trained, fit, or learned based on a set of data. Many of these models involve a large number of parameters that make direct interpretation of the model challenging and a near black-box model view appropriate. We explore the possibilities of using ideas based on topological analysis methods to understand and evaluate these...
-
Siqi Miao (Georgia Tech)15/10/2024, 14:00Standard 15 min talk
This study introduces a novel transformer model optimized for large-scale point cloud processing in scientific domains such as high-energy physics (HEP) and astrophysics. Addressing the limitations of graph neural networks and standard transformers, our model integrates local inductive bias and achieves near-linear complexity with hardware-friendly regular operations. One contribution of this...
-
Jiahui Zhuo (Univ. of Valencia and CSIC (ES))15/10/2024, 14:15Standard 15 min talk
One of the most significant challenges in tracking reconstruction is the reduction of "ghost tracks," which are composed of false hit combinations in the detectors. When tracking reconstruction is performed in real-time at 30 MHz, it introduces the difficulty of meeting high efficiency and throughput requirements. A single-layer feed-forward neural network (NN) has been developed and trained...
-
Will Benoit15/10/2024, 14:30Standard 15 min talk
Deep Learning (DL) applications for gravitational wave (GW) physics are becoming increasingly common without the infrastructure to validate them at scale or deploy them in real-time. The challenge of gravitational waves requires a real-time time-series workflow. With ever more sensitive GW observing runs beginning in 2023-5 and progressing through the next decade, ever-increasing...
-
Dmitry Kondratyev (Purdue University (US))15/10/2024, 14:45Standard 15 min talk
Computing demands for large scientific experiments, including experiments at the Large Hadron Collider and the future DUNE neutrino detector, will increase dramatically in the next decades. Heterogeneous computing offers a way to meet these increased demands despite the limitations brought on by the end of Dennard scaling. However, to effectively exploit heterogeneous compute,...
-
James Giroux (W&M)15/10/2024, 15:00Standard 15 min talk
The Deep(er)RICH architecture integrates Swin Transformers and normalizing flows, and demonstrates significant advancements in particle identification (PID) and fast simulation. Building on the earlier DeepRICH model, Deep(er)RICH extends its capabilities across the entire kinematic region covered by the DIRC detector in the GlueX experiment. It learns PID...
-
Mirco Hünnefeld (University of Wisconsin-Madison)15/10/2024, 15:15Lightning 5 min talk + poster
Recently, compelling evidence for the emission of high-energy neutrinos from our host Galaxy - the Milky Way - was reported by IceCube, a neutrino detector instrumenting a cubic kilometer of glacial ice at the South Pole. This breakthrough observation is enabled by advances in AI, including a physics-driven deep learning method capable of exploiting available symmetries and domain knowledge....
-
Jovan Mitrevski (Fermi National Accelerator Lab. (US))15/10/2024, 15:20Lightning 5 min talk, no poster
This R&D project, initiated by the DOE Nuclear Physics AI-Machine Learning initiative in 2022, explores advanced AI technologies to address data processing challenges at RHIC and future EIC experiments. The main objective is to develop a demonstrator capable of efficient online identification of heavy-flavor events in proton-proton collisions (~1 MHz) based on their decay topologies, while...
-
Aaron Wang (University of Illinois at Chicago (US)), Vivekanand Gyanchand Sahu (University of California San Diego)15/10/2024, 15:25Lightning 5 min talk + poster
Attention-based transformers are ubiquitous in machine learning applications from natural language processing to computer vision. In high energy physics, one central application is to classify collimated particle showers in colliders based on the particle of origin, known as jet tagging. In this work, we study the interpretability and prospects for acceleration of Particle Transformer (ParT),...
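The attention mechanism at the core of architectures like ParT can be sketched in a few lines of numpy. This is the generic scaled dot-product operation, not ParT's actual implementation, and the shapes and data are illustrative:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: softmax(Q K^T / sqrt(d)) V.
    Q, K, V: (n_tokens, d) arrays, e.g. one row per jet constituent."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # (n, n) pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # each output mixes rows of V

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                       # 5 "particles", 8 features each
out = scaled_dot_product_attention(x, x, x)       # self-attention
```

Interpretability studies typically inspect the `weights` matrix, which says how much each particle attends to every other particle.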
-
Chang Sun (California Institute of Technology (US))15/10/2024, 15:30Lightning 5 min talk + poster
In this work, we present the Scalable QUantization-Aware Real-time Keras (S-QUARK), an advanced quantization-aware training (QAT) framework for efficient FPGA inference built on top of Keras-v3, supporting the TensorFlow, JAX, and PyTorch backends.
The framework inherits all perks from the High Granularity Quantization (HGQ) library, and extends it to support fixed-point numbers with...
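The basic operation such QAT frameworks train through, rounding values to a fixed-point grid in the forward pass, can be sketched as follows. The bit-split convention here is illustrative and not taken from the S-QUARK or HGQ APIs:

```python
import numpy as np

def fixed_point_quantize(x, int_bits=3, frac_bits=4):
    """Round to a signed fixed-point grid: 1 sign bit, int_bits integer bits,
    frac_bits fractional bits. In QAT this 'fake quantization' runs in the
    forward pass while gradients flow through unchanged (straight-through)."""
    scale = 2.0 ** frac_bits
    lo = -(2.0 ** int_bits)                 # most negative representable value
    hi = 2.0 ** int_bits - 1.0 / scale      # most positive representable value
    return np.clip(np.round(x * scale) / scale, lo, hi)

w = np.array([0.30, -1.70, 100.0])
wq = fixed_point_quantize(w)                # -> [0.3125, -1.6875, 7.9375]
```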
-
Jared Burleson (University of Illinois at Urbana-Champaign)15/10/2024, 15:35Lightning 5 min talk + poster
The next phase of high energy particle physics research at CERN will involve the High-Luminosity Large Hadron Collider (HL-LHC). In preparation for this phase, the ATLAS Trigger and Data AcQuisition (TDAQ) system will undergo upgrades to the online software tracking capabilities. Studies are underway to assess a heterogeneous computing farm deploying GPUs and/or FPGAs, together with the... -
Benedikt Riedel15/10/2024, 15:40Lightning 5 min talk + poster
An Artificial Intelligence (AI) model will spend “90% of its lifetime in inference.” Fully utilizing coprocessors, such as FPGAs or GPUs, for AI inference requires O(10) CPU cores to feed work to the coprocessors. Traditional data analysis pipelines will not be able to effectively and efficiently use the coprocessors to their full potential. To allow for distributed access to... -
Andrew Mogan15/10/2024, 15:45Lightning 5 min talk + poster
Processing large volumes of sparse neutrino interaction data is essential to the success of liquid argon time projection chamber (LArTPC) experiments such as DUNE. High rates of radiological background must be eliminated to extract critical information for track reconstruction and downstream analysis. Given the computational load of this rejection, and potential real time constraints of...
-
Oz Amram (Fermi National Accelerator Lab. (US))15/10/2024, 15:50Lightning 5 min talk + poster
Detector simulation is a key component of physics analysis and related activities in particle physics. In the upcoming High Luminosity LHC era, simulation will be required to use a smaller fraction of computing in order to satisfy resource constraints, at the same time as experiments are being upgraded with new, higher-granularity detectors that require significantly more resources to...
-
Akshay Malige15/10/2024, 15:55Lightning 5 min talk + poster
The demand for machine learning algorithms on edge devices, such as Field-Programmable Gate Arrays (FPGAs), arises from the need to process and intelligently reduce vast amounts of data in real-time, especially in large-scale experiments like the Deep Underground Neutrino Experiment (DUNE). Traditional methods, such as thresholding, clustering, multiplicity checks, or coincidence checks,...
-
Maira Khan (Fermi National Accelerator Laboratory)15/10/2024, 16:00Lightning 5 min talk + poster
Detecting quenches in superconducting (SC) magnets by non-invasive means is a challenging real-time process that involves capturing and sorting through physical events that occur at different frequencies and appear as various signal features. These events may be correlated across instrumentation type, thermal cycle, and ramp. These events together build a more complete picture of continuous... -
Luca Scomparin15/10/2024, 16:05Lightning 5 min talk + poster
Reinforcement Learning (RL) is a promising approach for the autonomous AI-based control of particle accelerators. Real-time requirements for these algorithms can often not be satisfied with conventional hardware platforms. In this contribution, the unique KINGFISHER platform being developed at KIT will be presented. Based on the novel AMD-Xilinx Versal platform, this system provides... -
Anita Nikolich (UIUC)15/10/2024, 16:10Lightning 5 min talk + poster
AI Red Teaming, an offshoot of traditional cybersecurity practices, has emerged as a critical tool for ensuring the integrity of AI systems. An underexplored area has been the application of AI Red Teaming methodologies to scientific applications, which increasingly use machine learning models in workflows. I'll highlight why this is important and how AI Red Teaming can highlight...
-
Chang Sun (California Institute of Technology (US))15/10/2024, 16:15Lightning 5 min talk + poster
Neural networks with a latency requirement on the order of microseconds, like the ones used at the CERN Large Hadron Collider, are typically deployed on FPGAs fully unrolled. A bottleneck for the deployment of such neural networks is area utilization, which is directly related to the number of Multiply Accumulate (MAC) operations in matrix-vector multiplications.
In this work, we present...
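For context on why the MAC count dominates area in fully unrolled designs: each multiply in every matrix-vector product gets its own hardware, so a quick per-layer tally tracks resource usage directly. The layer sizes below are made up for illustration:

```python
def dense_macs(layer_sizes):
    """MAC operations for an MLP's matrix-vector multiplies: each dense
    layer costs (inputs x outputs) multiply-accumulates."""
    return [n_in * n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

# e.g. a 16 -> 64 -> 32 -> 5 classifier, as might run in a trigger path
per_layer = dense_macs([16, 64, 32, 5])   # [1024, 2048, 160]
total = sum(per_layer)                    # 3232 MACs, each a dedicated
                                          # multiplier when fully unrolled
```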
-
ChiJui Chen15/10/2024, 16:20Poster
In software-hardware co-design, balancing performance with hardware constraints is critical, especially when using FPGAs for high-energy physics (HEP) applications with hls4ml. Limited resources and stringent latency requirements exacerbate this challenge. Existing frameworks such as AutoQKeras use Bayesian optimization to balance model size/energy and accuracy, but they are time-consuming,...
-
Caleb Geniesse (Lawrence Berkeley National Laboratory)15/10/2024, 17:00Standard 15 min talk
Characterizing the loss of a neural network can provide insights into local structure (e.g., smoothness of the so-called loss landscape) and global properties of the underlying model (e.g., generalization performance). Inspired by powerful tools from topological data analysis (TDA) for summarizing high-dimensional data, we are developing tools for characterizing the underlying shape (or...
-
Eric Anton Moreno (Massachusetts Institute of Technology (US))15/10/2024, 17:15Standard 15 min talk
Matched-filtering detection techniques for gravitational-wave (GW) signals in ground-based interferometers rely on having well-modeled templates of the GW emission. Such techniques have been traditionally used in searches for compact binary coalescences (CBCs) and have been employed in all known GW detections so far. However, interesting science cases aside from compact mergers do not yet have...
-
Noah Alexander Zipper (University of Colorado Boulder (US))15/10/2024, 17:30Standard 15 min talk
We present the development, deployment, and initial recorded data of an unsupervised autoencoder trained for unbiased detection of new physics signatures in the CMS experiment during LHC Run 3. The Global Trigger makes the final hardware decision to readout or discard data from LHC collisions, which occur at a rate of 40 MHz, within nanosecond latency constraints. The anomaly detection...
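The underlying recipe, scoring each event by how poorly a compressed representation reconstructs it, can be illustrated with a linear autoencoder (PCA) in numpy. The deployed trigger model is a small neural autoencoder under hard latency budgets, so this is only a conceptual sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
# "Standard" events live near a 2-D subspace of a 10-D feature space
basis = rng.normal(size=(2, 10))
train = rng.normal(size=(5000, 2)) @ basis + 0.05 * rng.normal(size=(5000, 10))

# Fit a rank-2 linear autoencoder: encode/decode with the top-2 principal axes
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:2]                       # (2, 10) encoder/decoder weights

def anomaly_score(x):
    """Reconstruction MSE: large when x doesn't fit the learned subspace."""
    z = (x - mean) @ components.T         # encode
    xhat = z @ components + mean          # decode
    return np.mean((x - xhat) ** 2, axis=-1)

typical = train[:100]
weird = rng.normal(size=(100, 10))        # isotropic "new physics" stand-in
```

Events whose score exceeds a threshold would be kept for readout; typical events reconstruct well and score low.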
-
Mr Jason Edward Johnson (Purdue University)15/10/2024, 17:45Standard 15 min talk
The rapidly developing frontiers of additive manufacturing, especially multi-photon lithography, create a constant need for optimization of new process parameters. Multi-photon lithography is a 3D printing technique which uses the nonlinear absorption of two or more photons from a high intensity light source to induce highly confined polymerization. The process can 3D print structures with...
-
Oliver Hoidn15/10/2024, 18:00Standard 15 min talk
Coherent diffractive imaging (CDI) techniques like ptychography enable nanoscale imaging, bypassing the resolution limits of lenses. Yet, the need for time-consuming iterative phase recovery hampers real-time imaging. While supervised deep learning strategies have increased reconstruction speed, they sacrifice image quality. Furthermore, these methods’ demand for extensive labeled training...
-
Jai Yu (U Chicago)16/10/2024, 09:00
-
Zhijian Liu (UCSD)16/10/2024, 09:35
-
Anand Raghunathan (Purdue University)16/10/2024, 10:10
-
Erica Carlson16/10/2024, 11:10
Spatially resolved surface probes have recently revealed rich electronic textures at the nanoscale and mesoscale in many quantum materials. Rather than transitioning from insulator to metal all at once, VO2 forms an intricate network of metallic puddles that extend like filigree over a wide range of temperatures. We developed a convolutional neural network to harvest information from both...
-
Supriyo Datta (Purdue University)16/10/2024, 11:45
-
Seda Ogrenci (Northwestern University)16/10/2024, 14:00
-
Sergei Kalinin16/10/2024, 14:35
-
Lino Oscar Gerlach (Princeton University (US))16/10/2024, 15:10Lightning 5 min talk + poster
In the search for new physics, real-time detection of anomalous events is critical for maximizing the discovery potential of the LHC. CICADA (Calorimeter Image Convolutional Anomaly Detection Algorithm) is a novel CMS trigger algorithm operating at the 40 MHz collision rate. By leveraging unsupervised deep learning techniques, CICADA aims to enable physics-model independent trigger decisions,...
-
Jack Henry Cleeve (Columbia University)16/10/2024, 15:15Lightning 5 min talk + poster
Unsupervised learning algorithms enable insights from large, unlabeled datasets, allowing for feature extraction and anomaly detection that can reveal latent patterns and relationships often not found by supervised or classical algorithms. Modern particle detectors, including liquid argon time projection chambers (LArTPCs), collect a vast amount of data, making it impractical to save...
-
Ryan Forelli (Northwestern University)16/10/2024, 15:20Lightning 5 min talk + poster
Low latency machine learning inference is vital for many high-speed imaging applications across various scientific domains. From analyzing fusion plasma [1] to rapid cell-sorting [2], there is a need for in-situ fast inference in experiments operating in the kHz to MHz range. External PCIe accelerators are often unsuitable for these experiments due to the associated data transfer overhead,...
-
Michael Tan Bezick16/10/2024, 15:25Lightning 5 min talk + poster
Recent advancements in generative artificial intelligence (AI), including transformers, adversarial networks, and diffusion models, have demonstrated significant potential across various fields, from creative art to drug discovery. Leveraging these models in engineering applications, particularly in nanophotonics, is an emerging frontier. Nanophotonic metasurfaces, which manipulate light at...
-
Olivia Weng16/10/2024, 15:30Lightning 5 min talk + poster
Applications like high-energy physics and cybersecurity require extremely high throughput and low latency neural network (NN) inference. Lookup-table-based NNs address these constraints by implementing NNs purely as lookup tables (LUTs), achieving inference latency on the order of nanoseconds. Since LUTs are a fundamental FPGA building block, LUT-based NNs map to FPGAs easily. LogicNets (and...
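The core trick, precomputing a neuron's entire truth table offline so that inference is a single memory lookup, can be sketched as follows. This is illustrative only, not the LogicNets toolflow, and the weights are made up:

```python
import numpy as np
from itertools import product

def neuron(bits):
    """A tiny 'trained' neuron on 4 binary inputs: threshold of a weighted sum."""
    w = np.array([0.7, -0.4, 0.9, 0.2])
    return int(np.dot(bits, w) > 0.5)

# Enumerate all 2^4 input patterns once, offline -> a 16-entry lookup table
lut = [neuron(np.array(bits)) for bits in product((0, 1), repeat=4)]

def lut_infer(bits):
    """Inference is just indexing; the FPGA analogue is a native LUT primitive,
    so latency is a single lookup regardless of the neuron's arithmetic."""
    addr = int("".join(map(str, bits)), 2)  # bits[0] is the MSB of the address
    return lut[addr]
```

The table size grows as 2^fan-in, which is why LUT-based NNs keep per-neuron fan-in small and sparse.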
-
Hoin Jung (Purdue University)16/10/2024, 15:35Lightning 5 min talk + poster
Recent advancements in Vision-Language Models (VLMs) have enabled complex multimodal tasks by processing text and image data simultaneously, significantly enhancing the field of artificial intelligence. However, these models often exhibit biases that can skew outputs towards societal stereotypes, thus necessitating debiasing strategies. Existing debiasing methods focus narrowly on specific...
-
Ben Hawks (Fermi National Accelerator Lab)16/10/2024, 15:40Lightning 5 min talk + poster
As machine learning (ML) increasingly serves as a tool for addressing real-time challenges in scientific applications, the development of advanced tooling has significantly reduced the time required to iterate on various designs. Despite these advancements in areas that once posed major obstacles, newer challenges have emerged. For example, processes that were not previously considered...
-
Dmitri Demler16/10/2024, 15:45Lightning 5 min talk + poster
We develop an automated pipeline to streamline neural architecture codesign for physics applications, to reduce the need for ML expertise when designing models for a novel task. Our method employs a two-stage neural architecture search (NAS) design to enhance these models, including hardware costs, leading to the discovery of more hardware-efficient neural architectures. The global search...
-
Niharika Das (G H Raisoni University)16/10/2024, 15:50Lightning 5 min talk + poster
Deep learning, particularly employing the Unet architecture, has become pivotal in cardiology, facilitating detailed analysis of heart anatomy and function. The segmentation of cardiac images enables the quantification of essential parameters such as myocardial viability, ejection fraction, cardiac chamber volumes, and morphological features. These segmentation methods operate autonomously...
-
Nicolò Ghielmetti (CERN)16/10/2024, 15:55Lightning 5 min talk + poster
The number of CubeSats launched for data-intensive applications is increasing due to the modularity and reduced cost these platforms provide. Consequently, there is a growing need for efficient data processing and compression. Tailoring onboard processing with Machine Learning to specific mission tasks can optimise downlink usage by focusing only on relevant data, ultimately reducing the...
-
Denis Leshchev16/10/2024, 16:45Standard 15 min talk
Modern scientific instruments generate vast amounts of data at increasingly higher rates, outpacing traditional data management strategies that rely on large-scale transfers to offline storage for post-analysis. To enable next-generation experiments, data processing must be performed at the edge—directly alongside the scientific instruments. By integrating these instruments with...
-
Emadeldeen Hamdan (University of Illinois Chicago)16/10/2024, 17:00Standard 15 min talk
In situ machine learning data processing for neuroscience probes can have wide-reaching applications from data filtering, event triggering, and ultimately real-time interventions at kilohertz frequencies intrinsic to natural systems. In this work, we present the integration of Machine Learning (ML) algorithms on an off-the-shelf neuroscience data acquisition platform by Spike Gadgets. The...
-
Lorenzo Cacciapuoti16/10/2024, 17:15Standard 15 min talk
Artificial neural networks (ANNs) are capable of complex feature extraction and classification with applications in robotics, natural language processing, and data science. Yet, many ANNs have several key limitations; notably, current neural network architectures require enormous training datasets and are computationally inefficient. It has been posited that biophysical computations in single...
-
Ms Jieun Yoo (UIC)16/10/2024, 17:30Standard 15 min talk
We introduce a smart pixel prototype readout integrated circuit (ROIC) fabricated using a 28 nm bulk CMOS process, which integrates a machine learning (ML) algorithm for data filtering directly within the pixel region. This prototype serves as a proof-of-concept for a potential Phase III pixel detector upgrade of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC)....
-
Marzieh Vaez Torshizi (Siemens EDA)16/10/2024, 17:45Standard 15 min talk
Nowadays, the application of neural networks (NNs) has expanded across different industries (e.g., autonomous vehicles, manufacturing, natural-language processing, etc.) due to their improved accuracy. This has been made possible by the increased complexity of these networks, which requires greater computational effort and memory consumption. As a result, there is more demand for...
-
Botao Du (Purdue University)16/10/2024, 18:00Standard 15 min talk
High-fidelity single-shot quantum state readout is crucial for advancing quantum technology. Machine-learning (ML) assisted qubit-state discriminators have shown high readout fidelity and strong resistance to crosstalk. By directly integrating these ML models into FPGA-based control hardware, fast feedback control becomes feasible, which is vital for quantum error correction and other...
-
Callie Hao17/10/2024, 09:00
-
Siddharth Garg17/10/2024, 09:35
-
Abhishek Jain17/10/2024, 10:10
-
Cristiano Fanelli (William & Mary)17/10/2024, 11:10
The Electron Ion Collider (EIC) promises unprecedented insights into nuclear matter and quark-gluon interactions, with advances in artificial intelligence (AI) and machine learning (ML) playing a crucial role in unlocking its full potential. This talk will explore potential opportunities for AI/ML integration within the EIC program, drawn from broader discussions in the AI4EIC forum. I will...
-
Sergey Furletov (Jefferson lab)17/10/2024, 11:45
-
Hamza Ezzaoui Rahali (University of Sherbrooke)17/10/2024, 13:10Standard 15 min talk
Deploying Machine Learning (ML) models on Field-Programmable Gate Arrays (FPGAs) is becoming increasingly popular across various domains as a low-latency and low-power solution that helps manage large data rates generated by continuously improving detectors. However, developing ML models for FPGA deployment is often hindered by the time-consuming synthesis procedure required to evaluate...
-
Charles-Étienne Granger (Université de Sherbrooke)17/10/2024, 13:25Standard 15 min talk
Ultra-high-speed detectors are crucial in scientific and healthcare fields, such as medical imaging, particle accelerators and astrophysics. Consequently, upcoming large dark matter experiments, like the ARGO detector with an anticipated 200 m² detector surface, are generating massive amounts of data across a large quantity of channels that increase hardware, energy and environmental costs....
-
Alexis Shuping (Northwestern University)17/10/2024, 13:40Standard 15 min talk
High-Level Synthesis (HLS) techniques, coupled with domain-specific translation tools such as HLS4ML, have made the development of FPGA-based Machine Learning (ML) accelerators more accessible than ever before, allowing scientists to develop and test new models on hardware with unprecedented speed. However, these advantages come with significant costs in terms of implementation complexity. The...
-
Alan T. L. Bacellar (University of Texas at Austin)17/10/2024, 13:55Standard 15 min talk
We introduce the Differentiable Weightless Neural Network (DWN), a model based on interconnected lookup tables. Training of DWNs is enabled by a novel Extended Finite Difference technique for approximate differentiation of binary values. We propose Learnable Mapping, Learnable Reduction, and Spectral Regularization to further improve the accuracy and efficiency of these models. We evaluate...
-
Haoyan Wang (Intel Corporation)17/10/2024, 14:10Standard 15 min talk
The increasing demand for efficient machine learning (ML) acceleration has intensified the need for user-friendly yet flexible solutions, particularly for edge computing. Field Programmable Gate Arrays (FPGAs), with their high configurability and low-latency processing, offer a compelling platform for this challenge. Our presentation gives an update on an end-to-end ML acceleration flow utilizing...
-
Ling-Chi Yang (Institute of Electronics in National Yang Ming Chiao Tung University)17/10/2024, 14:30Standard 15 min talk
Transformers are becoming increasingly popular in fields such as natural language processing, speech processing, and computer vision. However, due to the high memory bandwidth and power requirements of Transformers, contemporary hardware is gradually unable to keep pace with the trend of larger models. To improve hardware efficiency, increase throughput, and reduce latency, there has been a...
-
Sonata Simonaitis-boyd17/10/2024, 14:45Standard 15 min talk
Neutrinoless double beta ($0 \nu \beta \beta$) decay is a Beyond the Standard Model process that, if discovered, could prove the Majorana nature of neutrinos—that they are their own antiparticles. In their search for this process, $0 \nu \beta \beta$ decay experiments rely on signal/background discrimination, which is traditionally approached as a supervised learning problem. However, the...
-
Janina Dorin Hakenmueller (Duke University)17/10/2024, 15:00Standard 15 min talk
High-purity germanium spectrometers are widely used in fundamental physics and beyond. Their excellent energy resolution enables the detection of electromagnetic signals and recoils down to ionization energies below 1 keV and even lower. However, the detectors are also very sensitive to all types of noise that will overwhelm the trigger routines of the data acquisition system and significantly...
-
Philip Coleman Harris (Massachusetts Inst. of Technology (US))17/10/2024, 15:30
-
17/10/2024, 17:00
-
Jennifer Ngadiuba (FNAL)18/10/2024, 10:00
-
Marco Rovere (CERN)18/10/2024, 10:20
-
Markus Elsing (CERN)18/10/2024, 10:40
-
18/10/2024, 11:20
-
18/10/2024, 13:45
-
Sasha Boltasseva (Purdue University)18/10/2024, 13:50
-
Mr Surojit Saha (Institute of Astronomy, National Tsing Hua University, Taiwan)Poster
The coalescence of a binary neutron star (BNS) system in the event GW170817, which generated gravitational waves (GW) accompanied by a kilonova (KNe), the electromagnetic (EM) counterpart, has been a prime topic of interest for the astronomy community in recent times, as it provided much insight into multi-messenger astronomy. Since its discovery in 2017, several research teams have put...
-
Arghya Ranjan DasPoster
Enforcing sparsity, the fraction of zeros in a neural network’s weight matrices, has a variety of uses in machine learning such as improving computational efficiency and controlling encoding efficiency. Often achieving a specified sparsity requires trial and error or multiple retrainings of the same model until the criteria are met, which can be labor intensive and prone to error. Using the...
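One standard way to hit a target sparsity directly rather than by trial and error is one-shot global magnitude pruning. A minimal numpy sketch, not necessarily the method this work proposes:

```python
import numpy as np

def prune_to_sparsity(weights, sparsity):
    """Zero out the smallest-magnitude entries so that a fraction `sparsity`
    (in [0, 1]) of the weights are exactly zero."""
    flat = np.abs(weights).ravel()
    k = int(round(sparsity * flat.size))      # number of entries to zero
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(2)
w = rng.normal(size=(64, 64))
wp = prune_to_sparsity(w, 0.75)
achieved = np.mean(wp == 0.0)                 # ~0.75, exact up to magnitude ties
```

In practice this is followed by fine-tuning to recover accuracy, but the sparsity target is met by construction in a single pass.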
-
Andrew Naylor (Lawrence Berkeley National Lab)Poster
As scientific experiments are generating increasingly larger and more complex datasets, the need to accelerate scientific workflows becomes ever more pressing. Recent advancements in machine learning (ML) algorithms, combined with the power of cutting-edge GPUs, have led to significant performance gains. However, optimizing computational efficiency remains crucial to minimize processing...
-
Matt Wilkinson (University of Washington)Poster
Reflection High Energy Electron Diffraction (RHEED) is a technique for real-time monitoring of surface crystal structures during thin-film deposition. By directing a high-energy electron beam at a shallow angle onto a crystalline surface, RHEED produces diffraction patterns that reveal valuable information about both the bulk structure and the surface's atomic arrangement. The resulting...
-
Poster
Accurate estimation of subglacial bed topography is crucial for understanding ice sheet dynamics and their responses to climate change. In this study, we employ machine learning models, enhanced with Spark parallelization, to predict subglacial bed elevation using surface attributes such as ice thickness, flow velocity, and surface elevation. Radar track data serves as ground truth for model...
-
Daniel Ratner (SLAC)
-
Poster
Conventional photonic device design often relies on manual trial-and-error processes and simplistic algorithms, in which design processes are severely constrained by intuition-based models and limited adjustable parameters, leading to time-consuming inefficiencies. Although optimization methods like evolutionary algorithms have been introduced, they are insufficient in addressing...
-
Cristian Barinaga (Purdue University)Poster
Modern AI model creation requires ample computational power to process data in both predictive and learning phases. Due to memory and processing constraints, edge and IoT electronics using such models can be forced to outsource optimization and training to either the cloud or pre-deployment development. This poses issues when optimization and classification are required from sensor and...
-
Alexander Yue (Stanford University/SLAC)Poster
Detectors at next-generation high-energy physics experiments face several daunting requirements: high data rates, damaging radiation exposure, and stringent constraints on power, space, and latency. In light of this, recent detector design studies have explored the use of machine learning (ML) in readout Application-Specific Integrated Circuits (ASICs) to run intelligent inference and data...
-
Shaghayegh Emami ("Autonomous discoveries using a modular ecosystem for adaptive anomaly detection in LHC triggers")Poster
Anomaly detection (AD) in the earliest stage of LHC trigger systems represents a fundamentally new tool to enable data-driven discoveries. While initial efforts have focused on adapting powerful offline algorithms to these high-throughput streaming systems, the question of how such algorithms should adapt to constantly-evolving detector conditions remains a major challenge. In this work, we...
-
Jack Patrick Rodgers (Purdue University (US))Poster
As deep learning methods, and particularly Large Language Models, have shown huge promise in a variety of applications, we attempt to apply a BERT (Bidirectional Encoder Representations from Transformers) model developed by Google, utilizing the well-known multi-headed attention mechanism, to a high energy physics problem. Specifically, we focus on the process of top quark-antiquark pair decay...
-
Poster
The Smartpixels project aims to deliver on-device data reduction using neural networks for fine granularity pixel sensors used in high-precision tracking detectors. This has resulted in two major implementations: a filter network and a regression network. Both of these networks deliver novel capabilities for pixel sensors, including on-sensor background rejection and single-sensor...
-
Rajeev BotadraPoster
Non-Human Primates (NHPs) are central to neuroscience research due to their complex behavioral interactions and physiological similarities to the human brain. A principal motivation behind the NHP research in the aoLab at the University of Washington is to understand and model neural circuits, which can be translated for practical applications for humans. However, the nonlinear...
Josh Peterson - Poster
IceCube DeepCore is an infill of the IceCube Neutrino Observatory designed to study neutrinos with energies as low as 5 GeV. Reconstruction and classification tasks near the lower energy threshold of IceCube DeepCore are especially difficult due to the low number of detected photons per neutrino event. Many neural networks have been developed for these tasks, and there are many ways we could...
Seungbin Park - Poster
Decoding neural activity into behaviorally relevant variables such as speech or movement is an essential step in the development of brain-machine interfaces (BMIs) and can be used to clarify the role of distinct brain areas in relation to behavior. Two-photon (2p) calcium imaging provides access to thousands of neurons with single-cell resolution in genetically-defined populations and therefore...
Yaping Qi (Tohoku University) - Poster
This study investigates the use of deep learning to enhance Raman spectroscopy analysis for two-dimensional (2D) materials, which are valued for their unique structural properties. Traditional methods for analyzing Raman data are time-consuming and rely heavily on manual interpretation, prompting the need for more efficient approaches. We developed a one-dimensional convolutional neural...
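The building block of the one-dimensional CNN described above can be sketched with plain numpy: a 1D convolution, a ReLU, and a pooling step applied to a synthetic spectrum. The peak positions, the smoothing kernel standing in for a learned filter, and all sizes are illustrative assumptions, not from the poster.

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Valid-mode 1D convolution (cross-correlation), as in a 1D CNN layer."""
    n_out = (len(signal) - len(kernel)) // stride + 1
    return np.array([signal[i*stride : i*stride + len(kernel)] @ kernel
                     for i in range(n_out)])

# Synthetic "Raman spectrum": two Lorentzian-like peaks on a noisy baseline.
x = np.linspace(0, 3000, 1024)                    # wavenumber axis (cm^-1)
spectrum = (1.0 / (1 + ((x - 1350) / 30)**2)      # peak positions are illustrative
          + 1.5 / (1 + ((x - 1580) / 20)**2))
spectrum += np.random.default_rng(1).normal(0, 0.01, x.size)

kernel = np.ones(9) / 9.0                         # stand-in for a learned filter
feat = np.maximum(conv1d(spectrum, kernel), 0.0)  # ReLU activation
pooled = feat.reshape(-1, 8).max(axis=1)          # max-pool with window 8
```

A real network stacks many such layers with learned kernels and ends in dense layers that classify the material; this sketch only shows the shape bookkeeping of one layer.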
Pranshul Sardana (Purdue University) - Poster
Diffusion is a natural phenomenon in fluids. It can be measured optically by seeding an otherwise featureless fluid with tracer particles and observing their motion under a microscope. However, existing particle-based algorithms for measuring the diffusion coefficient have multiple failure modes, especially when the fluid is flowing or the particles are defocused. This work uses...
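The classical particle-based measurement the abstract contrasts against is the Einstein relation, MSD(t) = 2dDt in d dimensions. A minimal numpy sketch (simulated 2D Brownian tracers; all parameter values are illustrative assumptions, not from the poster):

```python
import numpy as np

rng = np.random.default_rng(42)
D_true, dt = 0.5, 0.01                 # diffusion coefficient and time step (toy units)
n_steps, n_particles = 2000, 200

# 2D Brownian motion: each per-step displacement is N(0, 2*D*dt) per coordinate.
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_steps, n_particles, 2))
traj = steps.cumsum(axis=0)

# Einstein relation: MSD(t) = 2*d*D*t with d = 2 dimensions.
lag = 100
disp = traj[lag:] - traj[:-lag]
msd = (disp**2).sum(axis=-1).mean()
D_est = msd / (2 * 2 * lag * dt)       # recover D from the mean squared displacement
```

When a mean flow is present, the drift must be subtracted before computing the MSD, which is one of the failure modes the abstract alludes to.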
Ishat Raihan Jamil - Poster
Additive manufacturing at the micro-nanoscale has made significant advancements through multi-photon lithography techniques. The recently developed continuous layer-by-layer projection 3D printing process facilitates high-speed micro-manufacturing. Achieving precise 3D printing, however, requires optimizing process parameters for each 2D layer to mitigate imperfections, such as...
Zhuo (Cecilia) Chen (Bryn Mawr College) - Poster
In materials science, 4D Scanning Transmission Electron Microscopy (4D STEM) produces a dataset of images formed by electrons passing through a thin specimen with the electron beam focused to a fine spot [1], allowing materials scientists to extract structural properties. Oxley et al. showed that deep learning is powerful for distinguishing structures embedded within the data [2]. However, Oxley et...
Yilin Shen - Poster
AI Engines (AIEs) are a component of the AMD Versal Adaptive Compute Acceleration Platform (ACAP), an innovative subsystem that offers extensive parallelism and enhanced compute density. Each AIE is a VLIW processor equipped with a powerful multiply-accumulate (MAC) unit that can perform multiple MAC operations per cycle. These processors are grouped together in a 2-D grid of...
David Stewart (Wayne State University) - Poster
Many studies in recent years have shown that neural networks (NNs) trained on jet substructure observables in ultra-relativistic heavy-ion collision events can significantly improve the resolution of jet-$p_\mathrm{T}$ background corrections relative to the standard area-based technique. However, modifications to jet substructure due to quenching in the quark-gluon plasma (QGP) in central...
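The baseline mentioned above, the area-based technique, subtracts the median background density times the jet area: pT_corr = pT_raw - rho * A. A numpy sketch of why its resolution is limited (all values, including rho and its event-by-event spread, are illustrative assumptions, not from the poster):

```python
import numpy as np

def area_based_correction(pt_raw, area, rho):
    """Standard area-based subtraction: pT_corr = pT_raw - rho * A."""
    return pt_raw - rho * area

rng = np.random.default_rng(7)
n_jets = 1000
rho_avg = 140.0                                     # GeV per unit area (illustrative)
rho_event = rho_avg + rng.normal(0, 10, n_jets)     # event-by-event fluctuations
true_pt = rng.uniform(20, 100, n_jets)
area = rng.normal(0.5, 0.05, n_jets)
measured = true_pt + rho_event * area + rng.normal(0, 5, n_jets)

# Correcting with the average rho leaves a residual spread from the
# fluctuations; this is the resolution that NN-based corrections reduce.
corrected = area_based_correction(measured, area, rho_avg)
resolution = np.std(corrected - true_pt)
```

The correction is unbiased on average but cannot remove the event-by-event fluctuation term, which is where substructure-aware NNs gain.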
Chang Sun (California Institute of Technology (US)) - Poster
We demonstrate the use of the MLP-Mixer architecture for fast jet classification in high-energy physics. The MLP-Mixer is a simple, efficient architecture consisting of MLP blocks applied along different directions of the input tensor. It was first proposed by Tolstikhin et al. and has been shown to be competitive with state-of-the-art architectures...
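The "MLP blocks applied along different directions of the input tensor" can be made concrete in a few lines: one Mixer block first mixes across tokens (jet constituents), then across channels (features). The dimensions and random weights below are illustrative assumptions, not the poster's trained model.

```python
import numpy as np

def mlp(x, w1, w2):
    return np.maximum(x @ w1, 0.0) @ w2          # two-layer MLP with ReLU

def mixer_block(x, rng):
    """One MLP-Mixer block: a token-mixing MLP applied across constituents,
    then a channel-mixing MLP applied across features; random weights."""
    n_tok, n_ch = x.shape
    wt1 = rng.standard_normal((n_tok, 64)) / 8   # token-mixing weights
    wt2 = rng.standard_normal((64, n_tok)) / 8
    wc1 = rng.standard_normal((n_ch, 64)) / 8    # channel-mixing weights
    wc2 = rng.standard_normal((64, n_ch)) / 8
    x = x + mlp(x.T, wt1, wt2).T                 # mix across tokens, skip connection
    x = x + mlp(x, wc1, wc2)                     # mix across channels, skip connection
    return x

rng = np.random.default_rng(0)
jet = rng.standard_normal((16, 8))   # toy jet: 16 constituents, 8 features each
out = mixer_block(jet, rng)
```

Because both MLPs are plain matrix multiplies with fixed shapes, the architecture maps naturally onto regular hardware, which is part of its appeal for fast inference.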
Abhishikth Mallampalli (University of Wisconsin Madison (US)) - Poster
We present our approach to mitigating the Beam-Induced Background (BIB) in a muon collider, leveraging machine learning. We then utilize pruning and quantization-aware training to enable real-time data processing, and demonstrate that we can distinguish BIB energy deposits from physics processes of interest with high accuracy using FPGAs. Our work is a first proof of concept of the ability...
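The two compression steps named above, pruning and quantization, can be sketched post-training in numpy (the abstract uses quantization-aware training; this simpler post-training variant, with illustrative bit widths and sparsity, just shows what each operation does to the weights):

```python
import numpy as np

def prune(w, sparsity):
    """Magnitude pruning: zero out the smallest-|w| fraction of weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize(w, total_bits=8, int_bits=1):
    """Round to signed fixed-point <total_bits, int_bits>, as on an FPGA."""
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits
    lo, hi = -2.0 ** (int_bits - 1), 2.0 ** (int_bits - 1) - 1.0 / scale
    return np.clip(np.round(w * scale) / scale, lo, hi)

rng = np.random.default_rng(3)
w = rng.normal(0, 0.3, (64, 32))                 # toy dense-layer weight matrix
w_small = quantize(prune(w, sparsity=0.5))
sparsity_achieved = np.mean(w_small == 0.0)
```

Pruned zeros let FPGA synthesis skip multipliers entirely, and fixed-point weights replace floating-point MACs with cheap integer arithmetic, which is what makes the real-time latency budget reachable.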
Alexander Migala - Poster
Monolithic liquid scintillator detector technology has been central to the exploration of new neutrino physics. The KamLAND-Zen experiment exemplifies this technology and has yielded leading results in the search for neutrinoless double-beta ($0\nu\beta\beta$) decay. Experimenters must reconstruct each event's position and energy from the raw data in order to understand the physical events...
Cymberly Tsai - Poster
Fitting data to a variety of models is a fundamental challenge in the monitoring and control of dynamical systems across science and manufacturing domains. In this work, we present a compact foundation model designed for adaptive function selection and regression. The proposed architecture utilizes 1D convolutional neural networks (CNNs), augmented by physical constraints, to facilitate the...
110. Fully-connected Neural Network for Orbital-free DFT: Exact Conditions and Non-local Information
Ulises Zarate (Purdue University) - Poster
Density Functional Theory (DFT) is one of the most successful methods for computing ground-state properties of molecules and materials. In its purest form ("orbital-free DFT"), it transforms a $3N$-dimensional interacting electron problem into one 3D integro-differential problem at the cost of approximating two functionals of the electron density $n(\mathbf{r})$, one of them being for the...
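The "functionals of the electron density" that orbital-free DFT must approximate have one classic explicit example, the Thomas-Fermi kinetic-energy functional $T_\mathrm{TF}[n] = C_F \int n^{5/3}(\mathbf{r})\, d^3r$ with $C_F = \tfrac{3}{10}(3\pi^2)^{2/3}$, which is the kind of object ML models aim to improve on. A numerical sketch evaluating it on a hydrogen 1s test density (the density and grid are illustrative choices, not from the poster):

```python
import numpy as np

C_F = 0.3 * (3.0 * np.pi**2) ** (2.0 / 3.0)   # Thomas-Fermi constant, atomic units

def trapezoid(y, x):
    """Trapezoidal integration on a 1D grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def t_tf(n, r):
    """Thomas-Fermi kinetic energy T_TF[n] = C_F * ∫ n^{5/3} 4πr² dr
    for a spherically symmetric density n(r) on a radial grid."""
    return C_F * trapezoid(n ** (5.0 / 3.0) * 4.0 * np.pi * r**2, r)

r = np.linspace(1e-6, 20.0, 20001)            # radial grid in bohr
n_1s = np.exp(-2.0 * r) / np.pi               # hydrogen 1s density, ∫ n d³r = 1
T = t_tf(n_1s, r)                             # ≈ 0.289 hartree vs. the exact 0.5
```

Thomas-Fermi badly underestimates the true hydrogen kinetic energy of 0.5 hartree, which illustrates why learned corrections with exact conditions and non-local information are attractive.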
Jue Wang - Poster
Shape-morphing devices, an emerging technology in soft robotics, have attracted significant attention due to their potential in applications such as human-machine interfaces, biomimetic robotics, haptic feedback, and tools for manipulating biological systems. These devices mimic the flexible, dynamic behavior of biological organisms, enabling programmable, controllable, and reversible...
Miles Cochran-Branson (University of Washington (US)) - Poster
Particle tracking at Large Hadron Collider (LHC) experiments is a crucial component of particle reconstruction, yet it remains one of the most computationally challenging tasks in this process. As we approach the High-Luminosity LHC era, the complexity of tracking is expected to increase significantly. Leveraging coprocessors such as GPUs presents a promising solution to the rising...
Ethan Colbert (Purdue University (US)) - Poster
One potential way to meet the quickly growing computing demands in High Energy Physics (HEP) experiments is by leveraging specialized processors such as GPUs. The “as a service” (AAS) approach helps improve utilization of GPU resources by allowing one GPU to serve a wide range of tasks, significantly reducing idle time. The SONIC project implements the AAS approach for a variety of widely used...
Jahanzeb Ahmad - Lightning 5 min talk + poster
This presentation showcases the Intel FPGA AI Suite alongside the AI Tensor Blocks recently incorporated into Intel's latest FPGA device families for deep learning inference. These innovative FPGA components bring real-time, low-latency, and energy-efficient processing to the forefront. They are supported by the inherent advantages of Intel FPGAs,...
Dennis Plotnikov (Johns Hopkins University (US)) - Poster
Recent advancements in the use of machine learning (ML) techniques on field-programmable gate arrays (FPGAs) have allowed for the implementation of embedded neural networks with extremely low latency. This is invaluable for particle detectors at the Large Hadron Collider, where latency and area are strictly bounded. The hls4ml framework converts trained ML model software...
Raghav Kansal (Univ. of California San Diego (US)) - Poster
Fast, accurate simulations are becoming increasingly necessary for the precision measurements and BSM searches planned by LHC experiments in Run 3 and beyond. The recent breakthroughs in deep generative modelling in computer vision and natural language processing offer a promising and exciting avenue for improving the speed of current LHC simulation paradigms by up to 3 orders of magnitude. We...
Sergei Kalilin (UTK)
-
Dr Jie Feng (Shenzhen Campus of Sun Yat-Sen University (CN)) - Poster
This work introduces advanced computational techniques for modeling the time evolution of compact binary systems using machine learning. The dynamics of compact binary systems, such as black holes and neutron stars, present significant nonlinear challenges due to the strong gravitational interactions and the requirement for precise numerical simulations. Traditional methods, like the...
Poster
Effective pile-up suppression, particle ID and clustering are essential for maximising the physics performance of the Phase-II Global trigger of the ATLAS experiment. To address this, we train both convolutional and DeepSets neural networks to exploit cluster topologies to accurately predict calorimeter cell labels, and benchmark performance against existing approaches. We optimise the...
Poster
Machine learning and artificial neural networks (ANNs) have increasingly become integral to data analysis research in astrophysics due to the growing demand for fast calculations resulting from the abundance of observational data. Simultaneously, neutron stars and black holes have been extensively examined within modified theories of gravity since they enable the exploration of the strong...
Stefano Veneziano (INFN e Università Roma Sapienza)
-
Seiya Tsukamoto - Poster
The detection of gravitational waves with the Laser Interferometer Gravitational-Wave Observatory (LIGO) has provided the tools to probe the furthest reaches of the universe. Rapid follow-up of compact binary coalescence (CBC) events and their electromagnetic counterparts is crucial for finding short-lived transients. After a gravitational wave (GW) detection, another particular challenge is...
Henry Paschke, Mr James Gaboriault-Whitcomb (YSU) - Poster
Optimizing the inference of Graph Neural Networks (GNNs) for track finding is critical to improving particle collision event reconstruction. In high-energy physics experiments, such as those at the Large Hadron Collider (LHC), detectors generate enormous volumes of complex, noisy data from particles colliding at extremely high energies. Track finding is the task of reconstructing the paths of...
Liv Helen Vage (Princeton University (US)) - Poster
Upgrades to the CMS experiment will see the average pileup go from 50 to 140 and eventually 200. With current algorithms, this would mean that almost 50% of the High Level Trigger time budget would be spent on particle track reconstruction. Many ML methods have been explored to address the challenge of slow particle tracking at high pileup. Reinforcement learning is presented as a novel method...
Stylianos Tzelepis (National Technical Univ. of Athens (GR)) - Poster
Deploying large CNNs on resource-constrained hardware such as FPGAs poses significant challenges, particularly in balancing high throughput with limited resources and power consumption. To address these challenges, hls4ml was leveraged to accelerate inference through a streaming architecture, in contrast to programmable engines with dedicated instruction sets commonly used to scale to...
Bhaskar Verma - Poster
In the emerging field of gravitational-wave (GW) astronomy, the data collected by ground-based GW detectors such as LIGO is key to understanding the universe. In addition to detector noise and potential astrophysical signals, detector data consists of various types of artifacts that hinder our ability to detect GW signals. These artifacts, known as glitches, are non-stationary,...
David Jiang (Univ. Illinois at Urbana Champaign (US)) - Poster
Pixel detectors are highly valuable for their precise measurement of charged particle trajectories. However, next-generation detectors will demand even smaller pixel sizes, resulting in extremely high data rates surpassing those at the HL-LHC. This necessitates a “smart” approach for processing incoming data, significantly reducing the data volume for a detector’s trigger system to select...
Poster
Time series data containing signal features of interest and various periodic and aperiodic noise sources are ubiquitous in HEP. Often, signal reconstruction methods depend on being able to model and reconstruct signals from noise that may be complex and from various sources such as instrumentation or certain backgrounds. We present an initial systematic study of generating the appropriate...
Yuan-Tang Chou (University of Washington (US)) - Poster
Tracking algorithms play a vital role in both online and offline event reconstruction in Large Hadron Collider (LHC) experiments; however, they are the most time-consuming component in the particle reconstruction chain. To reduce processing time, existing tracking algorithms have been adapted for use on massively parallel coprocessors such as GPUs. Nevertheless, fully utilizing the...