1–5 Sept 2025
ETH Zurich
Europe/Zurich timezone

SparsePixels: Efficient Convolution for Sparse Data on FPGAs

4 Sept 2025, 14:50
20m
ETH Zurich

HIT E 51, Siemens Auditorium, ETH Zurich, Hönggerberg campus, 8093 Zurich, Switzerland
Standard Talk, Contributed talks

Speaker

Ho-Fung Tsoi (University of Pennsylvania)

Description

Inference of standard convolutional neural networks (CNNs) on FPGAs often incurs high latency and long initiation intervals due to the nested loops required to slide filters across the full input, especially when the input dimensions are large. In some datasets, however, meaningful signals occupy only a small fraction of the input, sometimes just a few percent of the total pixels or even less. In such cases, most computations are wasted on regions containing no useful information. In this work, we introduce SparsePixels, a framework for efficient convolution on sparsely populated input data for FPGAs operating under tight resource and low-latency constraints. Our approach implements a special class of CNNs in which only active pixels (non-zero or above a threshold) are retained and processed at runtime, while inactive pixels are discarded. We show that our framework can achieve performance comparable to that of standard CNNs on some target datasets while significantly reducing both latency and resource usage on FPGAs. Custom kernels for training and an HLS implementation are developed to support the sparse convolution operations.
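The sketch below is not the authors' code; it is a minimal NumPy illustration of the general idea the abstract describes: gather the coordinates of active pixels (above a threshold) once, then let only those pixels contribute to the output, so the work scales with the number of active pixels rather than the full input size. The function name, the 'same'-padding choice, and the single-channel, scatter-style formulation are assumptions for illustration only.

import numpy as np

def sparse_conv2d(x, kernel, threshold=0.0):
    # Scatter-style 2D cross-correlation (deep-learning convention) that
    # visits only active pixels; 'same' padding, single channel.
    # Pixels with |value| <= threshold are treated as zero and skipped.
    h, w = x.shape
    k = kernel.shape[0]
    pad = k // 2
    out = np.zeros((h, w), dtype=np.result_type(x, kernel))

    # Gather coordinates of active pixels once; inactive pixels are ignored.
    active = np.argwhere(np.abs(x) > threshold)

    # Each active pixel scatters its contribution into the k x k
    # neighbourhood of the output.
    for i, j in active:
        for di in range(k):
            for dj in range(k):
                oi, oj = i - di + pad, j - dj + pad
                if 0 <= oi < h and 0 <= oj < w:
                    out[oi, oj] += x[i, j] * kernel[di, dj]
    return out

When the sub-threshold pixels are exactly zero, this reproduces a dense same-padded convolution, but the loop count is proportional to the number of active pixels, which is the property the FPGA implementation exploits.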

Author

Ho-Fung Tsoi (University of Pennsylvania)

Co-authors

Dylan Sheldon Rankin (University of Pennsylvania (US)), Vladimir Loncar (CERN), Philip Coleman Harris (Massachusetts Inst. of Technology (US))

Presentation materials