Description
Intracortical brain-computer interfaces (iBCIs) aim to decode behavior from neural population activity, helping restore motor function and communication for individuals with motor impairments. A central challenge in the long-term deployment of iBCIs is the nonstationarity of neural recordings: instability in electrode recordings alters the composition and tuning of the recorded neural population across sessions. Existing approaches attempt to address this issue through explicit alignment techniques; however, they rely on fixed neural identities and require test-time labels and parameter updates, limiting their ability to generalize across sessions and imposing a computational burden during deployment. In this work, we introduce SPINT, a Spatial Permutation-Invariant Neural Transformer framework for behavioral decoding that operates directly on unordered sets of neural units. Central to our approach is a novel context-dependent positional embedding scheme that infers unit-specific identities dynamically, enabling flexible generalization across recording sessions. Our model supports inference on neural populations of variable size and allows few-shot, gradient-free adaptation using a small amount of unlabeled data from a new session. To further promote robustness to population variability, we introduce dynamic channel dropout, a regularization method tailored to iBCI applications that simulates shifts in population composition during training. We evaluate our approach on three motor decoding tasks from the FALCON Benchmark, comprising multi-session datasets from human and non-human primates. Our approach demonstrates robust cross-session generalization, outperforming existing zero-shot and few-shot unsupervised baselines while eliminating the need for test-time alignment and fine-tuning. Our work is an initial step toward a flexible and practical framework for robust, scalable neural decoding in long-term iBCI applications.
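To make the dynamic channel dropout idea concrete, the sketch below randomly removes a subset of recorded units from each training batch so the decoder must cope with a changing population composition. This is a minimal illustration only: the function name, the (batch, time, units) tensor layout, and the drop probability are assumptions for this example, not SPINT's actual implementation.

```python
import torch

def dynamic_channel_dropout(x, drop_prob=0.3):
    """Simulate a shift in population composition by removing a random
    subset of recorded units from the input.

    x: float tensor of shape (batch, time, units) holding binned activity.
    Returns the reduced tensor and the indices of the kept units, so that
    unit identities can be re-inferred downstream.
    """
    num_units = x.shape[-1]
    keep_mask = torch.rand(num_units) > drop_prob      # independent keep/drop decision per unit
    if not keep_mask.any():                            # guard: never drop the entire population
        keep_mask[torch.randint(num_units, (1,))] = True
    kept_idx = keep_mask.nonzero(as_tuple=True)[0]
    return x[..., kept_idx], kept_idx

# Example: a batch of 8 trials, 100 time bins, 96 recorded units.
spikes = torch.rand(8, 100, 96)
reduced, kept = dynamic_channel_dropout(spikes, drop_prob=0.3)
print(reduced.shape, kept.shape)  # e.g. torch.Size([8, 100, 68]) torch.Size([68])
```

Subsampling units, rather than zeroing their values, fits the set-based view of the population described above, since a permutation-invariant decoder can accept inputs of variable size; whether SPINT removes units or masks them is a design detail beyond the scope of this summary.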