Beyond Sight: Probing Alignment Between Image Models and Blind V1

Published: 02 Mar 2024, Last Modified: 05 May 2024
Venue: ICLR 2024 Workshop Re-Align (Contributed Talk)
License: CC BY 4.0
Track: short paper (up to 5 pages)
Keywords: NeuroAI, Representational Alignment, Visual Prosthesis, Blind, Human Neural Activity
TL;DR: We present a first-of-its-kind analysis of representational alignment with blind human visual cortex.
Abstract: Neural activity in the visual cortex of blind humans persists in the absence of visual stimuli. However, little is known about the preservation of visual representational capacity in these cortical regions, which could have significant implications for neural interfaces such as visual prostheses. In this work, we present a series of analyses of the shared representations between evoked neural activity in the primary visual cortex (V1) of a blind human with an intracortical visual prosthesis, and latent visual representations computed in deep neural networks (DNNs). In the absence of natural visual input, we examine two alternative forms of inducing neural activity: electrical stimulation and mental imagery. We first quantitatively demonstrate that latent DNN activations are aligned with neural activity measured in blind V1. On average, DNNs with higher ImageNet accuracy or higher sighted-primate neural predictivity are more predictive of blind V1 activity. We further probe blind V1 alignment in ResNet-50 and propose a proof-of-concept approach toward interpretability of blind V1 neurons. The results of these studies suggest the presence of some form of natural visual processing in blind V1 during electrically evoked visual perception and point to unique directions for mechanistically understanding and interfacing with blind V1.
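For readers who want a concrete picture of the kind of alignment analysis the abstract describes, the sketch below shows one common form of neural predictivity: ridge-regressing electrode responses onto DNN features and scoring with cross-validated Pearson correlation. The paper's exact feature layers, stimuli, and regression settings are not specified on this page, so every choice below (the ResNet-50 layer cut, the fold count, the alpha grid) is an illustrative assumption, not the authors' pipeline.

```python
# Minimal sketch of a DNN-to-V1 neural predictivity analysis (assumed setup,
# not the paper's exact method): regress measured electrode responses onto
# pretrained DNN features and report cross-validated Pearson r per electrode.
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold
from scipy.stats import pearsonr

# Pretrained ResNet-50 truncated before pooling/classification
# (an illustrative layer choice).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.eval()
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])

def extract_features(stimuli: torch.Tensor) -> np.ndarray:
    """stimuli: (n_stimuli, 3, 224, 224), already ImageNet-normalized."""
    with torch.no_grad():
        feats = feature_extractor(stimuli)      # (n, 2048, 7, 7)
    return feats.flatten(start_dim=1).numpy()   # (n, 2048 * 7 * 7)

def neural_predictivity(features: np.ndarray, responses: np.ndarray) -> float:
    """Cross-validated Pearson r between predicted and measured responses,
    averaged over electrodes. responses: (n_stimuli, n_electrodes)."""
    scores = []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(features):
        reg = RidgeCV(alphas=np.logspace(-2, 5, 8)).fit(features[train], responses[train])
        pred = reg.predict(features[test])
        scores.append(np.mean([pearsonr(pred[:, e], responses[test][:, e])[0]
                               for e in range(responses.shape[1])]))
    return float(np.mean(scores))
```

Given a stimulus set and the corresponding measured responses, `neural_predictivity(extract_features(stimuli), responses)` would yield a per-model alignment score of the sort that can be compared against ImageNet accuracy across architectures, as the abstract's trend suggests.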
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 69