Graph neural networks uncover structure and function underlying the activity of neural assemblies
Graph neural networks trained to predict observable dynamics can be used to decompose the temporal activity of complex heterogeneous systems into simple, interpretable representations. Here we apply this framework to simulated neural assemblies with thousands of neurons and demonstrate that it can jointly reveal the connectivity matrix, the neuron types, the signaling functions, and in some cases hidden external stimuli. In contrast to existing machine learning approaches such as recurrent neural networks and transformers, which emphasize predictive accuracy but offer limited interpretability, our method provides both reliable forecasts of neural activity and interpretable decomposition of the mechanisms governing large neural assemblies.
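The core idea of the framework can be sketched with a toy message-passing update: each neuron aggregates signals from its neighbors through the connectivity matrix and a signaling (transfer) function, then updates its own activity. This is a minimal illustrative sketch, not the paper's implementation; the rate model, the `tanh` transfer function, and the per-type time constants are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons, n_types = 100, 4                      # toy scale; the paper uses 1000+
types = rng.integers(0, n_types, n_neurons)      # hidden neuron types
W = rng.normal(0, 0.1, (n_neurons, n_neurons))   # connectivity matrix
x = rng.normal(0, 1.0, n_neurons)                # neural activity state

def phi(v):
    # signaling / transfer function (assumed tanh here; the GNN would learn it)
    return np.tanh(v)

def step(x, dt=0.01):
    # message passing: aggregate neighbor signals via W, then apply a
    # type-dependent node update (leaky integration with per-type time constant)
    tau = 1.0 + 0.1 * types          # assumed per-type time constants
    messages = W @ phi(x)            # edge function + sum aggregation
    dx = (-x + messages) / tau       # node update function
    return x + dt * dx

for _ in range(100):
    x = step(x)
```

In the trained GNN, `phi`, the node update, and the entries of `W` are learned from activity traces alone; the interpretable decomposition described above corresponds to reading these learned components back out.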
Figure 2: baseline, 1000 neurons with 4 types
Supplementary Figure 11: large scale, 8000 neurons
Supplementary Figure 12: many types, 32 neuron types
Supplementary Figure 13: heterogeneous transfer functions
Supplementary Figure 14: neuron-dependent transfer functions