MSTD* is a computational neuroscience project aimed at modeling motion and depth processing in the primate visual cortex.

It uses spiking neural networks (SNNs) based on the leaky integrate-and-fire (LIF) and adaptive exponential integrate-and-fire (AdEx) neuron models.
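
A minimal sketch of how such a neuron layer can be simulated with Norse (this is an illustration, not the repository's code); Norse also ships an adaptive exponential variant with the same call pattern:

```python
import torch
from norse.torch import LIFCell

cell = LIFCell()        # LIF dynamics with Norse's default parameters
state = None            # membrane state is initialized on the first call
x = torch.rand(8, 100)  # input currents: batch of 8, 100 neurons

for _ in range(50):     # 50 simulation time steps
    spikes, state = cell(x, state)  # spikes is a 0/1 tensor per neuron
```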

**Stimuli:** The project includes artificial stimuli (bars moving up, down, left, and right) in the `ds_models` directory, and event-camera recordings (from the TUM-VIE and MVSEC datasets) in the `of_models` and `v_models` directories.
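
For intuition, a hypothetical helper (not part of the repository) that generates a binary movie of a bar sweeping in one of the four directions could look like this:

```python
import torch

def moving_bar(direction="right", size=32, steps=32):
    """Illustrative moving-bar stimulus: a bar sweeping across a
    size x size frame, advancing one pixel per time step."""
    frames = torch.zeros(steps, size, size)
    for t in range(steps):
        if direction == "right":
            frames[t, :, t % size] = 1.0
        elif direction == "left":
            frames[t, :, (size - 1 - t) % size] = 1.0
        elif direction == "down":
            frames[t, t % size, :] = 1.0
        elif direction == "up":
            frames[t, (size - 1 - t) % size, :] = 1.0
    return frames

bar = moving_bar("right")  # shape: (steps, size, size)
```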

**Learning:** The project employs spike-timing-dependent plasticity (STDP) and backpropagation to make neurons selective for motion properties.
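
As a rough illustration of STDP (a toy pair-based rule with illustrative learning rates and time constant, not the repository's actual update): a synapse is strengthened when a presynaptic spike shortly precedes a postsynaptic one, and weakened when the order is reversed.

```python
import torch

def stdp_step(w, pre_spikes, post_spikes, pre_trace, post_trace,
              a_plus=0.01, a_minus=0.012, tau=20.0):
    """Toy pair-based STDP update with exponential spike traces.
    w: (n_post, n_pre) weights; spikes/traces are 1-D per population."""
    pre_trace = pre_trace * (1 - 1 / tau) + pre_spikes
    post_trace = post_trace * (1 - 1 / tau) + post_spikes
    # outer products pair every presynaptic neuron with every postsynaptic one
    dw = (a_plus * torch.outer(post_spikes, pre_trace)
          - a_minus * torch.outer(post_trace, pre_spikes))
    return w + dw, pre_trace, post_trace
```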

**Software:** The models are implemented with the deep learning libraries PyTorch and Norse, which provide tools for constructing and simulating spiking neural networks.
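
A hedged sketch of the kind of convolutional SNN this combination enables; the layer sizes, 32×32 input resolution, and four-direction output are illustrative assumptions, not the repository's actual architecture:

```python
import torch
import torch.nn as nn
from norse.torch import LIFCell, SequentialState

model = SequentialState(
    nn.Conv2d(2, 8, kernel_size=3, padding=1),  # 2 event polarities in
    LIFCell(),                                  # spiking nonlinearity
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 4),                  # 4 motion directions out
    LIFCell(),
)

x = torch.rand(16, 10, 2, 32, 32)  # (time, batch, polarity, H, W)
state = None
counts = torch.zeros(10, 4)
for t in range(x.shape[0]):        # step the network through time
    spikes, state = model(x[t], state)
    counts += spikes               # accumulate output spikes per class
prediction = counts.argmax(dim=1)  # most active output unit wins
```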

**Hardware:** The models run on both CPU and GPU, using CUDA when available for better performance.
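
Device selection follows the standard PyTorch pattern:

```python
import torch

# Use a CUDA-capable GPU when present, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)  # `model` as in the sketch above
```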

\*MSTD stands for Medial Superior Temporal Dorsal.