This repository has been archived by the owner on Jan 5, 2024. It is now read-only.

added snn files #41

Draft · nitin-rathi wants to merge 8 commits into master

Conversation

nitin-rathi

No description provided.

@@ -1361,3 +1361,4 @@ fixres_resnext101_32x48d_wsl,http://openaccess.thecvf.com/content_ECCV_2018/html
year = {2019},
month = {jun},
}",0.863,
spiking-vgg16,TODO: paper link,TODO: paper bibtex,TODO: ImageNet top-1,TODO: ImageNet top-5
Member

Please add a reference to your paper, the BibTeX, and the ImageNet top-1 and top-5 accuracies.

Author

Paper is under review. Can we update this part later?

Member

yes absolutely, just leave it empty for now

return np.concatenate(images)

def get_activations(preprocessed_inputs, layer_names):
... # TODO
Member

This method is meant to output activations (i.e. firing rates) for preprocessed images for a given set of layer names. For standard VGG, this would output activations at the different blocks. I don't know exactly how your spiking network works, but is there a way for you to store spikes and their timing information in response to images?
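
For concreteness, here is a minimal sketch of how such a `get_activations` could collect firing rates with forward hooks. It assumes a PyTorch spiking model that advances one simulation timestep per forward call; the `snn` handle, the hook-based accumulation, and the timestep count are illustrative assumptions, not code from this PR:

```python
import torch


def snn_get_activations(snn, preprocessed_inputs, layer_names, num_timesteps=100):
    """Accumulate spikes from the named sub-modules and return firing rates.

    `snn`, the one-timestep-per-forward-call interface, and `num_timesteps` are
    assumptions; adapt to however the spiking VGG actually steps through time.
    """
    spike_counts = {name: torch.tensor(0.) for name in layer_names}

    def make_hook(name):
        def hook(module, inputs, output):
            # accumulate binary spike outputs over timesteps, keeping the batch dimension
            spike_counts[name] = spike_counts[name] + output.detach().float()
        return hook

    handles = [module.register_forward_hook(make_hook(name))
               for name, module in snn.named_modules() if name in layer_names]
    snn.eval()
    with torch.no_grad():
        images = torch.from_numpy(preprocessed_inputs)
        for _ in range(num_timesteps):
            snn(images)  # assumed: one simulation timestep per call
    for handle in handles:
        handle.remove()
    # spikes per timestep, i.e. a firing rate, per image and neuron
    return {name: (count / num_timesteps).numpy() for name, count in spike_counts.items()}


# the two-argument signature expected here can then be obtained with
# functools.partial(snn_get_activations, snn)
```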

Author

I have added a spike_count variable in the forward method of the model to keep track of the number of spikes. Do you think that will work?
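
For illustration, a toy integrate-and-fire layer that tallies a `spike_count` inside `forward`, roughly in the spirit described above; the threshold, the soft-reset rule, and the timestep count are placeholders, not the model's actual settings:

```python
import torch
import torch.nn as nn


class SpikingLinear(nn.Module):
    """Toy integrate-and-fire layer that counts spikes per neuron."""

    def __init__(self, in_features, out_features, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.threshold = threshold

    def forward(self, x, num_timesteps=100):
        membrane = torch.zeros(x.size(0), self.fc.out_features, device=x.device)
        spike_count = torch.zeros_like(membrane)
        for _ in range(num_timesteps):
            membrane = membrane + self.fc(x)                # integrate input current
            spikes = (membrane >= self.threshold).float()   # emit binary spikes
            membrane = membrane - spikes * self.threshold   # soft reset after a spike
            spike_count = spike_count + spikes              # per-neuron spike tally
        return spike_count  # spikes per neuron, summed over all timesteps
```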

Member

yeah that sounds good. As long as you can produce spike rates for a given millisecond time-bin, this should work
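
A small sketch of how accumulated spikes could be re-expressed as rates per millisecond time-bin, assuming a known simulation timestep `DT_MS` and a `(timesteps, batch, neurons)` spike array; both are assumptions here, not values from this PR:

```python
import numpy as np

DT_MS = 1.0  # assumed simulation timestep in milliseconds


def rates_per_time_bin(spikes, bin_ms=1.0, dt_ms=DT_MS):
    """spikes: array of shape (timesteps, batch, neurons) with 0/1 entries.
    Returns spikes per millisecond within each time bin."""
    steps_per_bin = max(int(round(bin_ms / dt_ms)), 1)
    num_bins = spikes.shape[0] // steps_per_bin
    trimmed = spikes[:num_bins * steps_per_bin]
    binned = trimmed.reshape(num_bins, steps_per_bin, *spikes.shape[1:])
    return binned.sum(axis=1) / bin_ms


# e.g. 100 timesteps at 1 ms each, 8 images, 512 neurons:
# rates = rates_per_time_bin(np.random.rand(100, 8, 512) < 0.1)
```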

def get_activations(preprocessed_inputs, layer_names):
... # TODO

model = ActivationsExtractorHelper(identifier='spiking-vgg',
Member

not sure what the right name is
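
One option is to reuse the `spiking-vgg16` name already used in the layers dictionary and the accuracy table elsewhere in this PR. As a hedged sketch of the completed call, assuming the `model_tools` import paths and the `get_activations`/`preprocessing` keyword arguments used by the other wrappers in this repo (adjust if they differ):

```python
from functools import partial

from model_tools.activations.core import ActivationsExtractorHelper        # assumed path
from model_tools.activations.pytorch import load_preprocess_images         # assumed path

# image_size=224 is an assumption; use whatever the spiking VGG was trained with
preprocessing = partial(load_preprocess_images, image_size=224)
model = ActivationsExtractorHelper(identifier='spiking-vgg16',
                                   get_activations=get_activations,
                                   preprocessing=preprocessing)
```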

@@ -171,6 +171,7 @@ def __init__(self):
'resnext101_32x32d_wsl': self._resnext101_layers(),
'resnext101_32x48d_wsl': self._resnext101_layers(),
'fixres_resnext101_32x48d_wsl': self._resnext101_layers(),
'spiking-vgg16': ['spike'], # TODO
Member

define the names of the layers that you would like to test. Usually, we use the last defined layer for behavioral heads (e.g. classifier)
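
As a purely hypothetical example of such a list, assuming the spiking VGG-16 keeps torchvision-style module names (the real entries must match whatever `get_activations` reports):

```python
# placeholder layer names: block outputs plus a late fully-connected layer
spiking_vgg16_layers = [f'features.{i}' for i in (4, 9, 16, 23, 30)] + ['classifier.3']
```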

Author

In my model, the final layer does not consist of spiking neurons; the membrane potential is simply accumulated there, and the cost function is defined on the accumulated potential. I am not sure whether you need the spike counts or whether the membrane potential will work for testing.

Member

The ideal layer here contains a general set of features that is broadly applicable, i.e. not just to ImageNet but also to other tasks, and categories should be linearly decodable from that layer. In standard VGG, for example, we use the last convolutional layer before the fully-connected (in our mind, ImageNet-specific) decoder.
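
A quick way to sanity-check linear decodability of a candidate layer is a cross-validated linear classifier over its firing rates. This is just an informal check, not part of the Brain-Score pipeline, and the `rates`/`labels` arrays below are random placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rates = np.random.rand(200, 512)             # placeholder: (n_images, n_neurons) firing rates
labels = np.random.randint(0, 10, size=200)  # placeholder: image category labels

# mean cross-validated accuracy of a linear readout on the layer's rates
print(cross_val_score(LogisticRegression(max_iter=1000), rates, labels, cv=5).mean())
```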

@@ -95,6 +95,8 @@ class TestImagenet:
('resnext101_32x48d_wsl', .854),
# FixRes: from https://arxiv.org/pdf/1906.06423.pdf, Table 8
('fixres_resnext101_32x48d_wsl', .863),
# spiking-vgg: from <TODO: paper source>
('spiking-vgg16', ...),
Member

please state the expected ImageNet top-1 accuracy and where it's coming from (e.g. a paper if there is one)

The variable spike_count counts the number of spikes for each neuron over all the time-steps.