I want to visualize activation stats of certain layers for some qualitative analysis. I found an old merged PR (#525) that looks somewhat like what I want to do.

I don't want signal propagation plots; I want the stats for normal inputs (i.e. images). Is there a canonical way to do this -- in particular, is it already implemented somewhere in timm?

It shouldn't be difficult to do with forward hooks (see the sketch after this post), but if I can save some work I'd be glad to 😄

Edit:
There is a similar application on Hugging Face Spaces for attention activations in particular. It seems that there was also some refactoring that added the Extraction class to timm.utils.

I suppose I can just copy-paste the Spaces code and modify it a bit for my own goals. Is there a specific reason why the Spaces app is not included in the main repository, or even integrated into the inference or validation script? Could that be a useful addition, or is it too much clutter?
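A minimal sketch of the forward-hook route mentioned above, assuming plain PyTorch and timm; the model, the hooked module names, and the choice of summary stats are illustrative, not an existing timm API:

```python
import torch
import timm

# Illustrative model; pick your own and inspect model.named_modules()
# to choose which layers to hook.
model = timm.create_model('resnet50', pretrained=True)  # downloads weights
model.eval()

stats = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Simple summary stats of this layer's output for the last batch;
        # accumulate instead if you want running stats over a dataset.
        out = output.detach().float()
        stats[name] = {
            'mean': out.mean().item(),
            'std': out.std().item(),
            'abs_max': out.abs().max().item(),
        }
    return hook

handles = []
for name, module in model.named_modules():
    # Illustrative filter: end of each residual stage in a timm ResNet.
    if name in ('layer1', 'layer2', 'layer3', 'layer4'):
        handles.append(module.register_forward_hook(make_hook(name)))

with torch.no_grad():
    _ = model(torch.randn(1, 3, 224, 224))  # replace with real images

for h in handles:
    h.remove()

print(stats)
```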
Replies: 1 comment

Attention maps are a fairly specific sort of activation, so I'm not sure they're a template for what you want. There are helpers in timm/models/_features.py and timm/models/_features_fx.py for accessing the more common (end-of-block) features from various models, or specific internal signals (by module or graph node name). Those are used by the 'features_only' model mode (at creation time). There's also a forward_intermediates() API that might return the feature maps you want.

As for visualizing/rendering them, that's very specific to the system and setup you're visualizing on, so the array of potential dependencies and variations didn't seem worth supporting in the core library; at the very least it would add a lot of maintenance burden.
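A short sketch of the two extraction routes named in the reply. features_only is a documented timm creation mode; forward_intermediates() is implemented by many but not all timm models (recent ViT variants among them), so whether your architecture supports it is an assumption to verify:

```python
import torch
import timm

x = torch.randn(1, 3, 224, 224)  # stand-in for a real image batch

# Route 1: features_only creation mode returns end-of-block feature maps.
fx = timm.create_model('resnet50', pretrained=True, features_only=True)
fx.eval()
with torch.no_grad():
    feats = fx(x)  # list of tensors, one per feature stage
for name, f in zip(fx.feature_info.module_name(), feats):
    print(name, tuple(f.shape), f.mean().item(), f.std().item())

# Route 2: forward_intermediates(), assumed to exist on this model.
vit = timm.create_model('vit_base_patch16_224', pretrained=True)
vit.eval()
with torch.no_grad():
    final, intermediates = vit.forward_intermediates(x)
for t in intermediates:  # per-block feature maps
    print(tuple(t.shape), t.float().mean().item(), t.float().std().item())
```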