
## Note on fork

Tried to tweak the code to work on the DeepFashion In-Shop dataset, on which results were reported in the original paper.

Note: the tweaks are unfinished; the numbers from the original paper have not yet been reproduced.

# AttentionBasedEmbeddingForMetricLearning

PyTorch implementation of the paper *Attention-based Ensemble for Deep Metric Learning*.

Major difference from the paper: the attention maps are not followed by a sigmoid activation; min-max normalization is used instead.
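The min-max alternative can be sketched as follows. This is an illustrative reconstruction, not the repo's exact code; the tensor layout `(batch, heads, H, W)` and the function name are assumptions.

```python
import torch

def minmax_normalize(att_maps: torch.Tensor) -> torch.Tensor:
    """Min-max normalize each attention map to [0, 1].

    att_maps: (B, M, H, W) -- batch, attention heads, spatial dims.
    Used here in place of the sigmoid from the paper.
    """
    b, m, h, w = att_maps.shape
    flat = att_maps.view(b, m, -1)
    mins = flat.min(dim=-1, keepdim=True).values
    maxs = flat.max(dim=-1, keepdim=True).values
    # small epsilon guards against a constant (zero-range) map
    flat = (flat - mins) / (maxs - mins + 1e-12)
    return flat.view(b, m, h, w)
```

Unlike a sigmoid, this guarantees each map spans the full [0, 1] range, so no head can collapse to uniformly small attention values.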

The weighted sampling module code is copied from suruoxi/DistanceWeightedSampling
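For context, the idea behind that module (distance-weighted negative sampling from *Sampling Matters in Deep Embedding Learning*, Wu et al. 2017) can be sketched as below. This is a simplified assumption-laden sketch, not the code from suruoxi/DistanceWeightedSampling; the function name, the per-anchor single-negative return, and the clamping details are illustrative choices.

```python
import torch

def distance_weighted_sample(embeddings: torch.Tensor,
                             labels: torch.Tensor,
                             cutoff: float = 0.5) -> torch.Tensor:
    """Sample one negative per anchor, weighted inversely to the
    density q(d) ~ d^(n-2) * (1 - d^2/4)^((n-3)/2) of pairwise
    distances on the unit sphere.

    embeddings: (N, D), assumed L2-normalized; labels: (N,).
    Returns: (N,) index of a sampled negative for each anchor.
    """
    n, d = embeddings.shape
    # pairwise distances, clamped below to avoid oversampling tiny gaps
    dist = torch.cdist(embeddings, embeddings).clamp(min=cutoff)
    # log(1 / q(d)); inner clamp keeps the log argument positive
    log_weights = -((d - 2.0) * dist.log()
                    + ((d - 3.0) / 2.0)
                    * torch.log((1.0 - 0.25 * dist.pow(2)).clamp(min=1e-8)))
    weights = torch.exp(log_weights - log_weights.max())
    # zero out same-label pairs (including the anchor itself)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    weights = weights * (~same).float()
    weights = weights / weights.sum(dim=1, keepdim=True)
    return torch.multinomial(weights, 1).squeeze(1)
```

The inverse-density weighting flattens the sampled distance distribution, so training sees informative negatives at all distance scales instead of mostly easy, far-away ones.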

Performance on Stanford Cars 196: 71.4% Recall@1, 86.9% Recall@4 (8 attention heads, each embedding of size 64).

TODO:

- Transform the attention maps with `att_maps = sign(att_maps) * sqrt(abs(att_maps))` before normalizing (motivated by tau-yihouxiang/WSDAN).

Results will be updated here if better validation performance is obtained.
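The signed-sqrt transform from the TODO above is a one-liner; a minimal sketch (the function name is an assumption):

```python
import torch

def signed_sqrt(att_maps: torch.Tensor) -> torch.Tensor:
    """sign(x) * sqrt(|x|): compresses large activations while
    preserving sign, as used in WSDAN-style attention normalization."""
    return torch.sign(att_maps) * torch.sqrt(torch.abs(att_maps))
```

Applying it before min-max normalization damps dominant peaks in a map, spreading attention over more of the object.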