My trained model cannot achieve the same performance as the provided trained model #4
Comments
I mean training. Testing is OK. For the provided model I got But when I train by myself, the performance of the final model after 60000 iters is: So a gap exists. I use the default solver prototxt and batch size...
@pipipopo Got it. Try changing the hard_ratio from {1.0, 0.5, 0.2} to {0.5, 0.2, 0.1}. Also pay attention to the sampling method: you need to sample from the 2 big classes first and then sample from the small classes, as detailed in https://github.com/PkuRainBow/Hard-Aware-Deeply-Cascaded-Embedding_release/blob/master/src_code/sample_stanford_products.py
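The suggested sampling order can be sketched in Python. This is an illustrative sketch only, not the actual code from sample_stanford_products.py; the function name `sample_batch` and the `big_share` parameter are assumptions made for the example.

```python
import random
from collections import defaultdict

def sample_batch(labels, batch_size, num_big_classes=2, big_share=0.5):
    """Illustrative sketch: draw from the largest classes first,
    then fill the rest of the batch from the smaller classes."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)

    # Sort class ids by class size, largest first.
    classes = sorted(by_class, key=lambda c: len(by_class[c]), reverse=True)
    big, small = classes[:num_big_classes], classes[num_big_classes:]

    batch = []
    # First take a fixed share of the batch from the big classes...
    per_big = int(batch_size * big_share) // max(len(big), 1)
    for c in big:
        batch += random.sample(by_class[c], min(per_big, len(by_class[c])))
    # ...then fill the remainder from randomly chosen small classes.
    while len(batch) < batch_size and small:
        c = random.choice(small)
        batch.append(random.choice(by_class[c]))
    return batch[:batch_size]
```

With `big_share=0.5`, half of each batch comes from the two largest classes and the other half from the long tail, which roughly matches the "big classes first, then small classes" order described above.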
hi @PkuRainBow,
@pipipopo I trained on CARS and have the same question...
@zhengxiawu Please try the hard ratio settings {0.5, 0.2, 0.1}, which can mine better hard examples.
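The effect of the hard_ratio values can be sketched as follows. This is a simplified, hypothetical illustration (the function `select_hard_examples` is not from the repository): at each cascade level, only the fraction of pairs with the largest loss is kept, so smaller ratios such as {0.5, 0.2, 0.1} keep progressively harder examples than {1.0, 0.5, 0.2}.

```python
def select_hard_examples(losses, hard_ratios=(0.5, 0.2, 0.1)):
    """Sketch: for each cascade level, keep only the hardest
    fraction of pairs, i.e. those with the largest loss."""
    # Pair indices sorted by loss, hardest (largest loss) first.
    order = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    kept = []
    for ratio in hard_ratios:
        k = max(1, int(len(losses) * ratio))
        kept.append(order[:k])  # indices of the k hardest pairs
    return kept
```

With ratios {1.0, 0.5, 0.2} the first level trains on every pair, including many easy ones; shrinking the ratios to {0.5, 0.2, 0.1} discards the easiest half up front, which is the change suggested above.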
I used the default training settings in the code. For the Stanford Online Products dataset I got:
stanford online products mean recall@ 1 : 0.632077
stanford online products mean recall@ 10 : 0.785517
stanford online products mean recall@ 100 : 0.891618
stanford online products mean recall@ 1000 : 0.959900
How can I train a model with comparable performance?