
The loss of S and C optimization is different #11

Open
DrewdropLife opened this issue Jul 8, 2019 · 1 comment
@DrewdropLife

The paper says that S and C are optimized with the same loss, but in the code a dice loss is added when optimizing S. When I try to remove it, the results become very poor. Why is that?

@YuanXue1993
Owner

The dice loss is used to help stabilize the adversarial training. You can try to "warm up" the network with regular training (dice only) for several epochs and then remove the dice loss to switch to pure adversarial training; you should then see that the adversarial training works properly. Alternatively, you can use the adversarial loss only from scratch, but in that case you may have to experiment with different learning rates or even a different architecture, as the training can be unstable.
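
Below is a minimal sketch of the warm-up schedule described above, not the repo's actual training script. It assumes a PyTorch setup; `segmentor`, `critic`, `dice_loss`, `multiscale_l1_loss`, `loader`, and the epoch counts are placeholder names standing in for whatever the repo defines:

```python
import torch

WARMUP_EPOCHS = 5    # assumption: number of dice-only warm-up epochs
TOTAL_EPOCHS = 100   # assumption: total training length

opt_s = torch.optim.Adam(segmentor.parameters(), lr=2e-4)  # segmentor S (placeholder model)
opt_c = torch.optim.Adam(critic.parameters(), lr=2e-4)     # critic C (placeholder model)

for epoch in range(TOTAL_EPOCHS):
    warmup = epoch < WARMUP_EPOCHS
    for image, mask in loader:  # `loader` is an assumed DataLoader of (image, mask) pairs
        pred = segmentor(image)

        # --- update S ---
        opt_s.zero_grad()
        if warmup:
            # regular training: dice loss only, stabilizes S before the adversarial phase
            loss_s = dice_loss(pred, mask)
        else:
            # pure adversarial training: S minimizes the multi-scale L1 distance
            # between critic features of the predicted and ground-truth masks
            loss_s = multiscale_l1_loss(critic(image, pred), critic(image, mask))
        loss_s.backward()
        opt_s.step()

        # --- update C (only once the adversarial phase starts) ---
        if not warmup:
            opt_c.zero_grad()
            # C maximizes the same distance, hence the negated loss;
            # detach() keeps the critic step from backpropagating into S
            loss_c = -multiscale_l1_loss(critic(image, pred.detach()),
                                         critic(image, mask))
            loss_c.backward()
            opt_c.step()
```

The key design point is simply the `warmup` flag: during the first few epochs only the dice term drives S, and afterwards both networks are updated with the adversarial objective alone.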
