The paper says that S and C are optimized with the same loss, but in the code a dice loss is also added to the objective for S. When I remove it, the results become very poor. Why is that?
The dice loss is there to stabilize the adversarial training. You can "warm up" the network with regular training (dice loss only) for several epochs, then drop the dice term and switch to pure adversarial training; at that point the adversarial training should work properly. Alternatively, you can train with the adversarial loss alone from scratch, but you may then need to experiment with different learning rates or even different architectures, since that training can be unstable.
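One way to realize the warm-up schedule described above is a per-epoch weight on the two loss terms. This is a minimal sketch, not the repository's actual code; the function names, the `warmup_epochs` value, and the unit weights are all illustrative assumptions.

```python
def loss_weights(epoch, warmup_epochs=10):
    """Return (dice_weight, adversarial_weight) for the segmenter S.

    During warm-up, only the dice loss trains S (stabilizing it);
    afterwards the dice term is removed and only the adversarial
    loss from the critic C remains. `warmup_epochs` is illustrative.
    """
    if epoch < warmup_epochs:
        return 1.0, 0.0  # dice-only warm-up
    return 0.0, 1.0      # pure adversarial training


def segmenter_loss(dice_loss, adv_loss, epoch, warmup_epochs=10):
    """Combine the two scalar loss terms with the scheduled weights."""
    w_dice, w_adv = loss_weights(epoch, warmup_epochs)
    return w_dice * dice_loss + w_adv * adv_loss
```

With this schedule, `segmenter_loss(d, a, epoch=0)` returns just the dice term, while from epoch 10 onward it returns just the adversarial term, which mirrors the "warm up, then remove dice" recipe.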