append sims in batches #1666
Hi, I have a very heavy set of simulations that causes my GPU to go OOM when I append them all at once. I tried appending them in batches instead: train on one batch, load the next batch, train again, and so on. Roughly, the loop looks like the sketch below (device is cuda). The problem is that I still go OOM, because `inference.append_simulations` seems to keep all previous simulations, so memory usage accumulates across batches. Is it possible to purge the old simulations while keeping the network weights? Also, I set `trained_once=True` after training on the first batch, because the second batch is still sampled from the prior. Is that correct?
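A minimal sketch of the loop (`simulate_batch` and `n_batches` are hypothetical placeholders, and `SNPE` stands in for whichever sbi trainer is actually used):

```python
from sbi.inference import SNPE

inference = SNPE(prior=prior, device="cuda")

for _ in range(n_batches):
    # Draw parameters from the prior and simulate one chunk at a time.
    theta_chunk = prior.sample((1000,))
    x_chunk = simulate_batch(theta_chunk)  # hypothetical user simulator

    # Each call appends to the previously stored simulations,
    # so train() sees ALL chunks appended so far.
    inference.append_simulations(theta_chunk, x_chunk)
    density_estimator = inference.train()

posterior = inference.build_posterior(density_estimator)
```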
Replies: 1 comment
Hey! Thanks for reaching out. A quick fix might be to do:

```python
inference.append_simulations(theta_chunk, x_chunk, data_device="cpu")
```

By doing this, the data gets stored on the CPU, and only individual batches are copied to the GPU during training.
If this does not help, sbi also allows you to write your own training loop from scratch, with full flexibility for, e.g., dataloading. See here for a tutorial; a rough sketch is below.
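A rough sketch of such a custom loop (not the tutorial's exact code): all simulations stay on the CPU, and only one minibatch at a time is moved to the GPU. Here `density_estimator` is a placeholder for a trainable conditional density estimator, and the exact loss call depends on your sbi version:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

device = "cuda"

# theta, x: all simulations, kept as CPU tensors.
dataset = TensorDataset(theta, x)
loader = DataLoader(dataset, batch_size=256, shuffle=True, pin_memory=True)

optimizer = torch.optim.Adam(density_estimator.parameters(), lr=5e-4)

for epoch in range(20):
    for theta_batch, x_batch in loader:
        # Only this minibatch lives on the GPU.
        theta_batch = theta_batch.to(device, non_blocking=True)
        x_batch = x_batch.to(device, non_blocking=True)

        optimizer.zero_grad()
        # Placeholder loss: negative log-probability of theta given x.
        # The exact call depends on the estimator and sbi version.
        loss = -density_estimator.log_prob(theta_batch, context=x_batch).mean()
        loss.backward()
        optimizer.step()
```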
Hope that helps!

Michael