Currently, we don't use mini-batch training due to constraints in the model. The model has to be modified to support mini-batch training over the DeepVelo graph; this is on our to-do list and we'll open a PR for it. Without mini-batching, we can't readily incorporate the torch DataParallel or DistributedDataParallel (DDP) classes (https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html).
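For reference, once mini-batch training is available, the usual DDP setup would look roughly like the sketch below. This is only an illustration of the standard PyTorch pattern, not the current DeepVelo API: `DeepVeloModel` and `make_graph_dataloader` are hypothetical placeholders for the model and a graph-aware, rank-sharded data loader.

```python
# Hypothetical sketch of wrapping the model in DDP once mini-batch training
# over the DeepVelo graph is supported. `DeepVeloModel` and
# `make_graph_dataloader` are placeholders, not the current DeepVelo API.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def train_ddp(local_rank: int, epochs: int = 10):
    # One process per GPU; torchrun sets RANK/WORLD_SIZE/LOCAL_RANK env vars.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    model = DeepVeloModel().to(local_rank)            # placeholder model
    model = DDP(model, device_ids=[local_rank])

    # Placeholder loader that shards graph mini-batches across ranks.
    loader = make_graph_dataloader(rank=dist.get_rank(),
                                   world_size=dist.get_world_size())
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for _ in range(epochs):
        for batch in loader:
            optimizer.zero_grad()
            loss = model(batch.to(local_rank))        # assumes forward returns a loss
            loss.backward()                           # DDP all-reduces gradients here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    # Launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
    train_ddp(int(os.environ["LOCAL_RANK"]))
```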
But even with a single GPU, we found that DeepVelo runs in under a minute for >10,000 cells, so it should scale quite well to larger datasets.
We did not explicitly run experiments comparing single- vs. multi-GPU usage, but I'm confident multi-GPU training will lower the training time once enabled; I'm just not sure by what factor.
In the meantime, please let us know if you find any limitations in training the model for your data.
Hi,
I'm interested in understanding whether it's feasible to run deepvelo on multiple GPUs rather than just a single GPU.
My particular questions are the following:
Best,
Miona Rankovic