multi-GPU training #367
camillebrianceau started this conversation in Ideas

Training a neural network can pose problems in terms of training time or memory (the model may be too big for a single GPU). With multi-GPU distribution, we can:

- replicate the model on several GPUs and split each batch between them, to reduce training time;
- split the model itself across several GPUs when it does not fit on one.

To do so, PyTorch Lightning can be used, but it would require adding a dependency to ClinicaDL: is it necessary?
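For reference, a minimal sketch of what the first, data-parallel case looks like with plain PyTorch (`DistributedDataParallel`); the toy model, dummy batch and `torchrun` launch line are assumptions for illustration, not ClinicaDL code:

```python
# Minimal single-node DDP sketch with plain PyTorch; the toy model,
# data and launch command are illustrative, not ClinicaDL code.
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; each process holds a full replica.
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
    model = DDP(model.to(local_rank), device_ids=[local_rank])

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    # Dummy batch; a real loader would use DistributedSampler to shard data.
    x = torch.randn(8, 32, device=local_rank)
    y = torch.randint(0, 2, (8,), device=local_rank)

    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()  # gradients are averaged across GPUs during backward
    optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=2 ddp_sketch.py
```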
Replies: 1 comment
-
If I understand correctly, the last mentioned approach is usually called pipeline parallelism (PyTorch support for it still seems limited at the moment). We need to define which of these approaches would be better for ClinicaDL, given its constraints.
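To make the idea concrete, here is a naive model split across two GPUs, the simplest relative of pipeline parallelism; `TwoStageNet` is a made-up example, and a real pipeline would additionally cut each batch into micro-batches so both devices compute concurrently:

```python
# Naive model split across two GPUs: the simplest relative of pipeline
# parallelism. TwoStageNet is a made-up example, not ClinicaDL code.
import torch
import torch.nn as nn


class TwoStageNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU()).to("cuda:0")
        self.stage2 = nn.Linear(64, 2).to("cuda:1")

    def forward(self, x):
        h = self.stage1(x.to("cuda:0"))
        # Only activations cross the device boundary; each stage's
        # parameters stay on their own GPU.
        return self.stage2(h.to("cuda:1"))


model = TwoStageNet()
print(model(torch.randn(8, 32)).shape)  # torch.Size([8, 2]), on cuda:1
```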
I also think we need to carefully evaluate the pros and cons of introducing a new dependency such as PL. Using native PyTorch tools integrated into our MAPS class would probably be enough; for comparison, the Lightning route is sketched below.
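With Lightning, multi-GPU training becomes Trainer configuration rather than explicit process-group code, which is the other side of the trade-off. `LitModel` is a generic placeholder, and the Trainer arguments follow recent Lightning releases (they may differ in older versions):

```python
# Sketch of the PyTorch Lightning alternative: DDP via Trainer flags.
# LitModel is a generic placeholder, not ClinicaDL code.
import pytorch_lightning as pl
import torch
import torch.nn as nn
import torch.nn.functional as F


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))

    def training_step(self, batch, batch_idx):
        x, y = batch
        return F.cross_entropy(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-4)


# Lightning handles process launching, sampler sharding and gradient
# synchronization; the cost is the extra dependency discussed above.
trainer = pl.Trainer(accelerator="gpu", devices=2, strategy="ddp", max_epochs=1)
# trainer.fit(LitModel(), train_dataloaders=...)  # dataloader omitted
```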