Colvar with Torch CV #848
cchapellier asked this question in Q&A · Unanswered · 1 comment, 1 reply
-
Hi Charlotte! Just to clarify what you are doing now: you are running eABF, but not with Colvars, so I assume using PLUMED? I see no a priori reason to enable OpenMPI. You should keep a couple of OpenMP threads for Colvars; 8 threads should be plenty if all the force calculations are offloaded to the GPU. I recommend you run some benchmarks using Colvars; then it will be easier to discuss.
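As a starting point for such benchmarks, a thread-MPI run with full GPU offload might look like the sketch below. `topol.tpr` is a placeholder name, and the Colvars setup is assumed to be part of the run input; if mdrun rejects `-update gpu` while Colvars is active, drop that flag.

```shell
# Short benchmark: one thread-MPI rank, 8 OpenMP threads, force terms on the GPU.
# topol.tpr is a placeholder; -resetstep discards start-up cost from the timing.
gmx mdrun -deffnm topol \
    -ntmpi 1 -ntomp 8 \
    -nb gpu -pme gpu -bonded gpu -update gpu \
    -nsteps 50000 -resetstep 25000
```

Comparing the ns/day reported at the end of the log with and without Colvars enabled, at a few `-ntomp` values, should show quickly whether the bias is the bottleneck.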
-
Hi all,
I would like to run an eABF biasing simulation with GROMACS 2025.2 patched with Torch and Colvars. The end goal is a free energy landscape, converged within a couple of days if possible. So far it takes me more than a week at a speed of 90 ns/day, and my simulations get systematically interrupted. (I am not using Colvars yet, though, and was hoping I might be able to go faster with it.) I am looking for feedback from past experience so that I can set up my environment and simulation parameters for the best performance.
My environment has been built with the following Dockerfile:
I have access to L4-type GPUs. I didn't install OpenMPI because in the past (when I was not using Colvars), thread-MPI with 8 OpenMP threads was the best combination for performance.
When I switch to Colvars, do you recommend I use OpenMPI? Do I need to allocate part of the computation to the GPUs and part to the CPUs?
Thank you for your help and feedback!
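For reference, an eABF bias in Colvars is an ABF bias applied to a colvar that has an extended Lagrangian enabled. A minimal config sketch could look like the following; the colvar name, atom numbers, boundaries, and widths are placeholders to adapt to the actual system:

```
colvar {
    name d
    width 0.1
    lowerBoundary 0.0
    upperBoundary 3.0
    # The extended Lagrangian turns plain ABF into eABF
    extendedLagrangian on
    distance {
        group1 { atomNumbers 1 2 3 }
        group2 { atomNumbers 100 101 102 }
    }
}

abf {
    colvars d
    fullSamples 500
}
```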