When I run the container dustynv/transformers:nvgpt-r35.3.1 and try the command nvidia-smi, I get this error:
NVIDIA-SMI couldn't find libnvidia-ml.so library in your system. Please make sure that the NVIDIA Display Driver is properly installed and present in your system.
Please also try adding directory that contains libnvidia-ml.so to your system PATH.
As a result, when I try torch.cuda.init() I also get:
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
The nvidia-smi command runs fine on my host system, so what could be going on? Is there a different version of this container I can try?
I don't get these errors when I run the container dustynv/l4t-pytorch:r36.4.0: there, nvidia-smi works and torch can access CUDA.
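As a first diagnostic, here is a minimal sketch (standard library only, nothing assumed beyond Python itself) that checks whether the NVML library is visible to the dynamic loader inside the container, which is what nvidia-smi is complaining about:

```python
# Check whether libnvidia-ml.so is visible to the dynamic loader.
# On Jetson/L4T images the driver libraries are normally mounted
# into the container by the NVIDIA container runtime, so "not found"
# inside the container usually points at how the container was
# started rather than at the host driver install.
import ctypes.util

lib = ctypes.util.find_library("nvidia-ml")
if lib is None:
    print("libnvidia-ml.so not found on the loader path")
else:
    print(f"found NVML library: {lib}")
```

Running this on the host and then inside the container shows whether the driver library is being mounted through to the container at all.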
PhilipAmadasun changed the title from "dustynv/transformers:nvgpt-r35.3.1 can't access drivers?" to "dustynv/transformers:nvgpt-r35.3.1 can't access nvidia drivers?" on Jan 26, 2025.