Does llama.cpp support CUDA through HIP? #10892
MarioIshac started this conversation in General
I am already running llama.cpp using HIP for ROCm and it is great, but I had the above question out of curiosity. If HIP is a thin wrapper over CUDA on CUDA machines, does that mean that following this AMD guide would produce a llama-server artifact that would be GPU accelerated on an Nvidia machine?

Replies: 1 comment

- Sure, that should work; I don't think anyone has tried it, however.
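For context on why this is plausible: HIP is designed as a portability layer, and on an Nvidia machine hipcc (with `HIP_PLATFORM=nvidia`, assuming the CUDA toolkit and HIP's NVIDIA platform headers are installed) lowers HIP API calls to their CUDA equivalents through thin inline wrappers, producing an ordinary CUDA binary. A minimal standalone sketch of that behavior (a hypothetical illustration, not llama.cpp code; the kernel and names are made up):

```cpp
// Hypothetical example showing HIP's thin-wrapper design, built with
// `hipcc scale.cpp`:
//   - on AMD, the calls go to the ROCm runtime;
//   - with HIP_PLATFORM=nvidia, hipMalloc/hipMemcpy are inline wrappers
//     around cudaMalloc/cudaMemcpy and the kernel is compiled for CUDA.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1024;
    std::vector<float> h(n, 1.0f);
    float *d = nullptr;

    hipMalloc(&d, n * sizeof(float));
    hipMemcpy(d, h.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Portable launch macro; maps to the CUDA <<<...>>> launch on Nvidia.
    hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0,
                       d, 2.0f, n);

    hipMemcpy(h.data(), d, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(d);

    printf("h[0] = %.1f\n", h[0]); // expect 2.0 on either backend
    return 0;
}
```

Whether llama.cpp's HIP build scripts drive this compilation path cleanly end to end is exactly the untested part the reply above alludes to.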