Summary
My setup:
- Machine A: internet, no GPU (build/prepare)
- Machine B: GPU, no internet (runtime)
I can:
- use Pkg on A to install CUDA.jl, Reactant, Enzyme, etc.
- copy the depot/envs from A to B
- run CUDA.jl + Enzyme directly on B (CuArray works)
- run Reactant on CPU on B
But I cannot reliably get Reactant's GPU backend working on B. Calls
like Reactant.set_default_backend("gpu") fail with errors such as
"no GPU client found / no functional client". It seems some PJRT/CUDA
pieces are only downloaded/initialized when running Reactant on a machine
that already has a GPU.
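For reference, the offline workflow that already works for CUDA.jl and Enzyme looks roughly like this sketch (the paths, the project location, and the hostname "b" are placeholders; JULIA_DEPOT_PATH and JULIA_PKG_OFFLINE are standard Julia environment variables):

```shell
# On machine A (internet, no GPU): resolve and download everything
# into a dedicated depot so it can be copied wholesale.
export JULIA_DEPOT_PATH="$HOME/depot_for_B"
julia --project="$HOME/project" -e 'using Pkg; Pkg.instantiate()'

# Copy the depot and the project environment to machine B.
rsync -a "$HOME/depot_for_B" "$HOME/project" b:

# On machine B (GPU, no internet): point Julia at the copied depot
# and forbid Pkg from touching the network.
export JULIA_DEPOT_PATH="$HOME/depot_for_B"
export JULIA_PKG_OFFLINE=true
julia --project="$HOME/project" -e 'using CUDA; CUDA.versioninfo()'
```

This is exactly the pattern that breaks down for Reactant's GPU backend: the instantiate step on A apparently does not pull in everything the PJRT/CUDA client needs at runtime on B.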
Feature request
It would be very helpful if Reactant provided either:
- a helper like
  Reactant.prepare_gpu_backend(target="cuda", cuda_version="12.4")
  that I can run on A (even without a GPU) to download/materialize
  everything the CUDA/PJRT backend needs, or
- a documented sequence of steps/env vars that lets me:
  - fully prepare the GPU backend on A
  - copy the depot/envs to B
  - use Reactant's GPU backend on B completely offline
Even a clearly documented "you must run X once on a machine with a GPU,
then copy depot" workflow would already help a lot for offline / air‑gapped
environments.
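Concretely, the hoped-for preparation step on machine A might look like the following sketch. To be clear, prepare_gpu_backend does not exist today; it is the helper this issue is requesting, shown here only to illustrate the desired shape:

```shell
# On machine A (internet, no GPU) -- hypothetical: prepare_gpu_backend
# is the requested helper, not an existing Reactant function.
julia --project="$HOME/project" -e '
    using Reactant
    Reactant.prepare_gpu_backend(target="cuda", cuda_version="12.4")
'

# Copy the now-complete depot to machine B; afterwards, on B,
# Reactant.set_default_backend("gpu") should work fully offline.
rsync -a "$JULIA_DEPOT_PATH" b:depot_for_B/
```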