yeazelm (Contributor) commented Aug 25, 2025

Issue number:

Closes #254

Description of changes:
Add the NVIDIA R580 driver kmod for the 6.12 kernel. This uses similar logic to the kmod-6.12-nvidia-r570 driver and provides all the same libraries, plus a few new ones from R580.

The easiest way to review this package is to compare it against the kmod-6.12-nvidia-r570 driver. A summary of the differences:

  • 0001-makefile-allow-to-use-any-kernel-arch.patch: the line offsets changed, but the patch is otherwise intact
  • NVIDIA hasn't released RPMs of Fabric Manager for R580, so this uses the tar.xz version (same resulting binary at the end; see the sketch after this list)
  • Two new topology files for B300 from Fabric Manager
  • libnvidia-egl-xcb and libnvidia-egl-xlib changed versions (still excluded)
  • libnvidia-nvvm70.so.4 was added
  • libnvidia-present was excluded; if we find containers need it, we can add it
  • libnvidia-sandboxutils is now provided on both arches, so it is excluded outside the arch %if block
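
One way to sanity-check the "same resulting binary" claim is to extract both archive formats and compare the Fabric Manager binaries. A minimal sketch, assuming both formats exist for the same driver version (VER is a placeholder); the RPM lays files out under usr/, while the tar.xz keeps bin/ and share/ at the archive root, which is what drives the spec changes below:

# Extract the RPM payload (files land under usr/)
mkdir fm-from-rpm
rpm2cpio nvidia-fabric-manager-${VER}-1.x86_64.rpm | cpio -idmV -D fm-from-rpm
# Extract the tar.xz (top-level dir with bin/ and share/)
tar xf fabricmanager-linux-x86_64-${VER}-archive.tar.xz
# Compare the resulting binaries
sha512sum fm-from-rpm/usr/bin/nv-fabricmanager \
          fabricmanager-linux-x86_64-${VER}-archive/bin/nv-fabricmanager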

Full git diff:

git diff --no-index kmod-6.12-nvidia-r570 kmod-6.12-nvidia-r580
diff --git a/kmod-6.12-nvidia-r570/0001-makefile-allow-to-use-any-kernel-arch.patch b/kmod-6.12-nvidia-r580/0001-makefile-allow-to-use-any-kernel-arch.patch
index 264ec23..1581b1d 100644
--- a/kmod-6.12-nvidia-r570/0001-makefile-allow-to-use-any-kernel-arch.patch
+++ b/kmod-6.12-nvidia-r580/0001-makefile-allow-to-use-any-kernel-arch.patch
@@ -19,10 +19,10 @@ Signed-off-by: Shikha Vyaghra <[email protected]>
  2 files changed, 20 deletions(-)

 diff --git a/kernel-open/Makefile b/kernel-open/Makefile
-index 72672c2..187f39e 100644
+index f7a8db6..725d474 100644
 --- a/kernel-open/Makefile
 +++ b/kernel-open/Makefile
-@@ -80,16 +80,6 @@ else
+@@ -104,16 +104,6 @@ else
      )
    endif

@@ -31,7 +31,7 @@ index 72672c2..187f39e 100644
 -  ifneq ($(filter $(ARCH),i386 x86_64),)
 -    KERNEL_ARCH = x86
 -  else
--    ifeq ($(filter $(ARCH),arm64 powerpc),)
+-    ifeq ($(filter $(ARCH),arm64 riscv),)
 -        $(error Unsupported architecture $(ARCH))
 -    endif
 -  endif
@@ -40,10 +40,10 @@ index 72672c2..187f39e 100644
    NV_KERNEL_MODULES := $(filter-out $(NV_EXCLUDE_KERNEL_MODULES), \
                                      $(NV_KERNEL_MODULES))
 diff --git a/kernel/Makefile b/kernel/Makefile
-index 72672c2..187f39e 100644
+index f7a8db6..725d474 100644
 --- a/kernel/Makefile
 +++ b/kernel/Makefile
-@@ -80,16 +80,6 @@ else
+@@ -104,16 +104,6 @@ else
      )
    endif

@@ -52,7 +52,7 @@ index 72672c2..187f39e 100644
 -  ifneq ($(filter $(ARCH),i386 x86_64),)
 -    KERNEL_ARCH = x86
 -  else
--    ifeq ($(filter $(ARCH),arm64 powerpc),)
+-    ifeq ($(filter $(ARCH),arm64 riscv),)
 -        $(error Unsupported architecture $(ARCH))
 -    endif
 -  endif
@@ -61,5 +61,4 @@ index 72672c2..187f39e 100644
    NV_KERNEL_MODULES := $(filter-out $(NV_EXCLUDE_KERNEL_MODULES), \
                                      $(NV_KERNEL_MODULES))
 --
-2.40.1
-
+2.49.0
diff --git a/kmod-6.12-nvidia-r570/Cargo.toml b/kmod-6.12-nvidia-r580/Cargo.toml
index e83edec..f173c77 100644
--- a/kmod-6.12-nvidia-r570/Cargo.toml
+++ b/kmod-6.12-nvidia-r580/Cargo.toml
@@ -1,5 +1,5 @@
 [package]
-name = "kmod-6_12-nvidia-r570"
+name = "kmod-6_12-nvidia-r580"
 version = "0.1.0"
 edition = "2021"
 publish = false
@@ -9,7 +9,7 @@ build = "../build.rs"
 path = "../packages.rs"

 [package.metadata.build-package]
-package-name = "kmod-6.12-nvidia-r570"
+package-name = "kmod-6.12-nvidia-r580"
 releases-url = "https://docs.nvidia.com/datacenter/tesla/"

 [[package.metadata.build-package.external-files]]
@@ -17,37 +17,37 @@ url = "https://s3.amazonaws.com/EULA/NVidiaEULAforAWS.pdf"
 sha512 = "e1926fe99afc3ab5b2f2744fcd53b4046465aefb2793e2e06c4a19455a3fde895e00af1415ff1a5804c32e6a2ed0657e475de63da6c23a0e9c59feeef52f3f58"

 [[package.metadata.build-package.external-files]]
-url = "https://us.download.nvidia.com/tesla/570.172.08/NVIDIA-Linux-x86_64-570.172.08.run"
-sha512 = "8000f31575392ca8a575879f36b6e3e0fdee14e63efb856b77035e5aa434a02de0fd4ff5472d01984cbc541d40656ed6b7b77c78d00f6e1bc4341864bad725c5"
+url = "https://us.download.nvidia.com/tesla/580.65.06/NVIDIA-Linux-x86_64-580.65.06.run"
+sha512 = "e9149873cc83c250f601be58ea919cbdc891773157587366d78f505ee1db96bf392bc5e689d39ce8fa339287699118897b8d6eba2b2a9caf163126a9bb2a6044"
 force-upstream = true

 [[package.metadata.build-package.external-files]]
-url = "https://us.download.nvidia.com/tesla/570.172.08/NVIDIA-Linux-aarch64-570.172.08.run"
-sha512 = "291012513c2b9bff94a0892248207734b1d12a13ff994036045fd159f60bf410508fd66873d78388d0e289ded1b76f8d0980219c289fa2ba99303f2cf872e9d6"
+url = "https://us.download.nvidia.com/tesla/580.65.06/NVIDIA-Linux-aarch64-580.65.06.run"
+sha512 = "c4f2902412e9f47006e50c7f687e8f3cfc4580877c945b5da35c9e3a00f5e72eba8b0aaf250ff51d382fcf611177c9115f72f23b7858a520f0a7e1b27354d3e6"
 force-upstream = true

 [[package.metadata.build-package.external-files]]
-url = "https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/nvidia-fabric-manager-570.172.08-1.x86_64.rpm"
-sha512 = "6cbc0d14c6de8a21aad9e247df6d9fbb425c38abe783218b10d1ff9fcf2736c0221e2817c0f9dd9ea2886f78e2182bcc2ad44a9884228a6abe86ed8a9d1b6940"
+url = "https://developer.download.nvidia.com/compute/nvidia-driver/redist/fabricmanager/linux-x86_64/fabricmanager-linux-x86_64-580.65.06-archive.tar.xz"
+sha512 = "30cad75eb74a8ab5c928861a11a22e24b95778e62a9ac3acfec0d6daea27e2a537e8eedb3311ab0a322a114313976777439271fc905f3d725cad59031b687221"
 force-upstream = true

 [[package.metadata.build-package.external-files]]
-url = "https://developer.download.nvidia.com/compute/cuda/repos/rhel9/sbsa/nvidia-fabric-manager-570.172.08-1.aarch64.rpm"
-sha512 = "324557df51e56aca1337744554ff6bf78177f413bc0778cc5dd6a76bba9423d538afecd2326707b92af3a086aa1bf5535d7cbabebe614eeec75cbeede644dd9a"
+url = "https://developer.download.nvidia.com/compute/nvidia-driver/redist/fabricmanager/linux-sbsa/fabricmanager-linux-sbsa-580.65.06-archive.tar.xz"
+sha512 = "efbbc797ff288391b19d2ffa1ef9e06e1004c75a6ef7b03b4828c7af37ba6d9811195c4d475b8d5baa2441ae3a9f7cca604e73655b2ac49213d762a564f4be7d"
 force-upstream = true

 [[package.metadata.build-package.external-files]]
-url = "https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/nvidia-imex-570-570.172.08-1.x86_64.rpm"
-sha512 = "09210f102aafd0e4583751c20e0fb8adb2e7bbec1d71adaaf742b2ab4b7b1b676e79e7612749fe9a88d70b3a1cf93ea42a03c78f7ae32d8b2b7a757ff48cc9f0"
+url = "https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/nvidia-imex-580.65.06-1.x86_64.rpm"
+sha512 = "7cfcaf3a7752a8d8f3949948321949356ef31209a1dd35b41b164c416eaf5df6ce720bf63af2ff58bca6184e8ebadf36fbfa3c5978436ef000b845a22862a797"
 force-upstream = true

 [[package.metadata.build-package.external-files]]
-url = "https://developer.download.nvidia.com/compute/cuda/repos/rhel9/sbsa/nvidia-imex-570-570.172.08-1.aarch64.rpm"
-sha512 = "b3a5e5e838d4318d4d3c5267f0af7daf57c354a2d3422e10b9b6457a8c62f90a770c73101a0d2e0e995c0372b3bf5a2b96457aca957e56e842c3e746cb98a912"
+url = "https://developer.download.nvidia.com/compute/cuda/repos/rhel9/sbsa/nvidia-imex-580.65.06-1.aarch64.rpm"
+sha512 = "5468ac57e3827e83690f78f02ca0517b5b51e398a3eb30580e7e27a46277052761f6b13c4e9c70042f8ed8e81277ffb224c9771085d2463645199689e817ebc5"
 force-upstream = true

 [[package.metadata.build-package.external-files]]
-url = "https://raw.githubusercontent.com/NVIDIA/open-gpu-kernel-modules/570/COPYING"
+url = "https://raw.githubusercontent.com/NVIDIA/open-gpu-kernel-modules/580/COPYING"
 sha512 = "f9cee68cbb12095af4b4e92d01c210461789ef41c70b64efefd6719d0b88468b7a67a3629c432d4d9304c730b5d1a942228a5bcc74a03ab1c411c77c758cd938"
 force-upstream = true

diff --git a/kmod-6.12-nvidia-r570/kmod-6.12-nvidia-r570.spec b/kmod-6.12-nvidia-r580/kmod-6.12-nvidia-r580.spec
similarity index 96%
rename from kmod-6.12-nvidia-r570/kmod-6.12-nvidia-r570.spec
rename to kmod-6.12-nvidia-r580/kmod-6.12-nvidia-r580.spec
index 934c08b..461c15e 100644
--- a/kmod-6.12-nvidia-r570/kmod-6.12-nvidia-r570.spec
+++ b/kmod-6.12-nvidia-r580/kmod-6.12-nvidia-r580.spec
@@ -1,6 +1,6 @@
-%global tesla_major 570
-%global tesla_minor 172
-%global tesla_patch 08
+%global tesla_major 580
+%global tesla_minor 65
+%global tesla_patch 06
 %global tesla_ver %{tesla_major}.%{tesla_minor}.%{tesla_patch}
 %if "%{?_cross_arch}" == "aarch64"
 %global nvidia_arch sbsa
@@ -36,8 +36,8 @@ Source2: NVidiaEULAforAWS.pdf
 Source3: COPYING

 # fabricmanager for NVSwitch
-Source10: https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/nvidia-fabric-manager-%{tesla_ver}-1.x86_64.rpm
-Source11: https://developer.download.nvidia.com/compute/cuda/repos/rhel9/sbsa/nvidia-fabric-manager-%{tesla_ver}-1.aarch64.rpm
+Source10: https://developer.download.nvidia.com/compute/nvidia-driver/redist/fabricmanager/linux-x86_64/fabricmanager-linux-x86_64-580.65.06-archive.tar.xz
+Source11: https://developer.download.nvidia.com/compute/nvidia-driver/redist/fabricmanager/linux-sbsa/fabricmanager-linux-sbsa-580.65.06-archive.tar.xz

 # IMEX for GB200
 Source20: https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/nvidia-imex-%{tesla_ver}-1.x86_64.rpm
@@ -145,10 +145,9 @@ pushd NVIDIA-Linux-%{_cross_arch}-%{tesla_ver}
 cp -r kernel-open grid
 popd

-# Extract fabricmanager from the rpm via cpio rather than `%%setup` since the
+# Extract fabricmanager from the tarfile rather than `%%setup` since the
 # correct source is architecture-dependent.
-mkdir fabricmanager-linux-%{nvidia_arch}-%{tesla_ver}-archive
-rpm2cpio %{_sourcedir}/nvidia-fabric-manager-%{tesla_ver}-1.%{_cross_arch}.rpm | cpio -idmV -D fabricmanager-linux-%{nvidia_arch}-%{tesla_ver}-archive
+tar xf %{_sourcedir}/fabricmanager-linux-%{nvidia_arch}-%{tesla_ver}-archive.tar.xz

 # Add the license.
 install -p -m 0644 %{S:2} %{S:3} .
@@ -156,7 +155,7 @@ install -p -m 0644 %{S:2} %{S:3} .
 # Extract imex from the rpm via cpio rather than `%%setup` since the
 # correct source is architecture-dependent.
 mkdir imex-%{nvidia_arch}-%{tesla_ver}-archive
-rpm2cpio %{_sourcedir}/nvidia-imex-%{tesla_major}-%{tesla_ver}-1.%{_cross_arch}.rpm | cpio -idmV -D imex-%{nvidia_arch}-%{tesla_ver}-archive
+rpm2cpio %{_sourcedir}/nvidia-imex-%{tesla_ver}-1.%{_cross_arch}.rpm | cpio -idmV -D imex-%{nvidia_arch}-%{tesla_ver}-archive

 # This recipe was based in the NVIDIA yum/dnf specs:
 # https://github.com/NVIDIA/yum-packaging-precompiled-kmod
@@ -450,13 +449,13 @@ popd

 # Begin NVIDIA fabric manager binaries and topologies
 pushd fabricmanager-linux-%{nvidia_arch}-%{tesla_ver}-archive
-install -p -m 0755 usr/bin/nv-fabricmanager %{buildroot}%{_cross_bindir}
-install -p -m 0755 usr/bin/nvswitch-audit %{buildroot}%{_cross_bindir}
+install -p -m 0755 bin/nv-fabricmanager %{buildroot}%{_cross_bindir}
+install -p -m 0755 bin/nvswitch-audit %{buildroot}%{_cross_bindir}
 ln -rs %{buildroot}%{_cross_bindir}/nv-fabricmanager %{buildroot}%{_cross_libexecdir}/nvidia/tesla/bin/nv-fabricmanager
 ln -rs %{buildroot}%{_cross_bindir}/nvswitch-audit %{buildroot}%{_cross_libexecdir}/nvidia/tesla/bin/nvswitch-audit

 install -d %{buildroot}%{_cross_datadir}/nvidia/tesla/nvswitch
-for t in usr/share/nvidia/nvswitch/*_topology ; do
+for t in share/nvidia/nvswitch/*_topology ; do
   install -p -m 0644 "${t}" %{buildroot}%{_cross_datadir}/nvidia/tesla/nvswitch
 done

@@ -482,7 +481,7 @@ popd

 %files tesla
 %license NVidiaEULAforAWS.pdf
-%license fabricmanager-linux-%{nvidia_arch}-%{tesla_ver}-archive/usr/share/doc/nvidia-fabricmanager/third-party-notices.txt
+%license fabricmanager-linux-%{nvidia_arch}-%{tesla_ver}-archive/third-party-notices.txt
 %dir %{_cross_datadir}/egl
 %dir %{_cross_datadir}/egl/egl_external_platform.d
 %dir %{_cross_datadir}/glvnd
@@ -538,6 +537,8 @@ popd
 %{_cross_datadir}/nvidia/tesla/nvswitch/gb200_nvl72r2_c2g4_topology
 %{_cross_datadir}/nvidia/tesla/nvswitch/gb200_nvl8r1_c2g4_etf_topology
 %{_cross_datadir}/nvidia/tesla/nvswitch/gb200_nvl8r1_c2g4_etf_nso_topology
+%{_cross_datadir}/nvidia/tesla/nvswitch/gb300_nvl72r1_c2g4_topology
+%{_cross_datadir}/nvidia/tesla/nvswitch/gb300_nvl72r2_c2g4_topology
 %{_cross_datadir}/nvidia/tesla/nvswitch/gh200_nvlink_32gpus_topology
 %{_cross_datadir}/nvidia/tesla/nvswitch/mgxh20_nvl16_topology

@@ -595,6 +596,7 @@ popd
 %{_cross_libdir}/nvidia/tesla/libnvidia-cfg.so.1
 %{_cross_libdir}/nvidia/tesla/libnvidia-nvvm.so.4
 %{_cross_libdir}/nvidia/tesla/libnvidia-nvvm.so.%{tesla_ver}
+%{_cross_libdir}/nvidia/tesla/libnvidia-nvvm70.so.4

 # Compute libs
 %{_cross_libdir}/nvidia/tesla/libcuda.so.%{tesla_ver}
@@ -696,15 +698,16 @@ popd
 %exclude %{_cross_libdir}/nvidia/tesla/libnvidia-egl-gbm.so.1.1.2
 %exclude %{_cross_libdir}/nvidia/tesla/libnvidia-egl-wayland.so.1.1.19
 %exclude %{_cross_libdir}/nvidia/tesla/libnvidia-egl-xcb.so.1
-%exclude %{_cross_libdir}/nvidia/tesla/libnvidia-egl-xcb.so.1.0.2
+%exclude %{_cross_libdir}/nvidia/tesla/libnvidia-egl-xcb.so.1.0.1
 %exclude %{_cross_libdir}/nvidia/tesla/libnvidia-egl-xlib.so.1
-%exclude %{_cross_libdir}/nvidia/tesla/libnvidia-egl-xlib.so.1.0.2
-%if "%{_cross_arch}" == "x86_64"
+%exclude %{_cross_libdir}/nvidia/tesla/libnvidia-egl-xlib.so.1.0.1
 %exclude %{_cross_libdir}/nvidia/tesla/libnvidia-sandboxutils.so.1
 %exclude %{_cross_libdir}/nvidia/tesla/libnvidia-sandboxutils.so.%{tesla_ver}
+%if "%{_cross_arch}" == "x86_64"
 %exclude %{_cross_libdir}/nvidia/tesla/libnvidia-vksc-core.so.1
 %exclude %{_cross_libdir}/nvidia/tesla/libnvidia-vksc-core.so.%{tesla_ver}
 %exclude %{_cross_libdir}/nvidia/tesla/libnvidia-wayland-client.so.%{tesla_ver}
+%exclude %{_cross_libdir}/nvidia/tesla/libnvidia-present.so.%{tesla_ver}
 %endif

 %files open-gpu

Testing done:
Built an image with R580 and ran NVIDIA smoke tests on g6.xlarge and p3dn.24xlarge (to confirm both the proprietary and open GPU drivers are working).

From the p3dn.24xlarge:

[root@admin]# cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module  580.65.06  Sun Jul 27 07:14:19 UTC 2025
GCC version:  gcc version 13.3.0 (Buildroot 2024.11.1)

From the g6.xlarge smoke test:

[root@gpu-tests-lvmxh /]# ./run.sh

=========================================
  Running sample UnifiedMemoryPerf
=========================================

MapSMtoCores for SM 8.9 is undefined.  Default to use 128 Cores/SM
MapSMtoArchName for SM 8.9 is undefined.  Default to use Ampere
GPU Device 0: "Ampere" with compute capability 8.9

Running ........................................................

Overall Time For matrixMultiplyPerf

Printing Average of 20 measurements in (ms)
Size_KB  UMhint UMhntAs  UMeasy   0Copy MemCopy CpAsync CpHpglk CpPglAs
4         0.238   0.317   0.359   0.012   0.031   0.024   0.031   0.023
16        0.272   0.327   0.623   0.025   0.041   0.035   0.048   0.046
64        0.340   0.390   0.976   0.092   0.090   0.083   0.081   0.068
256       0.611   0.633   1.435   0.499   0.291   0.271   0.249   0.247
1024      1.948   1.831   3.096   3.135   1.085   1.033   0.938   0.928
4096      6.645   6.268  11.305  22.294   4.229   4.175   4.052   4.036
16384    28.115  26.147  49.639 168.625  20.244  20.165  20.159  19.985

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

=========================================
  Running sample deviceQuery
=========================================

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA L4"
  CUDA Driver Version / Runtime Version          13.0 / 11.4
  CUDA Capability Major/Minor version number:    8.9
  Total amount of global memory:                 22563 MBytes (23659151360 bytes)
MapSMtoCores for SM 8.9 is undefined.  Default to use 128 Cores/SM
MapSMtoCores for SM 8.9 is undefined.  Default to use 128 Cores/SM
  (058) Multiprocessors, (128) CUDA Cores/MP:    7424 CUDA Cores
  GPU Max Clock rate:                            2040 MHz (2.04 GHz)
  Memory Clock rate:                             6251 Mhz
  Memory Bus Width:                              192-bit
  L2 Cache Size:                                 50331648 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        102400 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 49 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
....
> GPU device has 58 Multi-Processors, SM 8.9 compute capabilities

[VOTE Kernel Test 1/3]
        Running <<Vote.Any>> kernel1 ...
        OK

[VOTE Kernel Test 2/3]
        Running <<Vote.All>> kernel2 ...
        OK

[VOTE Kernel Test 3/3]
        Running <<Vote.Any>> kernel3 ...
        OK
  1 ---
  1 ---
        Shutting down...

=========================================
  Running sample vectorAdd
=========================================

[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done

=========================================
  Running sample warpAggregatedAtomicsCG
=========================================

MapSMtoCores for SM 8.9 is undefined.  Default to use 128 Cores/SM
MapSMtoArchName for SM 8.9 is undefined.  Default to use Ampere
GPU Device 0: "Ampere" with compute capability 8.9

CPU max matches GPU max

Warp Aggregated Atomics PASSED

Terms of contribution:

By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

yeazelm (Contributor) commented Aug 28, 2025

Updated from PR comments: this now uses the RPM for Fabric Manager, so it is even more similar to the R570 version.
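
For reference, the RPM-based extraction follows the same rpm2cpio pattern the spec already uses for IMEX; a minimal sketch, with the R580 RPM filename assumed to follow the R570 naming convention:

# Recreate the directory name the spec expects, then unpack the RPM payload into it
mkdir fabricmanager-linux-x86_64-580.65.06-archive
rpm2cpio nvidia-fabric-manager-580.65.06-1.x86_64.rpm | \
  cpio -idmV -D fabricmanager-linux-x86_64-580.65.06-archive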

bcressey (Contributor) left a comment:


FWIW, I compared the results of readelf -a ${lib} | rg NEEDED for all the /usr/lib/nvidia/tesla libraries between r570 and r580, and nothing new jumped out.
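
A rough reconstruction of that check (output filenames assumed; rg is ripgrep):

# Collect the NEEDED entries for every tesla library on the r580 image
for lib in /usr/lib/nvidia/tesla/*.so*; do
  echo "== ${lib}"
  readelf -a "${lib}" | rg NEEDED
done > needed-r580.txt
# Run the same loop on an r570-based image, then:
diff needed-r570.txt needed-r580.txt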

ginglis13 merged commit 1415935 into bottlerocket-os:develop on Sep 4, 2025
2 checks passed
