To build the TensorRT-OSS components, you will first need the following software.
Else download and extract the TensorRT GA build from [NVIDIA Developer Zone](https://developer.nvidia.com) with the direct links below:
- [TensorRT 10.13.2.6 for CUDA 13.0, Linux x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.13.2/tars/TensorRT-10.13.2.6.Linux.x86_64-gnu.cuda-13.0.tar.gz)
- [TensorRT 10.13.2.6 for CUDA 12.9, Linux x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.13.2/tars/TensorRT-10.13.2.6.Linux.x86_64-gnu.cuda-12.9.tar.gz)
- [TensorRT 10.13.2.6 for CUDA 13.0, Windows x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.13.2/zip/TensorRT-10.13.2.6.Windows.win10.cuda-13.0.zip)
- [TensorRT 10.13.2.6 for CUDA 12.9, Windows x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.13.2/zip/TensorRT-10.13.2.6.Windows.win10.cuda-12.9.zip)
**Example: Ubuntu 22.04 on x86-64 with cuda-13.0**
```bash
cd ~/Downloads
tar -xvzf TensorRT-10.13.2.6.Linux.x86_64-gnu.cuda-13.0.tar.gz
export TRT_LIBPATH=`pwd`/TensorRT-10.13.2.6
```
> NOTE: The default CUDA version used by CMake is 13.0. To override this, for example to 12.9, append `-DCUDA_VERSION=12.9` to the cmake command.
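As a side note, the directory that `TRT_LIBPATH` must point to is simply the tarball name with the platform/CUDA suffix removed. A minimal sketch of that relationship (the `TARBALL` and `TRT_DIR` variables are hypothetical helpers, not part of the build):

```shell
# Derive the extracted directory name from the tarball filename above.
TARBALL=TensorRT-10.13.2.6.Linux.x86_64-gnu.cuda-13.0.tar.gz
TRT_DIR=${TARBALL%%.Linux*}   # strip the ".Linux...tar.gz" suffix
echo "$TRT_DIR"               # the directory tar creates on extraction
```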
- Required CMake build arguments are:
  - `TRT_LIB_DIR`: Path to the TensorRT installation directory containing libraries.
  - `TRT_OUT_DIR`: Output directory where generated build artifacts will be copied.
- Optional CMake build arguments:
  - `CMAKE_BUILD_TYPE`: Specify whether the generated binaries are for release or debug (contain debug symbols). Values consist of [`Release`] | `Debug`.
  - `CUDA_VERSION`: The version of CUDA to target, for example [`12.9.9`].
  - `CUDNN_VERSION`: The version of cuDNN to target, for example [`8.9`].
  - `PROTOBUF_VERSION`: The version of Protobuf to use, for example [`3.20.1`]. Note: Changing this will not configure CMake to use a system version of Protobuf; it will configure CMake to download and try building that version.
  - `CMAKE_TOOLCHAIN_FILE`: The path to a toolchain file for cross compilation.
  - `BUILD_PARSERS`: Specify whether the parsers should be built, for example [`ON`] | `OFF`. If turned OFF, CMake will try to find precompiled versions of the parser libraries to use in compiling samples: first in `${TRT_LIB_DIR}`, then on the system. If the build type is Debug, it will prefer debug builds of the libraries over release versions if available.
  - `BUILD_PLUGINS`: Specify whether the plugins should be built, for example [`ON`] | `OFF`. If turned OFF, CMake will try to find a precompiled version of the plugin library to use in compiling samples: first in `${TRT_LIB_DIR}`, then on the system. If the build type is Debug, it will prefer debug builds of the libraries over release versions if available.
  - `BUILD_SAMPLES`: Specify whether the samples should be built, for example [`ON`] | `OFF`.
  - `GPU_ARCHS`: GPU (SM) architectures to target. By default we generate CUDA code for all major SMs. Specific SM versions can be specified here as a quoted, space-separated list to reduce compilation time and binary size. A table of compute capabilities of NVIDIA GPUs can be found [here](https://developer.nvidia.com/cuda-gpus). Examples:
    - NVIDIA A100: `-DGPU_ARCHS="80"`
    - RTX 50 series: `-DGPU_ARCHS="120"`
    - Multiple SMs: `-DGPU_ARCHS="80 120"`
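Putting these arguments together, a configure-and-build invocation might look like the following. This is a sketch under stated assumptions: the TensorRT tarball was extracted and `TRT_LIBPATH` exported as in the example above, the TensorRT-OSS sources are in the current directory, and the `out` output directory name and `GPU_ARCHS` values are arbitrary illustrative choices.

```shell
# Out-of-source build; TRT_LIBPATH points at the extracted TensorRT GA tree.
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH \
         -DTRT_OUT_DIR=`pwd`/out \
         -DCMAKE_BUILD_TYPE=Release \
         -DCUDA_VERSION=12.9 \
         -DGPU_ARCHS="80 120"
make -j$(nproc)
```

Omitting `-DCUDA_VERSION` uses the 13.0 default noted above; trimming `GPU_ARCHS` to only the SMs you deploy on is the main lever for reducing build time.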