.. _Torch_TensorRT_in_JetPack_6.1:

Overview
##################

JetPack 6.1
---------------------
NVIDIA JetPack 6.1 is the latest production release of JetPack 6.
This release includes:

 * CUDA 12.6
 * TensorRT 10.3
 * cuDNN 9.3
 * DLFW 24.09

You can find more details on JetPack 6.1 here:

 * https://docs.nvidia.com/jetson/jetpack/release-notes/index.html
 * https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html


Prerequisites
~~~~~~~~~~~~~~


Ensure your Jetson developer kit has been flashed with the latest JetPack 6.1. You can find more details on how to flash the Jetson board via SDK Manager here:

 * https://developer.nvidia.com/sdk-manager


Check the current JetPack version using:

.. code-block:: sh

    apt show nvidia-jetpack

Ensure you have installed the JetPack Dev components. This step is required if you need to build on the Jetson board.

You can install only the dev components that you require: for example, tensorrt-dev is the meta-package for all TensorRT development. Alternatively, install everything, as shown below (a sketch for installing only the TensorRT dev meta-package follows this block).

.. code-block:: sh

    # install all the nvidia-jetpack dev components
    sudo apt-get update
    sudo apt-get install nvidia-jetpack

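If you only need the TensorRT development packages, a minimal sketch is shown below; it assumes the tensorrt-dev meta-package mentioned above is available from the JetPack apt repositories.

.. code-block:: sh

    # install only the TensorRT development meta-package instead of all dev components
    sudo apt-get update
    sudo apt-get install tensorrt-dev
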
Ensure you have CUDA 12.6 installed (this should be installed automatically by nvidia-jetpack):

.. code-block:: sh

    # check the cuda version
    nvcc --version
    # if not installed or the version is not 12.6, install via the below cmd:
    sudo apt-get update
    sudo apt-get install cuda-toolkit-12-6

Ensure libcusparseLt.so exists at /usr/local/cuda/lib64/:

.. code-block:: sh

    # if it does not exist, download the archive and copy its contents to the CUDA directories
    wget https://developer.download.nvidia.com/compute/cusparselt/redist/libcusparse_lt/linux-sbsa/libcusparse_lt-linux-sbsa-0.5.2.1-archive.tar.xz
    tar xf libcusparse_lt-linux-sbsa-0.5.2.1-archive.tar.xz
    sudo cp -a libcusparse_lt-linux-sbsa-0.5.2.1-archive/include/* /usr/local/cuda/include/
    sudo cp -a libcusparse_lt-linux-sbsa-0.5.2.1-archive/lib/* /usr/local/cuda/lib64/

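To confirm the library is in place (a minimal check, assuming the default CUDA installation path):

.. code-block:: sh

    # list the cuSPARSELt library files; this should print at least one libcusparseLt.so entry
    ls /usr/local/cuda/lib64/libcusparseLt*
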

Build torch_tensorrt
~~~~~~~~~~~~~~~~~~~~~


Install bazel:

.. code-block:: sh

    wget -v https://github.com/bazelbuild/bazelisk/releases/download/v1.20.0/bazelisk-linux-arm64
    sudo mv bazelisk-linux-arm64 /usr/bin/bazel
    sudo chmod +x /usr/bin/bazel

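To verify the launcher is installed correctly (a quick check; bazelisk downloads the matching Bazel release on first use):

.. code-block:: sh

    # print the Bazel version resolved by bazelisk
    bazel --version
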
Install pip and the required Python packages:

 * https://pip.pypa.io/en/stable/installation/

.. code-block:: sh

    # install pip
    wget https://bootstrap.pypa.io/get-pip.py
    python get-pip.py

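To confirm pip is available for the Python interpreter you will build with (a quick check):

.. code-block:: sh

    # print the pip version and the interpreter it is bound to
    python -m pip --version
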
.. code-block:: sh

    # install pytorch from nvidia jetson distribution: https://developer.download.nvidia.com/compute/redist/jp/v61/pytorch
    python -m pip install torch https://developer.download.nvidia.com/compute/redist/jp/v61/pytorch/torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl

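To confirm the Jetson build of torch imports correctly and can see the GPU (torch.cuda.is_available() should print True on a correctly flashed board):

.. code-block:: sh

    # print the installed torch version and whether CUDA is visible
    python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"
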
.. code-block:: sh

    # install required python packages
    python -m pip install -r toolchains/jp_workspaces/requirements.txt

    # if you want to run the test cases, install the python packages required for testing
    python -m pip install -r toolchains/jp_workspaces/test_requirements.txt


Build and Install the torch_tensorrt wheel file


The torch_tensorrt version depends on the torch version, and the torch version supported by JetPack 6.1 comes from DLFW 24.08/24.09 (torch 2.5.0).

Please make sure to build the torch_tensorrt wheel file from the release/2.5 branch of the source
(TODO: lanl to update the branch name once the release/ngc branch is available).

.. code-block:: sh

    cuda_version=$(nvcc --version | grep Cuda | grep release | cut -d ',' -f 2 | sed -e 's/ release //g')
    export TORCH_INSTALL_PATH=$(python -c "import torch, os; print(os.path.dirname(torch.__file__))")
    export SITE_PACKAGE_PATH=${TORCH_INSTALL_PATH::-6}
    export CUDA_HOME=/usr/local/cuda-${cuda_version}/
    # replace the MODULE.bazel with the jetpack one
    cat toolchains/jp_workspaces/MODULE.bazel.tmpl | envsubst > MODULE.bazel
    # build and install torch_tensorrt wheel file
    python setup.py --use-cxx11-abi install --user

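After the build completes, a minimal smoke test (assuming the wheel installed into the active Python environment) is to import the package and print its version:

.. code-block:: sh

    # confirm torch_tensorrt imports and report its version
    python -c "import torch_tensorrt; print(torch_tensorrt.__version__)"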