TensorRT 10.8 works as expected in the nvcr.io/nvidia/pytorch:25.01-py3 container. But how do I use TensorRT 10.8 on Jetson without the PyTorch container?
Due to some issues, TensorRT 10.8 packages are not available for Jetson. Jetson packages will be available again in the 10.9 release. Given that the original accuracy issue has been resolved in 10.8, I'm closing this issue.
I am trying to convert an open-clip model to TensorRT:
I converted the model and found an accuracy problem, so I ran the following command to compare TensorRT against ONNX Runtime:
polygraphy run model.onnx --trt --onnxrt > comparison_results.txt
The command's results:
comparison_results_r35.txt
comparison_results_r36.txt
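For reference, the pass/fail verdict in those result files comes from an elementwise tolerance check between the TensorRT and ONNX Runtime outputs. A minimal sketch of that kind of check, in plain Python (the tolerance values here are illustrative assumptions, not Polygraphy's exact defaults):

```python
def outputs_match(trt_out, onnxrt_out, atol=1e-5, rtol=1e-5):
    """Return True if every element pair satisfies |a - b| <= atol + rtol * |b|.

    This mirrors the shape of the comparison `polygraphy run model.onnx
    --trt --onnxrt` performs on flattened output tensors; real runs
    operate on numpy arrays and report per-output statistics.
    """
    if len(trt_out) != len(onnxrt_out):
        return False
    return all(abs(a - b) <= atol + rtol * abs(b)
               for a, b in zip(trt_out, onnxrt_out))

# A tiny deviation passes; a large one (like an accuracy bug) fails.
print(outputs_match([1.0, 2.0], [1.0, 2.000001]))  # True
print(outputs_match([1.0, 2.0], [1.0, 2.1]))       # False
```

If the default tolerances are too strict for a model with known FP precision differences, Polygraphy lets you loosen them from the command line; the hard-coded values above are only for illustration.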
Docker images:
dustynv/l4t-pytorch:2.2-r35.4.1
dustynv/l4t-pytorch:r36.4.0
Code and ONNX model:
https://drive.google.com/file/d/1YXDKsnEYa_v6Un44lG12QVTVw-dzNr5E/view?usp=sharing