Description
I tried to run tao-converter on the PointPillarNet deployable model from NGC (https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/pointpillarnet/files?version=deployable_v1.0), but it fails with the errors below.
Environment
TensorRT Version: 8.2.5
NVIDIA GPU: GTX 1650
NVIDIA Driver Version: 470
CUDA Version: 11.4
CUDNN Version: 8.2
Operating System:
Python Version: 3.8
./tao-converter -k $KEY -e /home/osman/trt.engine -p points,1x204800x4,1x204800x4,1x204800x4 -p num_points,1,1,1 -t fp16 ../pointpillars_deployable.etlt
[INFO] [MemUsageChange] Init CUDA: CPU +337, GPU +0, now: CPU 348, GPU 202 (MiB)
[INFO] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 348 MiB, GPU 202 MiB
[INFO] [MemUsageSnapshot] End constructing builder kernel library: CPU 483 MiB, GPU 234 MiB
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:1: Interpreting non ascii codepoint 191.
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:1: Expected identifier, got: �
[ERROR] ModelImporter.cpp:735: Failed to parse ONNX model from file: /tmp/file4IbqiP
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Number of optimization profiles does not match model input node number.
Aborted (core dumped)
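For context: the libprotobuf messages indicate that whatever tao-converter handed to the ONNX parser was not a valid binary protobuf (codepoint 191 is byte 0xBF at offset 1:1, which looks like still-encrypted or corrupt data), and the converter itself suggests checking the encoding key. The final "Number of optimization profiles does not match model input node number" error then follows because no input nodes were parsed at all. As a point of comparison, here is a minimal sketch (assuming the onnx Python package and a hypothetical unencrypted model.onnx; the path is a placeholder) of what a successfully parsed model should report:

import onnx

# A valid binary ONNX protobuf loads cleanly; garbage bytes fail with the
# same text_format errors shown above. "model.onnx" is a placeholder path.
model = onnx.load("model.onnx")
onnx.checker.check_model(model)  # structural validation
print([i.name for i in model.graph.input])  # expected per the -p flags: ['points', 'num_points']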