[Mobile] QNN GPU backend crashes. #24004
Labels
ep:QNN
issues related to QNN execution provider
platform:mobile
issues related to ONNX Runtime mobile; typically submitted using template
Describe the issue
I am trying to use ONNX Runtime with the QNN execution provider on a Qualcomm Android device
(Qualcomm® QCS610)
that supports the GPU backend for QNN. I have run the platform validators on the device, and QNN with the GPU backend reports no problems.
I am using a MobileNet model from Qualcomm, since quantization is not needed
(https://huggingface.co/qualcomm/MobileNet-v2/tree/main)
and it has float32 inputs/outputs.

Configuration is okay.
All nodes pass validation and are assigned without failures.
But when I run on the GPU backend (libQnnGpu.so), it crashes with error code 6999.
(On the CPU backend,
libQnnCpu.so,
it works fine.) I have set log severity to 0 and the log level to verbose, but the above is the only meaningful log output when it crashes.
I have tested several models, and the results were the same.
The number of nodes doesn't matter: it always crashes at the first node placed on the QNN (GPU) execution provider, never on the CPU-provider nodes used as fallback for unsupported operations.
(e.g., CPU nodes: 1-23, GPU nodes: 24 onward)
I have a few suspicions about the cause.
(But when I ran the platform test, it linked automatically 🤔 )
The QCS610
also supports Hexagon v66. Any ideas?
(Note: I am running it in C++ in a native Android application.)
To reproduce
Build ONNX Runtime with the QNN execution provider
Run with GPU execution provider backend on Qualcomm device
Urgency
No response
Platform
Android
OS Version
29
ONNX Runtime Installation
Built from Source
Compiler Version (if 'Built from Source')
clang 17.0.2
Package Name (if 'Released Package')
None
ONNX Runtime Version or Commit ID
Rel-1.20.1
ONNX Runtime API
C++/C
Architecture
X64
Execution Provider
Other / Unknown
Execution Provider Library Version
QNN (qairt 2.28.2.241116)