
[ONNX][MLIR] Add support for onnx.gather op #2726

Merged (1 commit into llvm:main on Jan 19, 2024)

Conversation

@Shukla-Gaurav (Collaborator) commented on Jan 4, 2024:

This commit adds support for the gather op in the onnx pipeline.
nod-ai/SHARK-ModelDev#242
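
A minimal NumPy sketch of the onnx.Gather semantics that such a lowering has to reproduce (reference illustration, not code from the PR): Gather selects slices of `data` along `axis` at the positions given by `indices`, so the output shape is `data.shape[:axis] + indices.shape + data.shape[axis+1:]`.

```python
import numpy as np

def onnx_gather_reference(data: np.ndarray, indices: np.ndarray, axis: int = 0) -> np.ndarray:
    # Reference semantics of onnx.Gather: equivalent to numpy.take along `axis`.
    # Output shape: data.shape[:axis] + indices.shape + data.shape[axis+1:]
    return np.take(data, indices, axis=axis)

# Embedding-lookup pattern: a [50272, 768] table gathered with [batch, seq]
# indices yields a [batch, seq, 768] result.
table = np.random.rand(50272, 768).astype(np.float32)
ids = np.random.randint(0, 50272, size=(2, 16), dtype=np.int64)
assert onnx_gather_reference(table, ids, axis=0).shape == (2, 16, 768)
```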

@Shukla-Gaurav (Collaborator, Author) commented:

@stellaraccident @rsuderman @vivekkhandelwal1 @renxida Can you please review this PR?

@vivekkhandelwal1 (Collaborator) left a comment:


Hi @Shukla-Gaurav, it would be better if you could add a small comment explaining your approach, since this is not a straightforward lowering and it would be hard for us to understand it on our own while reviewing.

@Shukla-Gaurav (Collaborator, Author) commented:

@vivekkhandelwal1 Sure, I will add a comment explaining the algorithm in the lowering code itself. In the meantime, you can review it by checking the algorithm described in nod-ai/SHARK-ModelDev#242.

@rsuderman (Contributor) left a comment:


+1 on having the high-level description. I can see several constructions being performed, but I cannot tell how they generate the onnx.gather behavior.

@Shukla-Gaurav (Collaborator, Author) commented:

@rsuderman @vivekkhandelwal1
Added a high-level description of the conversion. Could you please review again? Thanks!
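
A minimal NumPy sketch of one common way to implement such a conversion (flatten the indices, perform a 1-D gather along the axis, then reshape the result back to the full output shape). This is offered only as an illustration of the general pattern, not necessarily the exact algorithm this PR implements:

```python
import numpy as np

def gather_via_flatten(data: np.ndarray, indices: np.ndarray, axis: int = 0) -> np.ndarray:
    # Illustration only: flatten the (possibly multi-dimensional) indices,
    # gather with the resulting 1-D index list, then reshape so the flattened
    # dimension expands back into indices.shape.
    flat = indices.reshape(-1)
    picked = np.take(data, flat, axis=axis)
    out_shape = data.shape[:axis] + indices.shape + data.shape[axis + 1:]
    return picked.reshape(out_shape)

data = np.arange(24, dtype=np.float32).reshape(6, 4)
idx = np.array([[0, 5], [2, 2]], dtype=np.int64)
assert np.array_equal(gather_via_flatten(data, idx, axis=0), np.take(data, idx, axis=0))
```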

@rsuderman (Contributor) left a comment:


Just some minor nits but LGTM otherwise

This commit adds support for gather op in the onnx pipeline.

Signed-Off-by: Gaurav Shukla <[email protected]>
@Shukla-Gaurav merged commit 3b85c70 into llvm:main on Jan 19, 2024. 5 checks passed.
@AmosLewis (Collaborator) commented on Jun 12, 2024:

Gather failed again in the Shark-TestSuites onnx model opt-125M-awq with iree candidate-20240610.920 and torch-mlir at:
commit 7e0e23c66820d1db548103acbdf1337f701dc5a3 (upstream/main)
Author: Sambhav Jain <[email protected]>
Date:   Sun Jun 9 00:32:49 2024 -0700

    Test custom op import with symbolic shapes (#3431)
    
    Tests the basic constructs of registering a custom op and its abstract
    implementations (with FakeTensors) in python, going through TorchDynamo
    export, followed by importing the shape expressions in the Torch
    dialect.
    
    Also fixes the importer were previously the symbolic bind op insertion
    was not gated in one place.

python ./run.py --torchmlirbuild ../../torch-mlir/build --tolerance 0.001 0.001 --cachedir ./huggingface_cache --ireebuild ../../iree-build -f onnx -g models --mode onnx --report --torchtolinalg --tests onnx/models/opt-125M-awq

opt-125M-awq.default.torch-onnx.mlir:1123:13: error: 'tensor.expand_shape' op expected number of static shape dims to be equal to the output rank (3) but found 0 inputs instead
    %1119 = torch.operator "onnx.Gather"(%0, %1113) : (!torch.vtensor<[50272,768],f32>, !torch.vtensor<[?,?],si64>) -> !torch.vtensor<[?,?,768],f32> 
            ^
opt-125M-awq.default.torch-onnx.mlir:1123:13: note: see current operation: %1049 = "tensor.expand_shape"(%1046) <{reassociation = [[0, 1], [2]], static_output_shape = array<i64>}> : (tensor<?x768xf32>) -> tensor<?x?x768xf32>
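
As the verifier message suggests, the failure appears to be in the reshape-back step for the fully dynamic case: with [?,?] indices the [?,?,768] result extents are not known statically, so the tensor.expand_shape has to be given the dynamic output sizes, and here none were recorded. A small NumPy sketch of the runtime bookkeeping involved (illustration only, not the actual torch-mlir code):

```python
import numpy as np

# Illustration of the reshape that tensor.expand_shape represents above.
# With [?, ?] indices the (batch, seq) extents exist only at runtime, so
# expanding (batch*seq, 768) -> (batch, seq, 768) needs those extents
# supplied explicitly; with no static output dims recorded, the verifier
# rejects the op.
ids = np.random.randint(0, 50272, size=(3, 7), dtype=np.int64)   # stands in for the [?, ?] indices
flat_result = np.random.rand(ids.size, 768).astype(np.float32)   # result of the flattened gather
batch, seq = ids.shape                                            # runtime extents
expanded = flat_result.reshape(batch, seq, 768)                   # the expand-back step
assert expanded.shape == (3, 7, 768)
```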
