Gather #242
Hi @stellaraccident, this is regarding support for the onnx Gather op. It is more general than the torch gather op: torch gather requires the ranks of all tensors (input, index, and output) to be the same, while the ONNX op allows indices of arbitrary rank. I am still figuring out how to do this conversion; I came up with one possible algorithm, but I am not sure how to implement it, or whether there is a better solution.
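For context, here is a minimal NumPy/PyTorch sketch (my illustration, not code from the thread) of the semantic gap, plus one possible flatten/index_select/reshape reduction. The reduction is an assumption on my part, not necessarily the algorithm the author had in mind:

```python
import numpy as np
import torch

# ONNX Gather: for data of rank r and indices of rank q, the result has
# rank q + r - 1. NumPy's take has the same semantics.
data = np.arange(12).reshape(3, 4)           # rank 2
indices = np.array([[0, 2], [1, 0]])         # rank 2, unrelated to data's rank
onnx_style = np.take(data, indices, axis=0)  # shape (2, 2, 4)

# torch.gather: the index tensor must have the same rank as the input,
# and the output has exactly the index tensor's shape.
t = torch.arange(12).reshape(3, 4)
idx = torch.tensor([[0, 2, 1, 0]])           # rank 2, like t
torch_style = t.gather(0, idx)               # shape (1, 4)

# One possible reduction (an assumption, not the thread's elided
# algorithm): flatten the ONNX indices, do a rank-preserving
# index_select along the gather axis, then reshape to the ONNX
# output shape.
flat = torch.from_numpy(indices).reshape(-1)                 # rank 1
picked = t.index_select(0, flat)                             # shape (4, 4)
onnx_via_torch = picked.reshape(*indices.shape, t.shape[1])  # (2, 2, 4)
assert (onnx_via_torch.numpy() == onnx_style).all()
```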
@rsuderman you have a lot of experience with these ops. Can you have a look?
So the restriction is a weird one. I cannot see a logical reason why the …
Copying from direct chat
@rsuderman Can you please review llvm/torch-mlir#2726?
This commit adds support for the Gather op in the ONNX pipeline. nod-ai/SHARK-ModelDev#242 Signed-off-by: Gaurav Shukla <[email protected]>
Gather failed again in the Shark-TestSuites onnx model opt-125M-awq with … and …
…#3504) Addresses an issue with onnx.Gather lowering to linalg: <nod-ai/SHARK-ModelDev#242> The builder for tensor.expand_shape, without an explicitly provided output shape, fails to infer an output shape in the case of multiple dynamic reassociation dims. I tried adding the output shape explicitly for tensor.expand_shape, but ran into compilation issues later on (see <iree-org/iree#17760>). This PR adds support by lowering this op to tensor.reshape when multiple dynamic reassociation dims are provided.
llvm/torch-mlir#2726
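As a rough analogy for why multiple dynamic reassociation dims defeat shape inference (my sketch in NumPy; the MLIR ops differ in detail): a reshape can infer at most one unknown extent, so with two unknowns the target shape must be supplied explicitly, which is what switching to tensor.reshape with an explicit shape operand achieves.

```python
import numpy as np

flat = np.arange(24)  # stand-in for a 1-D tensor of runtime-determined size

# One unknown dimension: the split is uniquely determined, analogous to
# tensor.expand_shape inferring its output shape from the reassociation.
ok = flat.reshape(4, -1)          # -> shape (4, 6)

# Two unknown dimensions: there is no unique factorization, so the
# shape cannot be inferred. NumPy rejects it outright.
try:
    flat.reshape(-1, -1)
except ValueError as err:
    print(err)                    # "can only specify one unknown dimension"

# The workaround mirrors the PR: supply the full target shape explicitly
# (tensor.reshape takes the output shape as a runtime operand).
d0, d1 = 4, 6                     # extents computed at runtime
explicit = flat.reshape(d0, d1)   # -> shape (4, 6)
```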