Changes needed to enable outlining by default #951
Conversation
Force-pushed from bca3d17 to 278a131
return computeOp.emitOpError("has an operand of type ")
       << operand.getType() << " that isn't compatible with outlining.";
In case of a non-contiguous memref, wouldn't it be better to just try continuing without outlining?
Yes, that's probably more sensible. I'll refactor to do that.
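The non-contiguous case discussed above can be sketched as a plain stride check (illustrative Python, not the pass's actual MLIR code; the helper name is hypothetical): strides are contiguous exactly when they are the dense row-major strides for the shape.

```python
# Illustrative sketch only: the real pass inspects MLIR MemRefType, but
# the contiguity condition boils down to this stride check.

def has_contiguous_strides(shape, strides):
    """True when `strides` are the dense row-major strides for `shape`."""
    expected = 1
    # Walk from the innermost dimension outward, accumulating the
    # stride a dense buffer would have at each level.
    for dim, stride in zip(reversed(shape), reversed(strides)):
        if stride != expected:
            return False
        expected *= dim
    return True
```

For example, `has_contiguous_strides([4, 8], [8, 1])` holds, while strides `[16, 1]` (a strided view over a larger buffer) do not; under the suggestion above, the latter would simply be left un-outlined rather than rejected with an error.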
Force-pushed from b73704a to db72a6c
LGTM
…tity layout (#959) This is a fix for a bug I introduced in #951. I thought that the approach of removing the offset from the memref function signature worked, but it doesn't seem to. I must have been running a test other than the convolution end-to-end test when I confirmed it worked; now, with function outlining enabled, it gives a numerical error. This PR simplifies the logic: any non-identity layout results in no outlining.
Before this PR, batch matmul and convolution didn't behave as expected.
- batch matmul: an error was emitted but not propagated to signal a pass failure, so the pass quietly continued.
- convolution: at the point of function outlining, the convolution has already been decomposed into a rank-3 matmul, so it was being outlined as a matmul. That is fine, except the generic being outlined was
which has layouts on the memrefs. Outlining this without adjusting the types in the pass results in the error
This comes from the outlined function with signature
Lowering the outlined function through convert-func-to-llvm produces the above error. This PR does the following: the pass now checks whether the strides on the memref are contiguous and, if they are, creates a signature with layout-free memrefs. At the call site, it inserts a memref cast. I have confirmed that this works end-to-end for convolution (correct numerics). An alternative would be to simply not outline when the operation being outlined has operands with layouts.
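The two policies described in this thread differ only in how the offset is treated. As a hedged sketch (plain Python standing in for the MLIR type machinery; the function is illustrative, not from the patch): the #951 approach accepted contiguous strides regardless of offset and cast the layout away at the call site, while the #959 fix additionally requires a zero offset, i.e. the full identity layout.

```python
def is_identity_layout(shape, strides, offset):
    """Sketch of the #959 rule: outline only for the identity layout,
    meaning dense row-major strides and a zero offset."""
    # Build the dense row-major strides for this shape.
    expected = 1
    dense = []
    for dim in reversed(shape):
        dense.append(expected)
        expected *= dim
    dense.reverse()
    return offset == 0 and strides == dense
```

Here `is_identity_layout([4, 8], [8, 1], 0)` allows outlining, while the same strides with a non-zero offset, which is the case #951 tried to handle by dropping the offset from the signature, now falls back to not outlining.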