
Changes needed to enable outlining by default #951

Merged
merged 4 commits into nod-ai:main from fixes_for_outline_all on Dec 4, 2024

Conversation

@newling (Contributor) commented on Dec 3, 2024

Before this PR, batch matmul and convolution didn't behave as expected.

batch matmul: an error was emitted but not propagated to signal a pass failure, so the pass quietly continued.

convolution: by the point of function outlining, the convolution has already been decomposed into a rank-3 matmul, so it was being outlined as a matmul. That is fine in itself, except that the generic being outlined was

 linalg.generic {indexing_maps = [#map2, #map3, #map4], 
     iterator_types = ["parallel", "parallel", "reduction"]} 
     ins(%subview_20, %subview_18 : memref<4x8xbf16, strided<[8, 1], offset: ?>, 2 : i32>, 
                                    memref<8x4xbf16, strided<[4, 1], offset: ?>, 2 : i32>) ... 

which has layouts on the memrefs. Outlining this without adjusting the types in the pass results in the error

 error: 'llvm.call' op 'generic_matmul_0_outlined' does not reference a valid LLVM function

This comes from the outlined function with signature

func.func private @generic_matmul_0_outlined(
  %arg0: memref<4x8xi8, strided<[8, 1], offset: ?>>, 
  %arg1: memref<8x8xi8, strided<[8, 1], offset: ?>>, 
  %arg2: memref<4x8xi32, strided<[8, 1]>>) 
  attributes {llvm.bareptr = true} 

Trying to lower the outlined function through convert-func-to-llvm results in the above error: with llvm.bareptr = true, the bare-pointer calling convention only supports memrefs with static shapes and identity layouts, so the function fails to convert and the llvm.call is left referencing a symbol that is not a valid LLVM function.

This PR does the following: the pass now checks whether the strides on the memref are contiguous, and if they are, it creates a signature with memrefs that have no layouts. At the call site, it inserts a memref cast. I have confirmed that this works end-to-end for convolution (correct numerics). An alternative would be to simply not outline when the operation being outlined has operands with layouts.
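For illustration, here is a minimal sketch of what the outlined function and its call site look like with this approach. The value names, the f32 accumulator type, and the omission of memory-space annotations are simplifications for the example, not output of the actual pass (%acc stands for the similarly cast accumulator):

 func.func private @generic_matmul_0_outlined(
     %arg0: memref<4x8xbf16>,
     %arg1: memref<8x4xbf16>,
     %arg2: memref<4x4xf32>) attributes {llvm.bareptr = true}

 // At the call site, each contiguous strided operand is cast to the
 // corresponding layout-free type before the call.
 %lhs = memref.cast %subview_20
     : memref<4x8xbf16, strided<[8, 1], offset: ?>> to memref<4x8xbf16>
 %rhs = memref.cast %subview_18
     : memref<8x4xbf16, strided<[4, 1], offset: ?>> to memref<8x4xbf16>
 func.call @generic_matmul_0_outlined(%lhs, %rhs, %acc)
     : (memref<4x8xbf16>, memref<8x4xbf16>, memref<4x4xf32>) -> ()

The casts are legal because the strides already match the default row-major layout and only the offset is dynamic; for a non-contiguous operand no such cast exists, which is why the contiguity check is needed.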

@newling force-pushed the fixes_for_outline_all branch from bca3d17 to 278a131 on December 3, 2024 at 15:45
@newling marked this pull request as ready for review on December 3, 2024 at 17:30
@newling removed the request for review from nirvedhmeshram on December 3, 2024 at 19:21
Comment on lines 59 to 60:

 return computeOp.emitOpError("has an operand of type ")
        << operand.getType() << " that isn't compatible with outlining.";

Collaborator:

In case of a non-contiguous memref, wouldn't it be better to just try continuing without outlining?

@newling (Contributor, Author):

Yes, that's probably more sensible. I'll refactor to do that.

@newling force-pushed the fixes_for_outline_all branch from b73704a to db72a6c on December 4, 2024 at 05:54
@jtuyls (Collaborator) left a comment:

LGTM

@jtuyls merged commit e09ff08 into nod-ai:main on Dec 4, 2024
7 checks passed
newling added a commit that referenced this pull request Dec 5, 2024
…tity layout (#959)

This is a fix for a bug I introduced in #951. I thought that the approach of removing the offset from the memref function signature worked, but it doesn't seem to. I must have been running a test other than the convolution end-to-end test when I was confirming it worked; now, with function outlining enabled, it gives a numerical error.

This PR simplifies the logic: any non-identity layout results in no outlining.
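As a hypothetical illustration of the distinction (the types are made up for the example, not taken from a test):

 // Identity layout: the operand remains eligible for outlining.
 memref<4x8xbf16>

 // Non-identity layout (strided form with a dynamic offset): the op is now left un-outlined.
 memref<4x8xbf16, strided<[8, 1], offset: ?>, 2 : i32>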
@newling deleted the fixes_for_outline_all branch on December 12, 2024 at 23:08