Fix model breakages #53
Merged
Conversation
ayerofieiev-tt approved these changes on Aug 14, 2024
This reverts commit 775fb9f.
Model status for this PR:
Unmark gpt2 and mnist test models to expect passing
Disable conversion from aten._to_copy
Pass device for all from_torch ops. Reverted because this conflicts with unsqueeze conversion.
Replace aten.full op with a literal scalar for certain cases
Compare only Tensor types for dictionary outputs
Fails because:
The from_torch op makes GPT-2 fail. Removing it causes Bloom, Llama, and Yolos to fail.
MNIST fixes:
Replace aten.view with aten.reshape
Fails because:
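The view-to-reshape swap above can be sketched as a torch.fx graph rewrite. This is a minimal illustration, not the repo's actual pass; the function name and structure here are assumptions:

```python
import torch

def replace_view_with_reshape(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    """Swap aten.view.default for aten.reshape.default in an FX graph.

    reshape tolerates non-contiguous inputs where view raises, which is
    why a lowering pipeline might prefer it before further conversion.
    """
    for node in gm.graph.nodes:
        if node.op == "call_function" and node.target == torch.ops.aten.view.default:
            node.target = torch.ops.aten.reshape.default
    gm.recompile()
    return gm

# Tiny demo module that flattens its input via aten.view.
class _Flatten(torch.nn.Module):
    def forward(self, x):
        return torch.ops.aten.view.default(x, [4])

demo = replace_view_with_reshape(torch.fx.symbolic_trace(_Flatten()))
```

After the rewrite, the graph also handles non-contiguous inputs (for example a transposed tensor), which the original aten.view would reject.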
Bloom and Llama fixes:
Add conversion for aten.min
Add exception to aten.eq conversion
Fix reusing ttnn data movement op if mixed with aten ops
Convert all inputs to ttnn.bfloat16 when moving data in
Skip unsqueeze transformation if last dim of input is not the same as…
Add exception to aten.expand conversion when last dimension of input …
Support list type arguments
Check layout change for ttnn reshape and embedding op
Freeze encoder for llama model
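One way the "convert all inputs to ttnn.bfloat16 when moving data in" step might look. This sketch uses torch.bfloat16 as a stand-in, since ttnn is not assumed to be installed here, and the helper name is hypothetical:

```python
import torch

def prepare_device_input(tensor: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: downcast floating-point inputs to bfloat16
    before moving data onto the device, so every tensor arrives in a
    dtype the device ops accept. Integer tensors are left untouched
    (int64 handling is covered by the separate metadata fix below).
    """
    if tensor.is_floating_point() and tensor.dtype != torch.bfloat16:
        return tensor.to(torch.bfloat16)
    return tensor
```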
Yolos fixes
Add workaround for ttnn.permute when dim 0 is 1 for rank 3
Reconvert int64 types from metadata when mixing ttnn and aten ops
Check for valid page size for ops that decompose to ttnn.full
Delete aten.expand op if output has the exact same shape
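The "delete aten.expand op if output has the exact same shape" item can be sketched as an FX pass. Shape information comes from FX's ShapeProp in this sketch; the actual pass in the repo may obtain shapes differently:

```python
import torch
from torch.fx.passes.shape_prop import ShapeProp

def remove_noop_expand(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    """Erase aten.expand nodes whose target shape equals the input shape."""
    for node in list(gm.graph.nodes):
        if node.op == "call_function" and node.target == torch.ops.aten.expand.default:
            src, target_shape = node.args[0], list(node.args[1])
            meta = src.meta.get("tensor_meta")
            if meta is not None and list(meta.shape) == target_shape:
                node.replace_all_uses_with(src)  # expand is a no-op here
                gm.graph.erase_node(node)
    gm.recompile()
    return gm

# Demo: an expand to the input's own shape gets deleted.
class _NoopExpand(torch.nn.Module):
    def forward(self, x):
        return torch.ops.aten.expand.default(x, [2, 3])

demo = torch.fx.symbolic_trace(_NoopExpand())
ShapeProp(demo).propagate(torch.zeros(2, 3))  # attach tensor_meta to nodes
remove_noop_expand(demo)
```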
General fixes:
Consolidate metadata during op conversion
Fix output type of aten.arange unit test to match output of original
Disable to_copy unit test to re-evaluate conversion
Lower pcc for addmm slightly
Change input shapes of some unit tests to match exceptions in current …
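The "compare only Tensor types for dictionary outputs" fix listed earlier can be sketched as below. The helper name and the allclose tolerance are assumptions standing in for the repo's PCC comparison:

```python
import torch

def compare_tensor_outputs(expected: dict, actual: dict) -> bool:
    """Compare only the torch.Tensor entries of two model-output dicts.

    HuggingFace-style model outputs mix tensors with caches, None, and
    other metadata; skipping non-tensor entries avoids spurious
    mismatches on values that were never meant to be compared.
    """
    for key, exp in expected.items():
        if not isinstance(exp, torch.Tensor):
            continue  # skip non-tensor entries entirely
        act = actual[key]
        if exp.shape != act.shape:
            return False
        # allclose as a stand-in for the PCC check used in the tests
        if not torch.allclose(exp.float(), act.float(), atol=1e-2):
            return False
    return True
```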