
Added sum, min, max support and tests for TTIR Builder #2404

Open. Wants to merge 6 commits into main.
Conversation


@jgrimTT commented on Mar 7, 2025

Ticket

#1539

Problem description

Not all TTIR ops are supported by the TTIR builder.

What's changed

Added support in ttir_builder.py for the sum, max, and min ops.
Added tests in test_ttir_ops.py (see the sketch below for the general shape of these tests).
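For context, a hypothetical sketch of what one of the new tests might look like. The compile_to_flatbuffer decorator, the import paths, and the exact builder method signature here are illustrative assumptions rather than the verbatim test_ttir_ops.py API:

# Hypothetical test sketch; decorator and import paths are assumptions.
from ttir_builder import Operand, TTIRBuilder
from utils import compile_to_flatbuffer  # assumed test harness helper

@compile_to_flatbuffer([(128, 128)])  # one (128, 128) input tensor
def test_sum(in0: Operand, builder: TTIRBuilder):
    # builder.sum emits a ttir.SumOp and records a torch-based golden
    # for later comparison against device output.
    return builder.sum(in0)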

Checklist

  • New/Existing tests provide coverage for changes

@jgrimTT requested a review from @tapspatel on Mar 7, 2025, 20:30
@jgrimTT requested a review from @ctodTT as a code owner on Mar 7, 2025, 20:30
@ctodTT (Contributor) left a comment:


LGTM! Thanks for doing this, just a few tiny nits.

Comment on lines 419 to 430
if op_ttir_function in [ttir.MaxOp, ttir.MinOp]:
    # torch.max/torch.min take a single int dim rather than a tuple and
    # return a (values, indices) named tuple, so pass the kwargs
    # positionally and keep only the values tensor at index 0.
    golden = Golden(
        op_golden_function(
            *(organize_golden_args(inputs)),
            golden_kwargs["dim"][0],
            golden_kwargs["keepdim"],
        )[0]
    )
else:
    golden = Golden(
        op_golden_function(*(organize_golden_args(inputs)), **golden_kwargs)
    )
@ctodTT (Contributor):

Out of curiosity, why isn't the kwarg expansion sufficient here? I assume it's some weird inconsistency in how torch.{min,max} are called, but I can't see it. No need to change this if it's the only way to make it work.

@jgrimTT (Author):

Yeah, I'm not thrilled with this solution, so if you have a more elegant/scalable approach, let me know. torch.max and torch.min expect an int (or dimension name) as the second argument, whereas other torch reduction functions accept a tuple of ints or names. They also return a named tuple, e.g. torch.return_types.max(values, indices), instead of a single tensor.
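For reference, a minimal standalone illustration of the mismatch described above (plain torch, independent of ttir_builder):

import torch

x = torch.randn(2, 3, 4)

# torch.sum accepts a tuple of dims and returns a plain tensor.
s = torch.sum(x, dim=(1, 2), keepdim=True)
print(type(s))  # <class 'torch.Tensor'>

# torch.max only accepts a single int dim, and in that form it returns
# a (values, indices) named tuple rather than a tensor.
m = torch.max(x, 1, True)
print(type(m))  # <class 'torch.return_types.max'>
reduced = m[0]  # the reduced values tensor lives at index 0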

@jgrimTT (Author):

Never mind; see the most recent commit.

@jgrimTT requested a review from @ctodTT on Mar 8, 2025, 20:53
@tapspatel (Collaborator) left a comment:

Looks great! Thanks for adding the ops. If you've tested golden verification via ttrt with the checks passing (and no PCC issues), it's good from my end.
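For readers unfamiliar with the PCC check mentioned here: ttrt scores device output against the recorded golden tensor with a Pearson correlation coefficient. A rough sketch of that kind of comparison (illustrative only, not ttrt's actual implementation):

import torch

def pcc(golden: torch.Tensor, computed: torch.Tensor) -> float:
    # Pearson correlation coefficient between the flattened tensors;
    # a value near 1.0 means the device output tracks the golden closely.
    g = golden.flatten().to(torch.float64)
    c = computed.flatten().to(torch.float64)
    g = g - g.mean()
    c = c - c.mean()
    return float((g @ c) / (g.norm() * c.norm()))

# Typical usage: fail the check if pcc(golden, device_out) drops below
# a threshold (0.99 is a common choice; the exact value is an assumption here).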
