--start-symbol for kmir run #446

Merged: 9 commits into master from guy/cli-start-symbol on Feb 5, 2025

Conversation

@gtrepta (Contributor) commented on Feb 4, 2025

This adds a <start-symbol> cell to the configuration, together with a configuration variable for the symbol to begin execution from.

It also changes the expected output in the exec-smir integration tests to only check the contents of the K cell. The semantics are going to be very unstable for now, and without this change we would probably spend time updating the test vectors on nearly every pull request.
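
For illustration, here is a minimal sketch of how the parsed --start-symbol value could be bound to a K configuration variable via pyk. The variable name $STARTSYM and the Subst-based wiring are assumptions for this sketch, not necessarily the PR's actual code:

from pyk.kast.inner import KToken, Subst

def start_symbol_subst(start_symbol: str) -> Subst:
    # Hypothetical wiring: bind the CLI value to the (assumed) $STARTSYM
    # configuration variable as a K String token, so the new <start-symbol>
    # cell gets initialized from the command line. Note that K String
    # tokens carry their surrounding quotes.
    return Subst({'$STARTSYM': KToken(f'"{start_symbol}"', 'String')})

Applying such a substitution to the definition's initial configuration would seed the <start-symbol> cell with "main" unless the flag overrides it.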

Comment on lines +59 to +61
run_parser.add_argument(
'--start-symbol', type=str, metavar='SYMBOL', default='main', help='Symbol name to begin execution from'
)

Member:

👍 Good idea.
I was struggling to find a way to give a K config variable a default value (there is probably none); this is much easier.

Comment on lines 193 to 207
kcell = None

def get_k_cell(term: KInner) -> None:
    # Remember the <k> cell when the traversal visits it.
    match term:
        case KApply(label=KLabel('<k>')):
            nonlocal kcell
            kcell = term

# Walk the result configuration and pick out the <k> cell.
# (KInner, KApply, KLabel come from pyk.kast.inner; collect from pyk.kast.manip.)
collect(get_k_cell, kast_result)

with output_kast.open('r') as f:
    expected = f.read().rstrip()

# Pretty-print only the extracted <k> cell for comparison.
assert isinstance(kcell, KApply)
result_pretty = tools.kprint.pretty_print(kcell).rstrip()

Member:

Not sure we should give up on comparing the whole configuration.
How about we add a make target to re-generate both the *.smir.json files and the expected output?
The expected output in my version is easy to produce just by piping kmir run output into a file, and we could run stable-mir-json beforehand so the JSON gets updated as well. This would be a standard task to run locally when preparing a PR.
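
As a rough sketch of the suggested regeneration step (the tool invocations and file-name conventions here are assumptions, not the repository's actual layout):

import subprocess
from pathlib import Path

def update_expected(rs_file: Path) -> None:
    # Hypothetical regeneration flow: rebuild the .smir.json with
    # stable-mir-json, then pipe kmir run output into the expected-output
    # file, mirroring the workflow described above.
    subprocess.run(['stable-mir-json', str(rs_file)], check=True)
    smir_json = rs_file.with_suffix('.smir.json')
    result = subprocess.run(
        ['kmir', 'run', str(smir_json)],
        check=True, capture_output=True, text=True,
    )
    rs_file.with_suffix('.run.expected').write_text(result.stdout)

Wrapped in a make target, this would be the standard pre-PR step mentioned above.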

Contributor (author):

A make target to update expected outputs would be a good tool to have in certain circumstances.

But what's the point of having these tests if they change with every change we make? I think we should aim for something at least semi-stable, so our PRs don't carry so many large, irrelevant diffs.

Member:

Well... if the test fails, or if you proactively change the expectation file using that make target, reviewers are encouraged to look at the changed output. We can all still choose to skip those changes in the test expectations, but they show how the configuration is expected to change with the respective code change in the PR.
If you think that's useless, we can of course also just scrap the tests, or compare selected cells. But I thought they were acceptable to maintain, and useful. Let's see what @dkcumming thinks.

Collaborator:

I think it is nice to check the diffs; at least that way we can tell whether things are changing as we expect. This assumes there isn't much overhead, but if these extra steps are manageable, I'd prefer to have them. That beats not noticing we changed something spuriously, finding it later, and having to retrace our steps.

Contributor (author):

Ok, I've added make update-exec-smir and reverted the test outputs. I do believe that the K configuration exposes implementation details we should prefer not to expose, and that we should focus on interfacing with it to test only behavior (i.e. function return values, successful execution, anything we can model about Rust's semantics), but we can kick that can down the road. I won't push for it here.

Base automatically changed from jb/exec-arbitrary-entry-function to master February 4, 2025 23:34
@gtrepta requested a review from @jberthold on February 5, 2025 20:45

@jberthold (Member) left a comment

Apart from the submodule downgrade, this LGTM. Thanks for adapting.

Member:

This was in one of my commits on the other PR by mistake.
It's probably best to rebase -i master your current branch and drop all commits from the other PR. Or else you can of course revert the submodule downgrade manually.

Contributor (author):

Thanks for catching this. When I merged master in, I had too many conflicts that didn't make sense, so I just used the "ours" strategy; the submodule downgrade must have slipped in that way.

@gtrepta force-pushed the guy/cli-start-symbol branch from 0c3b00c to e11f43d on February 5, 2025 22:57
@automergerpr-permission-manager (bot) merged commit 01a575c into master on Feb 5, 2025
5 checks passed
@automergerpr-permission-manager (bot) deleted the guy/cli-start-symbol branch on February 5, 2025 23:02