
add tests that verify actual behavior of generated code #216


Closed
eli-bl wants to merge 8 commits

Conversation


@eli-bl eli-bl commented Nov 7, 2024

This is a tests-only change that I've also submitted to the main repo in openapi-generators#1156; see PR description there. The goal is to facilitate quicker development of features and bugfixes as I continue to make changes to the generator, by verifying that the generated classes actually work in unit tests.

Previously, if I wanted to check, for instance, that a model with a discriminator would deserialize correctly, I had to go to a shell, run the generator on an example spec, cd into the output directory, start a Python REPL, import the class, and execute some test statements there. Now I can just write tests like these.

The new tests that I've included in this PR cover a variety of JSON encoding/decoding behaviors for model classes. I'll add something similar for request encoding (query parameters, URLs, etc.) later. I think that will avoid the need for unit tests of specific SDK model classes and methods, since it basically proves that if an API spec has such-and-such type of thing, then the generated code for that thing will behave correctly.
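
To give a sense of what these tests look like, here is a minimal sketch. It assumes that the `with_generated_client_fixture` decorator (visible in the diff below) accepts an inline OpenAPI document and provides a `generated_client` fixture with an `import_module` helper; the import path, spec fragment, and generated class name are illustrative only, not necessarily the exact API in this PR.

```python
# Illustrative sketch only -- the decorator's signature and the fixture's
# attributes are assumptions based on this PR's diff, not its exact API.
from end_to_end_tests.generated_code_live_tests import (  # hypothetical import path
    with_generated_client_fixture,
)


@with_generated_client_fixture(
    """
components:
  schemas:
    MyModel:
      type: object
      properties:
        name: {"type": "string"}
"""
)
class TestSimpleModelDeserialization:
    def test_from_dict(self, generated_client):
        # Import the generated model class from the sandboxed generated package
        MyModel = generated_client.import_module(".models").MyModel
        instance = MyModel.from_dict({"name": "example"})
        assert instance.name == "example"
```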


On entering this context, sys.path is changed to include the root directory of the
generated code, so its modules can be imported. On exit, the original sys.path is
restored, and any modules that were loaded within the context are removed.

@eli-bl eli-bl Nov 7, 2024


Btw, the proof that this works is simply that running all the tests in generated_code_live_tests in a single pytest run passes. Since many of the test classes reuse the same generated modules and class names for different things, if the sandboxing didn't work then the tests would inevitably pollute each other and fail.
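
For readers picturing the mechanism, a sys.path sandbox along the lines of the docstring quoted above could look roughly like this; it's an illustration of the idea, not the PR's actual implementation.

```python
import sys
from contextlib import contextmanager


@contextmanager
def sandboxed_generated_code(package_root: str):
    """Illustrative sketch: make generated code importable inside the block,
    then restore sys.path and drop any modules imported while it was active."""
    original_path = list(sys.path)
    modules_before = set(sys.modules)
    sys.path.insert(0, package_root)
    try:
        yield
    finally:
        sys.path[:] = original_path
        # Removing the freshly imported modules lets the next test reuse the
        # same module and class names without picking up stale definitions.
        for name in set(sys.modules) - modules_before:
            del sys.modules[name]
```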

return importlib.import_module(f"{self.base_module}{module_path}")


def _run_command(

Moved from test_end_to_end.py.

@eli-bl eli-bl force-pushed the live-generated-code-tests-bl branch from 03c6551 to 1aa1b51 on November 7, 2024 04:02
@eli-bl eli-bl marked this pull request as ready for review on November 7, 2024 04:08
)


@with_generated_client_fixture(

The tests below are for all the new discriminator logic in the previous PR. I hope you'll agree that this makes it a lot easier to see what's going on.
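
As a concrete illustration of the kind of assertion these discriminator tests make, a test body might look something like the sketch below; the fixture helper, generated class names, and spec property names are all hypothetical here.

```python
# Hypothetical sketch: given a oneOf with a discriminator, from_dict() should
# instantiate the variant named by the discriminator property. The model and
# property names are made up for illustration.
def test_discriminator_selects_correct_variant(generated_client):
    models = generated_client.import_module(".models")  # assumed helper, see diff above
    parent = models.ModelWithDiscriminatedUnion.from_dict(
        {"pet": {"petType": "Cat", "name": "Felix"}}
    )
    assert isinstance(parent.pet, models.Cat)
    assert not isinstance(parent.pet, models.Dog)
```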

@eli-bl eli-bl closed this Nov 12, 2024
@eli-bl eli-bl deleted the live-generated-code-tests-bl branch November 12, 2024 03:56