add tests that verify actual behavior of generated code #216
Conversation
On entering this context, sys.path is changed to include the root directory of the generated code, so its modules can be imported. On exit, the original sys.path is restored, and any modules that were loaded within the context are removed.
Btw, the proof that this works is simply that running all the tests in generated_code_live_tests in a single pytest run passes. Since many of the test classes reuse the same generated modules and class names for different things, if the sandboxing didn't work then the tests would inevitably pollute each other and fail.
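For readers following along, a minimal sketch of a context manager with the behavior that docstring describes might look like this (the name sandboxed_generated_code and the implementation details are assumptions for illustration, not this PR's exact code):

import sys
from contextlib import contextmanager

@contextmanager
def sandboxed_generated_code(root_dir: str):
    # Add the generated code's root directory to sys.path so its
    # modules can be imported while the context is active.
    original_path = list(sys.path)
    modules_before = set(sys.modules)
    sys.path.insert(0, root_dir)
    try:
        yield
    finally:
        # Restore sys.path and unload any modules imported inside the
        # context, so later tests that reuse the same module names
        # start from a clean slate.
        sys.path[:] = original_path
        for name in set(sys.modules) - modules_before:
            del sys.modules[name]

This is exactly the isolation the comment above relies on: without the module cleanup, two test classes that generate different models under the same module name would see each other's classes.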
return importlib.import_module(f"{self.base_module}{module_path}")
def _run_command(
Moved from test_end_to_end.py.
)
@with_generated_client_fixture( |
The tests below are for all the new discriminator logic in the previous PR. I hope you'll agree that this makes it a lot easier to see what's going on.
This is a tests-only change that I've also submitted to the main repo in openapi-generators#1156; see PR description there. The goal is to facilitate quicker development of features and bugfixes as I continue to make changes to the generator, by verifying that the generated classes actually work in unit tests.
Previously, if I wanted to check, for instance, that a model with a discriminator would deserialize correctly, I had to go to a shell, run the generator on an example spec, cd into the output directory, start a Python REPL, import the class, and execute some test statements there. Now I can just write tests like these.
The new tests that I've included in this PR cover a variety of JSON encoding/decoding behaviors for model classes. I'll add something similar for request encoding (query parameters, URLs, etc.) later. I think that will avoid the need for unit tests of specific SDK model classes and methods, since it will basically prove that if an API spec has such-and-such type of thing, then the generated code for that thing will behave correctly.
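For illustration, a behavior-verifying test along these lines might look like the following. The with_generated_client_fixture decorator name comes from the diff above, but its signature, the inline-spec argument, and the generated_client helper are assumptions, not this PR's actual API:

# Hypothetical example; the decorator's signature and the fixture
# helpers are assumed for illustration.
@with_generated_client_fixture(
    """
    components:
      schemas:
        Cat:
          type: object
          properties:
            pet_type: {type: string}
            name: {type: string}
    """
)
class TestCatModel:
    def test_from_dict_round_trip(self, generated_client):
        # import_model is a hypothetical helper that loads a class from
        # the sandboxed generated package.
        Cat = generated_client.import_model("Cat")
        cat = Cat.from_dict({"pet_type": "cat", "name": "Whiskers"})
        assert cat.name == "Whiskers"
        assert cat.to_dict() == {"pet_type": "cat", "name": "Whiskers"}

The point is that the assertions exercise the actual generated from_dict/to_dict code paths, rather than merely inspecting the generated source text.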