
add tests against generated code #217


Merged: 26 commits, Nov 14, 2024
Commits:
- `0928032` add tests that verify actual behavior of generated code (eli-bl, Oct 31, 2024)
- `f324f26` documentation (eli-bl, Nov 6, 2024)
- `6b15783` make assertion error messages work correctly (eli-bl, Nov 8, 2024)
- `b7d34a7` misc improvements, test error conditions, remove redundant unit tests (eli-bl, Nov 8, 2024)
- `53fca35` misc improvements + remove redundant unit tests (eli-bl, Nov 8, 2024)
- `cd2ccb0` restore some missing test coverage (eli-bl, Nov 11, 2024)
- `064780a` add tests against generated code (eli-bl, Nov 12, 2024)
- `7a0e2b4` generated code tests for discriminators (eli-bl, Nov 12, 2024)
- `91640f7` Merge branch '2.x' into generated-code-tests-bl (eli-bl, Nov 12, 2024)
- `91abc40` rm test file, rm Python 3.8 (eli-bl, Nov 12, 2024)
- `f0f105c` run new tests (eli-bl, Nov 12, 2024)
- `79c322d` don't run tests in 3.8 because type hints don't work the same (eli-bl, Nov 12, 2024)
- `d915267` make sure all tests get run (eli-bl, Nov 12, 2024)
- `87673c8` cover another error case (eli-bl, Nov 12, 2024)
- `e4e63cb` cover another error case (eli-bl, Nov 12, 2024)
- `3a0c36c` reorganize (eli-bl, Nov 12, 2024)
- `36866eb` reorganize (eli-bl, Nov 12, 2024)
- `eabbf2b` rm test file (eli-bl, Nov 12, 2024)
- `6241f44` more discriminator tests (eli-bl, Nov 12, 2024)
- `718d236` rm unused (eli-bl, Nov 12, 2024)
- `80c8333` reorganize (eli-bl, Nov 12, 2024)
- `aee95ab` Merge branch 'live-generated-code-tests' into generated-code-tests-bl (eli-bl, Nov 12, 2024)
- `1c59c6c` coverage (eli-bl, Nov 12, 2024)
- `abe9c4d` Merge branch 'live-generated-code-tests' into generated-code-tests-bl (eli-bl, Nov 12, 2024)
- `aa63390` docs (eli-bl, Nov 12, 2024)
- `640d1d0` Merge branch 'live-generated-code-tests' into generated-code-tests-bl (eli-bl, Nov 12, 2024)
2 changes: 1 addition & 1 deletion .github/workflows/checks.yml
```diff
@@ -11,7 +11,7 @@ jobs:
   test:
     strategy:
       matrix:
-        python: [ "3.8", "3.9", "3.10", "3.11", "3.12", "3.13" ]
+        python: [ "3.9", "3.10", "3.11", "3.12", "3.13" ]
         os: [ ubuntu-latest, macos-latest, windows-latest ]
     runs-on: ${{ matrix.os }}
     steps:
```

eli-bl (author) commented on this change:

> We'll be dropping 3.8 anyway, but I removed it from the CI test matrix here because some of the stuff in the new tests (like assertions about type annotations) is a lot harder to do in 3.8.
30 changes: 22 additions & 8 deletions CONTRIBUTING.md
```diff
@@ -50,26 +50,40 @@ All changes must be tested, I recommend writing the test first, then writing the
 
 If you think that some of the added code is not testable (or testing it would add little value), mention that in your PR and we can discuss it.
 
-1. If you're adding support for a new OpenAPI feature or covering a new edge case, add an [end-to-end test](#end-to-end-tests)
-2. If you're modifying the way an existing feature works, make sure an existing test generates the _old_ code in `end_to_end_tests/golden-record`. You'll use this to check for the new code once your changes are complete.
-3. If you're improving an error or adding a new error, add a [unit test](#unit-tests)
+1. If you're adding support for a new OpenAPI feature or covering a new edge case, add [functional tests](#functional-tests), and optionally an [end-to-end snapshot test](#end-to-end-snapshot-tests).
+2. If you're modifying the way an existing feature works, make sure functional tests cover this case. Existing end-to-end snapshot tests might also be affected if you have changed what generated model/endpoint code looks like.
+3. If you're improving error handling or adding a new error, add [functional tests](#functional-tests).
+4. For tests of low-level pieces of code that are fairly self-contained, and not tightly coupled to other internal implementation details, you can use regular [unit tests](#unit-tests).
 
-#### End-to-end tests
+#### End-to-end snapshot tests
 
-This project aims to have all "happy paths" (types of code which _can_ be generated) covered by end to end tests (snapshot tests). In order to check code changes against the previous set of snapshots (called a "golden record" here), you can run `pdm e2e`. To regenerate the snapshots, run `pdm regen`.
+This project aims to have all "happy paths" (types of code which _can_ be generated) covered by end-to-end tests. There are two types of these: snapshot tests and functional tests.
 
-There are 4 types of snapshots generated right now, you may have to update only some or all of these depending on the changes you're making. Within the `end_to_end_tets` directory:
+Snapshot tests verify that the generated code is identical to a previously-committed set of snapshots (called a "golden record" here). They are basically regression tests to catch any unintended changes in the generator output.
+
+To check code changes against the previous set of snapshots, run `pdm e2e`. To regenerate the snapshots, run `pdm regen`.
+
+There are 4 types of snapshots generated right now; you may have to update only some or all of these depending on the changes you're making. Within the `end_to_end_tests` directory:
 
 1. `baseline_openapi_3.0.json` creates `golden-record` for testing OpenAPI 3.0 features
 2. `baseline_openapi_3.1.yaml` is checked against `golden-record` for testing OpenAPI 3.1 features (and ensuring consistency with 3.0)
 3. `test_custom_templates` are used with `baseline_openapi_3.0.json` to generate `custom-templates-golden-record` for testing custom templates
 4. `3.1_specific.openapi.yaml` is used to generate `test-3-1-golden-record` and test 3.1-specific features (things which do not have a 3.0 equivalent)
 
+#### Functional tests
+
+These are black-box tests that verify the runtime behavior of generated code, as well as the generator's validation behavior. They are also end-to-end tests, since they run the generator as a shell command.
+
+These tests can sometimes identify issues with error handling, validation logic, module imports, etc., that might be harder to diagnose via the snapshot tests, especially during development of a new feature. For instance, they can verify that JSON data is correctly decoded into model class attributes, or that the generator will emit an appropriate warning or error for an invalid spec.
+
+See [`end_to_end_tests/functional_tests`](./end_to_end_tests/functional_tests).
+
 #### Unit tests
 
-> **NOTE**: Several older-style unit tests using mocks exist in this project. These should be phased out rather than updated, as the tests are brittle and difficult to maintain. Only error cases should be tests with unit tests going forward.
+These include:
 
-In some cases, we need to test things which cannot be generated—like validating that errors are caught and handled correctly. These should be tested via unit tests in the `tests` directory, using the `pytest` framework.
+* Regular unit tests of basic pieces of fairly self-contained low-level functionality, such as helper functions. These are implemented in the `tests` directory, using the `pytest` framework.
+* Older-style unit tests of low-level functions like `property_from_data` that have complex behavior. These are brittle and difficult to maintain, and should not be used going forward. Instead, they should be migrated to functional tests.
 
 ### Creating a Pull Request
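To make the snapshot/functional distinction concrete, here is a sketch of a functional test in the style this PR introduces; it uses the fixture decorators documented in the README later in this PR, but the schema and test names are illustrative, and it complements the encoding example shown there:

```python
@with_generated_client_fixture(
    """
    components:
      schemas:
        Thing:
          type: object
          properties:
            name: {"type": "string"}
    """)
@with_generated_code_import(".models.Thing")
class TestThingDecoding:
    def test_decode(self, Thing):
        # Assert on runtime behavior (JSON decoded into model attributes),
        # not on the text of the generated source.
        instance = Thing.from_dict({"name": "a"})
        assert instance.name == "a"
```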
4 changes: 4 additions & 0 deletions end_to_end_tests/__init__.py
```diff
@@ -1 +1,5 @@
 """ Generate a complete client and verify that it is correct """
+import pytest
+
+pytest.register_assert_rewrite("end_to_end_tests.end_to_end_test_helpers")
+pytest.register_assert_rewrite("end_to_end_tests.functional_tests.helpers")
```
50 changes: 0 additions & 50 deletions end_to_end_tests/baseline_openapi_3.0.json
```diff
@@ -2858,56 +2858,6 @@
       "ModelWithBackslashInDescription": {
         "type": "object",
         "description": "Description with special character: \\"
       },
-      "ModelWithDiscriminatedUnion": {
-        "type": "object",
-        "properties": {
-          "discriminated_union": {
-            "allOf": [
-              {
-                "$ref": "#/components/schemas/ADiscriminatedUnion"
-              }
-            ],
-            "nullable": true
-          }
-        }
-      },
-      "ADiscriminatedUnion": {
-        "type": "object",
-        "discriminator": {
-          "propertyName": "modelType",
-          "mapping": {
-            "type1": "#/components/schemas/ADiscriminatedUnionType1",
-            "type2": "#/components/schemas/ADiscriminatedUnionType2",
-            "type2-another-value": "#/components/schemas/ADiscriminatedUnionType2"
-          }
-        },
-        "oneOf": [
-          {
-            "$ref": "#/components/schemas/ADiscriminatedUnionType1"
-          },
-          {
-            "$ref": "#/components/schemas/ADiscriminatedUnionType2"
-          }
-        ]
-      },
-      "ADiscriminatedUnionType1": {
-        "type": "object",
-        "properties": {
-          "modelType": {
-            "type": "string"
-          }
-        },
-        "required": ["modelType"]
-      },
-      "ADiscriminatedUnionType2": {
-        "type": "object",
-        "properties": {
-          "modelType": {
-            "type": "string"
-          }
-        },
-        "required": ["modelType"]
-      }
     },
     "parameters": {
```

eli-bl (author) commented on this change:

> Here and in the next file, and in golden_record, I've removed some schemas that no longer need to be in the golden-record-based tests. There are now better equivalent tests in test_unions.py.
52 changes: 0 additions & 52 deletions end_to_end_tests/baseline_openapi_3.1.yaml
```diff
@@ -2850,58 +2850,6 @@ info:
   "ModelWithBackslashInDescription": {
     "type": "object",
     "description": "Description with special character: \\"
   },
-  "ModelWithDiscriminatedUnion": {
-    "type": "object",
-    "properties": {
-      "discriminated_union": {
-        "oneOf": [
-          {
-            "$ref": "#/components/schemas/ADiscriminatedUnion"
-          },
-          {
-            "type": "null"
-          }
-        ],
-      }
-    }
-  },
-  "ADiscriminatedUnion": {
-    "type": "object",
-    "discriminator": {
-      "propertyName": "modelType",
-      "mapping": {
-        "type1": "#/components/schemas/ADiscriminatedUnionType1",
-        "type2": "#/components/schemas/ADiscriminatedUnionType2",
-        "type2-another-value": "#/components/schemas/ADiscriminatedUnionType2"
-      }
-    },
-    "oneOf": [
-      {
-        "$ref": "#/components/schemas/ADiscriminatedUnionType1"
-      },
-      {
-        "$ref": "#/components/schemas/ADiscriminatedUnionType2"
-      }
-    ]
-  },
-  "ADiscriminatedUnionType1": {
-    "type": "object",
-    "properties": {
-      "modelType": {
-        "type": "string"
-      }
-    },
-    "required": ["modelType"]
-  },
-  "ADiscriminatedUnionType2": {
-    "type": "object",
-    "properties": {
-      "modelType": {
-        "type": "string"
-      }
-    },
-    "required": ["modelType"]
-  }
 }
 "parameters": {
```
75 changes: 75 additions & 0 deletions end_to_end_tests/functional_tests/README.md
@@ -0,0 +1,75 @@
## The `functional_tests` module

These are end-to-end tests which run the client generator against many small API documents that are specific to various test cases.

Rather than testing low-level implementation details (like the unit tests in `tests`), or making assertions about the exact content of the generated code (like the "golden record"-based end-to-end tests), these treat both the generator and the generated code as black boxes and make assertions about their behavior.

The tests are in two submodules:

### `generated_code_execution`

These tests use valid API specs, and after running the generator, they _import and execute_ pieces of the generated code to verify that it actually works at runtime.

Each test class follows this pattern:

- Use the decorator `@with_generated_client_fixture`, providing an inline API spec (JSON or YAML) that contains whatever schemas/paths/etc. are relevant to this test class.
- The spec can omit the `openapi:`, `info:`, and `paths:` blocks, unless those are relevant to the test.
- The decorator creates a temporary file for the inline spec and a temporary directory for the generated code, and runs the client generator.
- It creates a `GeneratedClientContext` object (defined in `end_to_end_test_helpers.py`) to keep track of things like the location of the generated code and the output of the generator command.
- This object is injected into the test class as a fixture called `generated_client`, although most tests will not need to reference the fixture directly.
- `sys.path` is temporarily changed, for the scope of this test class, to allow imports from the generated code.
- Use the decorator `@with_generated_code_imports` or `@with_generated_code_import` to make classes or functions from the generated code available to the tests.
- `@with_generated_code_imports(".models.MyModel1", ".models.MyModel2")` would execute `from [package name].models import MyModel1, MyModel2` and inject the imported classes into the test class as fixtures called `MyModel1` and `MyModel2`.
- `@with_generated_code_import(".api.my_operation.sync", alias="endpoint_method")` would execute `from [package name].api.my_operation import sync`, but the fixture would be named `endpoint_method`.
- After the test class finishes, these imports are discarded.

Example:

```python
@with_generated_client_fixture(
"""
components:
schemas:
MyModel:
type: object
properties:
stringProp: {"type": "string"}
""")
@with_generated_code_import(".models.MyModel")
class TestSimpleJsonObject:
def test_encoding(self, MyModel):
instance = MyModel(string_prop="abc")
assert instance.to_dict() == {"stringProp": "abc"}
```
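Under the hood, a fixture decorator like `@with_generated_client_fixture` can be built from a temporary directory, a subprocess call to the generator, and a scoped `sys.path` change. The following is a simplified sketch of that mechanism, not the project's actual implementation (which also handles the generated project's directory layout and captures the generator's output):

```python
import subprocess
import sys
import tempfile
from pathlib import Path

import pytest


def with_generated_client_fixture(spec: str):
    """Generate a client from an inline spec and expose it to a test class."""
    def decorator(cls):
        @pytest.fixture(scope="class", autouse=True)
        def generated_client(request):
            with tempfile.TemporaryDirectory() as tmp:
                spec_path = Path(tmp) / "openapi.yaml"
                spec_path.write_text(spec)
                # Run the generator as a shell command (black-box).
                subprocess.run(
                    ["openapi-python-client", "generate", "--path", str(spec_path)],
                    cwd=tmp,
                    check=True,
                )
                sys.path.insert(0, tmp)  # make the generated package importable
                try:
                    yield tmp
                finally:
                    sys.path.remove(tmp)  # discard generated-code imports

        cls.generated_client = generated_client
        return cls
    return decorator
```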

### `generator_failure_cases`

These run the generator with an invalid API spec and make assertions about the warning/error output. Some of these invalid conditions are expected to only produce warnings about the affected schemas, while others are expected to produce fatal errors that terminate the generator.

For warning conditions, each test class uses `@with_generated_client_fixture` as above, then uses `assert_bad_schema` to parse the output and check for a specific warning message for a specific schema name.

```python
@with_generated_client_fixture(
"""
components:
schemas:
MyModel:
# some kind of invalid schema
""")
class TestBadSchema:
def test_encoding(self, generated_client):
assert_bad_schema(generated_client, "MyModel", "some expected warning text")
```
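`assert_bad_schema` can be thought of as a scan over the generator's captured output; a plausible simplified version (the attribute names here are illustrative, not the actual helper's API):

```python
def assert_bad_schema(generated_client, schema_name: str, expected_message: str) -> None:
    # The generator warns about each schema it could not process; check that
    # this schema was flagged and that the warning contains the expected text.
    output = generated_client.generator_result.output  # illustrative attribute
    assert f"/components/schemas/{schema_name}" in output
    assert expected_message in output
```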

Or, for fatal error conditions, call `inline_spec_should_fail`, providing an inline API spec (JSON or YAML):

```python
class TestBadSpec:
def test_some_spec_error(self):
result = inline_spec_should_fail("""
# some kind of invalid spec
""")
assert "some expected error text" in result.output
```