Commit 847d5a6

add tests against generated code (#217)
* add tests that verify actual behavior of generated code
* documentation
* make assertion error messages work correctly
* misc improvements, test error conditions, remove redundant unit tests
* misc improvements + remove redundant unit tests
* restore some missing test coverage
* add tests against generated code
* generated code tests for discriminators
* rm test file, rm Python 3.8
* run new tests
* don't run tests in 3.8 because type hints don't work the same
* make sure all tests get run
* cover another error case
* cover another error case
* reorganize
* rm test file
* more discriminator tests
* rm unused
* reorganize
* coverage
* docs
1 parent 1d51601 commit 847d5a6

40 files changed: +2177, -2041 lines

.github/workflows/checks.yml

+1 -1

@@ -11,7 +11,7 @@ jobs:
   test:
     strategy:
       matrix:
-        python: [ "3.8", "3.9", "3.10", "3.11", "3.12", "3.13" ]
+        python: [ "3.9", "3.10", "3.11", "3.12", "3.13" ]
         os: [ ubuntu-latest, macos-latest, windows-latest ]
     runs-on: ${{ matrix.os }}
     steps:
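
The commit message attributes dropping 3.8 to "type hints don't work the same" on that version. As a hedged illustration only (not code from this repository, and not necessarily the project's actual blocker), one such difference is that PEP 585 built-in generics like `list[str]` only became subscriptable in Python 3.9, so resolving annotations that use them fails at runtime on 3.8:

```python
# Illustrative sketch; the project's real 3.8 incompatibilities may differ.
from __future__ import annotations  # keeps annotations as unevaluated strings

import sys
import typing


def parse_tags(raw: list[str]) -> list[str]:
    return [tag.strip() for tag in raw]


if sys.version_info >= (3, 9):
    # PEP 585 generics resolve fine from 3.9 onward.
    print(typing.get_type_hints(parse_tags))
else:
    # On 3.8, evaluating "list[str]" subscripts the builtin list type and
    # raises TypeError: 'type' object is not subscriptable.
    try:
        typing.get_type_hints(parse_tags)
    except TypeError as exc:
        print(f"not supported on 3.8: {exc}")
```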

CONTRIBUTING.md

+22 -8

@@ -50,26 +50,40 @@ All changes must be tested, I recommend writing the test first, then writing the

 If you think that some of the added code is not testable (or testing it would add little value), mention that in your PR and we can discuss it.

-1. If you're adding support for a new OpenAPI feature or covering a new edge case, add an [end-to-end test](#end-to-end-tests)
-2. If you're modifying the way an existing feature works, make sure an existing test generates the _old_ code in `end_to_end_tests/golden-record`. You'll use this to check for the new code once your changes are complete.
-3. If you're improving an error or adding a new error, add a [unit test](#unit-tests)
+1. If you're adding support for a new OpenAPI feature or covering a new edge case, add [functional tests](#functional-tests), and optionally an [end-to-end snapshot test](#end-to-end-snapshot-tests).
+2. If you're modifying the way an existing feature works, make sure functional tests cover this case. Existing end-to-end snapshot tests might also be affected if you have changed what generated model/endpoint code looks like.
+3. If you're improving error handling or adding a new error, add [functional tests](#functional-tests).
+4. For tests of low-level pieces of code that are fairly self-contained, and not tightly coupled to other internal implementation details, you can use regular [unit tests](#unit-tests).

-#### End-to-end tests
+#### End-to-end snapshot tests

-This project aims to have all "happy paths" (types of code which _can_ be generated) covered by end to end tests (snapshot tests). In order to check code changes against the previous set of snapshots (called a "golden record" here), you can run `pdm e2e`. To regenerate the snapshots, run `pdm regen`.
+This project aims to have all "happy paths" (types of code which _can_ be generated) covered by end-to-end tests. There are two types of these: snapshot tests and functional tests.

-There are 4 types of snapshots generated right now, you may have to update only some or all of these depending on the changes you're making. Within the `end_to_end_tets` directory:
+Snapshot tests verify that the generated code is identical to a previously committed set of snapshots (called a "golden record" here). They are essentially regression tests that catch any unintended changes in the generator output.
+
+To check code changes against the previous set of snapshots, run `pdm e2e`. To regenerate the snapshots, run `pdm regen`.
+
+There are 4 types of snapshots generated right now; you may have to update only some or all of these depending on the changes you're making. Within the `end_to_end_tests` directory:

 1. `baseline_openapi_3.0.json` creates `golden-record` for testing OpenAPI 3.0 features
 2. `baseline_openapi_3.1.yaml` is checked against `golden-record` for testing OpenAPI 3.1 features (and ensuring consistency with 3.0)
 3. `test_custom_templates` are used with `baseline_openapi_3.0.json` to generate `custom-templates-golden-record` for testing custom templates
 4. `3.1_specific.openapi.yaml` is used to generate `test-3-1-golden-record` and test 3.1-specific features (things which do not have a 3.0 equivalent)

+#### Functional tests
+
+These are black-box tests that verify the runtime behavior of generated code, as well as the generator's validation behavior. They are also end-to-end tests, since they run the generator as a shell command.
+
+This can sometimes identify issues with error handling, validation logic, module imports, etc., that might be harder to diagnose via the snapshot tests, especially during development of a new feature. For instance, they can verify that JSON data is correctly decoded into model class attributes, or that the generator will emit an appropriate warning or error for an invalid spec.
+
+See [`end_to_end_tests/functional_tests`](./end_to_end_tests/functional_tests).
+
 #### Unit tests

-> **NOTE**: Several older-style unit tests using mocks exist in this project. These should be phased out rather than updated, as the tests are brittle and difficult to maintain. Only error cases should be tests with unit tests going forward.
+These include:

-In some cases, we need to test things which cannot be generated—like validating that errors are caught and handled correctly. These should be tested via unit tests in the `tests` directory, using the `pytest` framework.
+* Regular unit tests of basic pieces of fairly self-contained low-level functionality, such as helper functions. These are implemented in the `tests` directory, using the `pytest` framework.
+* Older-style unit tests of low-level functions like `property_from_data` that have complex behavior. These are brittle and difficult to maintain, and should not be used going forward. Instead, they should be migrated to functional tests.

 ### Creating a Pull Request

end_to_end_tests/__init__.py

+4

@@ -1 +1,5 @@
 """ Generate a complete client and verify that it is correct """
+import pytest
+
+pytest.register_assert_rewrite("end_to_end_tests.end_to_end_test_helpers")
+pytest.register_assert_rewrite("end_to_end_tests.functional_tests.helpers")
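
For context on these `register_assert_rewrite` calls (and the commit note about making assertion error messages work correctly): by default pytest rewrites `assert` statements only in the test modules it collects, so shared helper modules must be registered before they are first imported in order to get the same detailed failure output. A minimal sketch, using a hypothetical helper module name:

```python
# Hypothetical illustration, not part of this commit: register a helper module
# for pytest's assertion rewriting. The call must run before the module is
# imported, which is why the calls above live in a package __init__.py.
import pytest

pytest.register_assert_rewrite("my_test_helpers")

# A bare assert inside my_test_helpers, e.g.
#
#     def assert_roundtrips(model_cls, data):
#         assert model_cls.from_dict(data).to_dict() == data
#
# will then report the mismatched dict values on failure instead of a plain,
# detail-free AssertionError.
```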

end_to_end_tests/baseline_openapi_3.0.json

-50

@@ -2858,56 +2858,6 @@
       "ModelWithBackslashInDescription": {
         "type": "object",
         "description": "Description with special character: \\"
-      },
-      "ModelWithDiscriminatedUnion": {
-        "type": "object",
-        "properties": {
-          "discriminated_union": {
-            "allOf": [
-              {
-                "$ref": "#/components/schemas/ADiscriminatedUnion"
-              }
-            ],
-            "nullable": true
-          }
-        }
-      },
-      "ADiscriminatedUnion": {
-        "type": "object",
-        "discriminator": {
-          "propertyName": "modelType",
-          "mapping": {
-            "type1": "#/components/schemas/ADiscriminatedUnionType1",
-            "type2": "#/components/schemas/ADiscriminatedUnionType2",
-            "type2-another-value": "#/components/schemas/ADiscriminatedUnionType2"
-          }
-        },
-        "oneOf": [
-          {
-            "$ref": "#/components/schemas/ADiscriminatedUnionType1"
-          },
-          {
-            "$ref": "#/components/schemas/ADiscriminatedUnionType2"
-          }
-        ]
-      },
-      "ADiscriminatedUnionType1": {
-        "type": "object",
-        "properties": {
-          "modelType": {
-            "type": "string"
-          }
-        },
-        "required": ["modelType"]
-      },
-      "ADiscriminatedUnionType2": {
-        "type": "object",
-        "properties": {
-          "modelType": {
-            "type": "string"
-          }
-        },
-        "required": ["modelType"]
       }
     },
     "parameters": {

end_to_end_tests/baseline_openapi_3.1.yaml

-52

@@ -2850,58 +2850,6 @@ info:
       "ModelWithBackslashInDescription": {
         "type": "object",
         "description": "Description with special character: \\"
-      },
-      "ModelWithDiscriminatedUnion": {
-        "type": "object",
-        "properties": {
-          "discriminated_union": {
-            "oneOf": [
-              {
-                "$ref": "#/components/schemas/ADiscriminatedUnion"
-              },
-              {
-                "type": "null"
-              }
-            ],
-          }
-        }
-      },
-      "ADiscriminatedUnion": {
-        "type": "object",
-        "discriminator": {
-          "propertyName": "modelType",
-          "mapping": {
-            "type1": "#/components/schemas/ADiscriminatedUnionType1",
-            "type2": "#/components/schemas/ADiscriminatedUnionType2",
-            "type2-another-value": "#/components/schemas/ADiscriminatedUnionType2"
-          }
-        },
-        "oneOf": [
-          {
-            "$ref": "#/components/schemas/ADiscriminatedUnionType1"
-          },
-          {
-            "$ref": "#/components/schemas/ADiscriminatedUnionType2"
-          }
-        ]
-      },
-      "ADiscriminatedUnionType1": {
-        "type": "object",
-        "properties": {
-          "modelType": {
-            "type": "string"
-          }
-        },
-        "required": ["modelType"]
-      },
-      "ADiscriminatedUnionType2": {
-        "type": "object",
-        "properties": {
-          "modelType": {
-            "type": "string"
-          }
-        },
-        "required": ["modelType"]
       }
     }
     "parameters": {

New file: +75

@@ -0,0 +1,75 @@
## The `functional_tests` module

These are end-to-end tests which run the client generator against many small API documents that are specific to various test cases.

Rather than testing low-level implementation details (like the unit tests in `tests`), or making assertions about the exact content of the generated code (like the "golden record"-based end-to-end tests), these treat both the generator and the generated code as black boxes and make assertions about their behavior.

The tests are in two submodules:

### `generated_code_execution`

These tests use valid API specs, and after running the generator, they _import and execute_ pieces of the generated code to verify that it actually works at runtime.

Each test class follows this pattern:

- Use the decorator `@with_generated_client_fixture`, providing an inline API spec (JSON or YAML) that contains whatever schemas/paths/etc. are relevant to this test class.
  - The spec can omit the `openapi:`, `info:`, and `paths:` blocks, unless those are relevant to the test.
  - The decorator creates a temporary file for the inline spec and a temporary directory for the generated code, and runs the client generator.
  - It creates a `GeneratedClientContext` object (defined in `end_to_end_test_helpers.py`) to keep track of things like the location of the generated code and the output of the generator command.
  - This object is injected into the test class as a fixture called `generated_client`, although most tests will not need to reference the fixture directly.
  - `sys.path` is temporarily changed, for the scope of this test class, to allow imports from the generated code.
- Use the decorator `@with_generated_code_imports` or `@with_generated_code_import` to make classes or functions from the generated code available to the tests.
  - `@with_generated_code_imports(".models.MyModel1", ".models.MyModel2")` would execute `from [package name].models import MyModel1, MyModel2` and inject the imported classes into the test class as fixtures called `MyModel1` and `MyModel2`.
  - `@with_generated_code_import(".api.my_operation.sync", alias="endpoint_method")` would execute `from [package name].api.my_operation import sync`, but the fixture would be named `endpoint_method`.
  - After the test class finishes, these imports are discarded.

Example:

```python
@with_generated_client_fixture(
"""
components:
  schemas:
    MyModel:
      type: object
      properties:
        stringProp: {"type": "string"}
""")
@with_generated_code_import(".models.MyModel")
class TestSimpleJsonObject:
    def test_encoding(self, MyModel):
        instance = MyModel(string_prop="abc")
        assert instance.to_dict() == {"stringProp": "abc"}
```
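
As a companion to the example above, here is a hedged sketch (not part of the original README) that uses the plural `@with_generated_code_imports` form and checks decoding via the generated models' `from_dict` method; the `AnotherModel` schema and the test class name are made up for illustration:

```python
@with_generated_client_fixture(
"""
components:
  schemas:
    MyModel:
      type: object
      properties:
        stringProp: {"type": "string"}
    AnotherModel:
      type: object
      properties:
        intProp: {"type": "integer"}
""")
@with_generated_code_imports(".models.MyModel", ".models.AnotherModel")
class TestSimpleJsonObjectDecoding:
    def test_decoding(self, MyModel, AnotherModel):
        # from_dict is the inverse of to_dict on generated models
        assert MyModel.from_dict({"stringProp": "abc"}) == MyModel(string_prop="abc")
        assert AnotherModel.from_dict({"intProp": 3}) == AnotherModel(int_prop=3)
```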

### `generator_failure_cases`

These run the generator with an invalid API spec and make assertions about the warning/error output. Some of these invalid conditions are expected to produce only warnings about the affected schemas, while others are expected to produce fatal errors that terminate the generator.

For warning conditions, each test class uses `@with_generated_client_fixture` as above, then uses `assert_bad_schema` to parse the output and check for a specific warning message about a specific schema name.

```python
@with_generated_client_fixture(
"""
components:
  schemas:
    MyModel:
      # some kind of invalid schema
""")
class TestBadSchema:
    def test_warning(self, generated_client):
        assert_bad_schema(generated_client, "MyModel", "some expected warning text")
```

For fatal error conditions, call `inline_spec_should_fail` instead, providing an inline API spec (JSON or YAML):

```python
class TestBadSpec:
    def test_some_spec_error(self):
        result = inline_spec_should_fail("""
# some kind of invalid spec
""")
        assert "some expected error text" in result.output
```
