* add tests that verify actual behavior of generated code
* documentation
* make assertion error messages work correctly
* misc improvements, test error conditions, remove redundant unit tests
* misc improvements + remove redundant unit tests
* restore some missing test coverage
* add tests against generated code
* generated code tests for discriminators
* rm test file, rm Python 3.8
* run new tests
* don't run tests in 3.8 because type hints don't work the same
* make sure all tests get run
* cover another error case
* cover another error case
* reorganize
* rm test file
* more discriminator tests
* rm unused
* reorganize
* coverage
* docs
**CONTRIBUTING.md** (+22 −8)
All changes must be tested; I recommend writing the test first, then writing the code to satisfy it.
If you think that some of the added code is not testable (or testing it would add little value), mention that in your PR and we can discuss it.

1. If you're adding support for a new OpenAPI feature or covering a new edge case, add [functional tests](#functional-tests), and optionally an [end-to-end snapshot test](#end-to-end-snapshot-tests).
2. If you're modifying the way an existing feature works, make sure functional tests cover this case. Existing end-to-end snapshot tests might also be affected if you have changed what generated model/endpoint code looks like.
3. If you're improving error handling or adding a new error, add [functional tests](#functional-tests).
4. For tests of low-level pieces of code that are fairly self-contained and not tightly coupled to other internal implementation details, you can use regular [unit tests](#unit-tests).
#### End-to-end snapshot tests

This project aims to have all "happy paths" (types of code which _can_ be generated) covered by end-to-end tests. There are two types of these: snapshot tests and functional tests.

Snapshot tests verify that the generated code is identical to a previously committed set of snapshots (called a "golden record" here). They are essentially regression tests that catch any unintended changes in the generator output.

To check code changes against the golden record, run `pdm e2e`. To regenerate the snapshots, run `pdm regen`.

There are four types of snapshots generated right now; you may have to update only some or all of them depending on the changes you're making. Within the `end_to_end_tests` directory:

1. `baseline_openapi_3.0.json` creates `golden-record` for testing OpenAPI 3.0 features.
2. `baseline_openapi_3.1.yaml` is checked against `golden-record` for testing OpenAPI 3.1 features (and ensuring consistency with 3.0).
3. `test_custom_templates` are used with `baseline_openapi_3.0.json` to generate `custom-templates-golden-record` for testing custom templates.
4. `3.1_specific.openapi.yaml` is used to generate `test-3-1-golden-record` and test 3.1-specific features (things which do not have a 3.0 equivalent).
#### Functional tests

These are black-box tests that verify the runtime behavior of generated code, as well as the generator's validation behavior. They are also end-to-end tests, since they run the generator as a shell command.

They can sometimes identify issues with error handling, validation logic, module imports, etc. that might be harder to diagnose via the snapshot tests, especially during development of a new feature. For instance, they can verify that JSON data is correctly decoded into model class attributes, or that the generator emits an appropriate warning or error for an invalid spec.

See [`end_to_end_tests/functional_tests`](./end_to_end_tests/functional_tests).
#### Unit tests

These include:

* Regular unit tests of basic pieces of fairly self-contained, low-level functionality, such as helper functions. These are implemented in the `tests` directory, using the `pytest` framework (see the sketch below).
* Older-style unit tests of low-level functions like `property_from_data` that have complex behavior. These are brittle and difficult to maintain, and should not be used going forward; instead, they should be migrated to functional tests.
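For illustration, a regular unit test of a self-contained helper might look like this minimal sketch; the `snake_case` helper and its import path are illustrative assumptions, not necessarily the project's actual API:

```python
# Minimal sketch of a regular unit test in the `tests` directory.
# The helper name and import path below are illustrative assumptions.
from openapi_python_client import utils


def test_snake_case_converts_class_names():
    assert utils.snake_case("MyModelName") == "my_model_name"
```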
---

These are end-to-end tests which run the client generator against many small API documents that are specific to various test cases.

Rather than testing low-level implementation details (like the unit tests in `tests`), or making assertions about the exact content of the generated code (like the "golden record"-based end-to-end tests), these treat both the generator and the generated code as black boxes and make assertions about their behavior.

The tests are in two submodules:
# `generated_code_execution`

These tests use valid API specs, and after running the generator, they _import and execute_ pieces of the generated code to verify that it actually works at runtime.

Each test class follows this pattern (see the sketch after this list):

- Use the decorator `@with_generated_client_fixture`, providing an inline API spec (JSON or YAML) that contains whatever schemas/paths/etc. are relevant to this test class.
  - The spec can omit the `openapi:`, `info:`, and `paths:` blocks, unless those are relevant to the test.
  - The decorator creates a temporary file for the inline spec and a temporary directory for the generated code, and runs the client generator.
  - It creates a `GeneratedClientContext` object (defined in `end_to_end_test_helpers.py`) to keep track of things like the location of the generated code and the output of the generator command.
  - This object is injected into the test class as a fixture called `generated_client`, although most tests will not need to reference the fixture directly.
  - `sys.path` is temporarily changed, for the scope of this test class, to allow imports from the generated code.
- Use the decorator `@with_generated_code_imports` or `@with_generated_code_import` to make classes or functions from the generated code available to the tests.
  - `@with_generated_code_imports(".models.MyModel1", ".models.MyModel2")` would execute `from [package name].models import MyModel1, MyModel2` and inject the imported classes into the test class as fixtures called `MyModel1` and `MyModel2`.
  - `@with_generated_code_import(".api.my_operation.sync", alias="endpoint_method")` would execute `from [package name].api.my_operation import sync`, but the fixture would be named `endpoint_method`.
  - After the test class finishes, these imports are discarded.
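As a concrete illustration, a test class in this submodule might look like the following sketch. The spec body, model name, and assertions are hypothetical, and the import path for the decorators and the `from_dict` method on generated models are assumptions; the decorator and fixture behavior are as described above:

```python
# A minimal sketch; the import path below is an assumption (the decorators
# are defined alongside the test helpers in end_to_end_tests).
from end_to_end_tests.end_to_end_test_helpers import (
    with_generated_client_fixture,
    with_generated_code_imports,
)


@with_generated_client_fixture(
    # Inline spec; the openapi:/info:/paths: blocks are omitted because
    # they are not relevant to this test.
    """
components:
  schemas:
    MyModel:
      type: object
      properties:
        name:
          type: string
"""
)
@with_generated_code_imports(".models.MyModel")
class TestMyModelDeserialization:
    def test_decodes_json_into_attributes(self, MyModel):
        # MyModel is the generated model class, injected as a fixture.
        # from_dict as the deserialization entry point is an assumption here.
        instance = MyModel.from_dict({"name": "a"})
        assert instance.name == "a"
```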
These run the generator with an invalid API spec and make assertions about the warning/error output. Some of these invalid conditions are expected to only produce warnings about the affected schemas, while others are expected to produce fatal errors that terminate the generator.
For warning conditions, each test class uses `@with_generated_client_fixture` as above, then uses `assert_bad_schema` to parse the output and check for a specific warning message for a specific schema name.
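A warning-condition test might look like the following sketch; the invalid schema, the warning text, and the exact signature and import location of `assert_bad_schema` are all assumptions:

```python
# Minimal sketch of a warning-condition test. The schema is deliberately
# invalid; the import path and signature of assert_bad_schema are assumptions.
from end_to_end_tests.end_to_end_test_helpers import (
    assert_bad_schema,
    with_generated_client_fixture,
)


@with_generated_client_fixture(
    """
components:
  schemas:
    BadModel:
      type: not-a-real-type
"""
)
class TestBadSchemaWarning:
    def test_warns_about_invalid_type(self, generated_client):
        # Check the generator output for a warning tied to this schema name.
        assert_bad_schema(generated_client, "BadModel", "unrecognized type")
```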