There is a new set of tests that generate client code from an API document and then actually import and execute that code. See [`end_to_end_tests/generated_code_live_tests`](./end_to_end_tests/generated_code_live_tests) for more details.
This does not affect any runtime functionality of openapi-python-client.
From `CONTRIBUTING.md`:
2. If you're modifying the way an existing feature works, make sure an existing test generates the _old_ code in `end_to_end_tests/golden-record`. You'll use this to check for the new code once your changes are complete.
3. If you're improving an error or adding a new error, add a [unit test](#unit-tests).
#### End-to-end snapshot tests
This project aims to have all "happy paths" (types of code which _can_ be generated) covered by end-to-end tests. There are two types of these: snapshot tests, and unit tests of generated code.
Snapshot tests verify that the generated code is identical to a previously-committed set of snapshots (called a "golden record" here). They are basically regression tests to catch any unintended changes in the generator output.
To check code changes against the previous set of snapshots, run `pdm e2e`. To regenerate the snapshots, run `pdm regen`.

There are four types of snapshots generated right now; you may have to update some or all of these depending on the changes you're making. Within the `end_to_end_tests` directory:
1. `baseline_openapi_3.0.json` creates `golden-record` for testing OpenAPI 3.0 features
2. `baseline_openapi_3.1.yaml` is checked against `golden-record` for testing OpenAPI 3.1 features (and ensuring consistency with 3.0)
3. `test_custom_templates` are used with `baseline_openapi_3.0.json` to generate `custom-templates-golden-record` for testing custom templates
4. `3.1_specific.openapi.yaml` is used to generate `test-3-1-golden-record` and test 3.1-specific features (things which do not have a 3.0 equivalent)
#### Unit tests of generated code
These verify the runtime behavior of the generated code, without making assertions about the exact implementation of the code. For instance, they can verify that JSON data is correctly decoded into model class attributes.
The tests run the generator against a small API spec (defined inline for each test class), and then import and execute the generated code. This can sometimes identify issues with validation logic, module imports, etc., that might be harder to diagnose via the snapshot tests, especially during development of a new feature.
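A minimal, self-contained sketch of that import-and-execute flow. The `my_client` package and `MyModel` class below are hand-written stand-ins (a real run invokes the generator against a spec); only the mechanism — put a package on `sys.path`, import a model, decode JSON into its attributes — mirrors what the tests do:

```python
import json
import sys
import tempfile
from pathlib import Path

# Stand-in for generator output: a tiny "generated" model module.
# (Illustrative only; real model classes are produced by the generator.)
MODEL_SOURCE = '''\
from dataclasses import dataclass

@dataclass
class MyModel:
    name: str

    @classmethod
    def from_dict(cls, data):
        return cls(name=data["name"])
'''

# Write the fake package into a temporary directory.
tmp = Path(tempfile.mkdtemp())
pkg = tmp / "my_client"
(pkg / "models").mkdir(parents=True)
(pkg / "__init__.py").write_text("")
(pkg / "models" / "__init__.py").write_text(MODEL_SOURCE)

# Make the "generated" package importable, as the live tests do via sys.path.
sys.path.insert(0, str(tmp))
from my_client.models import MyModel

# Verify runtime behavior: JSON is decoded into model attributes.
model = MyModel.from_dict(json.loads('{"name": "spam"}'))
print(model.name)  # -> spam
```

A real test would make the same kind of assertion (`assert model.name == "spam"`) against genuinely generated code, which is how import errors and validation bugs surface at runtime rather than in a snapshot diff.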
See [`end_to_end_tests/generated_code_live_tests`](./end_to_end_tests/generated_code_live_tests).
#### Unit tests
> **NOTE**: Several older-style unit tests using mocks exist in this project. These should be phased out rather than updated, as the tests are brittle and difficult to maintain. Only error cases should be tested with unit tests going forward.
These are end-to-end tests which run the code generator command, but unlike the other tests in `end_to_end_tests`, they are also unit tests _of the behavior of the generated code_.
Each test class follows this pattern:
- Use the decorator `@with_generated_client_fixture`, providing an inline API spec (JSON or YAML) that contains whatever schemas/paths/etc. are relevant to this test class.
  - The spec can omit the `openapi:` and `info:` blocks, unless those are relevant to the test.
- The decorator creates a temporary file for the inline spec and a temporary directory for the generated code, and runs the client generator.
  - It creates a `GeneratedClientContext` object (defined in `end_to_end_test_helpers.py`) to keep track of things like the location of the generated code and the output of the generator command.
  - This object is injected into the test class as a fixture called `generated_client`, although most tests will not need to reference the fixture directly.
  - `sys.path` is temporarily changed, for the scope of this test class, to allow imports from the generated code.
- Use the decorator `@with_generated_code_import` to make classes or functions from the generated code available to the tests.
  - `@with_generated_code_import(".models.MyModel")` would execute `from [client package name].models import MyModel` and inject the imported object into the test class as a fixture called `MyModel`.
  - `@with_generated_code_import(".models.MyModel", alias="model1")` would do the same thing, but the fixture would be named `model1`.
  - After the test class finishes, these imports are discarded.
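The relative-import-and-alias behavior described in the last few bullets can be sketched with `importlib`. `generated_code_import` below is a hypothetical helper, not the project's actual implementation, and it is demonstrated against the stdlib `os` package since no generated client exists in this sketch:

```python
import importlib

def generated_code_import(dotted_path, package, alias=None):
    """Resolve a ".module.Attr" path relative to a package.

    Mimics the described behavior: ".models.MyModel" with package
    "my_client" acts like `from my_client.models import MyModel`.
    Returns (fixture_name, imported_object).
    """
    module_path, _, attr = dotted_path.rpartition(".")
    module = importlib.import_module(module_path, package)
    obj = getattr(module, attr)
    return (alias or attr, obj)

# Without an alias, the fixture name is the imported attribute's name:
name, obj = generated_code_import(".path.join", "os")
print(name)  # -> join

# With alias="join1", the same object is exposed under the alias:
name, _ = generated_code_import(".path.join", "os", alias="join1")
print(name)  # -> join1
```

The real fixture machinery additionally scopes the import to the test class and discards it afterwards; this sketch covers only the name-resolution step.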