
add tests that verify behavior of generated code + generator errors/warnings #1156


Closed
14 changes: 14 additions & 0 deletions .changeset/live_tests.md
@@ -0,0 +1,14 @@
---
default: minor
---

# Functional tests

Automated tests have been extended to include a new category of "functional" tests, in [`end_to_end_tests/functional_tests`](./end_to_end_tests/functional_tests). These are of two kinds:

1. Happy-path tests that run the generator from an inline API document and then actually import and execute the generated code.
2. Warning/error condition tests that run the generator from an inline API document that contains something invalid, and make assertions about the generator's output.

These provide more efficient and granular test coverage than the "golden record"-based end-to-end tests. Also, the low-level unit tests in `tests`, which are dependent on internal implementation details, can now in many cases be replaced by functional tests.

This does not affect any runtime functionality of openapi-python-client.
30 changes: 22 additions & 8 deletions CONTRIBUTING.md
@@ -50,26 +50,40 @@ All changes must be tested, I recommend writing the test first, then writing the

If you think that some of the added code is not testable (or testing it would add little value), mention that in your PR and we can discuss it.

1. If you're adding support for a new OpenAPI feature or covering a new edge case, add [functional tests](#functional-tests), and optionally an [end-to-end snapshot test](#end-to-end-snapshot-tests).
2. If you're modifying the way an existing feature works, make sure functional tests cover this case. Existing end-to-end snapshot tests might also be affected if you have changed what generated model/endpoint code looks like.
3. If you're improving error handling or adding a new error, add [functional tests](#functional-tests).
4. For tests of low-level pieces of code that are fairly self-contained, and not tightly coupled to other internal implementation details, you can use regular [unit tests](#unit-tests).

#### End-to-end snapshot tests

This project aims to have all "happy paths" (types of code which _can_ be generated) covered by end-to-end tests. There are two types of these: snapshot tests, and functional tests.

Snapshot tests verify that the generated code is identical to a previously-committed set of snapshots (called a "golden record" here). They are basically regression tests to catch any unintended changes in the generator output.

To check code changes against the previous set of snapshots, run `pdm e2e`. To regenerate the snapshots, run `pdm regen`.

There are currently four sets of snapshots; you may need to update some or all of them, depending on the changes you're making. Within the `end_to_end_tests` directory:

1. `baseline_openapi_3.0.json` creates `golden-record` for testing OpenAPI 3.0 features
2. `baseline_openapi_3.1.yaml` is checked against `golden-record` for testing OpenAPI 3.1 features (and ensuring consistency with 3.0)
3. `test_custom_templates` are used with `baseline_openapi_3.0.json` to generate `custom-templates-golden-record` for testing custom templates
4. `3.1_specific.openapi.yaml` is used to generate `test-3-1-golden-record` and test 3.1-specific features (things which do not have a 3.0 equivalent)

#### Functional tests

These are black-box tests that verify the runtime behavior of generated code, as well as the generator's validation behavior. They are also end-to-end tests, since they run the generator as a shell command.

These tests can identify issues with error handling, validation logic, module imports, etc., that are harder to diagnose via the snapshot tests, especially during development of a new feature. For instance, they can verify that JSON data is correctly decoded into model class attributes, or that the generator emits an appropriate warning or error for an invalid spec.

See [`end_to_end_tests/functional_tests`](./end_to_end_tests/functional_tests).
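For example, a happy-path functional test looks roughly like this (a minimal sketch: the `Widget` schema is made up for illustration, and the decorators are assumed to be importable from `end_to_end_tests.functional_tests.helpers`):

```python
from end_to_end_tests.functional_tests.helpers import (  # assumed location of the helpers
    with_generated_client_fixture,
    with_generated_code_import,
)


@with_generated_client_fixture(
    """
    components:
      schemas:
        Widget:
          type: object
          properties:
            displayName: {"type": "string"}
    """)
@with_generated_code_import(".models.Widget")
class TestWidgetModel:
    def test_round_trip(self, Widget):
        # The generated model should decode camelCase JSON keys into snake_case
        # attributes, and encode them back again.
        assert Widget.from_dict({"displayName": "w"}) == Widget(display_name="w")
        assert Widget(display_name="w").to_dict() == {"displayName": "w"}
```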

#### Unit tests

These include:

* Regular unit tests of self-contained, low-level functionality such as helper functions. These live in the `tests` directory and use the `pytest` framework.
* Older-style unit tests of low-level functions like `property_from_data` that have complex behavior. These are brittle and difficult to maintain, and should not be used going forward. Instead, they should be migrated to functional tests.
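
A new-style unit test might look like this (a minimal sketch, assuming the `snake_case` string helper in `openapi_python_client.utils` and its usual casing behavior):

```python
import pytest

from openapi_python_client import utils


@pytest.mark.parametrize(
    ("value", "expected"),
    [
        ("PascalCase", "pascal_case"),  # assumed output, for illustration
        ("already_snake", "already_snake"),
    ],
)
def test_snake_case(value, expected):
    # Pure input/output check of a helper function; no mocks, no internal state.
    assert utils.snake_case(value) == expected
```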

### Creating a Pull Request

4 changes: 4 additions & 0 deletions end_to_end_tests/__init__.py
@@ -1 +1,5 @@
""" Generate a complete client and verify that it is correct """
import pytest

pytest.register_assert_rewrite("end_to_end_tests.end_to_end_test_helpers")
pytest.register_assert_rewrite("end_to_end_tests.functional_tests.helpers")
75 changes: 75 additions & 0 deletions end_to_end_tests/functional_tests/README.md
@@ -0,0 +1,75 @@
## The `functional_tests` module

These are end-to-end tests which run the client generator against many small API documents that are specific to various test cases.

Rather than testing low-level implementation details (like the unit tests in `tests`), or making assertions about the exact content of the generated code (like the "golden record"-based end-to-end tests), these treat both the generator and the generated code as black boxes and make assertions about their behavior.

The tests are in two submodules:

### `generated_code_execution`

These tests use valid API specs, and after running the generator, they _import and execute_ pieces of the generated code to verify that it actually works at runtime.

Each test class follows this pattern:

- Use the decorator `@with_generated_client_fixture`, providing an inline API spec (JSON or YAML) that contains whatever schemas/paths/etc. are relevant to this test class.
  - The spec can omit the `openapi:`, `info:`, and `paths:` blocks, unless those are relevant to the test.
  - The decorator creates a temporary file for the inline spec and a temporary directory for the generated code, and runs the client generator.
  - It creates a `GeneratedClientContext` object (defined in `end_to_end_test_helpers.py`) to keep track of things like the location of the generated code and the output of the generator command.
  - This object is injected into the test class as a fixture called `generated_client`, although most tests will not need to reference the fixture directly.
  - `sys.path` is temporarily changed, for the scope of this test class, to allow imports from the generated code.
- Use the decorator `@with_generated_code_imports` or `@with_generated_code_import` to make classes or functions from the generated code available to the tests.
  - `@with_generated_code_imports(".models.MyModel1", ".models.MyModel2")` would execute `from [package name].models import MyModel1, MyModel2` and inject the imported classes into the test class as fixtures called `MyModel1` and `MyModel2`.
  - `@with_generated_code_import(".api.my_operation.sync", alias="endpoint_method")` would execute `from [package name].api.my_operation import sync`, but the fixture would be named `endpoint_method`.
  - After the test class finishes, these imports are discarded.

Example:

```python
@with_generated_client_fixture(
    """
    components:
      schemas:
        MyModel:
          type: object
          properties:
            stringProp: {"type": "string"}
    """)
@with_generated_code_import(".models.MyModel")
class TestSimpleJsonObject:
    def test_encoding(self, MyModel):
        instance = MyModel(string_prop="abc")
        assert instance.to_dict() == {"stringProp": "abc"}
```

### `generator_failure_cases`

These run the generator with an invalid API spec and make assertions about the warning/error output. Some of these invalid conditions are expected to only produce warnings about the affected schemas, while others are expected to produce fatal errors that terminate the generator.

For warning conditions, each test class uses `@with_generated_client_fixture` as above, then uses `assert_bad_schema` to parse the output and check for a specific warning message for a specific schema name.

```python
@with_generated_client_fixture(
    """
    components:
      schemas:
        MyModel:
          # some kind of invalid schema
    """)
class TestBadSchema:
    def test_bad_schema(self, generated_client):
        assert_bad_schema(generated_client, "MyModel", "some expected warning text")
```

Or, for fatal error conditions, call `inline_spec_should_fail`, providing an inline API spec (JSON or YAML):

```python
class TestBadSpec:
    def test_some_spec_error(self):
        result = inline_spec_should_fail("""
            # some kind of invalid spec
        """)
        assert "some expected error text" in result.output
```
@@ -0,0 +1,150 @@
from typing import Any, ForwardRef, Union

from end_to_end_tests.functional_tests.helpers import (
    assert_model_decode_encode,
    assert_model_property_type_hint,
    with_generated_client_fixture,
    with_generated_code_imports,
)


@with_generated_client_fixture(
    """
    components:
      schemas:
        SimpleObject:
          type: object
          properties:
            name: {"type": "string"}
        ModelWithArrayOfAny:
          properties:
            arrayProp:
              type: array
              items: {}
        ModelWithArrayOfInts:
          properties:
            arrayProp:
              type: array
              items: {"type": "integer"}
        ModelWithArrayOfObjects:
          properties:
            arrayProp:
              type: array
              items: {"$ref": "#/components/schemas/SimpleObject"}
    """)
@with_generated_code_imports(
    ".models.ModelWithArrayOfAny",
    ".models.ModelWithArrayOfInts",
    ".models.ModelWithArrayOfObjects",
    ".models.SimpleObject",
    ".types.Unset",
)
class TestArraySchemas:
    def test_array_of_any(self, ModelWithArrayOfAny):
        assert_model_decode_encode(
            ModelWithArrayOfAny,
            {"arrayProp": ["a", 1]},
            ModelWithArrayOfAny(array_prop=["a", 1]),
        )

    def test_array_of_int(self, ModelWithArrayOfInts):
        assert_model_decode_encode(
            ModelWithArrayOfInts,
            {"arrayProp": [1, 2]},
            ModelWithArrayOfInts(array_prop=[1, 2]),
        )
        # Note, currently arrays of simple types are not validated, so the following assertion would fail:
        # with pytest.raises(TypeError):
        #     ModelWithArrayOfInts.from_dict({"arrayProp": [1, "a"]})

    def test_array_of_object(self, ModelWithArrayOfObjects, SimpleObject):
        assert_model_decode_encode(
            ModelWithArrayOfObjects,
            {"arrayProp": [{"name": "a"}, {"name": "b"}]},
            ModelWithArrayOfObjects(array_prop=[SimpleObject(name="a"), SimpleObject(name="b")]),
        )

    def test_type_hints(self, ModelWithArrayOfAny, ModelWithArrayOfInts, ModelWithArrayOfObjects, Unset):
        assert_model_property_type_hint(ModelWithArrayOfAny, "array_prop", Union[list[Any], Unset])
        assert_model_property_type_hint(ModelWithArrayOfInts, "array_prop", Union[list[int], Unset])
        assert_model_property_type_hint(ModelWithArrayOfObjects, "array_prop", Union[list["SimpleObject"], Unset])


@with_generated_client_fixture(
    """
    components:
      schemas:
        SimpleObject:
          type: object
          properties:
            name: {"type": "string"}
        ModelWithSinglePrefixItem:
          type: object
          properties:
            arrayProp:
              type: array
              prefixItems:
                - type: string
        ModelWithPrefixItems:
          type: object
          properties:
            arrayProp:
              type: array
              prefixItems:
                - $ref: "#/components/schemas/SimpleObject"
                - type: string
        ModelWithMixedItems:
          type: object
          properties:
            arrayProp:
              type: array
              prefixItems:
                - $ref: "#/components/schemas/SimpleObject"
              items:
                type: string
    """)
@with_generated_code_imports(
    ".models.ModelWithSinglePrefixItem",
    ".models.ModelWithPrefixItems",
    ".models.ModelWithMixedItems",
    ".models.SimpleObject",
    ".types.Unset",
)
class TestArraysWithPrefixItems:
    def test_single_prefix_item(self, ModelWithSinglePrefixItem):
        assert_model_decode_encode(
            ModelWithSinglePrefixItem,
            {"arrayProp": ["a"]},
            ModelWithSinglePrefixItem(array_prop=["a"]),
        )

    def test_prefix_items(self, ModelWithPrefixItems, SimpleObject):
        assert_model_decode_encode(
            ModelWithPrefixItems,
            {"arrayProp": [{"name": "a"}, "b"]},
            ModelWithPrefixItems(array_prop=[SimpleObject(name="a"), "b"]),
        )

    def test_prefix_items_and_regular_items(self, ModelWithMixedItems, SimpleObject):
        assert_model_decode_encode(
            ModelWithMixedItems,
            {"arrayProp": [{"name": "a"}, "b"]},
            ModelWithMixedItems(array_prop=[SimpleObject(name="a"), "b"]),
        )

    def test_type_hints(self, ModelWithSinglePrefixItem, ModelWithPrefixItems, ModelWithMixedItems, Unset):
        assert_model_property_type_hint(ModelWithSinglePrefixItem, "array_prop", Union[list[str], Unset])
        assert_model_property_type_hint(
            ModelWithPrefixItems,
            "array_prop",
            Union[list[Union[ForwardRef("SimpleObject"), str]], Unset],
        )
        assert_model_property_type_hint(
            ModelWithMixedItems,
            "array_prop",
            Union[list[Union[ForwardRef("SimpleObject"), str]], Unset],
        )
        # Note, this test is asserting the current behavior which, due to limitations of the implementation
        # (see: https://github.com/openapi-generators/openapi-python-client/pull/1130), is not really doing
        # tuple type validation-- the ordering of prefixItems is ignored, and instead all of the types are
        # simply treated as a union.