add tests that verify actual behavior of generated code #216

Closed · wants to merge 8 commits

18 changes: 15 additions & 3 deletions CONTRIBUTING.md
@@ -54,17 +54,29 @@ If you think that some of the added code is not testable (or testing it would ad
2. If you're modifying the way an existing feature works, make sure an existing test generates the _old_ code in `end_to_end_tests/golden-record`. You'll use this to check for the new code once your changes are complete.
3. If you're improving an error or adding a new error, add a [unit test](#unit-tests)

#### End-to-end tests
#### End-to-end snapshot tests

This project aims to have all "happy paths" (types of code which _can_ be generated) covered by end to end tests (snapshot tests). In order to check code changes against the previous set of snapshots (called a "golden record" here), you can run `pdm e2e`. To regenerate the snapshots, run `pdm regen`.
This project aims to have all "happy paths" (types of code which _can_ be generated) covered by end-to-end tests. There are two types of these: snapshot tests, and unit tests of generated code.

There are 4 types of snapshots generated right now, you may have to update only some or all of these depending on the changes you're making. Within the `end_to_end_tets` directory:
Snapshot tests verify that the generated code is identical to a previously-committed set of snapshots (called a "golden record" here). They are basically regression tests to catch any unintended changes in the generator output.

To check code changes against the golden record, run `pdm e2e`. To regenerate the snapshots, run `pdm regen`.

There are 4 types of snapshots generated right now; you may have to update some or all of these depending on the changes you're making. Within the `end_to_end_tests` directory:

1. `baseline_openapi_3.0.json` creates `golden-record` for testing OpenAPI 3.0 features
2. `baseline_openapi_3.1.yaml` is checked against `golden-record` for testing OpenAPI 3.1 features (and ensuring consistency with 3.0)
3. `test_custom_templates` are used with `baseline_openapi_3.0.json` to generate `custom-templates-golden-record` for testing custom templates
4. `3.1_specific.openapi.yaml` is used to generate `test-3-1-golden-record` and test 3.1-specific features (things which do not have a 3.0 equivalent)

#### Unit tests of generated code

These verify the runtime behavior of the generated code without making assertions about its exact implementation. For instance, they can verify that JSON data is correctly decoded into model class attributes.

The tests run the generator against a small API spec (defined inline for each test class), and then import and execute the generated code. This can sometimes identify issues with validation logic, module imports, etc., that might be harder to diagnose via the snapshot tests, especially during development of a new feature.

See [`end_to_end_tests/generated_code_live_tests`](./end_to_end_tests/generated_code_live_tests).
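
For example, a test in that directory might look roughly like this (the schema and class names below are purely illustrative; see the README in that directory for the full conventions):

```python
from end_to_end_tests.end_to_end_test_helpers import (
    with_generated_client_fixture,
    with_generated_code_import,
)


@with_generated_client_fixture(
"""
paths: {}
components:
  schemas:
    Widget:
      type: object
      properties:
        displayName: {"type": "string"}
""")
@with_generated_code_import(".models.Widget")
class TestWidgetModel:
    def test_decoding(self, Widget):
        # from_dict is generated on every model class; camelCase JSON properties
        # become snake_case attributes
        assert Widget.from_dict({"displayName": "abc"}) == Widget(display_name="abc")
```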

#### Unit tests

> **NOTE**: Several older-style unit tests using mocks exist in this project. These should be phased out rather than updated, as the tests are brittle and difficult to maintain. Only error cases should be tested with unit tests going forward.
3 changes: 3 additions & 0 deletions end_to_end_tests/__init__.py
@@ -1 +1,4 @@
""" Generate a complete client and verify that it is correct """
import pytest

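# Registering the helpers module for assertion rewriting means the plain `assert`
# statements in end_to_end_test_helpers produce pytest's detailed failure output.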
pytest.register_assert_rewrite("end_to_end_tests.end_to_end_test_helpers")
203 changes: 203 additions & 0 deletions end_to_end_tests/end_to_end_test_helpers.py
@@ -0,0 +1,203 @@
import importlib
import os
import shutil
from filecmp import cmpfiles, dircmp
from pathlib import Path
import sys
import tempfile
from typing import Any, Callable, Dict, Generator, List, Optional, Set, Tuple

from attrs import define
import pytest
from click.testing import Result
from typer.testing import CliRunner

from openapi_python_client.cli import app
from openapi_python_client.utils import snake_case


@define
class GeneratedClientContext:
"""A context manager with helpers for tests that run against generated client code.

On entering this context, sys.path is changed to include the root directory of the
generated code, so its modules can be imported. On exit, the original sys.path is
restored, and any modules that were loaded within the context are removed.

@eli-bl (Author) commented on Nov 7, 2024:

Btw, the proof that this works is simply that running all the tests in generated_code_live_tests in a single pytest run passes. Since many of the test classes reuse the same generated modules and class names for different things, if the sandboxing didn't work then the tests would inevitably pollute each other and fail.

"""

output_path: Path
generator_result: Result
base_module: str
monkeypatch: pytest.MonkeyPatch
old_modules: Optional[Set[str]] = None

def __enter__(self) -> "GeneratedClientContext":
self.monkeypatch.syspath_prepend(self.output_path)
self.old_modules = set(sys.modules.keys())
return self

def __exit__(self, exc_type, exc_value, traceback):
self.monkeypatch.undo()
for module_name in set(sys.modules.keys()) - self.old_modules:
del sys.modules[module_name]
shutil.rmtree(self.output_path, ignore_errors=True)

def import_module(self, module_path: str) -> Any:
"""Attempt to import a module from the generated code."""
return importlib.import_module(f"{self.base_module}{module_path}")
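
    # Sketch of direct usage (most tests get an instance of this class indirectly,
    # through the fixtures created by with_generated_client_fixture below):
    #
    #     with generate_client("baseline_openapi_3.0.json") as context:
    #         models = context.import_module(".models")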


def _run_command(
@eli-bl (Author) commented:

Moved from test_end_to_end.py.

command: str,
extra_args: Optional[List[str]] = None,
openapi_document: Optional[str] = None,
url: Optional[str] = None,
config_path: Optional[Path] = None,
raise_on_error: bool = True,
) -> Result:
"""Generate a client from an OpenAPI document and return the result of the command."""
runner = CliRunner()
if openapi_document is not None:
openapi_path = Path(__file__).parent / openapi_document
source_arg = f"--path={openapi_path}"
else:
source_arg = f"--url={url}"
config_path = config_path or (Path(__file__).parent / "config.yml")
args = [command, f"--config={config_path}", source_arg]
if extra_args:
args.extend(extra_args)
result = runner.invoke(app, args)
if result.exit_code != 0 and raise_on_error:
raise Exception(result.stdout)
return result


def generate_client(
openapi_document: str,
extra_args: List[str] = [],
output_path: str = "my-test-api-client",
base_module: str = "my_test_api_client",
overwrite: bool = True,
raise_on_error: bool = True,
) -> GeneratedClientContext:
"""Run the generator and return a GeneratedClientContext for accessing the generated code."""
full_output_path = Path.cwd() / output_path
if not overwrite:
shutil.rmtree(full_output_path, ignore_errors=True)
args = [
*extra_args,
"--output-path",
str(full_output_path),
]
if overwrite:
args = [*args, "--overwrite"]
generator_result = _run_command("generate", args, openapi_document, raise_on_error=raise_on_error)
print(generator_result.stdout)
return GeneratedClientContext(
full_output_path,
generator_result,
base_module,
pytest.MonkeyPatch(),
)


def generate_client_from_inline_spec(
openapi_spec: str,
extra_args: List[str] = [],
filename_suffix: Optional[str] = None,
config: str = "",
base_module: str = "testapi_client",
    add_openapi_info: bool = True,
raise_on_error: bool = True,
) -> GeneratedClientContext:
"""Run the generator on a temporary file created with the specified contents.

You can also optionally tell it to create a temporary config file.
"""
if add_openapi_info and not openapi_spec.lstrip().startswith("openapi:"):
openapi_spec += """
openapi: "3.1.0"
info:
title: "testapi"
description: "my test api"
version: "0.0.1"
"""

output_path = tempfile.mkdtemp()
file = tempfile.NamedTemporaryFile(suffix=filename_suffix, delete=False)
file.write(openapi_spec.encode('utf-8'))
file.close()

if config:
config_file = tempfile.NamedTemporaryFile(delete=False)
config_file.write(config.encode('utf-8'))
config_file.close()
extra_args = [*extra_args, "--config", config_file.name]

generated_client = generate_client(
file.name,
extra_args,
output_path,
base_module,
raise_on_error=raise_on_error,
)
os.unlink(file.name)
if config:
os.unlink(config_file.name)

return generated_client


def with_generated_client_fixture(
openapi_spec: str,
name: str="generated_client",
config: str="",
extra_args: List[str] = [],
):
"""Decorator to apply to a test class to create a fixture inside it called 'generated_client'.

The fixture value will be a GeneratedClientContext created by calling
generate_client_from_inline_spec().
"""
def _decorator(cls):
def generated_client(self):
with generate_client_from_inline_spec(openapi_spec, extra_args=extra_args, config=config) as g:
yield g

setattr(cls, name, pytest.fixture(scope="class")(generated_client))
return cls

return _decorator


def with_generated_code_import(import_path: str, alias: Optional[str] = None):
"""Decorator to apply to a test class to create a fixture from a generated code import.

The 'generated_client' fixture must also be present.

If import_path is "a.b.c", then the fixture's value is equal to "from a.b import c", and
its name is "c" unless you specify a different name with the alias parameter.
"""
parts = import_path.split(".")
module_name = ".".join(parts[0:-1])
import_name = parts[-1]

def _decorator(cls):
nonlocal alias

def _func(self, generated_client):
module = generated_client.import_module(module_name)
return getattr(module, import_name)

alias = alias or import_name
_func.__name__ = alias
setattr(cls, alias, pytest.fixture(scope="class")(_func))
return cls

return _decorator


def assert_model_decode_encode(model_class: Any, json_data: dict, expected_instance: Any):
instance = model_class.from_dict(json_data)
assert instance == expected_instance
assert instance.to_dict() == json_data
37 changes: 37 additions & 0 deletions end_to_end_tests/generated_code_live_tests/README.md
@@ -0,0 +1,37 @@
## The `generated_code_live_tests` module

These are end-to-end tests which run the code generator command, but unlike the other tests in `end_to_end_tests`, they are also unit tests _of the behavior of the generated code_.

Each test class follows this pattern:

- Use the decorator `@with_generated_client_fixture`, providing an inline API spec (JSON or YAML) that contains whatever schemas/paths/etc. are relevant to this test class.
- The spec can omit the `openapi:` and `info:` blocks, unless those are relevant to the test.
- The decorator creates a temporary file for the inline spec and a temporary directory for the generated code, and runs the client generator.
- It creates a `GeneratedClientContext` object (defined in `end_to_end_test_helpers.py`) to keep track of things like the location of the generated code and the output of the generator command.
- This object is injected into the test class as a fixture called `generated_client`, although most tests will not need to reference the fixture directly.
- `sys.path` is temporarily changed, for the scope of this test class, to allow imports from the generated code.
- Use the decorator `@with_generated_code_import` to make classes or functions from the generated code available to the tests.
- `@with_generated_code_import(".models.MyModel")` would execute `from [client package name].models import MyModel` and inject the imported object into the test class as a fixture called `MyModel`.
- `@with_generated_code_import(".models.MyModel", alias="model1")` would do the same thing, but the fixture would be named `model1`.
- After the test class finishes, these imports are discarded.

Example:

```python

@with_generated_client_fixture(
"""
paths: {}
components:
schemas:
MyModel:
type: object
properties:
stringProp: {"type": "string"}
""")
@with_generated_code_import(".models.MyModel")
class TestSimpleJsonObject:
    def test_encoding(self, MyModel):
instance = MyModel(string_prop="abc")
assert instance.to_dict() == {"stringProp": "abc"}
```
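
A test can also use the `alias` parameter of `@with_generated_code_import`, together with the `assert_model_decode_encode` helper from `end_to_end_test_helpers.py`, to check a decode/encode round trip. A sketch (again, the schema and names are illustrative only):

```python
from end_to_end_tests.end_to_end_test_helpers import (
    assert_model_decode_encode,
    with_generated_client_fixture,
    with_generated_code_import,
)


@with_generated_client_fixture(
"""
paths: {}
components:
  schemas:
    CountModel:
      type: object
      properties:
        count: {"type": "integer"}
""")
@with_generated_code_import(".models.CountModel", alias="Counter")
class TestCountModel:
    def test_decode_encode(self, Counter):
        # from_dict should parse the JSON data, and to_dict should round-trip it
        assert_model_decode_encode(Counter, {"count": 3}, Counter(count=3))
```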