## Writing Code

1. Write some code and make sure it's covered by unit tests. All unit tests are in the `tests` directory, and the file structure should mirror the structure of the source code in the `openapi_python_client` directory.
2. From within a Poetry shell (`poetry shell`), run `task check` to run most of the same checks CI runs. This will auto-reformat the code, check type annotations, run unit tests, check code coverage, and lint the code.
3. If you're writing a new feature, try to add it to the end-to-end test.
   1. If adding support for a new OpenAPI feature, add it somewhere in `end_to_end_tests/openapi.json`.
   2. Regenerate the "golden record" with `task regen`. This client is generated from the OpenAPI document used for end-to-end testing.
   3. Check the changes to `end_to_end_tests/golden-record` to confirm that only what you intended to change did change and that the changes look correct.
4. Run the end-to-end tests with `task e2e`. This will generate a client against `end_to_end_tests/openapi.json` and compare it with the golden record. The tests will fail if **anything is different**. The end-to-end tests are not included in `task check` because they take longer to run and don't provide very useful feedback in the event of failure. If an e2e test does fail, the easiest way to see what's wrong is to run `task regen` and check the diffs. You can also use `task re`, which runs `regen` and then `e2e`.
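The "mirrored" test layout from step 1 can be sketched as a small helper. This is purely illustrative (the function name and the exact `test_` prefixing convention are assumptions, not part of the project's tooling); in practice you just create a test file in the parallel directory under `tests`:

```python
from pathlib import Path


def mirrored_test_path(source_file: str) -> Path:
    """Map a source module to its mirrored test file (hypothetical helper;
    the real convention is simply parallel directories under tests/)."""
    src = Path(source_file)
    # Drop the top-level package directory and prefix each remaining
    # path component with "test_".
    parts = ["test_" + part for part in src.parts[1:]]
    return Path("tests").joinpath(*parts)


print(mirrored_test_path("openapi_python_client/parser/properties.py"))
```

So a change to `openapi_python_client/parser/properties.py` would be covered by tests under `tests/test_parser/test_properties.py`.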

## Creating a Pull Request

Once you've written the code and run the checks, the next step is to create a pull request against the `main` branch of this repository. This repository uses [conventional commits] squashed on each PR, then uses [Dobby] to auto-generate CHANGELOG.md entries for release, so the title of your PR should be in the format of a conventional commit written in plain English, since it will end up in the CHANGELOG. Some example PR titles:

- feat: Support for `allOf` in OpenAPI documents (closes #123).
- refactor!: Removed support for Python 3.5
- fix: Data can now be passed to multipart bodies along with files.

Once your PR is created, a series of automated checks should run. If any of them fail, try your best to fix them.
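The expected title shape can be sketched with a simplified check. Note this is an approximation for illustration only: the regex below covers the common `type(scope)?!?: description` form and a hand-picked list of types, not the full Conventional Commits grammar, and it is not part of this repository's tooling:

```python
import re

# Simplified conventional-commit title pattern (an approximation, not the
# full Conventional Commits spec): a type, an optional (scope), an optional
# "!" marking a breaking change, then ": " and a description.
TITLE_RE = re.compile(
    r"^(feat|fix|refactor|docs|test|chore|ci|perf|build)(\([\w-]+\))?!?: .+$"
)


def looks_conventional(title: str) -> bool:
    """Return True if a PR title roughly matches the conventional-commit shape."""
    return TITLE_RE.match(title) is not None


print(looks_conventional("feat: Support for `allOf` in OpenAPI documents"))  # True
print(looks_conventional("refactor!: Removed support for Python 3.5"))       # True
print(looks_conventional("Added a cool new thing"))                          # False
```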

## Wait for Review

As soon as possible, your PR will be reviewed. If there are any changes requested, there will likely be a bit of back and forth. Once this process is done, your changes will be merged into main and included in the next release. If you need your changes available on PyPI by a certain time, please mention it in the PR, and we'll do our best to accommodate.

[Conventional Commits]: https://www.conventionalcommits.org/en/v1.0.0/
[Dobby]: https://triaxtec.github.io/dobby/introduction.html