
Commit 67a3afe

fmt markdown
1 parent 9fb4ea3 commit 67a3afe


65 files changed (+669, -626 lines)

CHANGELOG.md

+68-77
Large diffs are not rendered by default.

README.md

-1
@@ -34,4 +34,3 @@ More details in [build documentation](https://github.com/DataDog/system-tests/bl
 ![Output on success](./utils/assets/output.png?raw=true)

 **[Complete documentation](https://github.com/DataDog/system-tests/blob/main/docs)**
-

binaries/README.md

+1-1
@@ -1 +1 @@
-This folder will contains binaries to be tested
+This folder will contains binaries to be tested

docs/CI/README.md

+6-6
@@ -7,12 +7,12 @@ You'll need a CI that with `docker` and `python 3.9` installed, among with very
 A valid `DD_API_KEY` env var for staging must be set.

 1. Clone this repo
-2. Copy paste your components' build inside `./binaries` (See [documentation](./binaries.md))
-3. `./build.sh` with relevant `library` (see [documentation](../execute/build.md)). Exemple: `./build.sh java`
-4. `./run.sh`
+1. Copy paste your components' build inside `./binaries` (See [documentation](./binaries.md))
+1. `./build.sh` with relevant `library` (see [documentation](../execute/build.md)). Exemple: `./build.sh java`
+1. `./run.sh`

 You will find different template or example:

-* [github actions](./github-actions.md)
-* [gitlab CI](./gitlab-ci.md): TODO
-* [azure](https://github.com/DataDog/dd-trace-dotnet/blob/master/.azure-pipelines/ultimate-pipeline.yml) (look for `stage: system_tests`)
+- [github actions](./github-actions.md)
+- [gitlab CI](./gitlab-ci.md): TODO
+- [azure](https://github.com/DataDog/dd-trace-dotnet/blob/master/.azure-pipelines/ultimate-pipeline.yml) (look for `stage: system_tests`)
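For orientation, the four steps in this file translate roughly to the commands below. This is a minimal sketch, not part of the commit: the clone URL and `java` argument come from these docs, while the artifact path is purely hypothetical (see `./binaries.md` for the real naming conventions).

```
# Sketch of the CI steps above; the source path is a placeholder, java is just an example library.
git clone https://github.com/DataDog/system-tests.git
cd system-tests
cp /path/to/your/component/build/* ./binaries/   # step 2: drop your build into ./binaries
./build.sh java                                  # step 3: build images for the chosen library
./run.sh                                         # step 4: run the default scenario
```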

docs/CI/github-actions.md

+1-1
@@ -52,4 +52,4 @@ jobs:
 path: |
 logs/
 binaries/
-```
+```

docs/CI/gitlab-ci.md

+3-3
@@ -3,9 +3,9 @@
 ## Add secrets

 1. Install aws-cli
-2. Save a valid github PATH token in a file named GH_TOKEN
-3. Save a valid staging API key in a file named DD_API_KEY
-4. then execute
+1. Save a valid github PATH token in a file named GH_TOKEN
+1. Save a valid staging API key in a file named DD_API_KEY
+1. then execute

 ```
 aws-vault exec --debug build-stable-developer # Enter a token from your MFA device

docs/RC/RC-API.md

+4-5
@@ -1,11 +1,10 @@
 The RC API is the official way to interact with remote config. It allows to send RC payload to the library durint setup phase, and send request before/after each state change. Here is an example a scenario activating/deactivating ASM:

 1. the library starts in an initial state where ASM is disabled. This state is validated with an assertion on a request containing an attack : the request should not been caught by ASM
-2. Then a RC command is sent to activate ASM
-3. another request containing an attack is sent, this one must be reported by ASM
-4. A second command is sent to deactivate ASM
-5. a thirst request containing an attack is sent, this last one should not be seen
-
+1. Then a RC command is sent to activate ASM
+1. another request containing an attack is sent, this one must be reported by ASM
+1. A second command is sent to deactivate ASM
+1. a thirst request containing an attack is sent, this last one should not be seen

 Here is the test code performing that test. Please note the magic constants `ACTIVATE_ASM_PAYLOAD` and `DEACTIVATE_ASM_PAYLOAD`: they are encoded RC payload (exemple [here](https://github.com/DataDog/system-tests/blob/7644ceaa3c7ea44ade8bcca8c3bb2a5991d03e34/utils/proxy/rc_mocked_responses_asm_activate_only.json)). We still miss a tool that generate them from human-readable content, it will come in a near future.

docs/RC/README.md

+3-2
@@ -13,8 +13,9 @@ appsec_api_security_rc = EndToEndScenario(
 """,
 )
 ```
+
 In this code example, we can see that we are defining a proxy for remote config, with the name **APPSEC_API_SECURITY_RC**,
-it means that this scenario will mock calls from libraries to `/v7/config` by the content in *utils/proxy/rc_mocked_responses_**appsec_api_security_rc**.json* file.
+it means that this scenario will mock calls from libraries to `/v7/config` by the content in *utils/proxy/rc_mocked_responses\_**appsec_api_security_rc**.json* file.

 ### Create mock file

@@ -45,4 +46,4 @@ def remote_config_is_applied(data):
 return False

 interfaces.library.wait_for(remote_config_is_applied, timeout=30)
-```
+```
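For context, the scenario name defined in this file is also what `./run.sh` takes as its first argument (see the architecture overview further down). A hedged sketch, assuming the **APPSEC_API_SECURITY_RC** scenario is registered under that name; the `java` argument is only an example:

```
# Hedged sketch: build any supported library, then run the remote config scenario above.
./build.sh java
# The proxy answers /v7/config with utils/proxy/rc_mocked_responses_appsec_api_security_rc.json
./run.sh APPSEC_API_SECURITY_RC
```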

docs/README.md

+2-2
@@ -2,8 +2,8 @@

 System tests is a test workbench that allows any kind of functional testing over libraries (AKA tracers) and agents. It's built with several key principles:

-* *Black box testing*: only components' interfaces are checked. As those interfaces are very stable, our tests can make assertions without any assumptions regarding underlying implementations. "Check that the car moves, regardless of the engine"
-* *No test isolation*: Yes, it's surprising. But it allows to be very fast. So never hesitate to add a new test. And if you need a very specific test case, we can run it separately.
+- *Black box testing*: only components' interfaces are checked. As those interfaces are very stable, our tests can make assertions without any assumptions regarding underlying implementations. "Check that the car moves, regardless of the engine"
+- *No test isolation*: Yes, it's surprising. But it allows to be very fast. So never hesitate to add a new test. And if you need a very specific test case, we can run it separately.

 ## How to run them locally?

docs/RFCs/manifest.md

+6-7
@@ -1,8 +1,8 @@
 # RFC status

-* Initiated : June 2023
-* Validated : july 2023
-* Implemented : August 2023
+- Initiated : June 2023
+- Validated : july 2023
+- Implemented : August 2023

 ### Context

@@ -20,7 +20,7 @@ Unfortunatly, it comes with a major drawback: Activating a single test can becom
 To solve this, we'll try to leverage two points :

 1. Often, those declarations involve only one component
-2. Often, a component is modified by only one developper
+1. Often, a component is modified by only one developper

 The idea is to offer an alternative way to declare those metadata, using one file per component, also known as manifest file. Those files will be inside system tests repo, declaring for a given component some metadata.

@@ -54,12 +54,11 @@ Will support:
 Won't support:

 - any complex logic
-_ because there is not limit on the complexity. We need to draw a line based on the ratio format simplicity / number of occurrences. The cutoff point is only test classes, declaring version for weblog variants, or skip reason for the entire class.
+\_ because there is not limit on the complexity. We need to draw a line based on the ratio format simplicity / number of occurrences. The cutoff point is only test classes, declaring version for weblog variants, or skip reason for the entire class.
 - declaring metadata (bug, flaky, irrelevant) for test methods
 - because their namings are not stable, it would lead to frequent modifications of manifest files, spaming every team
 - because conflict mostly happen at class level

-
 ## Example

 ```yaml
@@ -79,7 +78,7 @@ tests/:
 "*": missing_feature # All other weblogs: not yet available
 ```

-## Format [WIP]
+## Format \[WIP\]

 ```json
 {

docs/architecture/overview.md

+73-61
@@ -5,15 +5,16 @@ The components that make up a running test are simple from the outside.
 The idea behind system tests is that we can share the tests for a given feature across implementations.

 Enabling a feature within system tests might go like this:
-1. [Run the system test suite](#running-the-system-tests)
-1. Inspect `./logs/interfaces` folders to see if the data you want to validate is present
-1. If the feature you want to validate isn't enabled, enable it.
-* Probably the correct option: Change the weblog/application image
-* Enable it through run.sh
-* Enable it through an environment variable
-1. [Add a test to verify your data, sending any requests as needed](#how-do-i-add-a-new-test).
-1. Disable the test for languages which don't yet implement it
-1. Submit a pull request, ask for review
+
+1. [Run the system test suite](#running-the-system-tests)
+1. Inspect `./logs/interfaces` folders to see if the data you want to validate is present
+1. If the feature you want to validate isn't enabled, enable it.
+- Probably the correct option: Change the weblog/application image
+- Enable it through run.sh
+- Enable it through an environment variable
+1. [Add a test to verify your data, sending any requests as needed](#how-do-i-add-a-new-test).
+1. Disable the test for languages which don't yet implement it
+1. Submit a pull request, ask for review

 However, there are many scenarios where a test may not be so simple to implement.

@@ -22,16 +23,17 @@ This document aims to give a working understanding of the parts of system-tests,
 ## What are the components of a running test?

 When the system tests are executing, there are several main containers of concern.
-- [Tests Container](#tests-container) (aka "runner")
-- Responsible for running the actual tests, sending traffic, and asserting results
-- [Application Container](#application-container) (aka "weblog")
-- Swappable webapp language module that must meet an interface
-- [Application Proxy Container](#application-proxy-container)
-- Mechanism to inspect payloads from the datadog libraries
-- [Agent Container](#agent-container)
-- Basic Datadog agent image
-- [Agent Proxy Container](#agent-proxy-container)
-- Mechanism to inspect payloads from the Agent to the Backend
+
+- [Tests Container](#tests-container) (aka "runner")
+- Responsible for running the actual tests, sending traffic, and asserting results
+- [Application Container](#application-container) (aka "weblog")
+- Swappable webapp language module that must meet an interface
+- [Application Proxy Container](#application-proxy-container)
+- Mechanism to inspect payloads from the datadog libraries
+- [Agent Container](#agent-container)
+- Basic Datadog agent image
+- [Agent Proxy Container](#agent-proxy-container)
+- Mechanism to inspect payloads from the Agent to the Backend

 ```mermaid
 flowchart TD
@@ -51,30 +53,32 @@ The tests then wait on the results, which are available as the logs are collecte

 ## What are system-tests bad for?

-- Combinatorial-style tests (Permutations of framework runtimes, 3rd libraries versions, operating systems)
-- Cloud deployments, kubernetes, distributed deployments
-- Immediately knowing the reason a feature fails
-- Problems or features which are not shared across tracers
-- Performance or throughput testing
+- Combinatorial-style tests (Permutations of framework runtimes, 3rd libraries versions, operating systems)
+- Cloud deployments, kubernetes, distributed deployments
+- Immediately knowing the reason a feature fails
+- Problems or features which are not shared across tracers
+- Performance or throughput testing
+
+*Examples of bad candidates:*

-*Examples of bad candidates:*
-- The .NET tracer must not write invalid [IL](https://en.wikipedia.org/wiki/Common_Intermediate_Language) for it's earliest supported runtime
-- The startup overhead of the Java tracer is less than 3s for a given sample application
-- The python tracer must not fail to retrieve traces for a version range of the mongodb library
+- The .NET tracer must not write invalid [IL](https://en.wikipedia.org/wiki/Common_Intermediate_Language) for it's earliest supported runtime
+- The startup overhead of the Java tracer is less than 3s for a given sample application
+- The python tracer must not fail to retrieve traces for a version range of the mongodb library

 ## What are system-tests good for?

-- Catching regressions on shared features
-- Wide coverage in a short time frame
-- Shared test coverage across all tracer libraries
-- Ensuring requirements for shared features are met across tracer libraries
-- testing a set of version of any datadog component
+- Catching regressions on shared features
+- Wide coverage in a short time frame
+- Shared test coverage across all tracer libraries
+- Ensuring requirements for shared features are met across tracer libraries
+- testing a set of version of any datadog component

 *Examples of good candidates:*
-- `DD_TAGS` must be parsed correctly and carried as tags on all traces
-- Tracer libraries must be able to communicate with the agent through Unix Domain Sockets
-- Sampling rates from the agent are respected when not explicitly configured
-- All tracer libraries log consistent diagnostic information at startup
+
+- `DD_TAGS` must be parsed correctly and carried as tags on all traces
+- Tracer libraries must be able to communicate with the agent through Unix Domain Sockets
+- Sampling rates from the agent are respected when not explicitly configured
+- All tracer libraries log consistent diagnostic information at startup

 ## How do I add a new test?

@@ -114,43 +118,49 @@ The `./run.sh` script starts the containers in the background.
 Often, knowing how a container fails to start is as simple as running `docker-compose up {container}` and observing the output.

 If there are more in depth problems within a container you may need to adjust the Dockerfile.
-- re-run `./build.sh`
-- start the container via `docker-compose up`
-- `docker exec -it {container-id} bash` to diagnose from within the container
+
+- re-run `./build.sh`
+- start the container via `docker-compose up`
+- `docker exec -it {container-id} bash` to diagnose from within the container

 ## What is the structure of the code base?

 The entry points of system-tests are observable from `./.github/workflows/ci.yml`.

 The `./build.sh` script calls into a nested `./utils/build/build.sh` script.
-- [Click for details about the `./build.sh` script and options available](#building-the-system-tests).
+
+- [Click for details about the `./build.sh` script and options available](#building-the-system-tests).

 The first argument to the `./build.sh` script is the language which is built: `./utils/build/docker/{language}`.
-- e.g., `./build.sh dotnet`
+
+- e.g., `./build.sh dotnet`

 The `./run.sh` script runs the tests and relies 1-to-1 on what is built in the `./build.sh` step.
-- [Click for details about the `./run.sh` script and options available](#running-the-system-tests).
+
+- [Click for details about the `./run.sh` script and options available](#running-the-system-tests).

 The run script ultimately calls the `./docker-compose.yml` file and whichever image is built with the `weblog` tag is tested.
-- [Click for detail about how the images interact with eachother](#what-are-the-components-of-a-running-test)
+
+- [Click for detail about how the images interact with eachother](#what-are-the-components-of-a-running-test)

 ## Building the System Tests

 The first argument to the `./build.sh` script is the language (`$TEST_LIBRARY`) which is built: `./utils/build/docker/{language}`.
-- `./build.sh cpp`
-- `./build.sh ruby`
-- `./build.sh python`
-- `./build.sh php`
-- `./build.sh nodejs`
-- `./build.sh java`
-- `./build.sh golang`
-- `./build.sh dotnet`
+
+- `./build.sh cpp`
+- `./build.sh ruby`
+- `./build.sh python`
+- `./build.sh php`
+- `./build.sh nodejs`
+- `./build.sh java`
+- `./build.sh golang`
+- `./build.sh dotnet`

 There are explicit arguments available for more specific configuration of the build.
-- i.e., `./build.sh {language} --weblog-variant {dockerfile-prefix}`
-- e.g., `./build.sh python --weblog-variant flask-poc`
-- shorter version: ./build.sh python -w flask-poc

+- i.e., `./build.sh {language} --weblog-variant {dockerfile-prefix}`
+- e.g., `./build.sh python --weblog-variant flask-poc`
+- shorter version: ./build.sh python -w flask-poc

 These arguments determine which Dockerfile is ultimately used in the format of: `./utils/build/docker/{language}/{dockerfile-prefix}.Dockerfile`

@@ -159,18 +169,20 @@ These arguments determine which Dockerfile is ultimately used in the format of:
 The build script must be successful before running the tests.

 The first argument to the `./run.sh` script is the scenario (`$SCENARIO`) which defaults to `DEFAULT`.
-- `./run.sh`
-- `./run.sh DEFAULT`
-- `./run.sh SAMPLING`
-- `./run.sh PROFILING`
+
+- `./run.sh`
+- `./run.sh DEFAULT`
+- `./run.sh SAMPLING`
+- `./run.sh PROFILING`

 You can see all available scenarios within the `./run.sh` script.

 The run script sets necessary variables for each scenario, which are then used within the `docker-compose.yml` file.

 When debugging tests, it may be useful to only run individual tests, following this example:
-- `./run.sh tests/appsec/test_conf.py::Test_StaticRuleSet::test_basic_hardcoded_ruleset`
-- `./run.sh tests/test_traces.py::Test_Misc::test_main`
+
+- `./run.sh tests/appsec/test_conf.py::Test_StaticRuleSet::test_basic_hardcoded_ruleset`
+- `./run.sh tests/test_traces.py::Test_Misc::test_main`

 ## Tests Container
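Taken together, the build and run options documented in this file compose into a typical local loop. A minimal sketch using only commands that appear above (the variant and test path are just the documented examples):

```
# Minimal local loop assembled from the commands documented in this overview
./build.sh python -w flask-poc                        # build the python library with the flask-poc weblog variant
./run.sh                                              # run the DEFAULT scenario
./run.sh tests/test_traces.py::Test_Misc::test_main   # re-run a single test while debugging
```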

docs/edit/CI-and-scenarios.md

+2-2
@@ -1,5 +1,5 @@
 When a modification is made in system tests, the CI tries to detect which scenario to run :

 1. based on modified files in `tests/`, by extracting scenarios targerted by those files
-2. based on any modification in a `tests/**/utils.py`, and applying the logic 1. on any sub file in `tests/**`
-3. based on labels applyied to the PR, if anything outside `tests/` folder is modified.
+1. based on any modification in a `tests/**/utils.py`, and applying the logic 1. on any sub file in `tests/**`
+1. based on labels applyied to the PR, if anything outside `tests/` folder is modified.

docs/edit/add-test-class.md

+3-4
@@ -1,6 +1,5 @@
 When you add a test class (see [features](./features.md)), you need to declare what feature it belongs to in the [Feature Parity Dashbaord](https://feature-parity.us1.prod.dog/). To achieve that, use `@features` decorators :

-
 ```python
 @features.awesome_tests
 class Test_AwesomeFeature:
@@ -14,9 +13,9 @@ The link to the feature is in the docstring: hover the name, this link will show
 ## Use case 2: the feature does not exists

 1. Create it in [Feature Parity Dashbaord](https://feature-parity.us1.prod.dog/)
-2. pick its feature ID (the number in the URL)
-3. copy pasta in `utils/_features.py` (its straightforward)
+1. pick its feature ID (the number in the URL)
+1. copy pasta in `utils/_features.py` (its straightforward)

-----
+______________________________________________________________________

 If you need any help, please ask on [slack](https://dd.enterprise.slack.com/archives/C025TJ4RZ8X)

0 commit comments
