[CI][Benchmark] Merge benchmark suite presets implementation (#17660)
In continuation of the effort outlined in #17545 (comment), this PR merges further changes introduced in #17229. Specifically, it merges @pbalcer's changes adding the ability to run different benchmarking presets.
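As a rough sketch of what the preset option enables (the script path, other arguments, and the preset name below are assumptions based on #17229, not something specified in this PR text):

```
# Hypothetical invocation of the benchmark suite with a named preset;
# the script path, work directory, --sycl argument, and "Minimal" preset
# name are all assumptions:
./devops/scripts/benchmarks/main.py ~/bench_workdir --sycl ~/llvm/build --preset Minimal
```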
**Note:** I am relying on this PR having its commits squashed during merge (which should be the default behavior for intel/llvm).
---------
Co-authored-by: Piotr Balcer <[email protected]>
Co-authored-by: Łukasz Stolarczuk <[email protected]>
@@ -27,8 +29,6 @@ You can also include additional benchmark parameters, such as environment variab
Once all the required information is entered, click the "Run workflow" button to initiate a new workflow run. This will execute the benchmarks and then post the results as a comment on the specified Pull Request.

-By default, all benchmark runs are compared against `baseline`, which is a well-established set of the latest data.
-
You must be a member of the `oneapi-src` organization to access these features.

## Comparing results
@@ -37,8 +37,8 @@ By default, the benchmark results are not stored. To store them, use the option
You can compare benchmark results using the `--compare` option. The comparison will be presented in a markdown output file (see below). If you want to calculate the relative performance of the new results against previously saved data, use `--compare <previously_saved_data>` (e.g. `--compare baseline`). To compare only stored data without generating new results, use `--dry-run --compare <name1> --compare <name2> --relative-perf <name1>`, where `name1` indicates the baseline for the relative performance calculation and `--dry-run` prevents the script from running benchmarks. Listing more than two `--compare` options results in displaying only execution time, without statistical analysis.

-Baseline, as well as baseline-v2 (for the level-zero adapter v2) is updated automatically during a nightly job. The results
-are stored [here](https://oneapi-src.github.io/unified-runtime/benchmark_results.html).
+Baseline_L0, as well as Baseline_L0v2 (for the level-zero adapter v2) is updated automatically during a nightly job. The results
+are stored [here](https://oneapi-src.github.io/unified-runtime/performance/).

## Output formats

You can display the results in the form of an HTML file by using `--output-html` and a markdown file by using `--output-markdown`. Due to character limits for posting PR comments, the final content of the markdown file might be reduced. In order to obtain the full markdown output, use `--output-markdown full`.
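Besides the "Run workflow" button in the GitHub UI, the workflow dispatch described above could also be triggered from the command line. A sketch only: the workflow file name and input names below are placeholders, and only the `gh` invocation syntax is standard:

```
# Hypothetical dispatch of the benchmark workflow -- the workflow file
# name and both input names are placeholders, not taken from this PR:
gh workflow run sycl-benchmark.yml -f pr_no=12345 -f bench_script_params="--preset Minimal"
```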
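For context, a typical comparison invocation might look like the following sketch. The `main.py` entry point is an assumption; the flags themselves come straight from the documentation above:

```
# Compare freshly generated results against the stored baseline and emit
# a markdown report (script name is an assumption; flags are documented above):
./main.py --compare Baseline_L0 --output-markdown

# Compare two previously saved result sets without running any benchmarks,
# using name1 as the reference for the relative-performance calculation:
./main.py --dry-run --compare name1 --compare name2 --relative-perf name1 --output-html
```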