Commit 874a0b3

Merge pull request #81 from zixianwang2022/mlperf-inference-results-scc24
UCSD Base Mlperf inference results scc24
2 parents a89e9eb + 5901e6c

67 files changed: +14507 −0 lines changed
@@ -0,0 +1 @@
TBD
@@ -0,0 +1,3 @@
| Model               | Scenario   | Accuracy              | Throughput   | Latency (in ms)   |
|---------------------|------------|-----------------------|--------------|-------------------|
| stable-diffusion-xl | offline    | (15.22786, 236.96183) | 0.209        | -                 |
@@ -0,0 +1,57 @@
This experiment is generated using the [MLCommons Collective Mind automation framework (CM)](https://github.com/mlcommons/cm4mlops).

*Check [CM MLPerf docs](https://docs.mlcommons.org/inference) for more details.*

## Host platform

* OS version: Linux-5.14.0-427.42.1.el9_4.x86_64-x86_64-with-glibc2.34
* CPU version: x86_64
* Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0]
* MLCommons CM version: 3.1.0

## CM Run Command

See the [CM installation guide](https://docs.mlcommons.org/inference/install/).

```bash
pip install -U cmind

cm rm cache -f

cm pull repo mlcommons@cm4mlops --checkout=e8235832b1ca225f65ecc8272c597d5c1a112d82

cm run script \
    --tags=run-mlperf,inference,_r4.1-dev,_short,_scc24-base \
    --model=sdxl \
    --implementation=reference \
    --framework=pytorch \
    --category=datacenter \
    --scenario=Offline \
    --execution_mode=test \
    --device=rocm \
    --quiet \
    --precision=float16 \
    --env.CM_GET_PLATFORM_DETAILS=no
```

*Note that if you want to use the [latest automation recipes](https://docs.mlcommons.org/inference) for MLPerf (CM scripts), you should simply reload mlcommons@cm4mlops without the checkout and clean the CM cache as follows:*

```bash
cm rm repo mlcommons@cm4mlops
cm pull repo mlcommons@cm4mlops
cm rm cache -f
```

## Results

Platform: aqua-reference-rocm-pytorch-v2.6.0.dev20241109-scc24-base

Model Precision: fp32

### Accuracy Results

`CLIP_SCORE`: `15.22786`, Required accuracy for closed division `>= 31.68632` and `<= 31.81332`

`FID_SCORE`: `236.96183`, Required accuracy for closed division `>= 23.01086` and `<= 23.95008`

### Performance Results

`Samples per second`: `0.209132`
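
The accuracy numbers above can be checked against the closed-division bounds mechanically; here is a minimal Python sketch (the scores and bounds are copied from this page, and the `within` helper is illustrative, not part of the MLPerf tooling):

```python
# Accuracy scores and closed-division bounds copied from the results above.
clip_score, fid_score = 15.22786, 236.96183
CLIP_BOUNDS = (31.68632, 31.81332)
FID_BOUNDS = (23.01086, 23.95008)

def within(value, bounds):
    # Inclusive range check: lo <= value <= hi
    lo, hi = bounds
    return lo <= value <= hi

# A closed-division submission must satisfy both bounds;
# this run meets neither, consistent with its open/UCSD path.
closed_ok = within(clip_score, CLIP_BOUNDS) and within(fid_score, FID_BOUNDS)
print(closed_ok)  # False
```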

open/UCSD/measurements/aqua-reference-rocm-pytorch-v2.6.0.dev20241109-scc24-base/stable-diffusion-xl/offline/accuracy_console.out

Whitespace-only changes.
@@ -0,0 +1,7 @@
{
  "starting_weights_filename": "https://github.com/mlcommons/inference/tree/master/text_to_image#download-model",
  "retraining": "no",
  "input_data_types": "fp32",
  "weight_data_types": "fp32",
  "weight_transformations": "no"
}
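
This metadata file is plain JSON, so it can be sanity-checked with the standard library; a minimal sketch (the required-key set is an assumption for illustration, not the official submission-checker schema):

```python
import json

# The metadata file shown above, verbatim.
raw = """{
"starting_weights_filename": "https://github.com/mlcommons/inference/tree/master/text_to_image#download-model",
"retraining": "no",
"input_data_types": "fp32",
"weight_data_types": "fp32",
"weight_transformations": "no"
}"""

meta = json.loads(raw)

# Illustrative required-key check -- this set is an assumption,
# not the official MLPerf submission-checker schema.
required = {"starting_weights_filename", "retraining", "input_data_types",
            "weight_data_types", "weight_transformations"}
missing = sorted(required - meta.keys())
print(missing)  # []
```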

0 commit comments