
Commit f593b16

Merge pull request #56 from o-krussow/mlperf-inference-results-scc24
scc132 submission
2 parents: 043eeeb + 4bf16b9

File tree

21 files changed: +905, -0 lines


open/scc132/code/bert-99/README.md

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
TBD
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
| Model   | Scenario | Accuracy (F1) | Throughput (samples/s) | Latency (ms) |
|---------|----------|---------------|------------------------|--------------|
| bert-99 | offline  | 90.8749       | 1.97                   | -            |
@@ -0,0 +1,54 @@
This experiment is generated using the [MLCommons Collective Mind automation framework (CM)](https://github.com/mlcommons/cm4mlops).

*Check [CM MLPerf docs](https://docs.mlcommons.org/inference) for more details.*

## Host platform

* OS version: Linux-6.1.110-1.el9.elrepo.x86_64-x86_64-with-glibc2.34
* CPU version: x86_64
* Python version: 3.11.7 (main, Oct 21 2024, 23:35:59) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)]
* MLCommons CM version: 3.2.6

## CM Run Command

See the [CM installation guide](https://docs.mlcommons.org/inference/install/).
```bash
pip install -U cmind

cm rm cache -f

cm pull repo mlcommons@cm4mlops --checkout=6dbb26a3da6b8ebdbc96be3be3a0e9817d3b6d26

cm run script \
    --tags=run-mlperf,inference,_r4.1-dev \
    --model=bert-99 \
    --implementation=reference \
    --framework=pytorch \
    --category=edge \
    --scenario=Offline \
    --execution_mode=valid \
    --device=cpu \
    --quiet
```
*Note that if you want to use the [latest automation recipes](https://docs.mlcommons.org/inference) for MLPerf (CM scripts), you should simply reload mlcommons@cm4mlops without the pinned checkout and clean the CM cache as follows:*

```bash
cm rm repo mlcommons@cm4mlops
cm pull repo mlcommons@cm4mlops
cm rm cache -f
```
## Results

Platform: scc132_gpu0.novalocal-reference-cpu-pytorch_v2.5.0-default_config

Model Precision: fp32

### Accuracy Results
`F1`: `90.87487` (required accuracy for the closed division: `>= 89.96526`)
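
The `-99` in `bert-99` refers to this closed-division target: at least 99% of the FP32 reference F1 score, which for BERT-Large on SQuAD v1.1 is 90.874. A minimal check that this matches the threshold above:

```bash
# 99% of the FP32 reference F1 (90.874 on SQuAD v1.1) gives the closed-division threshold
python3 -c "print(round(0.99 * 90.874, 5))"   # prints 89.96526
```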
### Performance Results
`Samples per second`: `1.96994`

open/scc132/measurements/scc132_gpu0.novalocal-reference-cpu-pytorch_v2.5.0-default_config/bert-99/offline/accuracy_console.out

Whitespace-only changes.
