
Commit b54f727

Merge pull request #55 from NeverGpDzy/mlperf-inference-results-scc24
scc125 Dreambrook Team from SWPU: tuned MLPerf inference results
2 parents b41d8ab + 68f07b4 commit b54f727

File tree

19 files changed: +1012 -0 lines changed

Lines changed: 3 additions & 0 deletions

| Model   | Scenario | Accuracy (F1) | Throughput (samples/s) | Latency (in ms) |
|---------|----------|---------------|------------------------|-----------------|
| bert-99 | offline  | 90.8792       | 279.352                | -               |
Lines changed: 7 additions & 0 deletions

{
  "starting_weights_filename": "https://armi.in/files/fp32/model.pytorch",
  "retraining": "no",
  "input_data_types": "fp32",
  "weight_data_types": "fp32",
  "weight_transformations": "none"
}
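
For context, a minimal Python sketch (not part of the submission) that loads and sanity-checks this metadata; the local filename `model-info.json` is assumed for illustration and may not match the actual path in the submission tree:

```python
# Hedged sketch: read the measurement metadata shown above and confirm the
# fields it declares. The filename is an assumption, not from this repo.
import json

with open("model-info.json") as f:
    info = json.load(f)

# The submission declares untouched fp32 weights with no retraining.
assert info["retraining"] == "no"
assert info["weight_transformations"] == "none"
assert info["input_data_types"] == info["weight_data_types"] == "fp32"

print("weights:", info["starting_weights_filename"])
```
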
Lines changed: 56 additions & 0 deletions

This experiment is generated using the [MLCommons Collective Mind automation framework (CM)](https://github.com/mlcommons/cm4mlops).

*Check [CM MLPerf docs](https://docs.mlcommons.org/inference) for more details.*

## Host platform

* OS version: Linux-6.1.112-1.el9.elrepo.x86_64-x86_64-with-glibc2.35
* CPU version: x86_64
* Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
* MLCommons CM version: 3.2.2

## CM Run Command

See the [CM installation guide](https://docs.mlcommons.org/inference/install/).
```bash
pip install -U cmind

cm rm cache -f

cm pull repo mlcommons@cm4mlops --checkout=5aeaffdca72142871dcde95ebf8a37e65fe3e06e

cm run script \
  --tags=run-mlperf,inference,_r4.1-dev \
  --model=bert-99 \
  --implementation=reference \
  --framework=pytorch \
  --category=edge \
  --scenario=Offline \
  --execution_mode=valid \
  --device=cuda \
  --quiet \
  --adr.mlperf-implementation.tags=_repo.https://github.com/NeverGpDzy/inference,_branch.master \
  --clean
```
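
The same run can also be driven from Python via the `cmind` package. The sketch below is an assumption based on CM's generic `cmind.access` entry point (an action/automation/tags dictionary); whether every CLI flag above is accepted verbatim through this interface is not confirmed by this submission:

```python
# Hedged sketch: invoking the same CM script through the cmind Python API.
# The keys mirror the CLI flags above; treat the flag passthrough as an
# assumption rather than a command taken from this repository.
import cmind

result = cmind.access({
    "action": "run",
    "automation": "script",
    "tags": "run-mlperf,inference,_r4.1-dev",
    "model": "bert-99",
    "implementation": "reference",
    "framework": "pytorch",
    "category": "edge",
    "scenario": "Offline",
    "execution_mode": "valid",
    "device": "cuda",
    "quiet": True,
})

# cmind.access returns a dict; a non-zero "return" signals an error.
if result["return"] > 0:
    raise RuntimeError(result.get("error", "CM run failed"))
```
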
*Note that if you want to use the [latest automation recipes](https://docs.mlcommons.org/inference) for MLPerf (CM scripts),
you should simply reload mlcommons@cm4mlops without the checkout and clean the CM cache as follows:*

```bash
cm rm repo mlcommons@cm4mlops
cm pull repo mlcommons@cm4mlops
cm rm cache -f
```

## Results

Platform: 0e7e43cc4195-reference-gpu-pytorch-cu124

Model Precision: fp32

### Accuracy Results
`F1`: `90.87917`, required accuracy for the closed division: `>= 89.96526`

### Performance Results
`Samples per second`: `279.352`
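
The closed-division threshold above corresponds to 99% of an FP32 reference F1; the reported value is consistent with a reference F1 of 90.874 for BERT on SQuAD v1.1 (0.99 * 90.874 = 89.96526). A minimal Python sketch of that check, not part of the submission:

```python
# Minimal sketch of the bert-99 accuracy gate implied by the numbers above.
# The reference F1 of 90.874 and the 99% factor are inferred from the
# reported threshold, not taken from this repository.
REFERENCE_F1 = 90.874
REQUIRED_FRACTION = 0.99

measured_f1 = 90.87917  # F1 reported for this run

threshold = REFERENCE_F1 * REQUIRED_FRACTION
print(f"required F1 >= {threshold:.5f}")  # 89.96526
print("closed-division accuracy:", "PASS" if measured_f1 >= threshold else "FAIL")
```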

open/Dreambrook_Team/measurements/0e7e43cc4195-reference-gpu-pytorch-cu124/bert-99/offline/accuracy_console.out

Whitespace-only changes.
