
Commit 28ac8b0

Generate README.md with new metrics
1 parent cf46f46 commit 28ac8b0

File tree

1 file changed: +15 additions, -16 deletions

README.md

Lines changed: 15 additions & 16 deletions
@@ -10,15 +10,15 @@ The table below summarizes the results of running various ML models through our
 
 | Model | Run Success | Torch Ops Before (Unique Ops) | Torch Ops Remain (Unique Ops) | To/From Device Ops | Original Run Time (ms) | Compiled Run Time (ms) | Accuracy (%) |
 |:------------------------------------|:--------------|:--------------------------------|:--------------------------------|---------------------:|-------------------------:|:-------------------------|:---------------|
-| [Mnist (Eval)](tests/models/mnist) || 14 (8) | 5 (4) | 16 | 38.64 | 501.5 | 99.85 |
-| [Mnist (Train)](tests/models/mnist) || 14 (8) | 7 (5) | 14 | 136.38 | 2709.01 | 66.84 |
-| [ResNet18](tests/models/resnet) || 70 (9) | 42 (4) | 47 | 2131.05 | 9985.44 | 99.99 |
-| [Bloom](tests/models/bloom) || 1407 (29) | 626 (11) | 1379 | 28892.3 | 68470.67 | 45.77 |
-| [YOLOS](tests/models/yolos) || 964 (28) | 409 (11) | 919 | 1410.28 | 45328.58 | 71.71 |
-| [Llama](tests/models/llama) || 5 (4) | 3 (2) | 3 | 206771 | 187910.29 | 45.46 |
-| [BERT](tests/models/bert) || 1393 (21) | 539 (5) | 1513 | 67347.3 | 60024.8 | 98.64 |
-| [Falcon](tests/models/falcon) || 3 (3) | 2 (2) | 5 | 51366.6 | N/A | N/A |
-| [GPT-2](tests/models/gpt2) || 748 (31) | 316 (12) | 569 | 5711.32 | N/A | N/A |
+| [Mnist (Eval)](tests/models/mnist) || 14 (8) | 5 (4) | 16 | 35.53 | 556.63 | 99.72 |
+| [Mnist (Train)](tests/models/mnist) || 14 (8) | 7 (5) | 14 | 114.16 | 3076.17 | 76.59 |
+| [ResNet18](tests/models/resnet) || 70 (9) | 42 (4) | 44 | 2023.95 | 10673.42 | 99.99 |
+| [Bloom](tests/models/bloom) || 1407 (29) | 626 (11) | 1378 | 28504 | 68025.6 | 45.77 |
+| [YOLOS](tests/models/yolos) || 964 (28) | 320 (11) | 825 | 1340.21 | 46101.1 | 71.71 |
+| [Llama](tests/models/llama) || 3 (2) | 2 (2) | 2 | 164063 | 166348.21 | 100.0 |
+| [BERT](tests/models/bert) || 1393 (21) | 491 (5) | 1465 | 63591.6 | 55096.44 | 98.64 |
+| [Falcon](tests/models/falcon) || 3 (3) | 2 (2) | 5 | 46268.6 | N/A | N/A |
+| [GPT-2](tests/models/gpt2) || 748 (31) | 307 (12) | 644 | 1793.52 | N/A | N/A |
 
 ### Explanation of Metrics
 
@@ -135,12 +135,10 @@ The table below summarizes the results of running various ML models through our
 | aten.unsqueeze.default || 1 |
 | aten.view.default || 283 |
 #### Llama
-| aten ops | status | count |
-|:----------------------|:---------|--------:|
-| aten._to_copy.default || 1 |
-| aten.mm.default || 1 |
-| aten.t.default || 1 |
-| aten.view.default || 2 |
+| aten ops | status | count |
+|:-----------------------|:---------|--------:|
+| aten.slice.Tensor || 1 |
+| aten.unsqueeze.default || 2 |
 #### BERT
 | aten ops | status | count |
 |:-------------------------------|:---------|--------:|
@@ -291,7 +289,7 @@ Then you can upload the `.whl` file to the PyPI (Python Package Index).
 ## Run transformer models
 To run a transformer model with the ttnn backend, run:
 ```
-PYTHONPATH=${TT_METAL_HOME}:$(pwd) python3 tools/run_transformers.py --model "phiyodr/bert-large-finetuned-squad2" --backend torch_ttnn
+PYTHONPATH="$TT_METAL_HOME:$(pwd)" python3 tools/run_transformers.py --model "phiyodr/bert-large-finetuned-squad2" --backend torch_ttnn
 ```
 
 You can also substitute the backend with `torch_stat` to run a reference comparison.
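
The context line above mentions substituting the `torch_stat` backend to get a reference comparison. A hypothetical invocation, inferred from the command in this hunk rather than copied from the README, would swap only the `--backend` flag:

```
# Assumed variant: same script, model, and PYTHONPATH as above, backend swapped to torch_stat.
PYTHONPATH="$TT_METAL_HOME:$(pwd)" python3 tools/run_transformers.py --model "phiyodr/bert-large-finetuned-squad2" --backend torch_stat
```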
@@ -319,3 +317,4 @@ def test_model_name(record_property):
     # Can be set once all three objects for the tuple are defined
     record_property("torch_ttnn", (model, test_input(s), outputs))
 ```
+
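The final hunk shows only the tail of the README's test example. A minimal, self-contained sketch of the `record_property` pattern it refers to, using `torch.nn.Linear` and a random tensor as stand-ins for a real model and test input (these placeholders are not from the repository), could look like:

```python
import torch


def test_model_name(record_property):
    # Placeholder model and input; a real test would exercise the model under test.
    model = torch.nn.Linear(4, 2)
    inputs = torch.randn(1, 4)
    outputs = model(inputs)
    # Can be set once all three objects for the tuple are defined
    record_property("torch_ttnn", (model, inputs, outputs))
```

`record_property` is pytest's built-in fixture; the recorded tuple shows up as a user property on the test report (for example in junit XML output) under the `torch_ttnn` key.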