@@ -27,25 +27,23 @@ The evaluation will include five key benchmarks:
#### (Network) Bandwidth
Big (1MB) sequential I/O requests, 32 concurrently, to stress the network
- ```
-           100MB       300GB       600GB
- Writes:   193 MiB/s   219 MiB/s   200 MiB/s
- Reads:    1990 MiB/s  690 MiB/s   1003 MiB/s
- ```
+ |         | *NESE* 100MB | *NESE* 300GB | *NESE* 600GB | *Weka* 300GB |
+ | ------- | ------------ | ------------ | ------------ | ------------ |
+ | Writes: | 193 MiB/s    | 219 MiB/s    | 200 MiB/s    | 1060 MiB/s   |
+ | Reads:  | 1990 MiB/s   | 690 MiB/s    | 1003 MiB/s   | 1404 MiB/s   |
+
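For reference, a minimal single-process sketch of this style of bandwidth test in Python. This is an illustration only, not the benchmark used for the numbers above: the real test keeps 32 requests in flight concurrently (typically via a dedicated tool such as fio), and the scratch-file path and sizes here are arbitrary.

```python
import os
import tempfile
import time

BLOCK = 1 << 20   # 1 MiB per request, matching the benchmark's request size
COUNT = 64        # 64 MiB total -- kept small for illustration

def seq_bandwidth(path):
    """Measure sequential write then read bandwidth in MiB/s (single-threaded)."""
    buf = os.urandom(BLOCK)
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(COUNT):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force data to the storage backend
    write_mibs = COUNT / (time.perf_counter() - t0)

    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(BLOCK):
            pass
    read_mibs = COUNT / (time.perf_counter() - t0)
    return write_mibs, read_mibs

with tempfile.TemporaryDirectory() as d:
    w, r = seq_bandwidth(os.path.join(d, "bw.dat"))
    print(f"Writes: {w:.0f} MiB/s  Reads: {r:.0f} MiB/s")
```

Run against a mount of the storage class under test rather than a temporary directory; local runs mostly measure the page cache.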
#### Latency
Small (4KB) random I/O requests, no concurrency, to measure best-case latency
- ```
- [in ms]         100MB   300GB   600GB
- Writes Avg.:    37.23   38.7    45.8
- Writes Median:  5.1     5.21    7.1
- Writes 99%:     371.1   337.6   405.5
- Reads Avg.:     0.8     17.62   10.6
- Reads Median:   0.39    13.43   10.58
- Reads 99%:      10.6    109.57  96.5
- ```
-
+ | in ms          | *NESE* 100MB | *NESE* 300GB | *NESE* 600GB | *Weka* 300GB |
+ | -------------- | ------------ | ------------ | ------------ | ------------ |
+ | Writes Avg.:   | 37.23        | 38.7         | 45.8         | 0.275        |
+ | Writes Median: | 5.1          | 5.21         | 7.1          | 0.247        |
+ | Writes 99%:    | 371.1        | 337.6        | 405.5        | 0.525        |
+ | Reads Avg.:    | 0.8          | 17.62        | 10.6         | 0.355        |
+ | Reads Median:  | 0.39         | 13.43        | 10.58        | 0.311        |
+ | Reads 99%:     | 10.6         | 109.57       | 96.5         | 0.545        |
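A similarly minimal sketch of the latency measurement: 4 KiB random reads issued one at a time, reporting average, median, and 99th-percentile latency in ms. Again this is illustrative only (file size and sample count are arbitrary), and on a local filesystem the OS page cache will produce far lower numbers than networked storage.

```python
import os
import random
import statistics
import tempfile
import time

BLOCK = 4096          # 4 KiB per request, matching the benchmark
FILE_BLOCKS = 2048    # 8 MiB scratch file -- small, for illustration
SAMPLES = 500

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "lat.dat")
    with open(path, "wb") as f:
        f.write(os.urandom(BLOCK * FILE_BLOCKS))

    lat_ms = []
    with open(path, "rb") as f:
        for _ in range(SAMPLES):
            off = random.randrange(FILE_BLOCKS) * BLOCK
            t0 = time.perf_counter()
            f.seek(off)
            f.read(BLOCK)  # one outstanding request at a time (no concurrency)
            lat_ms.append((time.perf_counter() - t0) * 1000)

    lat_ms.sort()
    print(f"Avg: {statistics.mean(lat_ms):.3f} ms  "
          f"Median: {statistics.median(lat_ms):.3f} ms  "
          f"99%: {lat_ms[int(0.99 * len(lat_ms))]:.3f} ms")
```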
Full results can be found in the [results/](results) folder.
@@ -75,8 +73,7 @@ The results can be found in the [results/](results) folder.
| A100 | 3500 | NESE Ceph PVC | 10.81 | 165.35 |
| H100 | 3500 | NESE Ceph PVC | 5.58 | 168.05 |
| A100 | 1000 | Local EmptyDir PVC | 24.51 | 729.10 |
- | | | Weka PVC | | |
- | | | Weka PVC | | |
+ | A100 | 1000 | Weka PVC | 58.08 | 871.68 |
The results below have not been run on the NERC and are provided purely for reference.
@@ -86,5 +83,7 @@ The below results have not been run on the NERC and are provided purely for refe
Other results contributed by organizations can be found on the [MLPerf Storage website](https://mlcommons.org/benchmarks/storage/).
- ## Real Inference Workload
- To be derived from Sanjay’s work about the average model to use as an example. Granit (consult Perf group). OPT13B and LLAMA.
+ ## Training Workload
+ To be performed:
+ - ResNet with the ImageNet dataset
+ - BERT with [Wikipedia and bookcorpusopen](https://huggingface.co/datasets/sradc/chunked-shuffled-wikipedia20220301en-bookcorpusopen)