Utils for benchmarking - a wrapper over Python's timeit.

[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/benchmark-utils)](https://pypi.org/project/benchmark-utils/)
[![PyPI Status](https://badge.fury.io/py/benchmark-utils.svg)](https://badge.fury.io/py/benchmark-utils)
[![Tests](https://github.com/ayasyrev/benchmark_utils/workflows/Tests/badge.svg)](https://github.com/ayasyrev/benchmark_utils/actions?workflow=Tests) [![Codecov](https://codecov.io/gh/ayasyrev/benchmark_utils/branch/main/graph/badge.svg)](https://codecov.io/gh/ayasyrev/benchmark_utils)

Tested on Python 3.8 - 3.12.

## Install

Install from PyPI:

`pip install benchmark_utils`

Or install from the GitHub repo:

`pip install git+https://github.com/ayasyrev/benchmark_utils.git`

Let's benchmark some (dummy) functions.

```python
from time import sleep


def func_to_test_1(sleep_time: float = 0.1, mult: int = 1) -> None:
    """simple 'sleep' func for test"""
    sleep(sleep_time * mult)


def func_to_test_2(sleep_time: float = 0.11, mult: int = 1) -> None:
    """simple 'sleep' func for test"""
    sleep(sleep_time * mult)
```

Let's create a benchmark.

```python
from benchmark_utils import Benchmark
```

```python
bench = Benchmark(
    [func_to_test_1, func_to_test_2],
)
```

```python
bench
```

Now we can benchmark these functions.

```python
# we can run bench.run() or just:
bench()
```

We can run it again: all functions, just some of them, or excluding some, and we can change the number of repeats.

```python
bench.run(num_repeats=10)
```
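
Only `num_repeats` is shown above; for selective runs, the call forms below are assumptions for illustration, so verify the parameter names with `help(bench.run)`:

```python
bench.run(num_repeats=5)  # confirmed above: change the number of repeats
# Hypothetical forms (parameter names assumed, not confirmed here):
# bench.run("func_to_test_1")          # run only selected functions
# bench.run(exclude="func_to_test_2")  # exclude some functions
```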

After a run we can print the results: sorted or not, reversed, compared with the best result or not.

```python
bench.print_results(reverse=True)
```

*(output: a rich-formatted table with columns `Func name` and `Sec/run`)*
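
The sorting and comparison switches presumably map to other `print_results` arguments; the names below are assumptions, so check `help(bench.print_results)`:

```python
bench.print_results()  # default formatting
# Hypothetical forms (argument names assumed):
# bench.print_results(sort=False)    # keep the original order
# bench.print_results(compare=True)  # compare results with the best one
```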

We can add functions to a benchmark as a list of functions (or partials), or as a dictionary: `{"name": function}`.

```python
from functools import partial

bench = Benchmark(
    [
        func_to_test_1,
        partial(func_to_test_1, sleep_time=0.12),  # list entries reconstructed for illustration
    ]
)
```

```python
bench
```
196
196

```python
bench.run()
```

```python
bench = Benchmark(
    {  # dictionary entries reconstructed for illustration
        "func_to_test_1": func_to_test_1,
        "func_to_test_2": func_to_test_2,
    }
)
```

```python
bench
```
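
Dictionary keys are handy for labeling variants of one function, for example together with `functools.partial`; a minimal sketch (labels and values here are illustrative):

```python
from functools import partial

# Benchmark one function under several configurations,
# with dictionary keys as readable result labels.
bench_mult = Benchmark(
    {
        "sleep x1": partial(func_to_test_1, mult=1),
        "sleep x2": partial(func_to_test_1, mult=2),
    }
)
bench_mult.run()
```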

When we run a benchmark script in a terminal, we get a pretty progress display thanks to rich.

With `BenchmarkIter` we can benchmark a function over an iterable, for example reading a list of files or running a function with different arguments.

```python
def func_to_test_1(x: int) -> None:
    """simple 'sleep' func for test"""
    sleep(0.01)  # body reconstructed: a short sleep per item


dummy_params = list(range(10))
```

```python
from benchmark_utils import BenchmarkIter

bench = BenchmarkIter(
    func=func_to_test_1,  # constructor arguments reconstructed; verify with help(BenchmarkIter)
    item_list=dummy_params,
)
```

```python
bench()
```

And we can limit the number of items with the `num_samples` argument:
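
A sketch, assuming `num_samples` is an argument of `run` (the sentence above names the argument but not the exact call):

```python
# Assumption: num_samples is accepted by run(); verify with help(bench.run).
bench.run(num_samples=5)  # benchmark using only 5 items from the iterable
```
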
## Multiprocessing

By default we run functions in one thread.
But we can use multiprocessing with the `multiprocessing=True` argument:

`bench.run(multiprocessing=True)`

It will use all available CPU cores.
And we can use the `num_workers` argument to limit the number of CPU cores used:

```python
bench.run(multiprocessing=True, num_workers=2)
```