Commit 94aab98

Merge pull request #48 from ayasyrev:pre-commit

pre-commit, fixes from ruff etc

2 parents dbe9e2b + b9bd02c, commit 94aab98

File tree

10 files changed: +180 −139 lines

.github/workflows/deploy_docs.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -11,6 +11,6 @@ jobs:
       - uses: actions/setup-python@main
         with:
           python-version: 3.x
-      - run: pip install mkdocs-material
+      - run: pip install mkdocs-material
       - run: pip install pymdown-extensions
       - run: mkdocs gh-deploy --force
```

.gitignore

Lines changed: 1 addition & 1 deletion

```diff
@@ -113,4 +113,4 @@ venv.bak/
 .vscode/settings.json

 # nox
-.nox
+.nox
```

.pre-commit-config.yaml

Lines changed: 72 additions & 2 deletions

```diff
@@ -1,7 +1,77 @@
 repos:
 - repo: https://github.com/ayasyrev/nbmetaclean
-  rev: 0.0.7
+  rev: 0.0.8
   hooks:
   - id: nbclean
     name: nbclean
-    entry: nbclean
+    entry: nbclean
+
+- repo: https://github.com/pre-commit/pre-commit-hooks
+  rev: v4.6.0
+  hooks:
+  - id: check-added-large-files
+  - id: check-ast
+  - id: check-builtin-literals
+  - id: check-case-conflict
+  - id: check-docstring-first
+  - id: check-executables-have-shebangs
+  - id: check-shebang-scripts-are-executable
+  - id: check-symlinks
+  - id: check-toml
+  - id: check-xml
+  - id: detect-private-key
+  - id: forbid-new-submodules
+  - id: forbid-submodules
+  - id: mixed-line-ending
+  - id: destroyed-symlinks
+  - id: fix-byte-order-marker
+  - id: check-json
+  - id: debug-statements
+  - id: end-of-file-fixer
+  - id: trailing-whitespace
+  - id: requirements-txt-fixer
+
+- repo: https://github.com/astral-sh/ruff-pre-commit
+  # Ruff version.
+  rev: v0.5.3
+  hooks:
+  # Run the linter.
+  - id: ruff
+    exclude: '__pycache__/'
+    args: [ --fix ]
+  # Run the formatter.
+  - id: ruff-format
+
+- repo: https://github.com/pre-commit/pygrep-hooks
+  rev: v1.10.0
+  hooks:
+  - id: python-check-mock-methods
+  - id: python-use-type-annotations
+  - id: python-check-blanket-noqa
+  - id: python-use-type-annotations
+  - id: text-unicode-replacement-char
+
+- repo: https://github.com/codespell-project/codespell
+  rev: v2.3.0
+  hooks:
+  - id: codespell
+    additional_dependencies: ["tomli"]
+
+# - repo: https://github.com/igorshubovych/markdownlint-cli
+#   rev: v0.41.0
+#   hooks:
+#   - id: markdownlint
+
+- repo: https://github.com/tox-dev/pyproject-fmt
+  rev: "2.1.4"
+  hooks:
+  - id: pyproject-fmt
+
+- repo: https://github.com/pre-commit/mirrors-mypy
+  rev: v1.10.1
+  hooks:
+  - id: mypy
+    files: ^albumentations/
+    additional_dependencies: [ types-PyYAML, types-setuptools, pydantic>=2.7]
+    args:
+      [ --config-file=pyproject.toml ]
```
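The config above is consumed by the pre-commit tool. As a hedged sketch of standard pre-commit CLI usage (these commands are part of the pre-commit tool, not of this commit, and assume a local clone with the config file present):

```shell
# Install the pre-commit tool itself
pip install pre-commit

# Register the hooks from .pre-commit-config.yaml as a git pre-commit hook
pre-commit install

# Run every configured hook once against all tracked files
pre-commit run --all-files
```

`pre-commit run --all-files` is the usual way to apply newly added hooks (such as the ruff, pygrep, and codespell hooks introduced here) to the existing codebase in one pass.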

Makefile

Lines changed: 1 addition & 1 deletion

```diff
@@ -9,4 +9,4 @@ dist: clean
 	python setup.py sdist bdist_wheel

 clean:
-	rm -rf dist
+	rm -rf dist
```

README.md

Lines changed: 20 additions & 20 deletions

````diff
@@ -9,14 +9,14 @@ hide:
 Utils for benchmark - wrapper over python timeit.
 <!-- cell -->
 [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/benchmark-utils)](https://pypi.org/project/benchmark-utils/)
-[![PyPI Status](https://badge.fury.io/py/benchmark-utils.svg)](https://badge.fury.io/py/benchmark-utils)
+[![PyPI Status](https://badge.fury.io/py/benchmark-utils.svg)](https://badge.fury.io/py/benchmark-utils)
 [![Tests](https://github.com/ayasyrev/benchmark_utils/workflows/Tests/badge.svg)](https://github.com/ayasyrev/benchmark_utils/actions?workflow=Tests) [![Codecov](https://codecov.io/gh/ayasyrev/benchmark_utils/branch/main/graph/badge.svg)](https://codecov.io/gh/ayasyrev/benchmark_utils)
 <!-- cell -->
 Tested on python 3.8 - 3.12
 <!-- cell -->
 ## Install
 <!-- cell -->
-Install from pypi:
+Install from pypi:

 `pip install benchmark_utils`

@@ -28,7 +28,7 @@ Or install from github repo:
 <!-- cell -->
 Lets benchmark some (dummy) functions.
 <!-- cell -->
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>
 ```python
 from time import sleep

@@ -43,18 +43,18 @@ def func_to_test_2(sleep_time: float = 0.11, mult: int = 1) -> None:
     sleep(sleep_time * mult)</details>

 <!-- cell -->
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>

 Let's create benchmark.</details>

 <!-- cell -->
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>
 ```python
 from benchmark_utils import Benchmark
 ```</details>

 <!-- cell -->
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>
 ```python
 bench = Benchmark(
     [func_to_test_1, func_to_test_2],
@@ -65,7 +65,7 @@ bench = Benchmark(
 ```python
 bench
 ```
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>



@@ -79,7 +79,7 @@ Now we can benchmark that functions.
 # we can run bench.run() or just:
 bench()
 ```
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>


 <pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"></pre>
@@ -114,7 +114,7 @@ We can run it again, all functions, some of it, exclude some and change number o
 ```python
 bench.run(num_repeats=10)
 ```
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>


 <pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"></pre>
@@ -149,7 +149,7 @@ After run, we can print results - sorted or not, reversed, compare results with
 ```python
 bench.print_results(reverse=True)
 ```
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>


 <pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"> Func name | Sec <span style="color: #800080; text-decoration-color: #800080">/</span> run
@@ -170,7 +170,7 @@ bench.print_results(reverse=True)
 <!-- cell -->
 We can add functions to benchmark as list of functions (or partial) or as dictionary: `{"name": function}`.
 <!-- cell -->
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>
 ```python
 bench = Benchmark(
     [
@@ -185,7 +185,7 @@ bench = Benchmark(
 ```python
 bench
 ```
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>



@@ -196,7 +196,7 @@ bench
 ```python
 bench.run()
 ```
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>


 <pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"></pre>
@@ -232,7 +232,7 @@ bench.run()
 </pre></details>

 <!-- cell -->
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>
 ```python
 bench = Benchmark(
     {
@@ -246,7 +246,7 @@ bench = Benchmark(
 ```python
 bench
 ```
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>



@@ -262,7 +262,7 @@ When we run benchmark script in terminal, we got pretty progress thanks to rich.
 <!-- cell -->
 With BenchmarkIter we can benchmark functions over iterables, for example read list of files or run functions with different arguments.
 <!-- cell -->
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>
 ```python
 def func_to_test_1(x: int) -> None:
     """simple 'sleep' func for test"""
@@ -278,7 +278,7 @@ dummy_params = list(range(10))
 ```</details>

 <!-- cell -->
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>
 ```python
 from benchmark_utils import BenchmarkIter

@@ -292,7 +292,7 @@ bench = BenchmarkIter(
 ```python
 bench()
 ```
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>


 <pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"></pre>
@@ -328,7 +328,7 @@ And we can limit number of items with `num_samples` argument:
 <!-- cell -->
 ## Multiprocessing
 <!-- cell -->
-By default we tun functions in one thread.
+By default we tun functions in one thread.
 But we can use multiprocessing with `multiprocessing=True` argument:
 `bench.run(multiprocessing=True)`
 It will use all available cpu cores.
@@ -338,7 +338,7 @@ And we can use `num_workers` argument to limit used cpu cores:
 ```python
 bench.run(multiprocessing=True, num_workers=2)
 ```
-<details open> <summary>output</summary>
+<details open> <summary>output</summary>


 <pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"></pre>
````
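The README hunks above revolve around benchmark-utils' `Benchmark`, a wrapper over Python's `timeit`. As a minimal stdlib-only sketch of that idea (the real package does considerably more, e.g. rich progress output, named functions, and multiprocessing; `run_benchmark` here is a hypothetical helper, not the package API):

```python
import timeit
from time import sleep


def func_to_test_1(sleep_time: float = 0.1) -> None:
    """Dummy workload, mirroring the README's example."""
    sleep(sleep_time)


def func_to_test_2(sleep_time: float = 0.05, mult: int = 2) -> None:
    """Dummy workload with a multiplier."""
    sleep(sleep_time * mult)


def run_benchmark(funcs, num_repeats: int = 3) -> dict:
    """Time each function num_repeats times; return best seconds per run."""
    results = {}
    for func in funcs:
        # timeit.repeat calls func `number` times per repeat and
        # returns a list of total times, one entry per repeat.
        times = timeit.repeat(func, repeat=num_repeats, number=1)
        results[func.__name__] = min(times)
    return results


results = run_benchmark([func_to_test_1, func_to_test_2])
for name, sec in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name}: {sec:.3f} sec/run")
```

Taking the minimum over repeats follows `timeit`'s own recommendation: the fastest run is the least disturbed by scheduler noise.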

0 commit comments