Commit 5cd8a8d

chore: bump dependencies and fix linter issues (#186)
* chore: bump dependencies and fix linter issues
* chore: bump helm values
* chore: bump dockerfile image version
* fix: linebreak
1 parent c31967a commit 5cd8a8d

File tree

16 files changed (+784, −435 lines)


.github/workflows/python-tests.yml (+2 −2)

@@ -17,10 +17,10 @@ jobs:
     steps:
       - uses: actions/checkout@v4
-      - name: Set up Python 3.9
+      - name: Set up Python 3.10
        uses: actions/setup-python@v5
        with:
-          python-version: 3.9
+          python-version: "3.10"
      - name: Install dependencies
        run: |
          pip install poetry

.pre-commit-config.yaml (+6 −6)

@@ -64,29 +64,29 @@ repos:

   # formatters
   - repo: https://github.com/asottile/reorder_python_imports
-    rev: v3.15.0
+    rev: v3.16.0
     hooks:
       - id: reorder-python-imports

   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.12.10
+    rev: v0.14.11
     hooks:
       # Run the linter.
-      - id: ruff
+      - id: ruff-check
         args: [--fix]
       # Run the formatter.
       - id: ruff-format

   # linters
   - repo: https://github.com/PyCQA/bandit
-    rev: 1.8.6
+    rev: 1.9.2
     hooks:
       - id: bandit
         args: ["-x", "tests"]
         stages: [pre-push]

   - repo: https://github.com/pre-commit/mirrors-mypy
-    rev: v1.17.1
+    rev: v1.19.1
     hooks:
       - id: mypy
         additional_dependencies: [types-pytz, types-requests]

@@ -109,7 +109,7 @@ repos:
       - id: python-check-blanket-noqa

   - repo: https://github.com/rbubley/mirrors-prettier
-    rev: v3.6.2
+    rev: v3.7.4
     hooks:
       - id: prettier
         files: \.md$

.travis.yml (+1 −1)

@@ -2,7 +2,7 @@ dist: bionic
 sudo: yes
 language: python
 python:
-  - "3.9"
+  - "3.10"
 services:
   - docker
 install:

Dockerfile (+1 −1)

@@ -1,4 +1,4 @@
-FROM python:3.12.12-alpine3.22 AS builder
+FROM python:3.12.12-alpine3.23 AS builder

 WORKDIR /
README.md (+7 −6)

@@ -296,15 +296,17 @@ You can also use simplified time specifications.
 If the weekday range or the timezone is not included in the specification, the downscaler can automatically complete the rule when corresponding environment variables are defined.

 The following environment variables can be used to provide default values:
+
 - `DEFAULT_TIMEZONE` defines the time zone to use when not specified in the specification (for example `Europe/Paris`).
 - `DEFAULT_WEEKFRAME` defines the weekday range to use when not specified in the specification (for example `Mon-Sun`).

 For example:
+
 - `Mon-Fri 08:00-20:00` is interpreted as `Mon-Fri 08:00-20:00 Europe/Paris` when `DEFAULT_TIMEZONE` is set to `Europe/Paris`.
 - `08:00-20:00 Europe/Paris` is interpreted as `Mon-Sun 08:00-20:00 Europe/Paris` when `DEFAULT_WEEKFRAME` is set to `Mon-Sun`.
 - `08:00-20:00` will only be valid if both `DEFAULT_TIMEZONE` and `DEFAULT_WEEKFRAME` are defined.

-If the timezone and/or the weekday range are not provided in the specification and the corresponding default environment variable is not set, the downscaler will raise a clear `ValueError` to prevent ambiguous or unintended behavior.
+If the timezone and/or the weekday range are not provided in the specification and the corresponding default environment variable is not set, the downscaler will raise a clear `ValueError` to prevent ambiguous or unintended behavior.

 If you want to schedule downtime from 23:30 to 09:30 the following day,
 a configuration like this would be incorrect:

@@ -489,19 +491,18 @@ argument is strongly recommended when using the `--once` argument to process lar

 `--qps`
-: Optional: rate of API requests a KubeDownscaler is allowed to send per second
+: Optional: rate of API requests a KubeDownscaler is allowed to send per second
   (default: 0, meaning qps control is disabled)

 `--burst`
-: Optional: maximum number of requests a KubeDownscaler can send at once above the QPS limit
+: Optional: maximum number of requests a KubeDownscaler can send at once above the QPS limit
   (default: 0, meaning burst control is disabled) Burst must be greater than or equal to qps

 `--max-retries-on-throttling`
-: Optional: How many retries to perform when KubeDownscaler hits API Server throttling limits (default: 0). The
-  retries are performed using an exponential backoff strategy to reduce the chance of hitting the rate limit again.
+: Optional: How many retries to perform when KubeDownscaler hits API Server throttling limits (default: 0).
+  The retries are performed using an exponential backoff strategy to reduce the chance of hitting the rate limit again.

 ### Constrained Mode (Limited Access Mode)
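The README's default-completion rules can be sketched as a small standalone function. This is an illustrative sketch only, not the downscaler's actual code; `complete_spec` and `WEEKFRAME_RE` are hypothetical names:

```python
import os
import re

# Defaults for this demo; in a real deployment these come from the container env.
os.environ.setdefault("DEFAULT_TIMEZONE", "Europe/Paris")
os.environ.setdefault("DEFAULT_WEEKFRAME", "Mon-Sun")

# A leading "Mon-Fri "-style weekday range.
WEEKFRAME_RE = re.compile(r"^[A-Za-z]{3}-[A-Za-z]{3} ")


def complete_spec(spec: str) -> str:
    """Fill a missing timezone and/or weekday range from the env defaults."""
    if spec[-1].isdigit():  # spec ends with a time, so the timezone is missing
        tz = os.environ.get("DEFAULT_TIMEZONE")
        if not tz:
            raise ValueError("No default timezone defined in environment variable 'DEFAULT_TIMEZONE'")
        spec = spec + " " + tz
    if not WEEKFRAME_RE.match(spec):  # no leading weekday range
        wf = os.environ.get("DEFAULT_WEEKFRAME")
        if not wf:
            raise ValueError("No default week frame defined in environment variable 'DEFAULT_WEEKFRAME'")
        spec = wf + " " + spec
    return spec


print(complete_spec("08:00-20:00"))  # Mon-Sun 08:00-20:00 Europe/Paris
```

With only `DEFAULT_TIMEZONE` set, `08:00-20:00` would still raise, matching the README's last bullet.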

chart/Chart.yaml

Lines changed: 2 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -3,5 +3,5 @@ name: py-kube-downscaler
33
description: A Helm chart for deploying py-kube-downscaler
44

55
type: application
6-
version: 0.3.8
7-
appVersion: "25.11.0"
6+
version: 0.3.9
7+
appVersion: "26.1.0"

kube_downscaler/helper.py (+74 −40)

@@ -1,14 +1,19 @@
 import datetime
 import json
 import logging
-import re
 import os
+import re
 import sys
 import time
-from typing import Match, Callable, TypeVar, Optional
+from typing import Callable
+from typing import Match
+from typing import Optional
+from typing import TypeVar

 import pykube
 import pytz
 import requests
+
 from kube_downscaler.tokenbucket import TokenBucket

 logger = logging.getLogger(__name__)

@@ -21,8 +26,10 @@
 TIME_SPEC_PATTERN = re.compile(
     r"^([a-zA-Z]{3})-([a-zA-Z]{3}) (\d\d):(\d\d)-(\d\d):(\d\d) (?P<tz>[a-zA-Z/_]+)$"
 )
-TIME_SPEC_PATTERN_WO_TZ = re.compile(r'.*(\d\d)$')
-TIME_SPEC_PATTERN_WO_WF = re.compile(r'^(\d\d):(\d\d)-(\d\d):(\d\d) (?P<tz>[a-zA-Z/_]+)$')
+TIME_SPEC_PATTERN_WO_TZ = re.compile(r".*(\d\d)$")
+TIME_SPEC_PATTERN_WO_WF = re.compile(
+    r"^(\d\d):(\d\d)-(\d\d):(\d\d) (?P<tz>[a-zA-Z/_]+)$"
+)
 _ISO_8601_TIME_SPEC_PATTERN = r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}[-+]\d{2}:\d{2})"
 ABSOLUTE_TIME_SPEC_PATTERN = re.compile(
     r"^{0}-{0}$".format(_ISO_8601_TIME_SPEC_PATTERN)

@@ -41,14 +48,18 @@ def matches_time_spec(time: datetime.datetime, spec: str):
     match = TIME_SPEC_PATTERN_WO_TZ.match(spec_)
     if match and not ABSOLUTE_TIME_SPEC_PATTERN.match(spec_):
         if DEFAULT_TIMEZONE:
-            spec_ = spec_ + ' ' + DEFAULT_TIMEZONE
+            spec_ = spec_ + " " + DEFAULT_TIMEZONE
         else:
-            raise ValueError("No default timezone defined in environment variable 'DEFAULT_TIMEZONE'")
+            raise ValueError(
+                "No default timezone defined in environment variable 'DEFAULT_TIMEZONE'"
+            )
     if TIME_SPEC_PATTERN_WO_WF.match(spec_):
         if DEFAULT_WEEKFRAME:
-            spec_ = DEFAULT_WEEKFRAME + ' ' + spec_
+            spec_ = DEFAULT_WEEKFRAME + " " + spec_
         else:
-            raise ValueError("No default week frame defined in environment variable 'DEFAULT_WEEKFRAME'")
+            raise ValueError(
+                "No default week frame defined in environment variable 'DEFAULT_WEEKFRAME'"
+            )
     recurring_match = TIME_SPEC_PATTERN.match(spec_)
     if recurring_match is not None and _matches_recurring_time_spec(
         time, recurring_match

@@ -153,7 +164,7 @@ def add_event(resource, message: str, reason: str, event_type: str, dry_run: boo
     try:
         call_with_exponential_backoff(
             lambda: event.update(),
-            context_msg = f"updating event for id {uid}",
+            context_msg=f"updating event for id {uid}",
         )
         return event
     except requests.HTTPError as e:

@@ -212,47 +223,59 @@ def create_event(resource, message: str, reason: str, event_type: str, dry_run:
     else:
         raise e

+
+class JsonFormatter(logging.Formatter):
+    def format(self, record: logging.LogRecord) -> str:
+        return json.dumps(
+            {
+                "time": self.formatTime(record),
+                "severity": record.levelname,
+                "message": record.getMessage().replace('"', "'"),
+            }
+        )
+
+
 def setup_logging(debug: bool, json_logs: bool):
-    logging.getLogger().handlers.clear()
     root_logger = logging.getLogger()
+    root_logger.handlers.clear()
     root_logger.setLevel(logging.DEBUG if debug else logging.INFO)

     stderr_handler = logging.StreamHandler(sys.stderr)

+    formatter: logging.Formatter
     if json_logs:
-        formatter = logging.Formatter()
-        formatter.format = lambda record: json.dumps({
-            "time": logging.Formatter.formatTime(logging.Formatter(), record),
-            "severity": record.levelname,
-            "message": record.getMessage().replace('"', "'")
-        })
+        formatter = JsonFormatter()
     else:
         formatter = logging.Formatter("%(asctime)s %(levelname)s: %(message)s")

     stderr_handler.setFormatter(formatter)
     root_logger.addHandler(stderr_handler)

+
 def initialize_token_bucket(qps, burst):
     global TOKEN_BUCKET
     if qps == 0 and burst == 0:
         TOKEN_BUCKET = None
     TOKEN_BUCKET = TokenBucket(qps=qps, burst=burst)

+
 def initialize_max_retries(max_retries):
     global MAX_RETRIES
     MAX_RETRIES = max_retries

-T = TypeVar('T')
+
+T = TypeVar("T")
+

 def call_with_exponential_backoff(
-    func: Callable[..., T],
-    base_delay: float = 1.0,
-    max_delay: float = 60.0,
-    backoff_factor: int = 2,
-    jitter: bool = True,
-    retry_on_status_codes: tuple = (429,),
-    context_msg: Optional[str] = None,
-    use_token_bucket: bool = True
+    func: Callable[..., T],
+    base_delay: float = 1.0,
+    max_delay: float = 60.0,
+    backoff_factor: int = 2,
+    jitter: bool = True,
+    retry_on_status_codes: tuple = (429,),
+    context_msg: Optional[str] = None,
+    use_token_bucket: bool = True,
 ) -> T:
     """
     Generic function to call any function with exponential backoff on HTTP errors.

@@ -296,34 +319,45 @@ def call_with_exponential_backoff(
             logger.error(error_msg)
             raise e

-        #check for "Retry-After" header
-        retry_after = e.response.headers.get('Retry-After')
+        # check for "Retry-After" header
+        retry_after = e.response.headers.get("Retry-After")

         if retry_after:
             try:
-                #retry-After can be in seconds (integer) or HTTP date format
+                # retry-After can be in seconds (integer) or HTTP date format
                 if retry_after.isdigit():
                     delay = float(retry_after)
                 else:
-                    #try parsing as HTTP date
+                    # try parsing as HTTP date
                     from email.utils import parsedate_to_datetime
+
                     retry_date = parsedate_to_datetime(retry_after)
-                    delay = (retry_date - datetime.datetime.now(retry_date.tzinfo)).total_seconds()
+                    delay = (
+                        retry_date
+                        - datetime.datetime.now(retry_date.tzinfo)
+                    ).total_seconds()

-                #cap the delay at max_delay
+                # cap the delay at max_delay
                 delay = min(delay, max_delay)

-                logger.info(f"using Retry-After header value: {delay:.2f} seconds")
+                logger.info(
+                    f"using Retry-After header value: {delay:.2f} seconds"
+                )
             except (ValueError, TypeError) as parse_error:
                 logger.warning(
-                    f"failed to parse Retry-After header '{retry_after}': {parse_error}. Using exponential backoff.")
-                #fall back to exponential backoff
-                delay = min(base_delay * (backoff_factor ** retry_count), max_delay)
+                    f"failed to parse Retry-After header '{retry_after}': {parse_error}. Using exponential backoff."
+                )
+                # fall back to exponential backoff
+                delay = min(
+                    base_delay * (backoff_factor**retry_count), max_delay
+                )
         else:
-            #calculate exponential backoff
-            delay = min(base_delay * (backoff_factor ** retry_count), max_delay)
+            # calculate exponential backoff
+            delay = min(
+                base_delay * (backoff_factor**retry_count), max_delay
+            )

-        #add jitter if not using "Retry-After" header
+        # add jitter if not using "Retry-After" header
         if jitter and not retry_after:
             jitter_amount = delay * 0.1 * (time.time() % 1)
             delay += jitter_amount

@@ -337,7 +371,7 @@ def call_with_exponential_backoff(
         time.sleep(delay)
         retry_count += 1
     else:
-        #re-raise non-retryable errors immediately
+        # re-raise non-retryable errors immediately
         raise e

     if last_exception:

@@ -348,4 +382,4 @@ def call_with_exponential_backoff(
     if use_token_bucket and TOKEN_BUCKET:
         TOKEN_BUCKET.acquire()

-    return func()
+    return func()
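The retry strategy being reformatted here can be distilled into a minimal standalone sketch. This is illustrative only; the real `call_with_exponential_backoff` above additionally honors the `Retry-After` header, filters on HTTP status codes, and consults the token bucket:

```python
import random
import time


def with_backoff(func, retries=5, base_delay=1.0, backoff_factor=2, max_delay=60.0):
    """Call func(), retrying on any exception with capped exponential backoff plus jitter."""
    for attempt in range(retries):
        try:
            return func()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: re-raise the last error
            # delay grows as base_delay * factor^attempt, capped at max_delay
            delay = min(base_delay * (backoff_factor ** attempt), max_delay)
            delay += delay * 0.1 * random.random()  # up to 10% jitter
            time.sleep(delay)
```

The jitter spreads out retries from multiple replicas so they do not all hit the API server at the same instant.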

kube_downscaler/main.py (+3 −3)

@@ -24,15 +24,15 @@ def main(args=None):
     args = parser.parse_args(args)

     if args.burst < args.qps:
-        logger.error("Failed to start, burst value must be greater than or equal to qps value")
+        logger.error(
+            "Failed to start, burst value must be greater than or equal to qps value"
+        )
         return None

-
     helper.setup_logging(args.debug, args.json_logs)
     helper.initialize_token_bucket(args.qps, args.burst)
     helper.initialize_max_retries(args.max_retries_on_throttling)

-
     config_str = ", ".join(f"{k}={v}" for k, v in sorted(vars(args).items()))
     logger.info(f"Downscaler v{__version__} started with {config_str}")
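The `qps`/`burst` pair validated in `main` maps onto a classic token bucket: up to `burst` requests may go out immediately, and tokens refill at `qps` per second. A minimal sketch of the idea (not the project's actual `kube_downscaler.tokenbucket` implementation):

```python
import time


class SimpleTokenBucket:
    """Allow short bursts of up to `burst` requests, refilling at `qps` tokens/second."""

    def __init__(self, qps: float, burst: int):
        self.qps = qps
        self.capacity = burst
        self.tokens = float(burst)   # start full, so an initial burst is free
        self.last = time.monotonic()

    def acquire(self) -> None:
        """Block until one token is available, then consume it."""
        while True:
            now = time.monotonic()
            # refill proportionally to elapsed time, capped at capacity
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.qps)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.qps)  # wait for the missing fraction
```

Calling `acquire()` before every API request smooths traffic to roughly `qps` requests per second while still permitting short bursts.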
