This repository has been archived by the owner on Feb 26, 2025. It is now read-only.

Should we use HPA #19

Open
paulgmiller opened this issue Dec 1, 2024 · 0 comments


paulgmiller commented Dec 1, 2024

You would still need something to gather information about failed evictions, though long term that could be kubernetes/kubernetes#128815.

However, once you have that data, you could model HPA resources instead of pdb-watcher resources, something like this:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: <minAvailable from PDB>
  maxReplicas: <original replicas + maxSurge>
  metrics:
    - type: Pods
      pods:
        metric:
          name: pods_without_eviction
        target:
          type: Value
          value: 5  # <original replicas count>
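
For concreteness, here is a minimal sketch of where those placeholders could come from, assuming a hypothetical PDB named my-deployment-pdb with minAvailable: 3 and the my-deployment Deployment running 5 replicas with maxSurge: 1; that would give minReplicas: 3, maxReplicas: 6, and a metric target of 5. (Note that in autoscaling/v2 a Pods metric only accepts an AverageValue target, so a literal version of the HPA above would likely need an averaged target or an Object metric instead.)

# Hypothetical source objects (names and numbers are assumptions, not from this issue).
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-deployment-pdb
spec:
  minAvailable: 3              # -> hpa minReplicas
  selector:
    matchLabels:
      app: my-deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 5                  # -> metric target (original replica count)
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1              # replicas + maxSurge -> hpa maxReplicas (6)
  selector:
    matchLabels:
      app: my-deployment
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      containers:
        - name: app
          image: registry.example.com/app:latest   # placeholder image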

Positives:

  • all scale target types supported by HPA are supported
  • can merge with other HPAs
  • no new CRD needed?

Downsides:

  • something else (a deployment watcher) needs to track whether the user, rather than the watcher, changed the deployment, and reset minReplicas accordingly
  • dependency on the metrics server (who ensures HA of the metrics server?)
  • need a new metrics API and need to point the metrics server at it? (see the registration sketch after the link below)

https://github.com/kubernetes-sigs/custom-metrics-apiserver
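
To illustrate that last downside: an adapter built on custom-metrics-apiserver would have to be registered with the API aggregator so the HPA controller can resolve pods_without_eviction. A rough sketch of that registration, assuming a hypothetical adapter Service named custom-metrics-apiserver in a custom-metrics namespace:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta2.custom.metrics.k8s.io
spec:
  group: custom.metrics.k8s.io
  version: v1beta2
  groupPriorityMinimum: 100
  versionPriority: 100
  service:
    # Hypothetical Service fronting the custom metrics adapter.
    name: custom-metrics-apiserver
    namespace: custom-metrics
  # For a real deployment, set caBundle instead of skipping TLS verification.
  insecureSkipTLSVerify: true

The HPA controller would then query the metric through the aggregated /apis/custom.metrics.k8s.io endpoint, which is where the availability concern about the metrics pipeline comes in.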

https://chatgpt.com/share/674cae4f-a758-8009-a2bf-288ef25ba215
