
fix(container): update image coredns ( 1.39.0 → 1.39.1 ) #5227

Merged: 1 commit merged into main from renovate/main-coredns-1.x on Feb 25, 2025

Conversation

repo-jeeves[bot] (Contributor) commented on Feb 25, 2025

This PR contains the following updates:

| Package          | Update | Change           |
| ---------------- | ------ | ---------------- |
| coredns (source) | patch  | 1.39.0 -> 1.39.1 |

repo-jeeves bot added labels on Feb 25, 2025: renovate/container (Issue relates to a Renovate container update), type/patch (Issue relates to a patch version bump), cluster/main (Changes made in the main cluster), size/XS (Marks a PR that changes 0-9 lines, ignoring generated files)
repo-jeeves bot (Contributor, Author) commented on Feb 25, 2025

--- HelmRelease: observability/kube-prometheus-stack DaemonSet: observability/node-exporter

+++ HelmRelease: observability/kube-prometheus-stack DaemonSet: observability/node-exporter

@@ -40,13 +40,13 @@

         runAsGroup: 65534
         runAsNonRoot: true
         runAsUser: 65534
       serviceAccountName: node-exporter
       containers:
       - name: node-exporter
-        image: quay.io/prometheus/node-exporter:v1.9.0
+        image: quay.io/prometheus/node-exporter:v1.8.2
         imagePullPolicy: IfNotPresent
         args:
         - --path.procfs=/host/proc
         - --path.sysfs=/host/sys
         - --path.rootfs=/host/root
         - --path.udev.data=/host/root/run/udev/data
--- HelmRelease: observability/kube-prometheus-stack Deployment: observability/kube-state-metrics

+++ HelmRelease: observability/kube-prometheus-stack Deployment: observability/kube-state-metrics

@@ -43,13 +43,13 @@

       containers:
       - name: kube-state-metrics
         args:
         - --port=8080
         - --resources=certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
         imagePullPolicy: IfNotPresent
-        image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0
+        image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.14.0
         ports:
         - containerPort: 8080
           name: http
         livenessProbe:
           failureThreshold: 3
           httpGet:
--- HelmRelease: observability/kube-prometheus-stack Deployment: observability/prometheus-operator

+++ HelmRelease: observability/kube-prometheus-stack Deployment: observability/prometheus-operator

@@ -31,20 +31,20 @@

         app: kube-prometheus-stack-operator
         app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
         app.kubernetes.io/component: prometheus-operator
     spec:
       containers:
       - name: kube-prometheus-stack
-        image: quay.io/prometheus-operator/prometheus-operator:v0.80.1
+        image: quay.io/prometheus-operator/prometheus-operator:v0.80.0
         imagePullPolicy: IfNotPresent
         args:
         - --kubelet-service=kube-system/kube-prometheus-stack-kubelet
         - --kubelet-endpoints=true
         - --kubelet-endpointslice=false
         - --localhost=127.0.0.1
-        - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.80.1
+        - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.80.0
         - --config-reloader-cpu-request=0
         - --config-reloader-cpu-limit=0
         - --config-reloader-memory-request=0
         - --config-reloader-memory-limit=0
         - --thanos-default-base-image=quay.io/thanos/thanos:v0.37.2
         - --secret-field-selector=type!=kubernetes.io/dockercfg,type!=kubernetes.io/service-account-token,type!=helm.sh/release.v1
--- HelmRelease: observability/kube-prometheus-stack Prometheus: observability/kube-prometheus-stack

+++ HelmRelease: observability/kube-prometheus-stack Prometheus: observability/kube-prometheus-stack

@@ -16,14 +16,14 @@

   alerting:
     alertmanagers:
     - apiVersion: v2
       name: alertmanager
       namespace: observability
       port: 9093
-  image: quay.io/prometheus/prometheus:v3.2.0
-  version: v3.2.0
+  image: quay.io/prometheus/prometheus:v3.1.0
+  version: v3.1.0
   externalUrl: http://prometheus.zinn.ca/
   paused: false
   replicas: 1
   shards: 1
   logLevel: info
   logFormat: logfmt
--- HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-kubernetes-apps

+++ HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-kubernetes-apps

@@ -99,13 +99,13 @@

         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubestatefulsetreplicasmismatch
         summary: StatefulSet has not matched the expected number of replicas.
       expr: |-
         (
           kube_statefulset_status_replicas_ready{job="kube-state-metrics", namespace=~".*"}
             !=
-          kube_statefulset_replicas{job="kube-state-metrics", namespace=~".*"}
+          kube_statefulset_status_replicas{job="kube-state-metrics", namespace=~".*"}
         ) and (
           changes(kube_statefulset_status_replicas_updated{job="kube-state-metrics", namespace=~".*"}[10m])
             ==
           0
         )
       for: 15m
--- HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-kubernetes-system-kubelet

+++ HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-kubernetes-system-kubelet

@@ -44,22 +44,20 @@

       annotations:
         description: Kubelet '{{ $labels.node }}' is running at {{ $value | humanizePercentage
           }} of its Pod capacity on cluster {{ $labels.cluster }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubelettoomanypods
         summary: Kubelet is running at capacity.
       expr: |-
-        (
-          max by (cluster, instance) (
-            kubelet_running_pods{job="kubelet", metrics_path="/metrics"} > 1
-          )
-          * on (cluster, instance) group_left(node)
-          max by (cluster, instance, node) (
-            kubelet_node_name{job="kubelet", metrics_path="/metrics"}
+        count by (cluster, node) (
+          (kube_pod_status_phase{job="kube-state-metrics", phase="Running"} == 1)
+          * on (cluster, namespace, pod) group_left (node)
+          group by (cluster, namespace, pod, node) (
+            kube_pod_info{job="kube-state-metrics"}
           )
         )
-        / on (cluster, node) group_left()
+        /
         max by (cluster, node) (
           kube_node_status_capacity{job="kube-state-metrics", resource="pods"} != 1
         ) > 0.95
       for: 15m
       labels:
         severity: info
--- HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-node-exporter

+++ HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-node-exporter

@@ -340,24 +340,12 @@

         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodesystemdservicefailed
         summary: Systemd service has entered failed state.
       expr: node_systemd_unit_state{job="node-exporter", state="failed"} == 1
       for: 5m
       labels:
         severity: warning
-    - alert: NodeSystemdServiceCrashlooping
-      annotations:
-        description: Systemd service {{ $labels.name }} has being restarted too many
-          times at {{ $labels.instance }} for the last 15 minutes. Please check if
-          service is crash looping.
-        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodesystemdservicecrashlooping
-        summary: Systemd service keeps restaring, possibly crash looping.
-      expr: increase(node_systemd_service_restart_total{job="node-exporter"}[5m])
-        > 2
-      for: 15m
-      labels:
-        severity: warning
     - alert: NodeBondingDegraded
       annotations:
         description: Bonding interface {{ $labels.master }} on {{ $labels.instance
           }} is in degraded state due to one or more slave failures.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodebondingdegraded
         summary: Bonding interface is degraded
--- HelmRelease: observability/kube-prometheus-stack ConfigMap: observability/kube-prometheus-stack-crds-upgrade

+++ HelmRelease: observability/kube-prometheus-stack ConfigMap: observability/kube-prometheus-stack-crds-upgrade

@@ -15,8 +15,8 @@

     release: kube-prometheus-stack
     heritage: Helm
     app: crds-operator
     app.kubernetes.io/name: crds-prometheus-operator
     app.kubernetes.io/component: crds-upgrade
 binaryData:
[Diff truncated by flux-local]

repo-jeeves bot (Contributor, Author) commented on Feb 25, 2025

--- kubernetes/main/apps/kube-system/coredns/app Kustomization: kube-system/coredns HelmRelease: kube-system/coredns

+++ kubernetes/main/apps/kube-system/coredns/app Kustomization: kube-system/coredns HelmRelease: kube-system/coredns

@@ -13,12 +13,12 @@

     spec:
       chart: coredns
       sourceRef:
         kind: HelmRepository
         name: coredns-charts
         namespace: flux-system
-      version: 1.39.0
+      version: 1.39.1
   interval: 30m
   valuesFrom:
   - kind: ConfigMap
     name: coredns-values-g6hd9ck4hh
 

| datasource | package                        | from   | to     |
| ---------- | ------------------------------ | ------ | ------ |
| docker     | ghcr.io/coredns/charts/coredns | 1.39.0 | 1.39.1 |
repo-jeeves bot force-pushed the renovate/main-coredns-1.x branch from 6a8a0be to e7fa361 on February 25, 2025 at 11:11
szinn merged commit 5b063de into main on Feb 25, 2025
18 checks passed
szinn deleted the renovate/main-coredns-1.x branch on February 25, 2025 at 11:40