
Upgrading cluster-api-operator via Helm causes provider namespaces to be removed #851

@matofeder

Description

What steps did you take and what happened:

  1. Install cluster-api-operator in the Kubernetes cluster using Helm, providing a values.yaml file that also deploys two providers:

     ```yaml
     addon:
       helm:
         namespace: caaph-system
         version: v0.3.1
     core:
       cluster-api:
         namespace: capi-system
         version: v1.10.2
     ```

  2. Upgrade cluster-api-operator using the same values.yaml file (with no changes).

  3. During the upgrade, the namespaces of the providers are removed and then recreated.

What did you expect to happen:

The provider namespaces should not be removed and recreated when the upgrade introduces no changes.

Anything else you would like to add:

Steps to reproduce (using kind):

```shell
$ kind create cluster
$ helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true

$ cat values.yaml
addon:
  helm:
    namespace: caaph-system
    version: v0.3.1
core:
  cluster-api:
    namespace: capi-system
    version: v1.10.2

$ helm upgrade --install capi-operator capi-operator/cluster-api-operator --create-namespace -n capi-operator-system --wait --timeout 90s -f values.yaml
```

If you run the same `helm upgrade --install` command again, without any change to the values, you can observe the following:

```shell
$ kubectl get ns --watch
NAME                   STATUS   AGE
caaph-system           Active   88s
capi-operator-system   Active   4m44s
capi-system            Active   71s
cert-manager           Active   5m44s
default                Active   6m51s
kube-node-lease        Active   6m51s
kube-public            Active   6m51s
kube-system            Active   6m51s
local-path-storage     Active   6m47s
caaph-system           Terminating   100s
caaph-system           Terminating   105s
caaph-system           Terminating   110s
caaph-system           Terminating   115s
caaph-system           Terminating   115s
caaph-system           Active        0s
capi-system            Terminating   99s
capi-system            Terminating   104s
capi-system            Terminating   109s
capi-system            Terminating   114s
capi-system            Terminating   114s
capi-system            Active        0s
```
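One possible mitigation, if the chart maintainers choose to adopt it, is Helm's standard `helm.sh/resource-policy: keep` annotation, which tells Helm not to delete a resource it rendered even when a release operation would otherwise remove it. A minimal sketch of what an annotated Namespace manifest would look like (the namespace name mirrors the values.yaml above; whether and where the chart templates its provider namespaces is an assumption here):

```yaml
# Sketch only: a provider Namespace carrying Helm's "keep" resource policy.
# With this annotation, Helm leaves the namespace in place instead of
# deleting and recreating it during an upgrade.
apiVersion: v1
kind: Namespace
metadata:
  name: capi-system  # provider namespace from values.yaml above (assumed templated by the chart)
  annotations:
    helm.sh/resource-policy: keep
```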

Environment:

  • Cluster-api-operator version: v0.21.0
  • Cluster-api version: v1.10.2
  • KIND version: v0.29.0
  • Kubernetes version (from kubectl version): v1.33.1
  • OS (e.g. from /etc/os-release): Ubuntu 25.04

/kind bug

Metadata

Assignees: no one assigned

Labels: kind/bug, lifecycle/rotten, needs-triage
