[BUG] - Kubernetes cluster always recreated when enable_firewall set to true #529

Closed
bensherred opened this issue Nov 16, 2024 · 0 comments · Fixed by #531
Describe the bug
I'm trying to create a Kubernetes cluster with the enable_firewall argument set to true. The following resource successfully creates a Kubernetes cluster with the firewall enabled.

resource "vultr_kubernetes" "my_cluster" {
  label           = "my-cluster"
  region          = "fra"
  version         = "v1.31.2+1"
  enable_firewall = true

  node_pools {
    label         = "small"
    node_quantity = 1
    plan          = "vc2-1c-2gb"
    auto_scaler   = true
    min_nodes     = 1
    max_nodes     = 3
  }
}

However, when I run terraform plan again (without any changes), I get the following output and the cluster is recreated when the plan is applied. The terraform plan command thinks the value of enable_firewall has changed from false to true, so it recreates the cluster every time.

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.k8s.vultr_kubernetes.my_cluster must be replaced
-/+ resource "vultr_kubernetes" "my_cluster" {
      ~ client_certificate     = (sensitive value)
      ~ client_key             = (sensitive value)
      ~ cluster_ca_certificate = (sensitive value)
      ~ cluster_subnet         = "10.244.0.0/16" -> (known after apply)
      ~ date_created           = "2024-11-16T10:24:47+00:00" -> (known after apply)
      ~ enable_firewall        = false -> true # forces replacement
      ~ endpoint               = "{hidden}" -> (known after apply)
      ~ firewall_group_id      = "552f8af8-29e5-41d0-a9a3-501c1ebc697d" -> (known after apply)
      ~ id                     = "f54046c4-02c9-4eee-bc53-7c5d4f599a23" -> (known after apply)
      ~ ip                     = "{hidden}" -> (known after apply)
      ~ kube_config            = (sensitive value)
      ~ service_subnet         = "10.96.0.0/12" -> (known after apply)
      ~ status                 = "active" -> (known after apply)
        # (4 unchanged attributes hidden)

      ~ node_pools {
          ~ date_created  = "2024-11-16T10:24:48+00:00" -> (known after apply)
          ~ date_updated  = "2024-11-16T10:27:08+00:00" -> (known after apply)
          ~ id            = "def650fc-8447-48f2-8575-6b048897a9d5" -> (known after apply)
          ~ nodes         = [
              - {
                  - date_created = "2024-11-16T10:24:48+00:00"
                  - id           = "036cf40a-b4a2-4b72-9129-ba5e7fe89901"
                  - label        = "small-f79470ec0277"
                  - status       = "active"
                },
            ] -> (known after apply)
          ~ status        = "active" -> (known after apply)
          ~ tag           = "tf-vke-default" -> (known after apply)
            # (6 unchanged attributes hidden)
        }
    }

Plan: 1 to add, 0 to change, 1 to destroy.

───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if
you run "terraform apply" now.

To Reproduce
Steps to reproduce the behaviour:

  1. Create a Kubernetes resource with enable_firewall set to true
  2. Run terraform apply
  3. Once the cluster has been created and terraform has finished, rerun terraform apply
  4. Note that the cluster is then recreated

Expected behaviour
The cluster shouldn't be recreated if the value of enable_firewall hasn't changed.
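
As a stop-gap until the provider is fixed, ignoring the attribute in a lifecycle block should suppress the spurious replacement. This is only a sketch of a workaround, not a fix, and it means genuine changes to enable_firewall would also be ignored:

resource "vultr_kubernetes" "my_cluster" {
  label           = "my-cluster"
  region          = "fra"
  version         = "v1.31.2+1"
  enable_firewall = true

  node_pools {
    label         = "small"
    node_quantity = 1
    plan          = "vc2-1c-2gb"
    auto_scaler   = true
    min_nodes     = 1
    max_nodes     = 3
  }

  lifecycle {
    # Ignore the attribute the provider appears to mis-read so Terraform
    # stops planning a replacement on every run.
    ignore_changes = [enable_firewall]
  }
}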

Screenshots
n/a

Desktop (please complete the following information where applicable):

  • OS: macOS 15.0
  • Terraform version: v1.6.4
  • Provider version: 2.22.1

Additional context

n/a
