Description
Installation method
Own AWS account
What happened?
EKS provisioning using Terraform fails. I had to update the code to provision EKS; the updated code is below:
```hcl
locals {
  remote_node_cidr = var.remote_network_cidr
  remote_pod_cidr  = var.remote_pod_cidr
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 21.0"

  # 1. CLUSTER CORE (renamed variables)
  name               = var.cluster_name    # Renamed from cluster_name
  kubernetes_version = var.cluster_version # Renamed from cluster_version

  endpoint_public_access                   = true # Renamed from cluster_endpoint_public_access
  enable_cluster_creator_admin_permissions = true

  # 2. ADDONS (renamed block and updated CNI configuration)
  addons = { # Renamed from cluster_addons
    vpc-cni = {
      # The workshop code uses the older configuration_values (v20.x).
      # For v21.x, we recommend using the new addon_context structure for CNI config:
      most_recent = true
      addon_context = jsonencode({
        env = {
          ENABLE_POD_ENI                    = "true"
          ENABLE_PREFIX_DELEGATION          = "true"
          POD_SECURITY_GROUP_ENFORCING_MODE = "standard"
        }
        nodeAgent = {
          enablePolicyEventLogs = "true"
        }
        enableNetworkPolicy = "true"
      })
      # before_compute is no longer needed/supported for vpc-cni in this version
    }
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  # 3. SECURITY GROUP MANAGEMENT (renamed variables)
  # NOTE: create_cluster_security_group = false is likely removed or obsolete in v21.x.
  # To use an existing SG, you would pass cluster_security_group_id instead.
  # It is removed here, as the module defaults to creating one if no ID is passed.
  create_node_security_group = false

  security_group_additional_rules = { # Renamed from cluster_security_group_additional_rules
    hybrid-node = {
      cidr_blocks = [local.remote_node_cidr]
      description = "Allow all traffic from remote node/pod network"
      from_port   = 0
      to_port     = 0
      protocol    = "all"
      type        = "ingress"
    }
    hybrid-pod = {
      cidr_blocks = [local.remote_pod_cidr]
      description = "Allow all traffic from remote node/pod network"
      from_port   = 0
      to_port     = 0
      protocol    = "all"
      type        = "ingress"
    }
  }

  node_security_group_additional_rules = {
    hybrid_node_rule = {
      cidr_blocks = [local.remote_node_cidr]
      description = "Allow all traffic from remote node/pod network"
      from_port   = 0
      to_port     = 0
      protocol    = "all"
      type        = "ingress"
    }
    hybrid_pod_rule = {
      cidr_blocks = [local.remote_pod_cidr]
      description = "Allow all traffic from remote node/pod network"
      from_port   = 0
      to_port     = 0
      protocol    = "all"
      type        = "ingress"
    }
  }

  # 4. REMOTE NETWORK CONFIG (replaced/removed variable)
  # cluster_remote_network_config is removed in v21.x. This functionality is now
  # handled by the module's IAM Access Entries for hybrid network scenarios, or by
  # vpc-cni configuration. The entire block below is removed:
  /*
  cluster_remote_network_config = {
    remote_node_networks = {
      cidrs = [local.remote_node_cidr]
    }
    remote_pod_networks = {
      cidrs = [local.remote_pod_cidr]
    }
  }
  */

  # 5. NODE GROUPS (no changes needed here based on the errors)
  eks_managed_node_groups = {
    default = {
      instance_types       = ["m5.large"]
      force_update_version = true
      release_version      = var.ami_release_version
      use_name_prefix      = false

      iam_role_name            = "${var.cluster_name}-ng-default"
      iam_role_use_name_prefix = false

      min_size     = 3
      max_size     = 6
      desired_size = 3

      update_config = {
        max_unavailable_percentage = 50
      }

      labels = {
        workshop-default = "yes"
      }
    }
  }

  tags = merge(local.tags, {
    "karpenter.sh/discovery" = var.cluster_name
  })
}
```
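An alternative workaround, instead of renaming the inputs, might be to pin the module to the major version the workshop code appears to have been written against (a minimal sketch, assuming the workshop's `cluster_*` variable names match the v20.x module interface):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # Assumption: the workshop's cluster_* inputs target the v20.x interface,
  # so pinning below 21.0 would avoid the renames entirely.
  version = "~> 20.0"

  cluster_name    = var.cluster_name
  cluster_version = var.cluster_version
  # ... rest of the workshop configuration unchanged ...
}
```

This keeps the workshop code as published, at the cost of staying on the older module series.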
What did you expect to happen?
The EKS cluster should have been provisioned, but provisioning failed when I followed the instructions on this page:
https://www.eksworkshop.com/docs/introduction/setup/your-account/using-terraform
How can we reproduce it?
Follow the instructions on this page and try provisioning an EKS cluster using Terraform:
https://www.eksworkshop.com/docs/introduction/setup/your-account/using-terraform
Anything else we need to know?
No response
EKS version
1.33