Terraform module to provision an EKS Managed Node Group for an Amazon Elastic Kubernetes Service (EKS) cluster.
Instantiate it multiple times to create EKS Managed Node Groups with specific settings such as GPUs, EC2 instance types, or autoscaling parameters.
IMPORTANT: When SSH access is enabled without specifying a source security group, this module provisions EKS Node Group
nodes that are globally accessible on the SSH port (22). AWS recommends that no security group allow unrestricted ingress access to port 22.
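To avoid that, restrict SSH to a known source security group. Below is a minimal sketch, assuming the module accepts ec2_ssh_key_name and ssh_access_security_group_ids inputs; verify these names against the inputs documented below.
module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"
  # version = "3.x.x"

  cluster_name = module.eks_cluster.eks_cluster_id
  subnet_ids   = module.subnets.private_subnet_ids

  # Hypothetical illustration: enable SSH with a key pair, but only allow
  # connections from a bastion security group so port 22 is not open to the world.
  ec2_ssh_key_name              = ["my-key-pair"]
  ssh_access_security_group_ids = [aws_security_group.bastion.id]

  context = module.label.context
}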
Tip: Cloud Posse uses atmos to easily orchestrate multiple environments using Terraform. Works with GitHub Actions, Atlantis, or Spacelift.
Watch a demo of using Atmos with Terraform: an example of running atmos to manage infrastructure from our Quick Start tutorial.
This module creates an EKS Managed Node Group for an EKS cluster. It assumes you have already created an EKS cluster, but you can create the cluster and the node group in the same Terraform configuration. See our full-featured root module (a.k.a. component) eks/cluster for an example of how to do that.
This module always uses a launch template to create the node group. You can create your own launch template and pass in its ID, or else this module will create one for you.
The AWS default for EKS is that when the launch template is updated, existing nodes are not affected; only new instances added to the node group will get the changes specified in the new launch template. In contrast, when the launch template changes, this module can immediately create a new node group from the new launch template to replace the old one.
See the inputs create_before_destroy and immediately_apply_lt_changes for details about how to control this behavior.
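For example, here is a sketch of both options (launch_template_id is an assumed input name for supplying your own template; verify it against the inputs documented below):
module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"
  # version = "3.x.x"

  cluster_name = module.eks_cluster.eks_cluster_id
  subnet_ids   = module.subnets.private_subnet_ids

  # Replace the node group (creating the new one first) so that launch template
  # changes apply immediately rather than only to future instances.
  create_before_destroy        = true
  immediately_apply_lt_changes = true

  # Alternatively, supply your own launch template instead of letting the module create one.
  # (Assumed input name and shape; check the module's documented inputs.)
  # launch_template_id = [aws_launch_template.custom.id]

  context = module.label.context
}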
Currently, EKS supports 4 Operating Systems: Amazon Linux 2, Amazon Linux 2023, Bottlerocket, and Windows Server. This module supports all 4 OSes, but support for detailed configuration of the nodes varies by OS. The 4 inputs:
- before_cluster_joining_userdata
- kubelet_additional_options
- bootstrap_additional_options
- after_cluster_joining_userdata
are fully supported for Amazon Linux 2 and Windows, and take advantage of the bootstrap.sh supplied on those AMIs. NONE of these inputs are supported on Bottlerocket. On AL2023, only the first 2 are supported.
Note that for all OSes, you can supply the complete userdata contents, which will be untouched by this module, via userdata_override_base64.
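For instance, here is a minimal sketch of these inputs on an Amazon Linux 2 node group (the values are illustrative only, and the list-wrapping assumes how this module typically accepts optional strings; verify the exact types in the inputs documented below):
module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"
  # version = "3.x.x"

  cluster_name = module.eks_cluster.eks_cluster_id
  subnet_ids   = module.subnets.private_subnet_ids

  # Runs before the node joins the cluster (AL2, AL2023, and Windows)
  before_cluster_joining_userdata = ["yum install -y amazon-ssm-agent"]

  # Extra options passed to kubelet via the AMI's bootstrap script (AL2, AL2023, and Windows)
  kubelet_additional_options = ["--node-labels=purpose=example"]

  # Extra options passed to bootstrap.sh itself (AL2 and Windows only)
  bootstrap_additional_options = []

  # Runs after the node joins the cluster (AL2 and Windows only)
  after_cluster_joining_userdata = []

  context = module.label.context
}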
Tip: Use Cloud Posse's ready-to-go terraform architecture blueprints for AWS to get up and running quickly.
- ✅ We build it together with your team.
- ✅ Your team owns everything.
- ✅ 100% Open Source and backed by fanatical support.
Learn More
Cloud Posse is the leading DevOps Accelerator for funded startups and enterprises.
Your team can operate like a pro today.
Ensure that your team succeeds by using Cloud Posse's proven process and turnkey blueprints. Plus, we stick around until you succeed.
- Reference Architecture. You'll get everything you need from the ground up built using 100% infrastructure as code.
- Deployment Strategy. Adopt a proven deployment strategy with GitHub Actions, enabling automated, repeatable, and reliable software releases.
- Site Reliability Engineering. Gain total visibility into your applications and services with Datadog, ensuring high availability and performance.
- Security Baseline. Establish a secure environment from the start, with built-in governance, accountability, and comprehensive audit logs, safeguarding your operations.
- GitOps. Empower your team to manage infrastructure changes confidently and efficiently through Pull Requests, leveraging the full power of GitHub Actions.
- Training. Equip your team with the knowledge and skills to confidently manage the infrastructure, ensuring long-term success and self-sufficiency.
- Support. Benefit from a seamless communication over Slack with our experts, ensuring you have the support you need, whenever you need it.
- Troubleshooting. Access expert assistance to quickly resolve any operational challenges, minimizing downtime and maintaining business continuity.
- Code Reviews. Enhance your team's code quality with our expert feedback, fostering continuous improvement and collaboration.
- Bug Fixes. Rely on our team to troubleshoot and resolve any issues, ensuring your systems run smoothly.
- Migration Assistance. Accelerate your migration process with our dedicated support, minimizing disruption and speeding up time-to-value.
- Customer Workshops. Engage with our team in weekly workshops, gaining insights and strategies to continuously improve and innovate.
With the v3.0.0 release of this module, support for Amazon Linux 2023 (AL2023) has been added, and some breaking changes have been made. Please see the release notes for details.
The v2.0.0 (a.k.a. v0.25.0) release of this module introduced major breaking changes and new features. Please see the migration document for details.
For a complete example, see examples/complete.
For automated tests of the complete example using bats and Terratest (which tests and deploys the example on AWS), see test.
- The code examples below are manually updated and have a tendency to fall out of sync with actual code, particularly with respect to usage of other modules. Do not rely on them.
- The documentation on this page about this module's inputs, outputs, and compliance is all automatically generated and is up-to-date as of the release date. After the code itself, this is your best source of information.
- The code in examples/complete is automatically tested before every release, so that is a good place to look for verified example code. Keep in mind, however, it is code for testing, so it may not represent average use cases or best practices.
- Of course, the READMEs and examples/complete directories in the other modules' GitHub repos are more authoritative with respect to how to use those modules than this README is.
provider "aws" {
region = var.region
}
module "label" {
source = "cloudposse/label/null"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
namespace = var.namespace
name = var.name
stage = var.stage
delimiter = var.delimiter
attributes = ["cluster"]
tags = var.tags
}
locals {
# Prior to Kubernetes 1.19, the usage of the specific kubernetes.io/cluster/* resource tags below are required
# for EKS and Kubernetes to discover and manage networking resources
# https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#base-vpc-networking
tags = { "kubernetes.io/cluster/${module.label.id}" = "shared" }
}
module "vpc" {
source = "cloudposse/vpc/aws"
# Cloud Posse recommends pinning every module to a specific version
# version = "1.x.x"
cidr_block = "172.16.0.0/16"
tags = local.tags
context = module.label.context
}
module "subnets" {
source = "cloudposse/dynamic-subnets/aws"
# Cloud Posse recommends pinning every module to a specific version
# version = "2.x.x"
availability_zones = var.availability_zones
vpc_id = module.vpc.vpc_id
igw_id = [module.vpc.igw_id]
ipv4_cidr_block = [module.vpc.vpc_cidr_block]
nat_gateway_enabled = true
nat_instance_enabled = false
tags = local.tags
context = module.label.context
}
module "eks_cluster" {
source = "cloudposse/eks-cluster/aws"
# Cloud Posse recommends pinning every module to a specific version
# version = "4.x.x"
vpc_id = module.vpc.vpc_id
subnet_ids = module.subnets.public_subnet_ids
kubernetes_version = var.kubernetes_version
oidc_provider_enabled = true
context = module.label.context
}
module "eks_node_group" {
source = "cloudposse/eks-node-group/aws"
# Cloud Posse recommends pinning every module to a specific version
# version = "3.x.x"
instance_types = [var.instance_type]
subnet_ids = module.subnets.public_subnet_ids
min_size = var.min_size
max_size = var.max_size
cluster_name = module.eks_cluster.eks_cluster_id
create_before_destroy = true
kubernetes_version = var.kubernetes_version == null || var.kubernetes_version == "" ? [] : [var.kubernetes_version]
# Enable the Kubernetes cluster auto-scaler to find the auto-scaling group
cluster_autoscaler_enabled = var.autoscaling_policies_enabled
context = module.label.context
# Ensure the cluster is fully created before trying to add the node group
module_depends_on = [module.eks_cluster.kubernetes_config_map_id]
}
Important: In Cloud Posse's examples, we avoid pinning modules to specific versions to prevent discrepancies between the documentation and the latest released versions. However, for your own projects, we strongly advise pinning each module to the exact version you're using. This practice ensures the stability of your infrastructure. Additionally, we recommend implementing a systematic approach for updating versions to avoid unexpected changes.
Windows managed node groups have a few prerequisites.
- Your cluster must contain at least one Linux-based worker node
- Your EKS cluster must have the AmazonEKSVPCResourceController and AmazonEKSClusterPolicy policies attached
- Your cluster must have a ConfigMap called amazon-vpc-cni with the following content
apiVersion: v1
kind: ConfigMap
metadata:
  name: amazon-vpc-cni
  namespace: kube-system
data:
  enable-windows-ipam: "true"
- Windows nodes will automatically be tainted
kubernetes_taints = [{
  key    = "WINDOWS"
  value  = "true"
  effect = "NO_SCHEDULE"
}]
- Any pods that target Windows will need to have the following attributes set in their manifest
nodeSelector:
  kubernetes.io/os: windows
  kubernetes.io/arch: amd64
https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html
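Putting this together, a Windows node group might look like the sketch below. The ami_type value is one of the Windows AMI types EKS documents for managed node groups; verify it and the other inputs against the module's documentation.
module "eks_windows_node_group" {
  source = "cloudposse/eks-node-group/aws"
  # version = "3.x.x"

  cluster_name   = module.eks_cluster.eks_cluster_id
  subnet_ids     = module.subnets.private_subnet_ids
  instance_types = ["t3.xlarge"]
  min_size       = 1
  max_size       = 2

  # One of the Windows AMI types EKS supports for managed node groups
  ami_type = "WINDOWS_CORE_2022_x86_64"

  context = module.label.context
}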
Check out these related projects.
- terraform-aws-eks-cluster - Terraform module to provision an EKS cluster on AWS
- terraform-aws-eks-workers - Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers
- terraform-aws-ec2-autoscale-group - Terraform module to provision Auto Scaling Group and Launch Template on AWS
- terraform-aws-ecs-container-definition - Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource
- terraform-aws-ecs-alb-service-task - Terraform module which implements an ECS service which exposes a web service via ALB
- terraform-aws-ecs-web-app - Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more
- terraform-aws-ecs-codepipeline - Terraform module for CI/CD with AWS Code Pipeline and Code Build for ECS
- terraform-aws-ecs-cloudwatch-autoscaling - Terraform module to autoscale ECS Service based on CloudWatch metrics
- terraform-aws-ecs-cloudwatch-sns-alarms - Terraform module to create CloudWatch Alarms on ECS Service level metrics
- terraform-aws-ec2-instance - Terraform module for providing a general purpose EC2 instance
- terraform-aws-ec2-instance-group - Terraform module for provisioning multiple general purpose EC2 hosts for stateful applications
This project is under active development, and we encourage contributions from our community.
Many thanks to our outstanding contributors:
For bug reports & feature requests, please use the issue tracker.
In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.
- Review our Code of Conduct and Contributor Guidelines.
- Fork the repo on GitHub
- Clone the project to your own machine
- Commit changes to your own branch
- Push your work back up to your fork
- Submit a Pull Request so that we can review your changes
NOTE: Be sure to merge the latest changes from "upstream" before making a pull request!
Join our Open Source Community on Slack. It's FREE for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally sweet infrastructure.
Sign up for our newsletter and join 3,000+ DevOps engineers, CTOs, and founders who get insider access to the latest DevOps trends, so you can always stay in the know. Dropped straight into your Inbox every week, and usually a 5-minute read.
Join us every Wednesday via Zoom for your weekly dose of insider DevOps trends, AWS news and Terraform insights, all sourced from our SweetOps community, plus a live Q&A that you can't find anywhere else. It's FREE for everyone!
Preamble to the Apache License, Version 2.0
Complete license is available in the LICENSE file.
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
All other trademarks referenced herein are the property of their respective owners.
Copyright © 2017-2025 Cloud Posse, LLC