Skip to content

Commit 2d96b78

Browse files
authoredSep 12, 2023
docs: Update documentation to align format across project (as much as possible) (aws-ia#1759)
1 parent 4cf6f14 commit 2d96b78

File tree

40 files changed

+1008
-1853
lines changed

40 files changed

+1008
-1853
lines changed
 

‎CONTRIBUTING.md

+1-1
Original file line numberDiff line numberDiff line change
@@ -46,7 +46,7 @@ Looking at the existing issues is a great way to find something to contribute on
4646

4747
This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
4848
For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
49-
opensource-codeofconduct@amazon.com with any additional questions or comments.
49+
<opensource-codeofconduct@amazon.com> with any additional questions or comments.
5050

5151
## Security issue notifications
5252

‎docs/_partials/destroy.md

+7
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,7 @@
1+
```sh
2+
terraform destroy -target="module.eks_blueprints_addons" -auto-approve
3+
terraform destroy -target="module.eks" -auto-approve
4+
terraform destroy -auto-approve
5+
```
6+
7+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#destroy) for more details on cleaning up the resources created.

‎docs/patterns/sso-iam-identity-center.md

+1-1
Original file line numberDiff line numberDiff line change
@@ -3,5 +3,5 @@ title: SSO - IAM Identity Center
33
---
44

55
{%
6-
include-markdown "../../patterns/single-sign-on/iam-identity-center/README.md"
6+
include-markdown "../../patterns/sso-iam-identity-center/README.md"
77
%}

‎docs/patterns/sso-okta.md

+1-1
Original file line numberDiff line numberDiff line change
@@ -3,5 +3,5 @@ title: SSO - Okta
33
---
44

55
{%
6-
include-markdown "../../patterns/single-sign-on/okta/README.md"
6+
include-markdown "../../patterns/sso-okta/README.md"
77
%}
+34-125
Original file line numberDiff line numberDiff line change
@@ -1,154 +1,63 @@
11
# Amazon EKS Deployment with Agones Gaming Kubernetes Controller
22

3-
This example shows how to deploy and run Gaming applications on Amazon EKS with Agones Kubernetes Controller
3+
This pattern shows how to deploy and run gaming applications on Amazon EKS using the Agones Kubernetes Controller
44

5-
- Deploy Private VPC, Subnets and all the required VPC endpoints
6-
- Deploy EKS Cluster with one managed node group in an VPC
7-
- Deploy Agones Kubernetes Controller using Helm Providers
8-
- Deploy a simple gaming server and test the application
9-
10-
# What is Agones
11-
12-
Agones is an Open source Kubernetes Controller with custom resource definitions and is used to create, run, manage and scale dedicated game server processes within Kubernetes clusters using standard Kubernetes tooling and APIs.
5+
Agones is an open source Kubernetes controller that provisions and manages dedicated game server
6+
processes within Kubernetes clusters using standard Kubernetes tooling and APIs.
137
This model also allows any matchmaker to interact directly with Agones via the Kubernetes API to provision a dedicated game server
148

15-
# What is GameLift
16-
179
Amazon GameLift enables developers to deploy, operate, and scale dedicated, low-cost servers in the cloud for session-based, multiplayer games.
18-
Built on AWS global computing infrastructure, GameLift helps deliver high-performance, high-reliability, low-cost game servers while dynamically scaling your resource usage to meet worldwide player demand.
19-
20-
## How to Deploy
21-
22-
### Prerequisites:
23-
24-
Ensure that you have installed the following tools in your Mac or Windows Laptop before start working with this module and run Terraform Plan and Apply
25-
26-
1. [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
27-
2. [Kubectl](https://Kubernetes.io/docs/tasks/tools/)
28-
3. [Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
29-
30-
### Deployment Steps
31-
32-
#### Step 1: Clone the repo using the command below
33-
34-
```sh
35-
git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git
36-
```
37-
38-
#### Step 2: Run Terraform INIT
39-
40-
Initialize a working directory with configuration files
41-
42-
```sh
43-
cd patterns/game-tech/agones-game-controller
44-
terraform init
45-
```
46-
47-
#### Step 3: Run Terraform PLAN
48-
49-
Verify the resources created by this execution
50-
51-
```sh
52-
export AWS_REGION=<ENTER YOUR REGION> # Select your own region
53-
terraform plan
54-
```
55-
56-
#### Step 4: Finally, Terraform APPLY
57-
58-
**Deploy the pattern**
59-
60-
```sh
61-
terraform apply
62-
```
63-
64-
Enter `yes` to apply.
10+
Built on AWS global computing infrastructure, GameLift helps deliver high-performance, high-reliability,
11+
low-cost game servers while dynamically scaling your resource usage to meet worldwide player demand. See below
12+
for more information on how GameLift FleetIQ can be integrated with Agones deployed on Amazon EKS.
6513

14+
Amazon GameLift FleetIQ optimizes the use of low-cost Spot Instances for cloud-based game hosting with Amazon EC2.
15+
With GameLift FleetIQ, you can work directly with your hosting resources in Amazon EC2 and Auto Scaling while
16+
taking advantage of GameLift optimizations to deliver inexpensive, resilient game hosting for your players
17+
and makes the use of low-cost Spot Instances viable for game hosting
6618

67-
#### Configure `kubectl` and test cluster
19+
This [blog](https://aws.amazon.com/blogs/gametech/introducing-the-gamelift-fleetiq-adapter-for-agones/) walks
20+
through the details of deploying EKS Cluster using eksctl and deploy Agones with GameLift FleetIQ.
6821

69-
EKS Cluster details can be extracted from terraform output or from AWS Console to get the name of cluster.
70-
This following command used to update the `kubeconfig` in your local machine where you run kubectl commands to interact with your EKS Cluster.
22+
## Deploy
7123

72-
#### Step 5: Run `update-kubeconfig` command
24+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
7325

74-
`~/.kube/config` file gets updated with cluster details and certificate from the below command
26+
## Validate
7527

76-
$ aws eks --region <enter-your-region> update-kubeconfig --name <cluster-name>
77-
78-
#### Step 6: List all the worker nodes by running the command below
79-
80-
$ kubectl get nodes
81-
82-
#### Step 7: List all the pods running in `agones-system` namespace
83-
84-
$ kubectl get pods -n agones-system
85-
86-
#### Step 8: Install K9s (OPTIONAL)
87-
88-
This step is to install K9s client tool to interact with EKS Cluster
89-
90-
curl -sS https://webinstall.dev/k9s | bash
91-
92-
Just type k9s after the installation and then you will see the output like this
93-
94-
k9s
95-
96-
![Alt Text](https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/9c6f8ea3e710f7b0137be07835653a2bf4f9fdfe/images/k9s-agones-cluster.png "K9s")
97-
98-
99-
#### Step 9: Deploying the Sample game server
28+
1. Deploy the sample game server
10029

30+
```sh
10131
kubectl create -f https://raw.githubusercontent.com/googleforgames/agones/release-1.32.0/examples/simple-game-server/gameserver.yaml
102-
103-
10432
kubectl get gs
33+
```
10534

106-
Output looks like below
107-
35+
```text
10836
NAME STATE ADDRESS PORT NODE AGE
10937
simple-game-server-7r6jr Ready 34.243.345.22 7902 ip-10-1-23-233.eu-west-1.compute.internal 11h
38+
```
11039

111-
#### Step 10: Testing the Sample Game Server
112-
113-
sudo yum install netcat
114-
115-
nc -u <ADDRESS> <PORT>
116-
117-
e.g., nc -u 34.243.345.22 7902
40+
2. Test the sample game server using [`netcat`](https://netcat.sourceforge.net/)
11841

119-
Output looks like below
42+
```sh
43+
echo -n "UDP test - Hello EKS Blueprints!" | nc -u 34.243.345.22 7902
44+
```
12045

121-
TeamRole:~/environment/eks-blueprints (main) $ echo -n "UDP test - Hello Workshop" | nc -u 34.243.345.22 7902
122-
Hello Workshop
123-
ACK: Hello Workshop
46+
```text
47+
Hello EKS Blueprints!
48+
ACK: Hello EKS Blueprints!
12449
EXIT
12550
ACK: EXIT
51+
```
12652

127-
# Deploy GameLift FleetIQ
128-
129-
Amazon GameLift FleetIQ optimizes the use of low-cost Spot Instances for cloud-based game hosting with Amazon EC2. With GameLift FleetIQ, you can work directly with your hosting resources in Amazon EC2 and Auto Scaling while taking advantage of GameLift optimizations to deliver inexpensive, resilient game hosting for your players and makes the use of low-cost Spot Instances viable for game hosting
130-
131-
This [blog](https://aws.amazon.com/blogs/gametech/introducing-the-gamelift-fleetiq-adapter-for-agones/) will go through the details of deploying EKS Cluster using eksctl and deploy Agones with GameLift FleetIQ
132-
133-
Download the sh and execute
53+
## Destroy
13454

135-
curl -O https://raw.githubusercontent.com/awslabs/fleetiq-adapter-for-agones/master/Agones_EKS_FleetIQ_Integration_Package%5BBETA%5D/quick_install/fleet_eks_agones_quickinstall.sh
136-
137-
## Cleanup
138-
139-
To clean up your environment, destroy the Terraform modules in reverse order.
140-
141-
Destroy the Kubernetes Add-ons, EKS cluster with Node groups and VPC
55+
Delete the resources created by the sample game server first:
14256

14357
```sh
144-
terraform destroy -target="helm_release.agones" -auto-approve
145-
terraform destroy -target="module.eks_blueprints_addons" -auto-approve
146-
terraform destroy -target="module.eks" -auto-approve
147-
terraform destroy -target="module.vpc" -auto-approve
58+
kubectl -n default delete gs --all || true
14859
```
14960

150-
Finally, destroy any additional resources that are not in the above modules
151-
152-
```sh
153-
terraform destroy -auto-approve
154-
```
61+
{%
62+
include-markdown "../../docs/_partials/destroy.md"
63+
%}

‎patterns/agones-game-controller/destroy.sh

-8
This file was deleted.

‎patterns/agones-game-controller/main.tf

+21-27
Original file line numberDiff line numberDiff line change
@@ -71,6 +71,7 @@ module "eks" {
7171
max_size = 5
7272
desired_size = 2
7373
}
74+
7475
agones_system = {
7576
instance_types = ["m5.large"]
7677
labels = {
@@ -87,6 +88,7 @@ module "eks" {
8788
max_size = 1
8889
desired_size = 1
8990
}
91+
9092
agones_metrics = {
9193
instance_types = ["m5.large"]
9294
labels = {
@@ -123,7 +125,6 @@ module "eks" {
123125
type = "ingress"
124126
source_cluster_security_group = true
125127
}
126-
127128
}
128129

129130
tags = local.tags
@@ -135,7 +136,7 @@ module "eks" {
135136

136137
module "eks_blueprints_addons" {
137138
source = "aws-ia/eks-blueprints-addons/aws"
138-
version = "~> 1.0"
139+
version = "~> 1.7"
139140

140141
cluster_name = module.eks.cluster_name
141142
cluster_endpoint = module.eks.cluster_endpoint
@@ -153,32 +154,25 @@ module "eks_blueprints_addons" {
153154
enable_metrics_server = true
154155
enable_cluster_autoscaler = true
155156

156-
tags = local.tags
157-
}
158-
159-
################################################################################
160-
# Agones Helm Chart
161-
################################################################################
157+
helm_releases = {
158+
agones = {
159+
description = "A Helm chart for Agones game server"
160+
namespace = "agones-system"
161+
create_namespace = true
162+
chart = "agones"
163+
chart_version = "1.32.0"
164+
repository = "https://agones.dev/chart/stable"
165+
values = [
166+
templatefile("${path.module}/helm_values/agones-values.yaml", {
167+
expose_udp = true
168+
gameserver_minport = local.gameserver_minport
169+
gameserver_maxport = local.gameserver_maxport
170+
})
171+
]
172+
}
173+
}
162174

163-
# NOTE: Agones requires a Node group in Public Subnets and enable Public IP
164-
resource "helm_release" "agones" {
165-
name = "agones"
166-
chart = "agones"
167-
version = "1.32.0"
168-
repository = "https://agones.dev/chart/stable"
169-
description = "Agones helm chart"
170-
namespace = "agones-system"
171-
create_namespace = true
172-
173-
values = [templatefile("${path.module}/helm_values/agones-values.yaml", {
174-
expose_udp = true
175-
gameserver_minport = local.gameserver_minport
176-
gameserver_maxport = local.gameserver_maxport
177-
})]
178-
179-
depends_on = [
180-
module.eks_blueprints_addons
181-
]
175+
tags = local.tags
182176
}
183177

184178
################################################################################

‎patterns/appmesh-mtls/README.md

+243-261
Large diffs are not rendered by default.

‎patterns/appmesh-mtls/main.tf

+1-1
Original file line numberDiff line numberDiff line change
@@ -94,7 +94,7 @@ module "eks" {
9494

9595
module "eks_blueprints_addons" {
9696
source = "aws-ia/eks-blueprints-addons/aws"
97-
version = "~> 1.0"
97+
version = "~> 1.7"
9898

9999
cluster_name = module.eks.cluster_name
100100
cluster_endpoint = module.eks.cluster_endpoint

‎patterns/argocd/README.md

+13-52
Original file line numberDiff line numberDiff line change
@@ -1,58 +1,24 @@
11
# Amazon EKS Cluster w/ ArgoCD
22

3-
This example shows how to provision an EKS cluster with:
3+
This pattern demonstrates an EKS cluster that uses ArgoCD for application deployments.
44

5-
- ArgoCD
6-
- Workloads and addons deployed by ArgoCD
7-
8-
To better understand how ArgoCD works with EKS Blueprints, read the EKS Blueprints ArgoCD [Documentation](https://aws-ia.github.io/terraform-aws-eks-blueprints/latest/add-ons/argocd/)
9-
10-
## Reference Documentation
11-
12-
- [Documentation](https://aws-ia.github.io/terraform-aws-eks-blueprints/latest/add-ons/argocd/)
5+
- [Documentation](https://argo-cd.readthedocs.io/en/stable/)
136
- [EKS Blueprints Add-ons Repo](https://github.com/aws-samples/eks-blueprints-add-ons)
147
- [EKS Blueprints Workloads Repo](https://github.com/aws-samples/eks-blueprints-workloads)
158

16-
## Prerequisites
17-
18-
Ensure that you have the following tools installed locally:
19-
20-
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
21-
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
22-
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
23-
24-
### Minimum IAM Policy
25-
26-
> **Note**: The policy resource is set as `*` to allow all resources, this is not a recommended practice.
27-
28-
You can find the policy [here](min-iam-policy.json)
29-
309
## Deploy
3110

32-
To provision this example:
33-
34-
```sh
35-
terraform init
36-
terraform apply
37-
```
38-
39-
Enter `yes` at command prompt to apply
11+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
4012

4113
## Validate
4214

43-
The following command will update the `kubeconfig` on your local machine and allow you to interact with your EKS Cluster using `kubectl` to validate the deployment.
44-
45-
1. Run `update-kubeconfig` command:
46-
47-
```sh
48-
aws eks --region <REGION> update-kubeconfig --name <CLUSTER_NAME> --alias <CLUSTER_NAME>
49-
```
50-
51-
2. List out the pods running currently:
15+
1. List out the pods running currently:
5216

5317
```sh
5418
kubectl get pods -A
19+
```
5520

21+
```text
5622
NAMESPACE NAME READY STATUS RESTARTS AGE
5723
argo-rollouts argo-rollouts-5d47ccb8d4-854s6 1/1 Running 0 23h
5824
argo-rollouts argo-rollouts-5d47ccb8d4-srjk9 1/1 Running 0 23h
@@ -118,7 +84,7 @@ The following command will update the `kubeconfig` on your local machine and all
11884
team-riker guestbook-ui-c86c478bd-zg2z4 1/1 Running 0 25m
11985
```
12086

121-
3. You can access the ArgoCD UI by running the following command:
87+
2. Access the ArgoCD UI by running the following command:
12288

12389
```sh
12490
kubectl port-forward svc/argo-cd-argocd-server 8080:443 -n argocd
@@ -140,17 +106,12 @@ The following command will update the `kubeconfig` on your local machine and all
140106
141107
## Destroy
142108
143-
To teardown and remove the resources created in this example:
144-
145109
First, we need to ensure that the ArgoCD applications are properly cleaned up from the cluster, this can be achieved in multiple ways:
146110
147-
1) Disabling the `argocd_applications` configuration and running `terraform apply` again
148-
2) Deleting the apps using `argocd` [cli](https://argo-cd.readthedocs.io/en/stable/user-guide/app_deletion/#deletion-using-argocd)
149-
3) Deleting the apps using `kubectl` following [ArgoCD guidance](https://argo-cd.readthedocs.io/en/stable/user-guide/app_deletion/#deletion-using-kubectl)
111+
- Disabling the `argocd_applications` configuration and running `terraform apply` again
112+
- Deleting the apps using `argocd` [cli](https://argo-cd.readthedocs.io/en/stable/user-guide/app_deletion/#deletion-using-argocd)
113+
- Deleting the apps using `kubectl` following [ArgoCD guidance](https://argo-cd.readthedocs.io/en/stable/user-guide/app_deletion/#deletion-using-kubectl)
150114
151-
Then you can start delete the terraform resources:
152-
```sh
153-
terraform destroy -target=module.eks_blueprints_kubernetes_addons -auto-approve
154-
terraform destroy -target=module.eks -auto-approve
155-
terraform destroy -auto-approve
156-
````
115+
{%
116+
include-markdown "../../docs/_partials/destroy.md"
117+
%}

‎patterns/argocd/min-iam-policy.json

-112
This file was deleted.
+125-232
Original file line numberDiff line numberDiff line change
@@ -1,245 +1,138 @@
11
# EKS Cluster w/ Elastic Fabric Adapter
22

3-
This example shows how to provision an Amazon EKS Cluster with an EFA-enabled nodegroup.
4-
5-
## Prerequisites:
6-
7-
Ensure that you have the following tools installed locally:
8-
9-
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
10-
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
11-
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
3+
This pattern demonstrates an Amazon EKS Cluster with an EFA-enabled nodegroup.
124

135
## Deploy
146

15-
To provision this example:
16-
17-
```sh
18-
terraform init
19-
terraform apply
20-
21-
```
22-
23-
Enter `yes` at command prompt to apply
7+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
248

259
## Validate
2610

27-
1. Run `update-kubeconfig` command, using the Terraform provided Output, replace with your `$AWS_REGION` and your `$CLUSTER_NAME` variables.
28-
29-
```sh
30-
aws eks --region <$AWS_REGION> update-kubeconfig --name <$CLUSTER_NAME>
31-
```
32-
33-
2. Test by listing Nodes in in the Cluster, you should see Fargate instances as your Cluster Nodes.
34-
35-
```sh
36-
kubectl get nodes
37-
kubectl get nodes -o yaml | grep instance-type | grep node | grep -v f:
38-
```
39-
40-
Your nodes and node types will be listed:
41-
42-
```text
43-
# kubectl get nodes
44-
NAME STATUS ROLES AGE VERSION
45-
ip-10-11-10-103.ec2.internal Ready <none> 4m1s v1.25.7-eks-a59e1f0
46-
ip-10-11-19-28.ec2.internal Ready <none> 11m v1.25.7-eks-a59e1f0
47-
ip-10-11-2-151.ec2.internal Ready <none> 11m v1.25.7-eks-a59e1f0
48-
ip-10-11-2-18.ec2.internal Ready <none> 5m1s v1.25.7-eks-a59e1f0
49-
# kubectl get nodes -o yaml | grep instance-type | grep node | grep -v f:
50-
node.kubernetes.io/instance-type: g5.8xlarge
51-
node.kubernetes.io/instance-type: m5.large
52-
node.kubernetes.io/instance-type: m5.large
53-
node.kubernetes.io/instance-type: g5.8xlarge
54-
```
55-
56-
You should see two EFA-enabled (in this example `g5.8xlarge`) nodes in the list.
57-
This verifies that you are connected to your EKS cluster and it is configured with EFA nodes.
58-
59-
3. Deploy Kubeflow MPI Operator
60-
61-
Kubeflow MPI Operator is required for running MPIJobs on EKS. We will use an MPIJob to test EFA.
62-
To deploy the MPI operator execute the following:
63-
64-
```sh
65-
kubectl apply -f https://raw.githubusercontent.com/kubeflow/mpi-operator/v0.3.0/deploy/v2beta1/mpi-operator.yaml
66-
```
67-
68-
Output:
69-
70-
```text
71-
namespace/mpi-operator created
72-
customresourcedefinition.apiextensions.k8s.io/mpijobs.kubeflow.org created
73-
serviceaccount/mpi-operator created
74-
clusterrole.rbac.authorization.k8s.io/kubeflow-mpijobs-admin created
75-
clusterrole.rbac.authorization.k8s.io/kubeflow-mpijobs-edit created
76-
clusterrole.rbac.authorization.k8s.io/kubeflow-mpijobs-view created
77-
clusterrole.rbac.authorization.k8s.io/mpi-operator created
78-
clusterrolebinding.rbac.authorization.k8s.io/mpi-operator created
79-
deployment.apps/mpi-operator created
80-
```
81-
82-
In addition to deploying the operator, please apply a patch to the mpi-operator clusterrole
83-
to allow the mpi-operator service account access to `leases` resources in the `coordination.k8s.io` apiGroup.
84-
85-
```sh
86-
kubectl apply -f https://raw.githubusercontent.com/aws-samples/aws-do-eks/main/Container-Root/eks/deployment/kubeflow/mpi-operator/clusterrole-mpi-operator.yaml
87-
```
88-
89-
Output:
90-
91-
```text
92-
clusterrole.rbac.authorization.k8s.io/mpi-operator configured
93-
```
94-
95-
4. Test EFA
96-
97-
We will run two tests. The first one will show the presence of EFA adapters on our EFA-enabled nodes. The second will test EFA performance.
98-
99-
5. EFA Info Test
100-
101-
To run the EFA info test, execute the following commands:
102-
103-
```sh
104-
kubectl apply -f https://raw.githubusercontent.com/aws-samples/aws-do-eks/main/Container-Root/eks/deployment/efa-device-plugin/test-efa.yaml
105-
```
106-
107-
Output:
108-
109-
```text
110-
mpijob.kubeflow.org/efa-info-test created
111-
```
112-
113-
```sh
114-
kubectl get pods
115-
```
11+
1. List the nodes by instance type:
11612

117-
Output:
13+
```sh
14+
kubectl get nodes -o yaml | grep instance-type | grep node | grep -v f:
15+
```
11816

119-
```text
120-
NAME READY STATUS RESTARTS AGE
121-
efa-info-test-launcher-hckkj 0/1 Completed 2 37s
122-
efa-info-test-worker-0 1/1 Running 0 38s
123-
efa-info-test-worker-1 1/1 Running 0 38s
124-
```
125-
126-
Once the test launcher pod enters status `Running` or `Completed`, see the test logs using the command below:
127-
128-
```sh
129-
kubectl logs -f $(kubectl get pods | grep launcher | cut -d ' ' -f 1)
130-
```
131-
132-
Output:
133-
134-
```text
135-
Warning: Permanently added 'efa-info-test-worker-1.efa-info-test-worker.default.svc,10.11.13.224' (ECDSA) to the list of known hosts.
136-
Warning: Permanently added 'efa-info-test-worker-0.efa-info-test-worker.default.svc,10.11.4.63' (ECDSA) to the list of known hosts.
137-
[1,1]<stdout>:provider: efa
138-
[1,1]<stdout>: fabric: efa
139-
[1,1]<stdout>: domain: rdmap197s0-rdm
140-
[1,1]<stdout>: version: 116.10
141-
[1,1]<stdout>: type: FI_EP_RDM
142-
[1,1]<stdout>: protocol: FI_PROTO_EFA
143-
[1,0]<stdout>:provider: efa
144-
[1,0]<stdout>: fabric: efa
145-
[1,0]<stdout>: domain: rdmap197s0-rdm
146-
[1,0]<stdout>: version: 116.10
147-
[1,0]<stdout>: type: FI_EP_RDM
148-
[1,0]<stdout>: protocol: FI_PROTO_EFA
149-
```
150-
151-
This result shows that two EFA adapters are available (one for each worker pod).
152-
153-
Lastly, delete the test job:
154-
155-
```sh
156-
kubectl delete mpijob efa-info-test
157-
```
158-
159-
Output:
160-
161-
```text
162-
mpijob.kubeflow.org "efa-info-test" deleted
163-
```
164-
165-
6. EFA NCCL Test
166-
167-
To run the EFA NCCL test please execute the following kubectl command:
168-
169-
```sh
170-
kubectl apply -f https://raw.githubusercontent.com/aws-samples/aws-do-eks/main/Container-Root/eks/deployment/efa-device-plugin/test-nccl-efa.yaml
171-
```
172-
173-
Output:
174-
175-
```text
176-
mpijob.kubeflow.org/test-nccl-efa created
177-
```
178-
179-
Then display the pods in the current namespace:
180-
181-
```sh
182-
kubectl get pods
183-
```
184-
185-
Output:
186-
187-
```text
188-
NAME READY STATUS RESTARTS AGE
189-
test-nccl-efa-launcher-tx47t 1/1 Running 2 (31s ago) 33s
190-
test-nccl-efa-worker-0 1/1 Running 0 33s
191-
test-nccl-efa-worker-1 1/1 Running 0 33s
192-
```
193-
194-
Once the launcher pod enters `Running` or `Completed` state, execute the following to see the test logs:
195-
196-
```sh
197-
kubectl logs -f $(kubectl get pods | grep launcher | cut -d ' ' -f 1)
198-
```
199-
200-
The following section from the beginning of the log, indicates that the test is being performed using EFA:
201-
202-
```text
203-
[1,0]<stdout>:test-nccl-efa-worker-0:21:21 [0] NCCL INFO NET/OFI Selected Provider is efa (found 1 nics)
204-
[1,0]<stdout>:test-nccl-efa-worker-0:21:21 [0] NCCL INFO Using network AWS Libfabric
205-
[1,0]<stdout>:NCCL version 2.12.7+cuda11.4
206-
```
207-
208-
Columns 8 and 12 in the output table show the in-place and out-of-place bus bandwidth calculated for the data size listed in column 1. In this case it is 3.13 and 3.12 GB/s respectively.
209-
Your actual results may be slightly different. The calculated average bus bandwidth is displayed at the bottom of the log when the test finishes after it reaches the max data size,
210-
specified in the mpijob manifest. In this result the average bus bandwidth is 1.15 GB/s.
211-
212-
```
213-
[1,0]<stdout>:# size count type redop root time algbw busbw #wrong time algbw busbw #wrong
214-
[1,0]<stdout>:# (B) (elements) (us) (GB/s) (GB/s) (us) (GB/s) (GB/s)
215-
...
216-
[1,0]<stdout>: 262144 65536 float sum -1 195.0 1.34 1.34 0 194.0 1.35 1.35 0
217-
[1,0]<stdout>: 524288 131072 float sum -1 296.9 1.77 1.77 0 291.1 1.80 1.80 0
218-
[1,0]<stdout>: 1048576 262144 float sum -1 583.4 1.80 1.80 0 579.6 1.81 1.81 0
219-
[1,0]<stdout>: 2097152 524288 float sum -1 983.3 2.13 2.13 0 973.9 2.15 2.15 0
220-
[1,0]<stdout>: 4194304 1048576 float sum -1 1745.4 2.40 2.40 0 1673.2 2.51 2.51 0
221-
...
222-
[1,0]<stdout>:# Avg bus bandwidth : 1.15327
223-
```
224-
225-
Finally, delete the test mpi job:
226-
227-
```sh
228-
kubectl delete mpijob test-nccl-efa
229-
```
230-
231-
Output:
232-
233-
```text
234-
mpijob.kubeflow.org "test-nccl-efa" deleted
235-
```
17+
```text
18+
node.kubernetes.io/instance-type: g5.8xlarge
19+
node.kubernetes.io/instance-type: m5.large
20+
node.kubernetes.io/instance-type: m5.large
21+
node.kubernetes.io/instance-type: g5.8xlarge
22+
```
23+
24+
You should see two EFA-enabled (in this example `g5.8xlarge`) nodes in the list.
25+
26+
2. Deploy Kubeflow MPI Operator
27+
28+
Kubeflow MPI Operator is required for running MPIJobs on EKS. We will use an MPIJob to test EFA.
29+
To deploy the MPI operator execute the following:
30+
31+
```sh
32+
kubectl apply -f https://raw.githubusercontent.com/kubeflow/mpi-operator/v0.3.0/deploy/v2beta1/mpi-operator.yaml
33+
```
34+
35+
```text
36+
namespace/mpi-operator created
37+
customresourcedefinition.apiextensions.k8s.io/mpijobs.kubeflow.org created
38+
serviceaccount/mpi-operator created
39+
clusterrole.rbac.authorization.k8s.io/kubeflow-mpijobs-admin created
40+
clusterrole.rbac.authorization.k8s.io/kubeflow-mpijobs-edit created
41+
clusterrole.rbac.authorization.k8s.io/kubeflow-mpijobs-view created
42+
clusterrole.rbac.authorization.k8s.io/mpi-operator created
43+
clusterrolebinding.rbac.authorization.k8s.io/mpi-operator created
44+
deployment.apps/mpi-operator created
45+
```
46+
47+
In addition to deploying the operator, please apply a patch to the mpi-operator clusterrole
48+
to allow the mpi-operator service account access to `leases` resources in the `coordination.k8s.io` apiGroup.
49+
50+
```sh
51+
kubectl apply -f https://raw.githubusercontent.com/aws-samples/aws-do-eks/main/Container-Root/eks/deployment/kubeflow/mpi-operator/clusterrole-mpi-operator.yaml
52+
```
53+
54+
```text
55+
clusterrole.rbac.authorization.k8s.io/mpi-operator configured
56+
```
57+
58+
3. EFA test
59+
60+
The results should shown that two EFA adapters are available (one for each worker pod)
61+
62+
```sh
63+
kubectl apply -f https://raw.githubusercontent.com/aws-samples/aws-do-eks/main/Container-Root/eks/deployment/efa-device-plugin/test-efa.yaml
64+
```
65+
66+
```text
67+
mpijob.kubeflow.org/efa-info-test created
68+
```
69+
70+
Once the test launcher pod enters status `Running` or `Completed`, see the test logs using the command below:
71+
72+
```sh
73+
kubectl logs -f $(kubectl get pods | grep launcher | cut -d ' ' -f 1)
74+
```
75+
76+
```text
77+
Warning: Permanently added 'efa-info-test-worker-1.efa-info-test-worker.default.svc,10.11.13.224' (ECDSA) to the list of known hosts.
78+
Warning: Permanently added 'efa-info-test-worker-0.efa-info-test-worker.default.svc,10.11.4.63' (ECDSA) to the list of known hosts.
79+
[1,1]<stdout>:provider: efa
80+
[1,1]<stdout>: fabric: efa
81+
[1,1]<stdout>: domain: rdmap197s0-rdm
82+
[1,1]<stdout>: version: 116.10
83+
[1,1]<stdout>: type: FI_EP_RDM
84+
[1,1]<stdout>: protocol: FI_PROTO_EFA
85+
[1,0]<stdout>:provider: efa
86+
[1,0]<stdout>: fabric: efa
87+
[1,0]<stdout>: domain: rdmap197s0-rdm
88+
[1,0]<stdout>: version: 116.10
89+
[1,0]<stdout>: type: FI_EP_RDM
90+
[1,0]<stdout>: protocol: FI_PROTO_EFA
91+
```
92+
93+
4. EFA NCCL test
94+
95+
To run the EFA NCCL test please execute the following kubectl command:
96+
97+
```sh
98+
kubectl apply -f https://raw.githubusercontent.com/aws-samples/aws-do-eks/main/Container-Root/eks/deployment/efa-device-plugin/test-nccl-efa.yaml
99+
```
100+
101+
```text
102+
mpijob.kubeflow.org/test-nccl-efa created
103+
```
104+
105+
Once the launcher pod enters `Running` or `Completed` state, execute the following to see the test logs:
106+
107+
```sh
108+
kubectl logs -f $(kubectl get pods | grep launcher | cut -d ' ' -f 1)
109+
```
110+
111+
```text
112+
[1,0]<stdout>:test-nccl-efa-worker-0:21:21 [0] NCCL INFO NET/OFI Selected Provider is efa (found 1 nics)
113+
[1,0]<stdout>:test-nccl-efa-worker-0:21:21 [0] NCCL INFO Using network AWS Libfabric
114+
[1,0]<stdout>:NCCL version 2.12.7+cuda11.4
115+
```
116+
117+
Columns 8 and 12 in the output table show the in-place and out-of-place bus bandwidth calculated for the data size listed in column 1. In this case it is 3.13 and 3.12 GB/s respectively.
118+
Your actual results may be slightly different. The calculated average bus bandwidth is displayed at the bottom of the log when the test finishes after it reaches the max data size,
119+
specified in the mpijob manifest. In this result the average bus bandwidth is 1.15 GB/s.
120+
121+
```text
122+
[1,0]<stdout>:# size count type redop root time algbw busbw #wrong time algbw busbw #wrong
123+
[1,0]<stdout>:# (B) (elements) (us) (GB/s) (GB/s) (us) (GB/s) (GB/s)
124+
...
125+
[1,0]<stdout>: 262144 65536 float sum -1 195.0 1.34 1.34 0 194.0 1.35 1.35 0
126+
[1,0]<stdout>: 524288 131072 float sum -1 296.9 1.77 1.77 0 291.1 1.80 1.80 0
127+
[1,0]<stdout>: 1048576 262144 float sum -1 583.4 1.80 1.80 0 579.6 1.81 1.81 0
128+
[1,0]<stdout>: 2097152 524288 float sum -1 983.3 2.13 2.13 0 973.9 2.15 2.15 0
129+
[1,0]<stdout>: 4194304 1048576 float sum -1 1745.4 2.40 2.40 0 1673.2 2.51 2.51 0
130+
...
131+
[1,0]<stdout>:# Avg bus bandwidth : 1.15327
132+
```
236133

237134
## Destroy
238135

239-
To teardown and remove the resources created in this example:
240-
241-
```sh
242-
terraform destroy -target module.eks_blueprints_addons -auto-approve
243-
terraform destroy -target module.eks -auto-approve
244-
terraform destroy -auto-approve
245-
```
136+
{%
137+
include-markdown "../../docs/_partials/destroy.md"
138+
%}

‎patterns/external-secrets/README.md

+16-74
Original file line numberDiff line numberDiff line change
@@ -1,83 +1,25 @@
11
# Amazon EKS Cluster w/ External Secrets Operator
22

3-
This example deploys an EKS Cluster with the External Secrets Operator. The cluster is populated with a ClusterSecretStore and SecretStore example using SecretManager and Parameter Store respectively. A secret for each store is also created. Both stores use IRSA to retrieve the secret values from AWS.
3+
This pattern deploys an EKS Cluster with the External Secrets Operator.
4+
The cluster is populated with a ClusterSecretStore and SecretStore example
5+
using SecretManager and Parameter Store respectively. A secret for each
6+
store is also created. Both stores use IRSA to retrieve the secret values from AWS.
47

5-
## How to Deploy
8+
## Deploy
69

7-
### Prerequisites:
10+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
811

9-
Ensure that you have installed the following tools in your Mac or Windows Laptop before start working with this module and run Terraform Plan and Apply
12+
## Validate
1013

11-
1. [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
12-
2. [Kubectl](https://Kubernetes.io/docs/tasks/tools/)
13-
3. [Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
14+
1. List the secret resources in the `external-secrets` namespace
1415

15-
### Deployment Steps
16+
```sh
17+
kubectl get externalsecrets -n external-secrets
18+
kubectl get secrets -n external-secrets
19+
```
1620

17-
#### Step 1: Clone the repo using the command below
21+
## Destroy
1822

19-
```sh
20-
git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git
21-
```
22-
23-
#### Step 2: Run Terraform INIT
24-
25-
Initialize a working directory with configuration files
26-
27-
```sh
28-
cd patterns/external-secrets/
29-
terraform init
30-
```
31-
32-
#### Step 3: Run Terraform PLAN
33-
34-
Verify the resources created by this execution
35-
36-
```sh
37-
export AWS_REGION=<ENTER YOUR REGION> # Select your own region
38-
terraform plan
39-
```
40-
41-
#### Step 4: Finally, Terraform APPLY
42-
43-
**Deploy the pattern**
44-
45-
```sh
46-
terraform apply
47-
```
48-
49-
Enter `yes` to apply.
50-
51-
### Configure `kubectl` and test cluster
52-
53-
EKS Cluster details can be extracted from terraform output or from AWS Console to get the name of cluster.
54-
This following command used to update the `kubeconfig` in your local machine where you run kubectl commands to interact with your EKS Cluster.
55-
56-
#### Step 5: Run `update-kubeconfig` command
57-
58-
`~/.kube/config` file gets updated with cluster details and certificate from the below command
59-
60-
$ aws eks --region <enter-your-region> update-kubeconfig --name <cluster-name>
61-
62-
### Step 6: List the secret resources in the `external-secrets` namespace
63-
64-
$ kubectl get externalsecrets -n external-secrets
65-
$ kubectl get secrets -n external-secrets
66-
67-
## Cleanup
68-
69-
To clean up your environment, destroy the Terraform modules in reverse order.
70-
71-
Destroy the Kubernetes Add-ons, EKS cluster with Node groups and VPC
72-
73-
```sh
74-
terraform destroy -target="module.eks_blueprints_kubernetes_addons" -auto-approve
75-
terraform destroy -target="module.eks_blueprints" -auto-approve
76-
terraform destroy -target="module.vpc" -auto-approve
77-
```
78-
79-
Finally, destroy any additional resources that are not in the above modules
80-
81-
```sh
82-
terraform destroy -auto-approve
83-
```
23+
{%
24+
include-markdown "../../docs/_partials/destroy.md"
25+
%}

‎patterns/fargate-serverless/README.md

+155-176
Original file line numberDiff line numberDiff line change
@@ -1,193 +1,172 @@
11
# Serverless Amazon EKS Cluster
22

3-
This example shows how to provision an Amazon EKS Cluster (serverless data plane) using Fargate Profiles.
3+
This pattern demonstrates an Amazon EKS Cluster that utilizes Fargate profiles for a serverless data plane.
44

5-
This example solution provides:
5+
## Deploy
66

7-
- AWS EKS Cluster (control plane)
8-
- AWS EKS Fargate Profiles for the `kube-system` namespace which is used by the `coredns`, `vpc-cni`, and `kube-proxy` addons, as well as profile that will match on `app-*` namespaces using a wildcard pattern.
9-
- AWS EKS managed addons `coredns`, `vpc-cni` and `kube-proxy`
10-
- AWS Load Balancer Controller add-on deployed through a Helm chart. The default AWS Load Balancer Controller add-on configuration is overridden so that it can be deployed on Fargate compute.
11-
- A sample-app is provided (in-line) to demonstrate how to configure the Ingress so that application can be accessed over the internet.
7+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
128

13-
## Prerequisites:
9+
## Validate
1410

15-
Ensure that you have the following tools installed locally:
11+
1. List the nodes in in the cluster; you should see Fargate instances:
12+
13+
```sh
14+
kubectl get nodes
15+
```
16+
17+
```text
18+
NAME STATUS ROLES AGE VERSION
19+
fargate-ip-10-0-17-17.us-west-2.compute.internal Ready <none> 25m v1.26.3-eks-f4dc2c0
20+
fargate-ip-10-0-20-244.us-west-2.compute.internal Ready <none> 71s v1.26.3-eks-f4dc2c0
21+
fargate-ip-10-0-41-143.us-west-2.compute.internal Ready <none> 25m v1.26.3-eks-f4dc2c0
22+
fargate-ip-10-0-44-95.us-west-2.compute.internal Ready <none> 25m v1.26.3-eks-f4dc2c0
23+
fargate-ip-10-0-45-153.us-west-2.compute.internal Ready <none> 77s v1.26.3-eks-f4dc2c0
24+
fargate-ip-10-0-47-31.us-west-2.compute.internal Ready <none> 75s v1.26.3-eks-f4dc2c0
25+
fargate-ip-10-0-6-175.us-west-2.compute.internal Ready <none> 25m v1.26.3-eks-f4dc2c0
26+
```
27+
28+
2. List the pods. All the pods should reach a status of `Running` after approximately 60 seconds:
29+
30+
```sh
31+
kubectl get pods -A
32+
```
33+
34+
```text
35+
NAMESPACE NAME READY STATUS RESTARTS AGE
36+
app-2048 app-2048-65bd744dfb-7g9rx 1/1 Running 0 2m34s
37+
app-2048 app-2048-65bd744dfb-nxcbm 1/1 Running 0 2m34s
38+
app-2048 app-2048-65bd744dfb-z4b6z 1/1 Running 0 2m34s
39+
kube-system aws-load-balancer-controller-6cbdb58654-fvskt 1/1 Running 0 26m
40+
kube-system aws-load-balancer-controller-6cbdb58654-sc7dk 1/1 Running 0 26m
41+
kube-system coredns-7b7bddbc85-jmbv6 1/1 Running 0 26m
42+
kube-system coredns-7b7bddbc85-rgmzq 1/1 Running 0 26m
43+
```
44+
45+
3. Validate the `aws-logging` configMap for Fargate Fluentbit was created:
46+
47+
```sh
48+
kubectl -n aws-observability get configmap aws-logging -o yaml
49+
```
50+
51+
```yaml
52+
apiVersion: v1
53+
data:
54+
filters.conf: |
55+
[FILTER]
56+
Name parser
57+
Match *
58+
Key_Name log
59+
Parser regex
60+
Preserve_Key True
61+
Reserve_Data True
62+
flb_log_cw: "true"
63+
output.conf: |
64+
[OUTPUT]
65+
Name cloudwatch_logs
66+
Match *
67+
region us-west-2
68+
log_group_name /fargate-serverless/fargate-fluentbit-logs20230509014113352200000006
69+
log_stream_prefix fargate-logs-
70+
auto_create_group true
71+
parsers.conf: |
72+
[PARSER]
73+
Name regex
74+
Format regex
75+
Regex ^(?<time>[^ ]+) (?<stream>[^ ]+) (?<logtag>[^ ]+) (?<message>.+)$
76+
Time_Key time
77+
Time_Format %Y-%m-%dT%H:%M:%S.%L%z
78+
Time_Keep On
79+
Decode_Field_As json message
80+
immutable: false
81+
kind: ConfigMap
82+
metadata:
83+
creationTimestamp: "2023-05-08T21:14:52Z"
84+
name: aws-logging
85+
namespace: aws-observability
86+
resourceVersion: "1795"
87+
uid: d822bcf5-a441-4996-857e-7fb1357bc07e
88+
```
89+
90+
You can also validate if the CloudWatch LogGroup was created accordingly, and LogStreams were populated:
91+
92+
```sh
93+
aws logs describe-log-groups \
94+
--log-group-name-prefix "/fargate-serverless/fargate-fluentbit"
95+
```
96+
97+
```json
98+
{
99+
"logGroups": [
100+
{
101+
"logGroupName": "/fargate-serverless/fargate-fluentbit-logs20230509014113352200000006",
102+
"creationTime": 1683580491652,
103+
"retentionInDays": 90,
104+
"metricFilterCount": 0,
105+
"arn": "arn:aws:logs:us-west-2:111222333444:log-group:/fargate-serverless/fargate-fluentbit-logs20230509014113352200000006:*",
106+
"storedBytes": 0
107+
}
108+
]
109+
}
110+
```
111+
112+
```sh
113+
aws logs describe-log-streams \
114+
--log-group-name "/fargate-serverless/fargate-fluentbit-logs20230509014113352200000006" \
115+
--log-stream-name-prefix fargate-logs --query 'logStreams[].logStreamName'
116+
```
117+
118+
```json
119+
[
120+
"fargate-logs-flblogs.var.log.fluent-bit.log",
121+
"fargate-logs-kube.var.log.containers.aws-load-balancer-controller-7f989fc6c-grjsq_kube-system_aws-load-balancer-controller-feaa22b4cdaa71ecfc8355feb81d4b61ea85598a7bb57aef07667c767c6b98e4.log",
122+
"fargate-logs-kube.var.log.containers.aws-load-balancer-controller-7f989fc6c-wzr46_kube-system_aws-load-balancer-controller-69075ea9ab3c7474eac2a1696d3a84a848a151420cd783d79aeef960b181567f.log",
123+
"fargate-logs-kube.var.log.containers.coredns-7b7bddbc85-8cxvq_kube-system_coredns-9e4f3ab435269a566bcbaa606c02c146ad58508e67cef09fa87d5c09e4ac0088.log",
124+
"fargate-logs-kube.var.log.containers.coredns-7b7bddbc85-gcjwp_kube-system_coredns-11016818361cd68c32bf8f0b1328f3d92a6d7b8cf5879bfe8b301f393cb011cc.log"
125+
]
126+
```
16127

17-
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
18-
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
19-
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
128+
### Example
20129

21-
## Deploy
130+
1. Create an ingress resource using the AWS load balancer controller deployed, pointing to our application service:
22131

23-
To provision this example:
132+
```sh
133+
kubectl get svc -n app-2048
134+
```
24135

25-
```sh
26-
terraform init
27-
terraform apply
136+
```text
137+
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
138+
app-2048 NodePort 172.20.33.217 <none> 80:32568/TCP 2m48s
139+
```
28140

29-
```
141+
```sh
142+
kubectl -n app-2048 create ingress app-2048 --class alb --rule="/*=app-2048:80" \
143+
--annotation alb.ingress.kubernetes.io/scheme=internet-facing \
144+
--annotation alb.ingress.kubernetes.io/target-type=ip
145+
```
30146

31-
Enter `yes` at command prompt to apply
147+
```sh
148+
kubectl -n app-2048 get ingress
149+
```
32150

33-
## Validate
151+
```text
152+
NAME CLASS HOSTS ADDRESS PORTS AGE
153+
app-2048 alb * k8s-app2048-app2048-6d9c5e92d6-1234567890.us-west-2.elb.amazonaws.com 80 4m9s
154+
```
34155

35-
The following command will update the `kubeconfig` on your local machine and allow you to interact with your EKS Cluster using `kubectl` to validate the CoreDNS deployment for Fargate.
36-
37-
1. Check the Terraform provided Output, to update your `kubeconfig`
38-
39-
```hcl
40-
Apply complete! Resources: 63 added, 0 changed, 0 destroyed.
41-
42-
Outputs:
43-
44-
configure_kubectl = "aws eks --region us-west-2 update-kubeconfig --name fargate-serverless"
45-
```
46-
47-
2. Run `update-kubeconfig` command, using the Terraform provided Output, replace with your `$AWS_REGION` and your `$CLUSTER_NAME` variables.
48-
49-
```sh
50-
aws eks --region <$AWS_REGION> update-kubeconfig --name <$CLUSTER_NAME>
51-
```
52-
53-
3. Test by listing Nodes in in the Cluster, you should see Fargate instances as your Cluster Nodes.
54-
55-
56-
```sh
57-
kubectl get nodes
58-
NAME STATUS ROLES AGE VERSION
59-
fargate-ip-10-0-17-17.us-west-2.compute.internal Ready <none> 25m v1.26.3-eks-f4dc2c0
60-
fargate-ip-10-0-20-244.us-west-2.compute.internal Ready <none> 71s v1.26.3-eks-f4dc2c0
61-
fargate-ip-10-0-41-143.us-west-2.compute.internal Ready <none> 25m v1.26.3-eks-f4dc2c0
62-
fargate-ip-10-0-44-95.us-west-2.compute.internal Ready <none> 25m v1.26.3-eks-f4dc2c0
63-
fargate-ip-10-0-45-153.us-west-2.compute.internal Ready <none> 77s v1.26.3-eks-f4dc2c0
64-
fargate-ip-10-0-47-31.us-west-2.compute.internal Ready <none> 75s v1.26.3-eks-f4dc2c0
65-
fargate-ip-10-0-6-175.us-west-2.compute.internal Ready <none> 25m v1.26.3-eks-f4dc2c0
66-
```
67-
68-
4. Test by listing all the Pods running currently. All the Pods should reach a status of `Running` after approximately 60 seconds:
69-
70-
```sh
71-
kubectl get pods -A
72-
NAMESPACE NAME READY STATUS RESTARTS AGE
73-
app-2048 app-2048-65bd744dfb-7g9rx 1/1 Running 0 2m34s
74-
app-2048 app-2048-65bd744dfb-nxcbm 1/1 Running 0 2m34s
75-
app-2048 app-2048-65bd744dfb-z4b6z 1/1 Running 0 2m34s
76-
kube-system aws-load-balancer-controller-6cbdb58654-fvskt 1/1 Running 0 26m
77-
kube-system aws-load-balancer-controller-6cbdb58654-sc7dk 1/1 Running 0 26m
78-
kube-system coredns-7b7bddbc85-jmbv6 1/1 Running 0 26m
79-
kube-system coredns-7b7bddbc85-rgmzq 1/1 Running 0 26m
80-
```
81-
82-
5. Check if the `aws-logging` configMap for Fargate Fluentbit was created.
83-
84-
```sh
85-
kubectl -n aws-observability get configmap aws-logging -o yaml
86-
apiVersion: v1
87-
data:
88-
filters.conf: |
89-
[FILTER]
90-
Name parser
91-
Match *
92-
Key_Name log
93-
Parser regex
94-
Preserve_Key True
95-
Reserve_Data True
96-
flb_log_cw: "true"
97-
output.conf: |
98-
[OUTPUT]
99-
Name cloudwatch_logs
100-
Match *
101-
region us-west-2
102-
log_group_name /fargate-serverless/fargate-fluentbit-logs20230509014113352200000006
103-
log_stream_prefix fargate-logs-
104-
auto_create_group true
105-
parsers.conf: |
106-
[PARSER]
107-
Name regex
108-
Format regex
109-
Regex ^(?<time>[^ ]+) (?<stream>[^ ]+) (?<logtag>[^ ]+) (?<message>.+)$
110-
Time_Key time
111-
Time_Format %Y-%m-%dT%H:%M:%S.%L%z
112-
Time_Keep On
113-
Decode_Field_As json message
114-
immutable: false
115-
kind: ConfigMap
116-
metadata:
117-
creationTimestamp: "2023-05-08T21:14:52Z"
118-
name: aws-logging
119-
namespace: aws-observability
120-
resourceVersion: "1795"
121-
uid: d822bcf5-a441-4996-857e-7fb1357bc07e
122-
```
123-
124-
You can also validate if the CloudWatch LogGroup was created accordingly, and LogStreams were populated.
125-
126-
```sh
127-
aws logs describe-log-groups --log-group-name-prefix "/fargate-serverless/fargate-fluentbit"
128-
{
129-
"logGroups": [
130-
{
131-
"logGroupName": "/fargate-serverless/fargate-fluentbit-logs20230509014113352200000006",
132-
"creationTime": 1683580491652,
133-
"retentionInDays": 90,
134-
"metricFilterCount": 0,
135-
"arn": "arn:aws:logs:us-west-2:111222333444:log-group:/fargate-serverless/fargate-fluentbit-logs20230509014113352200000006:*",
136-
"storedBytes": 0
137-
}
138-
]
139-
}
140-
```
141-
142-
```sh
143-
aws logs describe-log-streams --log-group-name "/fargate-serverless/fargate-fluentbit-logs20230509014113352200000006" --log-stream-name-prefix fargate-logs --query 'logStreams[].logStreamName'
144-
[
145-
"fargate-logs-flblogs.var.log.fluent-bit.log",
146-
"fargate-logs-kube.var.log.containers.aws-load-balancer-controller-7f989fc6c-grjsq_kube-system_aws-load-balancer-controller-feaa22b4cdaa71ecfc8355feb81d4b61ea85598a7bb57aef07667c767c6b98e4.log",
147-
"fargate-logs-kube.var.log.containers.aws-load-balancer-controller-7f989fc6c-wzr46_kube-system_aws-load-balancer-controller-69075ea9ab3c7474eac2a1696d3a84a848a151420cd783d79aeef960b181567f.log",
148-
"fargate-logs-kube.var.log.containers.coredns-7b7bddbc85-8cxvq_kube-system_coredns-9e4f3ab435269a566bcbaa606c02c146ad58508e67cef09fa87d5c09e4ac0088.log",
149-
"fargate-logs-kube.var.log.containers.coredns-7b7bddbc85-gcjwp_kube-system_coredns-11016818361cd68c32bf8f0b1328f3d92a6d7b8cf5879bfe8b301f393cb011cc.log"
150-
]
151-
```
152-
153-
6. (Optional) Test that the sample application.
154-
155-
Create an Ingress using the AWS LoadBalancer Controller deployed with the EKS Blueprints Add-ons module, pointing to our application Service.
156-
157-
```sh
158-
kubectl get svc -n app-2048
159-
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
160-
app-2048 NodePort 172.20.33.217 <none> 80:32568/TCP 2m48s
161-
```
162-
163-
```sh
164-
kubectl -n app-2048 create ingress app-2048 --class alb --rule="/*=app-2048:80" \
165-
--annotation alb.ingress.kubernetes.io/scheme=internet-facing \
166-
--annotation alb.ingress.kubernetes.io/target-type=ip
167-
```
168-
169-
```sh
170-
kubectl -n app-2048 get ingress
171-
NAME CLASS HOSTS ADDRESS PORTS AGE
172-
app-2048 alb * k8s-app2048-app2048-6d9c5e92d6-1234567890.us-west-2.elb.amazonaws.com 80 4m9s
173-
```
174-
175-
Open the browser to access the application via the URL address shown in the last output in the ADDRESS column. In our example `k8s-app2048-app2048-6d9c5e92d6-1234567890.us-west-2.elb.amazonaws.com`.
176-
177-
> You might need to wait a few minutes, and then refresh your browser.
178-
> If your Ingress isn't created after several minutes, then run this command to view the AWS Load Balancer Controller logs:
179-
180-
```sh
181-
kubectl logs -n kube-system deployment.apps/aws-load-balancer-controller
182-
```
156+
2. Open the browser to access the application via the URL address shown in the last output in the ADDRESS column.
183157

184-
## Destroy
158+
In our example: `k8s-app2048-app2048-6d9c5e92d6-1234567890.us-west-2.elb.amazonaws.com`
159+
160+
!!! info
161+
You might need to wait a few minutes, and then refresh your browser.
162+
If your Ingress isn't created after several minutes, then run this command to view the AWS Load Balancer Controller logs:
185163
186-
To teardown and remove the resources created in this example:
164+
```sh
165+
kubectl logs -n kube-system deployment.apps/aws-load-balancer-controller
166+
```
167+
168+
## Destroy
187169
188-
```sh
189-
kubectl -n app-2048 delete ingress app-2048
190-
terraform destroy -target module.eks_blueprints_addons -auto-approve
191-
terraform destroy -target module.eks -auto-approve
192-
terraform destroy -auto-approve
193-
```
170+
{%
171+
include-markdown "../../docs/_partials/destroy.md"
172+
%}
+48-76
Original file line numberDiff line numberDiff line change
@@ -1,94 +1,66 @@
11
# Fully Private Amazon EKS Cluster
22

3-
This examples demonstrates how to deploy an Amazon EKS cluster that is deployed on the AWS Cloud, but doesn't have outbound internet access. For that your cluster must pull images from a container registry that's in your VPC, and also must have endpoint private access enabled. This is required for nodes to register with the cluster endpoint.
3+
This pattern demonstrates an Amazon EKS cluster that does not have internet access.
4+
The private cluster must pull images from a container registry that is within in your VPC,
5+
and also must have endpoint private access enabled. This is required for nodes
6+
to register with the cluster endpoint.
47

58
Please see this [document](https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html) for more details on configuring fully private EKS Clusters.
69

7-
For fully Private EKS clusters requires the following VPC endpoints to be created to communicate with AWS services. This example solution will provide these endpoints if you choose to create VPC. If you are using an existing VPC then you may need to ensure these endpoints are created.
10+
For fully Private EKS clusters requires the following VPC endpoints to be created to communicate with AWS services.
11+
This example solution will provide these endpoints if you choose to create VPC.
12+
If you are using an existing VPC then you may need to ensure these endpoints are created.
813

9-
com.amazonaws.region.aps-workspaces - For AWS Managed Prometheus Workspace
10-
com.amazonaws.region.ssm - Secrets Management
14+
com.amazonaws.region.aps-workspaces - If using AWS Managed Prometheus Workspace
15+
com.amazonaws.region.ssm - Secrets Management
1116
com.amazonaws.region.ec2
1217
com.amazonaws.region.ecr.api
1318
com.amazonaws.region.ecr.dkr
14-
com.amazonaws.region.logs – For CloudWatch Logs
15-
com.amazonaws.region.sts – If using AWS Fargate or IAM roles for service accounts
16-
com.amazonaws.region.elasticloadbalancing – If using Application Load Balancers
17-
com.amazonaws.region.autoscaling – If using Cluster Autoscaler
18-
com.amazonaws.region.s3 – Creates S3 gateway
19-
20-
## Prerequisites:
21-
22-
Ensure that you have the following tools installed locally:
23-
24-
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
25-
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
26-
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
19+
com.amazonaws.region.logs – For CloudWatch Logs
20+
com.amazonaws.region.sts – If using AWS Fargate or IAM roles for service accounts
21+
com.amazonaws.region.elasticloadbalancing – If using Application Load Balancers
22+
com.amazonaws.region.autoscaling – If using Cluster Autoscaler
23+
com.amazonaws.region.s3
2724

2825
## Deploy
2926

30-
Since this is a Fully Private Amazon EKS Cluster, make sure that you'll have access to the Amazon VPC where the cluster will be deployed, otherwise you won't be able to access it.
31-
32-
See the [`privatelink-access`](https://github.com/aws-ia/terraform-aws-eks-blueprints/tree/main/patterns/privatelink-access) pattern for using AWS PrivateLink to access the private cluster from another VPC.
33-
34-
To provision this example:
35-
36-
```sh
37-
terraform init
38-
terraform apply
39-
```
40-
41-
Enter `yes` at command prompt to apply
27+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
4228

4329
## Validate
4430

45-
The following command will update the `kubeconfig` on your local machine and allow you to interact with your EKS Cluster using `kubectl` to validate the CoreDNS deployment for Fargate.
46-
47-
1. Check the Terraform provided Output, to update your `kubeconfig`
48-
49-
```hcl
50-
Apply complete! Resources: 63 added, 0 changed, 0 destroyed.
51-
52-
Outputs:
53-
54-
configure_kubectl = "aws eks --region us-west-2 update-kubeconfig --name fully-private-cluster"
55-
```
56-
57-
2. Run `update-kubeconfig` command, using the Terraform provided Output, replace with your `$AWS_REGION` and your `$CLUSTER_NAME` variables.
58-
59-
```sh
60-
aws eks --region <$AWS_REGION> update-kubeconfig --name <$CLUSTER_NAME>
61-
```
62-
63-
3. Test by listing Nodes in in the Cluster.
64-
65-
```sh
66-
kubectl get nodes
67-
NAME STATUS ROLES AGE VERSION
68-
ip-10-0-19-90.us-west-2.compute.internal Ready <none> 8m34s v1.26.2-eks-a59e1f0
69-
ip-10-0-44-110.us-west-2.compute.internal Ready <none> 8m36s v1.26.2-eks-a59e1f0
70-
ip-10-0-9-147.us-west-2.compute.internal Ready <none> 8m35s v1.26.2-eks-a59e1f0
71-
```
72-
73-
4. Test by listing all the Pods running currently. All the Pods should reach a status of `Running` after approximately 60 seconds:
74-
75-
```sh
76-
kubectl $ kubectl get pods -A
77-
NAMESPACE NAME READY STATUS RESTARTS AGE
78-
kube-system aws-node-jvn9x 1/1 Running 0 7m42s
79-
kube-system aws-node-mnjlf 1/1 Running 0 7m45s
80-
kube-system aws-node-q458h 1/1 Running 0 7m49s
81-
kube-system coredns-6c45d94f67-495rr 1/1 Running 0 14m
82-
kube-system coredns-6c45d94f67-5c8tc 1/1 Running 0 14m
83-
kube-system kube-proxy-47wfh 1/1 Running 0 8m32s
84-
kube-system kube-proxy-f6chz 1/1 Running 0 8m30s
85-
kube-system kube-proxy-xcfkc 1/1 Running 0 8m31s
86-
```
31+
1. Test by listing Nodes in in the cluster:
32+
33+
```sh
34+
kubectl get nodes
35+
```
36+
37+
```text
38+
NAME STATUS ROLES AGE VERSION
39+
ip-10-0-19-90.us-west-2.compute.internal Ready <none> 8m34s v1.26.2-eks-a59e1f0
40+
ip-10-0-44-110.us-west-2.compute.internal Ready <none> 8m36s v1.26.2-eks-a59e1f0
41+
ip-10-0-9-147.us-west-2.compute.internal Ready <none> 8m35s v1.26.2-eks-a59e1f0
42+
```
43+
44+
2. Test by listing all the Pods running currently. All the Pods should reach a status of `Running` after approximately 60 seconds:
45+
46+
```sh
47+
kubectl get pods -A
48+
```
49+
50+
```text
51+
NAMESPACE NAME READY STATUS RESTARTS AGE
52+
kube-system aws-node-jvn9x 1/1 Running 0 7m42s
53+
kube-system aws-node-mnjlf 1/1 Running 0 7m45s
54+
kube-system aws-node-q458h 1/1 Running 0 7m49s
55+
kube-system coredns-6c45d94f67-495rr 1/1 Running 0 14m
56+
kube-system coredns-6c45d94f67-5c8tc 1/1 Running 0 14m
57+
kube-system kube-proxy-47wfh 1/1 Running 0 8m32s
58+
kube-system kube-proxy-f6chz 1/1 Running 0 8m30s
59+
kube-system kube-proxy-xcfkc 1/1 Running 0 8m31s
60+
```
8761

8862
## Destroy
8963

90-
To teardown and remove the resources created in this example:
91-
92-
```sh
93-
terraform destroy -auto-approve
94-
```
64+
{%
65+
include-markdown "../../docs/_partials/destroy.md"
66+
%}

‎patterns/ipv6-eks-cluster/README.md

+29-50
Original file line numberDiff line numberDiff line change
@@ -1,66 +1,45 @@
11
# Amazon EKS Cluster w/ IPv6 Networking
22

3-
This example shows how to create an EKS cluster that utilizes IPv6 networking.
4-
5-
## Prerequisites:
6-
7-
Ensure that you have the following tools installed locally:
8-
9-
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
10-
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
11-
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
3+
This pattern demonstrates an EKS cluster that utilizes IPv6 networking.
124

135
## Deploy
146

15-
To provision this example:
16-
17-
```sh
18-
terraform init
19-
terraform apply
20-
```
21-
22-
Enter `yes` at command prompt to apply
7+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
238

249
## Validate
2510

26-
The following command will update the `kubeconfig` on your local machine and allow you to interact with your EKS Cluster using `kubectl` to validate the CoreDNS deployment for Fargate.
11+
1. Test by listing all the pods running currently; the `IP` should be an IPv6 address.
2712

28-
1. Run `update-kubeconfig` command:
13+
```sh
14+
kubectl get pods -A -o wide
15+
```
2916

30-
```sh
31-
aws eks --region <REGION> update-kubeconfig --name <CLUSTER_NAME>
32-
```
17+
```text
18+
# Output should look like below
19+
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
20+
kube-system aws-node-bhd2s 1/1 Running 0 3m5s 2600:1f13:6c4:a703:ecf8:3ac1:76b0:9303 ip-10-0-10-183.us-west-2.compute.internal <none> <none>
21+
kube-system aws-node-nmdgq 1/1 Running 0 3m21s 2600:1f13:6c4:a705:a929:f8d4:9350:1b20 ip-10-0-12-188.us-west-2.compute.internal <none> <none>
22+
kube-system coredns-799c5565b4-6wxrc 1/1 Running 0 10m 2600:1f13:6c4:a705:bbda:: ip-10-0-12-188.us-west-2.compute.internal <none> <none>
23+
kube-system coredns-799c5565b4-fjq4q 1/1 Running 0 10m 2600:1f13:6c4:a705:bbda::1 ip-10-0-12-188.us-west-2.compute.internal <none> <none>
24+
kube-system kube-proxy-58tp7 1/1 Running 0 4m25s 2600:1f13:6c4:a703:ecf8:3ac1:76b0:9303 ip-10-0-10-183.us-west-2.compute.internal <none> <none>
25+
kube-system kube-proxy-hqkgw 1/1 Running 0 4m25s 2600:1f13:6c4:a705:a929:f8d4:9350:1b20 ip-10-0-12-188.us-west-2.compute.internal <none> <none>
26+
```
3327

34-
2. Test by listing all the pods running currently; the `IP` should be an IPv6 address.
28+
2. Test by listing all the nodes running currently; the `INTERNAL-IP` should be an IPv6 address.
3529

36-
```sh
37-
kubectl get pods -A -o wide
30+
```sh
31+
    kubectl get nodes -o wide
32+
```
3833

39-
# Output should look like below
40-
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
41-
kube-system aws-node-bhd2s 1/1 Running 0 3m5s 2600:1f13:6c4:a703:ecf8:3ac1:76b0:9303 ip-10-0-10-183.us-west-2.compute.internal <none> <none>
42-
kube-system aws-node-nmdgq 1/1 Running 0 3m21s 2600:1f13:6c4:a705:a929:f8d4:9350:1b20 ip-10-0-12-188.us-west-2.compute.internal <none> <none>
43-
kube-system coredns-799c5565b4-6wxrc 1/1 Running 0 10m 2600:1f13:6c4:a705:bbda:: ip-10-0-12-188.us-west-2.compute.internal <none> <none>
44-
kube-system coredns-799c5565b4-fjq4q 1/1 Running 0 10m 2600:1f13:6c4:a705:bbda::1 ip-10-0-12-188.us-west-2.compute.internal <none> <none>
45-
kube-system kube-proxy-58tp7 1/1 Running 0 4m25s 2600:1f13:6c4:a703:ecf8:3ac1:76b0:9303 ip-10-0-10-183.us-west-2.compute.internal <none> <none>
46-
kube-system kube-proxy-hqkgw 1/1 Running 0 4m25s 2600:1f13:6c4:a705:a929:f8d4:9350:1b20 ip-10-0-12-188.us-west-2.compute.internal <none> <none>
47-
```
48-
49-
3. Test by listing all the nodes running currently; the `INTERNAL-IP` should be an IPv6 address.
50-
51-
```sh
52-
kubectl nodes -A -o wide
53-
54-
# Output should look like below
55-
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
56-
ip-10-0-10-183.us-west-2.compute.internal Ready <none> 4m57s v1.24.7-eks-fb459a0 2600:1f13:6c4:a703:ecf8:3ac1:76b0:9303 <none> Amazon Linux 2 5.4.226-129.415.amzn2.x86_64 containerd://1.6.6
57-
ip-10-0-12-188.us-west-2.compute.internal Ready <none> 4m57s v1.24.7-eks-fb459a0 2600:1f13:6c4:a705:a929:f8d4:9350:1b20 <none> Amazon Linux 2 5.4.226-129.415.amzn2.x86_64 containerd://1.6.6
58-
```
34+
```text
35+
# Output should look like below
36+
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
37+
ip-10-0-10-183.us-west-2.compute.internal Ready <none> 4m57s v1.24.7-eks-fb459a0 2600:1f13:6c4:a703:ecf8:3ac1:76b0:9303 <none> Amazon Linux 2 5.4.226-129.415.amzn2.x86_64 containerd://1.6.6
38+
ip-10-0-12-188.us-west-2.compute.internal Ready <none> 4m57s v1.24.7-eks-fb459a0 2600:1f13:6c4:a705:a929:f8d4:9350:1b20 <none> Amazon Linux 2 5.4.226-129.415.amzn2.x86_64 containerd://1.6.6
39+
```
5940

6041
## Destroy
6142

62-
To teardown and remove the resources created in this example:
63-
64-
```sh
65-
terraform destroy -auto-approve
66-
```
43+
{%
44+
include-markdown "../../docs/_partials/destroy.md"
45+
%}

‎patterns/istio-multi-cluster/README.md

+2-9
Original file line numberDiff line numberDiff line change
@@ -1,14 +1,7 @@
11
# Amazon EKS Multi-Cluster w/ Istio
22

3-
This example shows how to provision 2 Amazon EKS clusters with Istio setup on each of them.
4-
The Istio will be set-up to operate in a [Multi-Primary](https://istio.io/latest/docs/setup/install/multicluster/multi-primary/) way where services are shared across clusters.
5-
6-
* Deploy a VPC with additional security groups to allow cross-cluster communication and communication from nodes to the other cluster API Server endpoint
7-
* Deploy 2 EKS Cluster with one managed node group in an VPC
8-
* Add node_security_group rules for port access required for Istio communication
9-
* Install Istio using Helm resources in Terraform
10-
* Install Istio Ingress Gateway using Helm resources in Terraform
11-
* Deploy/Validate Istio communication using sample application
3+
This pattern demonstrates 2 Amazon EKS clusters configured with Istio.
4+
Istio will be set up to operate in a [Multi-Primary](https://istio.io/latest/docs/setup/install/multicluster/multi-primary/) configuration, where services are shared across clusters.
125

136
Refer to the [documentation](https://istio.io/latest/docs/concepts/) for `Istio` concepts.
147

‎patterns/istio/README.md

+38-86
Original file line numberDiff line numberDiff line change
@@ -6,42 +6,17 @@ This example shows how to provision an EKS cluster with Istio.
66
* Add node_security_group rules for port access required for Istio communication
77
* Install Istio using Helm resources in Terraform
88
* Install Istio Ingress Gateway using Helm resources in Terraform
9-
* This step deploys a Service of type `LoadBalancer` that creates an AWS Network Load Balancer.
9+
    * This step deploys a Service of type `LoadBalancer` that creates an AWS Network Load Balancer; see the sketch after this list for retrieving its hostname.
1010
* Deploy/Validate Istio communication using sample application
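The ingress gateway's Network Load Balancer hostname can be looked up from the ingress gateway Service once the pattern is deployed; a minimal sketch, assuming the `istio-ingress` Service and namespace shown later in this pattern:

```sh
# Retrieve the hostname of the NLB created for the Istio ingress gateway
kubectl get svc istio-ingress -n istio-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```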
1111

1212
Refer to the [documentation](https://istio.io/latest/docs/concepts/) on Istio
1313
concepts.
1414

15-
## Prerequisites:
16-
17-
Ensure that you have the following tools installed locally:
18-
19-
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
20-
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
21-
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
22-
4. [helm](https://helm.sh/docs/intro/install/)
23-
2415
## Deploy
2516

26-
To provision this example:
27-
28-
```sh
29-
terraform init
30-
terraform apply -target=module.vpc -target=module.eks
31-
terraform apply
32-
```
33-
34-
Enter `yes` at command prompt to apply
35-
36-
### Update local kubeconfig
37-
38-
Run the following command to update your local `kubeconfig` with latest cluster:
39-
40-
```sh
41-
aws eks --region <REGION> update-kubeconfig --name <CLUSTER_NAME>
42-
```
17+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
4318

44-
### Istio Observability Add-ons
19+
### Observability Add-ons
4520

4621
Use the following code snippet to add the Istio Observability Add-ons to the EKS
4722
cluster where Istio is deployed.
@@ -56,33 +31,14 @@ done
5631

5732
## Validate
5833

59-
The following command will update the `kubeconfig` on your local machine and
60-
allow you to interact with your EKS Cluster using `kubectl` to validate the
61-
deployment.
62-
63-
1. List all the worker nodes
64-
65-
```sh
66-
kubectl get nodes
67-
```
34+
1. List out all pods and services in the `istio-system` and `istio-ingress` namespaces:
6835

69-
Output should be similar to:
70-
```
71-
NAME STATUS ROLES AGE VERSION
72-
ip-10-0-2-141.us-west-2.compute.internal Ready <none> 9m36s v1.27.3-eks-a5565ad
73-
ip-10-0-30-86.us-west-2.compute.internal Ready <none> 9m37s v1.27.3-eks-a5565ad
74-
ip-10-0-47-71.us-west-2.compute.internal Ready <none> 9m21s v1.27.3-eks-a5565ad
36+
```sh
37+
kubectl get pods,svc -n istio-system
38+
kubectl get pods,svc -n istio-ingress
7539
```
7640

77-
2. List out all pods and services in the `istio-system` namespace:
78-
79-
```sh
80-
kubectl get pods,svc -n istio-system
81-
kubectl get pods,svc -n istio-ingress
82-
```
83-
84-
Output should be similar to:
85-
```
41+
```text
8642
NAME READY STATUS RESTARTS AGE
8743
pod/grafana-7d4f5589fb-4xj9m 1/1 Running 0 4m14s
8844
pod/istiod-ff577f8b8-c8ssk 1/1 Running 0 4m40s
@@ -106,23 +62,26 @@ deployment.
10662
service/istio-ingress LoadBalancer 172.20.104.27 k8s-istioing-istioing-844c89b6c2-875b8c9a4b4e9365.elb.us-west-2.amazonaws.com 15021:32760/TCP,80:31496/TCP,443:32534/TCP 4m28s
10763
```
10864

109-
3. Verify all the Helm releases installed in the `istio-system` and
110-
`istio-ingress` namespaces:
65+
2. Verify all the Helm releases installed in the `istio-system` and `istio-ingress` namespaces:
11166

112-
```sh
113-
helm list -n istio-system
114-
helm list -n istio-ingress
115-
```
67+
```sh
68+
helm list -n istio-system
69+
```
11670

117-
Output should be similar to:
118-
```
119-
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
120-
istio-base istio-system 1 2023-07-19 11:05:41.599921 -0700 PDT deployed base-1.18.1 1.18.1
121-
istiod istio-system 1 2023-07-19 11:05:48.087616 -0700 PDT deployed istiod-1.18.1 1.18.1
71+
```text
72+
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
73+
istio-base istio-system 1 2023-07-19 11:05:41.599921 -0700 PDT deployed base-1.18.1 1.18.1
74+
istiod istio-system 1 2023-07-19 11:05:48.087616 -0700 PDT deployed istiod-1.18.1 1.18.1
75+
```
76+
77+
```sh
78+
helm list -n istio-ingress
79+
```
12280

123-
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
124-
istio-ingress istio-ingress 1 2023-07-19 11:06:03.41609 -0700 PDT deployed gateway-1.18.1 1.18.1
125-
```
81+
```text
82+
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
83+
istio-ingress istio-ingress 1 2023-07-19 11:06:03.41609 -0700 PDT deployed gateway-1.18.1 1.18.1
84+
```
12685

12786
### Observability Add-ons
12887

@@ -131,7 +90,6 @@ and accessing each of the service endpoints using this URL of the form
13190
[http://localhost:\<port>](http://localhost:<port>) where `<port>` is one of the
13291
port numbers for the corresponding service.
13392

134-
13593
```sh
13694
# Visualize Istio Mesh console using Kiali
13795
kubectl port-forward svc/kiali 20001:20001 -n istio-system
@@ -146,7 +104,7 @@ kubectl port-forward svc/grafana 3000:3000 -n istio-system
146104
kubectl port-forward svc/jaeger 16686:16686 -n istio-system
147105
```
148106

149-
## Test
107+
### Example
150108

151109
1. Create the `sample` namespace and enable the sidecar injection on it
152110

@@ -155,8 +113,7 @@ kubectl port-forward svc/jaeger 16686:16686 -n istio-system
155113
kubectl label namespace sample istio-injection=enabled
156114
```
157115

158-
Output should be:
159-
```
116+
```text
160117
namespace/sample created
161118
namespace/sample labeled
162119
```
@@ -212,8 +169,7 @@ kubectl port-forward svc/jaeger 16686:16686 -n istio-system
212169
kubectl apply -f helloworld.yaml -n sample
213170
```
214171
215-
Output should be:
216-
```
172+
```text
217173
service/helloworld created
218174
deployment.apps/helloworld-v1 created
219175
```
@@ -275,8 +231,7 @@ kubectl port-forward svc/jaeger 16686:16686 -n istio-system
275231
kubectl apply -f sleep.yaml -n sample
276232
```
277233
278-
Output should be:
279-
```
234+
```text
280235
serviceaccount/sleep created
281236
service/sleep created
282237
deployment.apps/sleep created
@@ -287,23 +242,23 @@ kubectl port-forward svc/jaeger 16686:16686 -n istio-system
287242
```sh
288243
kubectl get pods -n sample
289244
```
290-
Output should be similar to:
291-
```
245+
246+
```text
292247
NAME READY STATUS RESTARTS AGE
293248
helloworld-v1-b6c45f55-bx2xk 2/2 Running 0 50s
294249
sleep-9454cc476-p2zxr 2/2 Running 0 15s
295250
```
296-
5. Connect to `helloworld` app from `sleep` app and verify if the connection
297-
uses envoy proxy
251+
252+
5. Connect to the `helloworld` app from the `sleep` app and verify that the connection uses the Envoy proxy
298253
299254
```sh
300255
kubectl exec -n sample -c sleep \
301256
"$(kubectl get pod -n sample -l \
302257
app=sleep -o jsonpath='{.items[0].metadata.name}')" \
303258
-- curl -v helloworld.sample:5000/hello
304259
```
305-
Output should be similar to:
306-
```
260+
261+
```text
307262
* processing: helloworld.sample:5000/hello
308263
% Total % Received % Xferd Average Speed Time Time Time Current
309264
Dload Upload Total Spent Left Speed
@@ -329,9 +284,6 @@ uses envoy proxy
329284
330285
## Destroy
331286
332-
To teardown and remove the resources created in this example:
333-
334-
```sh
335-
terraform destroy -target="helm_release.istio_ingress" --auto-approve
336-
terraform destroy --auto-approve
337-
```
287+
{%
288+
include-markdown "../../docs/_partials/destroy.md"
289+
%}

‎patterns/istio/main.tf

+52-61
Original file line numberDiff line numberDiff line change
@@ -108,78 +108,69 @@ module "eks_blueprints_addons" {
108108
cluster_version = module.eks.cluster_version
109109
oidc_provider_arn = module.eks.oidc_provider_arn
110110

111-
# This is required to expose Istio Ingress Gateway
112-
enable_aws_load_balancer_controller = true
113-
114-
tags = local.tags
115-
116111
eks_addons = {
117112
coredns = {}
118113
vpc-cni = {}
119114
kube-proxy = {}
120115
}
121-
}
122116

123-
################################################################################
124-
# Istio
125-
################################################################################
126-
127-
resource "helm_release" "istio_base" {
128-
repository = local.istio_chart_url
129-
chart = "base"
130-
name = "istio-base"
131-
namespace = "istio-system"
132-
version = local.istio_chart_version
133-
wait = false
117+
# This is required to expose Istio Ingress Gateway
118+
enable_aws_load_balancer_controller = true
134119

135-
create_namespace = true
120+
helm_releases = {
121+
istio-base = {
122+
chart = "base"
123+
version = local.istio_chart_version
124+
repository = local.istio_chart_url
125+
name = "istio-base"
126+
namespace = "istio-system"
127+
create_namespace = true
128+
}
136129

137-
depends_on = [
138-
module.eks_blueprints_addons
139-
]
140-
}
130+
istiod = {
131+
chart = "istiod"
132+
version = local.istio_chart_version
133+
repository = local.istio_chart_url
134+
name = "istiod"
135+
namespace = "istio-system"
136+
create_namespace = false
137+
138+
set = [
139+
{
140+
name = "meshConfig.accessLogFile"
141+
value = "/dev/stdout"
142+
}
143+
]
144+
}
141145

142-
resource "helm_release" "istiod" {
143-
repository = local.istio_chart_url
144-
chart = "istiod"
145-
name = "istiod"
146-
namespace = helm_release.istio_base.metadata[0].namespace
147-
version = local.istio_chart_version
148-
wait = false
149-
150-
set {
151-
name = "meshConfig.accessLogFile"
152-
value = "/dev/stdout"
146+
istio-ingress = {
147+
chart = "gateway"
148+
version = local.istio_chart_version
149+
repository = local.istio_chart_url
150+
name = "istio-ingress"
151+
namespace = "istio-ingress" # per https://github.com/istio/istio/blob/master/manifests/charts/gateways/istio-ingress/values.yaml#L2
152+
create_namespace = true
153+
154+
values = [
155+
yamlencode(
156+
{
157+
labels = {
158+
istio = "ingressgateway"
159+
}
160+
service = {
161+
annotations = {
162+
"service.beta.kubernetes.io/aws-load-balancer-type" = "nlb"
163+
"service.beta.kubernetes.io/aws-load-balancer-scheme" = "internet-facing"
164+
"service.beta.kubernetes.io/aws-load-balancer-attributes" = "load_balancing.cross_zone.enabled=true"
165+
}
166+
}
167+
}
168+
)
169+
]
170+
}
153171
}
154-
}
155172

156-
resource "helm_release" "istio_ingress" {
157-
repository = local.istio_chart_url
158-
chart = "gateway"
159-
name = "istio-ingress"
160-
namespace = "istio-ingress" # per https://github.com/istio/istio/blob/master/manifests/charts/gateways/istio-ingress/values.yaml#L2
161-
version = local.istio_chart_version
162-
wait = false
163-
164-
create_namespace = true
165-
166-
values = [
167-
yamlencode(
168-
{
169-
labels = {
170-
istio = "ingressgateway"
171-
}
172-
service = {
173-
annotations = {
174-
"service.beta.kubernetes.io/aws-load-balancer-type" = "nlb"
175-
"service.beta.kubernetes.io/aws-load-balancer-scheme" = "internet-facing"
176-
"service.beta.kubernetes.io/aws-load-balancer-attributes" = "load_balancing.cross_zone.enabled=true"
177-
}
178-
}
179-
}
180-
)
181-
]
182-
depends_on = [helm_release.istiod]
173+
tags = local.tags
183174
}
184175

185176
################################################################################

‎patterns/karpenter/README.md

+10-29
Original file line numberDiff line numberDiff line change
@@ -1,43 +1,24 @@
11
# Karpenter
22

3-
This example demonstrates how to provision a Karpenter on a serverless cluster (serverless data plane) using Fargate Profiles.
4-
5-
This example solution provides:
6-
7-
- Amazon EKS Cluster (control plane)
8-
- Amazon EKS Fargate Profiles for the `kube-system` namespace which is used by the `coredns`, `vpc-cni`, and `kube-proxy` addons, as well as profile that will match on the `karpenter` namespace which will be used by Karpenter.
9-
- Amazon EKS managed addons `coredns`, `vpc-cni` and `kube-proxy`
10-
`coredns` has been patched to run on Fargate, and `vpc-cni` has been configured to use prefix delegation to better support the max pods setting of 110 on the Karpenter provisioner
11-
- A sample deployment is provided to demonstrates scaling a deployment to view how Karpenter responds to provision, and de-provision, resources on-demand
12-
13-
## Prerequisites:
14-
15-
Ensure that you have the following tools installed locally:
16-
17-
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
18-
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
19-
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
3+
This pattern demonstrates how to provision Karpenter on a serverless cluster (serverless data plane) using Fargate Profiles.
204

215
## Deploy
226

23-
To provision this example:
7+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
248

25-
```sh
26-
terraform init
27-
terraform apply -target module.vpc
28-
terraform apply -target module.eks
29-
terraform apply
30-
```
9+
## Validate
3110

32-
Enter `yes` at command prompt to apply
11+
!!! danger "TODO"
12+
Add in validation steps
3313

3414
## Destroy
3515

36-
To teardown and remove the resources created in this example:
16+
Scale down the deployment to de-provision the Karpenter-created resources first:
3717

3818
```sh
3919
kubectl delete deployment inflate
40-
terraform destroy -target="module.eks_blueprints_addons" -auto-approve
41-
terraform destroy -target="module.eks" -auto-approve
42-
terraform destroy -auto-approve
4320
```
21+
22+
{%
23+
include-markdown "../../docs/_partials/destroy.md"
24+
%}
+7-31
Original file line numberDiff line numberDiff line change
@@ -1,47 +1,23 @@
11
# Multi-Tenancy w/ Teams
22

3-
This example demonstrates how to provision and configure a multi-tenancy Amazon EKS cluster with safeguards for resource consumption and namespace isolation.
3+
This pattern demonstrates how to provision and configure a multi-tenancy Amazon EKS cluster with safeguards for resource consumption and namespace isolation.
44

55
This example solution provides:
66

7-
- Amazon EKS Cluster (control plane)
8-
- Amazon EKS managed nodegroup (data plane)
97
- Two development teams - `team-red` and `team-blue` - isolated to their respective namespaces
108
- An admin team with privileged access to the cluster (`team-admin`)
119

12-
## Prerequisites:
13-
14-
Ensure that you have the following tools installed locally:
15-
16-
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
17-
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
18-
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
19-
2010
## Deploy
2111

22-
To provision this example:
23-
24-
```sh
25-
terraform init
26-
terraform apply
27-
```
28-
29-
Enter `yes` at command prompt to apply
12+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
3013

3114
## Validate
3215

33-
The following command will update the `kubeconfig` on your local machine and allow you to interact with your EKS Cluster using `kubectl`.
34-
35-
1. Run `update-kubeconfig` command:
36-
37-
```sh
38-
aws eks --region <REGION> update-kubeconfig --name <CLUSTER_NAME>
39-
```
16+
!!! danger "TODO"
17+
Add in validation steps
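Until the validation steps are written, the namespace isolation and resource safeguards can be spot-checked with a minimal sketch; the `team-red` and `team-blue` namespace names come from the description above, while the exact quota objects created by the pattern are an assumption:

```sh
# Confirm the team namespaces exist
kubectl get namespaces team-red team-blue

# Inspect the resource consumption safeguards applied to a team namespace
kubectl get resourcequota,limitrange -n team-red
```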
4018

4119
## Destroy
4220

43-
To teardown and remove the resources created in this example:
44-
45-
```sh
46-
terraform destroy -auto-approve
47-
```
21+
{%
22+
include-markdown "../../docs/_partials/destroy.md"
23+
%}

‎patterns/private-public-ingress/README.md

+10-23
Original file line numberDiff line numberDiff line change
@@ -3,38 +3,25 @@
33
This example demonstrates how to provision an Amazon EKS cluster with two ingress-nginx controllers: one to expose applications publicly and the other to expose applications internally. It also assigns security groups to the Network Load Balancers used to expose the internal and external ingress controllers.
44

55
This solution:
6-
* Deploys Amazon EKS, with 1 Managed Node Group using the Bottlerocket Amazon EKS Optimized AMI spread accross 3 availability zones.
7-
* Installs the AWS Load Balancer controller for creating Network Load Balancers and Application Load Balancers. This is the recommended approach instead of the in-tree AWS cloud provider load balancer controller.
8-
* Installs an ingress-nginx controller for public traffic
9-
* Installs an ingress-nginx controller for internal traffic
6+
7+
- Installs an ingress-nginx controller for public traffic
8+
- Installs an ingress-nginx controller for internal traffic
109

1110
To expose your application services via an `Ingress` resource with this solution, set the respective `ingressClassName` to either `ingress-nginx-external` or `ingress-nginx-internal`.
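For example, a minimal `Ingress` sketch using the external class (the `example` Service name and port are placeholders):

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: ingress-nginx-external # use ingress-nginx-internal to expose internally
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example # placeholder Service
                port:
                  number: 80
EOF
```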
1211

1312
Refer to the [documentation](https://kubernetes-sigs.github.io/aws-load-balancer-controller) for `AWS Load Balancer controller` configuration options.
1413

15-
## Prerequisites:
16-
17-
Ensure that you have the following tools installed locally:
18-
19-
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
20-
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
21-
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
22-
2314
## Deploy
2415

25-
To provision this example:
16+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
2617

27-
```sh
28-
terraform init
29-
terraform apply
30-
```
18+
## Validate
3119

32-
Enter `yes` at command prompt to apply
20+
!!! danger "TODO"
21+
Add in validation steps
3322

3423
## Destroy
3524

36-
To teardown and remove the resources created in this example:
37-
38-
```sh
39-
terraform destroy -auto-approve
40-
```
25+
{%
26+
include-markdown "../../docs/_partials/destroy.md"
27+
%}

‎patterns/privatelink-access/README.md

+29-55
Original file line numberDiff line numberDiff line change
@@ -1,38 +1,13 @@
11
# Private EKS cluster access via AWS PrivateLink
22

3-
This example demonstrates how to access a private EKS cluster using AWS PrivateLink.
3+
This pattern demonstrates how to access a private EKS cluster using AWS PrivateLink.
44

55
Refer to the [documentation](https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html)
66
for further details on `AWS PrivateLink`.
77

8-
## Prerequisites:
9-
10-
Ensure that you have the following tools installed locally:
11-
12-
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
13-
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
14-
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
15-
168
## Deploy
179

18-
To provision this example, first deploy the Lambda function that responds to
19-
`CreateNetworkInterface` API calls. This needs to exist before the cluster is
20-
created so that it can respond to the ENIs created by the EKS control plane:
21-
22-
```sh
23-
terraform init
24-
terraform apply -target=module.eventbridge -target=module.nlb
25-
```
26-
27-
Enter `yes` at command prompt to apply
28-
29-
Next, deploy the remaining resources:
30-
31-
```sh
32-
terraform apply
33-
```
34-
35-
Enter `yes` at command prompt to apply
10+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
3611

3712
## Validate
3813

@@ -48,17 +23,17 @@ be `ok`.
4823
COMMAND="curl -ks https://9A85B21811733524E3ABCDFEA8714642.gr7.us-west-2.eks.amazonaws.com/readyz"
4924

5025
COMMAND_ID=$(aws ssm send-command --region us-west-2 \
51-
--document-name "AWS-RunShellScript" \
52-
--parameters "commands=[$COMMAND]" \
53-
--targets "Key=instanceids,Values=i-0a45eff73ba408575" \
54-
--query 'Command.CommandId' \
55-
--output text)
26+
--document-name "AWS-RunShellScript" \
27+
--parameters "commands=[$COMMAND]" \
28+
--targets "Key=instanceids,Values=i-0a45eff73ba408575" \
29+
--query 'Command.CommandId' \
30+
--output text)
5631

5732
aws ssm get-command-invocation --region us-west-2 \
58-
--command-id $COMMAND_ID \
59-
--instance-id i-0a45eff73ba408575 \
60-
--query 'StandardOutputContent' \
61-
--output text
33+
--command-id $COMMAND_ID \
34+
--instance-id i-0a45eff73ba408575 \
35+
--query 'StandardOutputContent' \
36+
--output text
6237
```
6338

6439
### Cluster Access
@@ -73,10 +48,11 @@ add additional entries to the ConfigMap; we can only access the cluster from
7348
within the private network of the cluster's VPC or from the client VPC using AWS
7449
PrivateLink access.
7550

76-
> :warning: The "client" EC2 instance provided and copying of AWS credentials to
77-
that instance are merely for demonstration purposes only. Please consider
78-
alternate methods of network access such as AWS Client VPN to provide more
79-
secure access.
51+
!!! info
52+
The "client" EC2 instance provided and copying of AWS credentials to
53+
    that instance are for demonstration purposes only. Please consider
54+
alternate methods of network access such as AWS Client VPN to provide more
55+
secure access.
8056

8157
Perform the following steps to access the cluster with `kubectl` from the
8258
provided "client" EC2 instance.
@@ -100,8 +76,9 @@ instance.
10076
3. Once logged in, export the following environment variables from the output
10177
of step #1:
10278

103-
> :exclamation: The session credentials are only valid for 1 hour; you can
104-
adjust the session duration in the command provided in step #1
79+
!!! warning
80+
The session credentials are only valid for 1 hour; you can
81+
adjust the session duration in the command provided in step #1
10582

10683
```sh
10784
export AWS_ACCESS_KEY_ID=XXXX
@@ -122,19 +99,16 @@ access to the cluster:
12299
kubectl get pods -A
123100
```
124101

125-
The test succeeded if you see an output like the one shown below:
126-
127-
NAMESPACE NAME READY STATUS RESTARTS AGE
128-
kube-system aws-node-4f8g8 1/1 Running 0 1m
129-
kube-system coredns-6ff9c46cd8-59sqp 1/1 Running 0 1m
130-
kube-system coredns-6ff9c46cd8-svnpb 1/1 Running 0 2m
131-
kube-system kube-proxy-mm2zc 1/1 Running 0 1m
132-
102+
```text
103+
NAMESPACE NAME READY STATUS RESTARTS AGE
104+
kube-system aws-node-4f8g8 1/1 Running 0 1m
105+
kube-system coredns-6ff9c46cd8-59sqp 1/1 Running 0 1m
106+
kube-system coredns-6ff9c46cd8-svnpb 1/1 Running 0 2m
107+
kube-system kube-proxy-mm2zc 1/1 Running 0 1m
108+
```
133109

134110
## Destroy
135111

136-
Run the following command to destroy all the resources created by Terraform:
137-
138-
```sh
139-
terraform destroy --auto-approve
140-
```
112+
{%
113+
include-markdown "../../docs/_partials/destroy.md"
114+
%}

‎patterns/single-sign-on/README.md

-39
This file was deleted.

‎patterns/single-sign-on/iam-identity-center/README.md ‎patterns/sso-iam-identity-center/README.md

+5-29
Original file line numberDiff line numberDiff line change
@@ -2,31 +2,9 @@
22

33
This example demonstrates how to deploy an Amazon EKS cluster on the AWS Cloud, integrated with IAM Identity Center (formerly AWS SSO) as the Identity Provider (IdP) for Single Sign-On (SSO) authentication. Authorization is configured using Kubernetes Role-Based Access Control (RBAC).
44

5-
## Prerequisites:
6-
7-
Ensure that you have the following tools installed locally:
8-
9-
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
10-
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
11-
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
12-
13-
Also make sure you have enabled the following AWS resource:
14-
15-
1. Enable [IAM Identity Center](https://console.aws.amazon.com/singlesignon/home/)
16-
This will also create an AWS Organization in your account.
17-
185
## Deploy
196

20-
To provision these examples, run the following commands:
21-
22-
```sh
23-
terraform init
24-
terraform apply -target module.vpc
25-
terraform apply -target module.eks
26-
terraform apply
27-
```
28-
29-
Enter `yes` at command prompt to apply
7+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
308

319
## Validate
3210

@@ -99,7 +77,7 @@ EOT
9977
With the `kubeconfig` configured, you'll be able to run `kubectl` commands in your Amazon EKS cluster as the impersonated user. The read-only user has a `cluster-viewer` Kubernetes role bound to its group, whereas the admin user has the `admin` Kubernetes role bound to its group.
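As a quick, hedged check of those bindings (standard `kubectl` commands; the expected answers assume `cluster-viewer` is read-only):

```sh
# With the read-only user's kubeconfig, reads should be allowed...
kubectl auth can-i list pods --all-namespaces

# ...while writes should be denied
kubectl auth can-i create deployments -n kube-system
```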
10078

10179
```
102-
kubectl get pods -A
80+
kubectl get pods -A
10381
NAMESPACE NAME READY STATUS RESTARTS AGE
10482
amazon-guardduty aws-guardduty-agent-bl2v2 1/1 Running 0 3h54m
10583
amazon-guardduty aws-guardduty-agent-sqvcx 1/1 Running 0 3h54m
@@ -122,8 +100,6 @@ configure_kubectl = "aws eks --region us-west-2 update-kubeconfig --name iam-ide
122100

123101
## Destroy
124102

125-
To teardown and remove the resources created in this example:
126-
127-
```sh
128-
terraform destroy -auto-approve
129-
```
103+
{%
104+
include-markdown "../../docs/_partials/destroy.md"
105+
%}

‎patterns/single-sign-on/okta/README.md ‎patterns/sso-okta/README.md

+5-41
Original file line numberDiff line numberDiff line change
@@ -2,43 +2,9 @@
22

33
This example demonstrates how to deploy an Amazon EKS cluster on the AWS Cloud, integrated with Okta as the Identity Provider (IdP) for Single Sign-On (SSO) authentication. Authorization is configured using Kubernetes Role-Based Access Control (RBAC).
44

5-
## Prerequisites:
6-
7-
Ensure that you have the following tools installed locally:
8-
9-
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
10-
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
11-
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
12-
4. [kubelogin](https://github.com/int128/kubelogin)
13-
14-
Also make sure you have enabled the following Okta resource:
15-
16-
1. [Okta Account](https://okta.com).
17-
2. [Okta Organization](https://developer.okta.com/docs/concepts/okta-organizations/)
18-
3. [Okta API Token](https://developer.okta.com/docs/guides/create-an-api-token/main/)
19-
205
## Deploy
216

22-
To provision this example, populate the Okta provider credentials, in the `okta.tf` file.
23-
24-
```
25-
provider "okta" {
26-
org_name = "dev-<ORG_ID>
27-
base_url = "okta.com"
28-
api_token = "<OKTA_APU_TOKEN>"
29-
}
30-
```
31-
32-
Then run the following commands:
33-
34-
```sh
35-
terraform init
36-
terraform apply -target module.vpc
37-
terraform apply -target module.eks
38-
terraform apply
39-
```
40-
41-
Enter `yes` at command prompt to apply
7+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
428

439
## Validate
4410

@@ -65,7 +31,7 @@ With the `kubeconfig` configured, you'll be able to run `kubectl` commands in yo
6531
The read-only user has a `cluster-viewer` Kubernetes role bound to its group, whereas the admin user has the `admin` Kubernetes role bound to its group.
6632

6733
```
68-
kubectl get pods -A
34+
kubectl get pods -A
6935
NAMESPACE NAME READY STATUS RESTARTS AGE
7036
amazon-guardduty aws-guardduty-agent-bl2v2 1/1 Running 0 3h54m
7137
amazon-guardduty aws-guardduty-agent-sqvcx 1/1 Running 0 3h54m
@@ -94,8 +60,6 @@ okta_login = "kubectl oidc-login setup --oidc-issuer-url=https://dev-ORGID.okta.
9460

9561
## Destroy
9662

97-
To teardown and remove the resources created in this example:
98-
99-
```sh
100-
terraform destroy -auto-approve
101-
```
63+
{%
64+
include-markdown "../../docs/_partials/destroy.md"
65+
%}
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.

‎patterns/stateful/README.md

+86-99
Original file line numberDiff line numberDiff line change
@@ -36,114 +36,101 @@ In addition, the following properties are configured on the nodegroup volumes:
3636
- EBS encryption using a customer managed key (CMK)
3737
- Configuring the volumes to use GP3 storage
3838

39-
## Prerequisites:
40-
41-
Ensure that you have the following tools installed locally:
42-
43-
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
44-
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
45-
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
46-
4739
## Deploy
4840

49-
To provision this example:
50-
51-
```sh
52-
terraform init
53-
terraform apply
54-
```
55-
56-
Enter `yes` at command prompt to apply
41+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
5742

5843
## Validate
5944

6045
For validating `velero` see [here](https://github.com/aws-ia/terraform-aws-eks-blueprints/tree/main/modules/kubernetes-addons/velero#validate)
6146

6247
Use `kubectl` to interact with your EKS cluster and validate the deployment.
6348

64-
1. Run `update-kubeconfig` command:
65-
66-
```sh
67-
aws eks --region <REGION> update-kubeconfig --name <CLUSTER_NAME>
68-
```
69-
70-
2. List the storage classes to view that `efs`, `gp2`, and `gp3` classes are present and `gp3` is the default storage class
71-
72-
```sh
73-
kubectl get storageclasses
74-
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
75-
efs efs.csi.aws.com Delete Immediate true 2m19s
76-
gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 15m
77-
gp3 (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 2m19s
78-
```
79-
80-
3. From an instance launched with instance store(s), check that the instance store has been mounted correctly. To verify, first install the `nvme-cli` tool and then use it to verify. To verify, you can access the instance using SSM Session Manager:
81-
82-
```sh
83-
# Install the nvme-cli tool
84-
sudo yum install nvme-cli -y
85-
86-
# Show NVMe volumes attached
87-
sudo nvme list
88-
89-
# Output should look like below - notice the model is `EC2 NVMe Instance Storage` for the instance store
90-
Node SN Model Namespace Usage Format FW Rev
91-
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
92-
/dev/nvme0n1 vol0546d3c3b0af0bf6d Amazon Elastic Block Store 1 25.77 GB / 25.77 GB 512 B + 0 B 1.0
93-
/dev/nvme1n1 AWS24BBF51AF55097008 Amazon EC2 NVMe Instance Storage 1 75.00 GB / 75.00 GB 512 B + 0 B 0
94-
95-
# Show disks, their partitions and mounts
96-
sudo lsblk
97-
98-
# Output should look like below
99-
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
100-
nvme0n1 259:0 0 24G 0 disk
101-
├─nvme0n1p1 259:2 0 24G 0 part /
102-
└─nvme0n1p128 259:3 0 1M 0 part
103-
nvme1n1 259:1 0 69.9G 0 disk /local1 # <--- this is the instance store
104-
```
105-
106-
4. From an instance launched with multiple volume(s), check that the instance store has been mounted correctly. To verify, first install the `nvme-cli` tool and then use it to verify. To verify, you can access the instance using SSM Session Manager:
107-
108-
```sh
109-
# Install the nvme-cli tool
110-
sudo yum install nvme-cli -y
111-
112-
# Show NVMe volumes attached
113-
sudo nvme list
114-
115-
# Output should look like below, where /dev/nvme0n1 is the root volume and /dev/nvme1n1 is the second, additional volume
116-
Node SN Model Namespace Usage Format FW Rev
117-
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
118-
/dev/nvme0n1 vol0cd37dab9e4a5c184 Amazon Elastic Block Store 1 68.72 GB / 68.72 GB 512 B + 0 B 1.0
119-
/dev/nvme1n1 vol0ad3629c159ee869c Amazon Elastic Block Store 1 25.77 GB / 25.77 GB 512 B + 0 B 1.0
120-
```
121-
122-
5. From the same instance used in step 4, check that the containerd directories are using the second `/dev/nvme1n1` volume:
123-
124-
```sh
125-
df /var/lib/containerd/
126-
127-
# Output should look like below, which shows the directory on the /dev/nvme1n1 volume and NOT on /dev/nvme0n1 (root volume)
128-
Filesystem 1K-blocks Used Available Use% Mounted on
129-
/dev/nvme1n1 24594768 2886716 20433380 13% /var/lib/containerd
130-
```
131-
132-
```sh
133-
df /run/containerd/
134-
135-
# Output should look like below, which shows the directory on the /dev/nvme1n1 volume and NOT on /dev/nvme0n1 (root volume)
136-
Filesystem 1K-blocks Used Available Use% Mounted on
137-
/dev/nvme1n1 24594768 2886716 20433380 13% /run/containerd
138-
```
49+
1. List the storage classes to verify that the `efs`, `gp2`, and `gp3` classes are present and that `gp3` is the default storage class
50+
51+
```sh
52+
kubectl get storageclasses
53+
```
54+
55+
```text
56+
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
57+
efs efs.csi.aws.com Delete Immediate true 2m19s
58+
gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 15m
59+
gp3 (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 2m19s
60+
```
61+
62+
2. From an instance launched with instance store(s), check that the instance store has been mounted correctly. To verify, access the instance using SSM Session Manager, install the `nvme-cli` tool, and use it to list the attached NVMe volumes:
63+
64+
```sh
65+
# Install the nvme-cli tool
66+
sudo yum install nvme-cli -y
67+
68+
# Show NVMe volumes attached
69+
sudo nvme list
70+
```
71+
72+
```text
73+
# Notice the model is `EC2 NVMe Instance Storage` for the instance store
74+
Node SN Model Namespace Usage Format FW Rev
75+
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
76+
/dev/nvme0n1 vol0546d3c3b0af0bf6d Amazon Elastic Block Store 1 25.77 GB / 25.77 GB 512 B + 0 B 1.0
77+
/dev/nvme1n1 AWS24BBF51AF55097008 Amazon EC2 NVMe Instance Storage 1 75.00 GB / 75.00 GB 512 B + 0 B 0
78+
79+
# Show disks, their partitions and mounts
80+
sudo lsblk
81+
82+
# Output should look like below
83+
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
84+
nvme0n1 259:0 0 24G 0 disk
85+
├─nvme0n1p1 259:2 0 24G 0 part /
86+
└─nvme0n1p128 259:3 0 1M 0 part
87+
nvme1n1 259:1 0 69.9G 0 disk /local1 # <--- this is the instance store
88+
```
89+
90+
3. From an instance launched with multiple volume(s), check that the additional volume has been attached correctly. To verify, access the instance using SSM Session Manager, install the `nvme-cli` tool, and use it to list the attached NVMe volumes:
91+
92+
```sh
93+
# Install the nvme-cli tool
94+
sudo yum install nvme-cli -y
95+
96+
# Show NVMe volumes attached
97+
sudo nvme list
98+
```
99+
100+
```text
101+
# /dev/nvme0n1 is the root volume and /dev/nvme1n1 is the second, additional volume
102+
Node SN Model Namespace Usage Format FW Rev
103+
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
104+
/dev/nvme0n1 vol0cd37dab9e4a5c184 Amazon Elastic Block Store 1 68.72 GB / 68.72 GB 512 B + 0 B 1.0
105+
/dev/nvme1n1 vol0ad3629c159ee869c Amazon Elastic Block Store 1 25.77 GB / 25.77 GB 512 B + 0 B 1.0
106+
```
107+
108+
4. From the same instance used in step 3, check that the containerd directories are using the second `/dev/nvme1n1` volume:
109+
110+
```sh
111+
df /var/lib/containerd/
112+
```
113+
114+
```text
115+
# Output should look like below, which shows the directory on the
116+
# /dev/nvme1n1 volume and NOT on /dev/nvme0n1 (root volume)
117+
Filesystem 1K-blocks Used Available Use% Mounted on
118+
/dev/nvme1n1 24594768 2886716 20433380 13% /var/lib/containerd
119+
```
120+
121+
```sh
122+
df /run/containerd/
123+
```
124+
125+
```text
126+
# Output should look like below, which shows the directory on the
127+
# /dev/nvme1n1 volume and NOT on /dev/nvme0n1 (root volume)
128+
Filesystem 1K-blocks Used Available Use% Mounted on
129+
/dev/nvme1n1 24594768 2886716 20433380 13% /run/containerd
130+
```
139131

140132
## Destroy
141133

142-
To teardown and remove the resources created in this example:
143-
144-
```bash
145-
terraform destroy -target module.eks_blueprints_addons
146-
terraform destroy
147-
```
148-
149-
Enter `yes` at each command prompt to destroy
134+
{%
135+
include-markdown "../../docs/_partials/destroy.md"
136+
%}
+26-71
Original file line numberDiff line numberDiff line change
@@ -1,87 +1,42 @@
11
# TLS with AWS PCA Issuer
22

3-
This example deploys the following
4-
5-
- Basic EKS Cluster with VPC
6-
- Creates a new sample VPC, 3 Private Subnets and 3 Public Subnets
7-
- Creates Internet gateway for Public Subnets and NAT Gateway for Private Subnets
8-
- Enables cert-manager module
9-
- Enables cert-manager CSI driver module
10-
- Enables aws-privateca-issuer module
11-
- Creates AWS Certificate Manager Private Certificate Authority, enables and activates it
12-
- Creates the CRDs to fetch `tls.crt`, `tls.key` and `ca.crt` , which will be available as Kubernetes Secret. Now you may mount the secret in the application for end to end TLS.
13-
14-
## How to Deploy
15-
16-
## Prerequisites:
17-
18-
Ensure that you have the following tools installed locally:
19-
20-
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
21-
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
22-
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
3+
This pattern demonstrates how to enable TLS with AWS PCA issuer on an Amazon EKS cluster.
234

245
## Deploy
256

26-
To provision this example:
27-
28-
```sh
29-
terraform init
30-
terraform apply -target module.vpc
31-
terraform apply -target module.eks
32-
terraform apply
33-
34-
```
35-
36-
Enter `yes` at command prompt to apply
7+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
378

389
## Validate
3910

40-
The following command will update the `kubeconfig` on your local machine and allow you to interact with your EKS Cluster using `kubectl` to validate the CoreDNS deployment for Fargate.
41-
42-
1. Check the Terraform provided Output, to update your `kubeconfig`
43-
44-
```hcl
45-
Apply complete! Resources: 63 added, 0 changed, 0 destroyed.
46-
47-
Outputs:
48-
49-
configure_kubectl = "aws eks --region us-west-2 update-kubeconfig --name fully-private-cluster"
50-
```
51-
52-
2. Run `update-kubeconfig` command, using the Terraform provided Output, replace with your `$AWS_REGION` and your `$CLUSTER_NAME` variables.
53-
54-
```sh
55-
aws eks --region <$AWS_REGION> update-kubeconfig --name <$CLUSTER_NAME>
56-
```
57-
58-
3. List all the pods running in `aws-privateca-issuer` and `cert-manager` Namespace.
11+
1. List all the pods running in the `aws-privateca-issuer` and `cert-manager` namespaces.
5912

60-
```sh
61-
kubectl get pods -n aws-privateca-issuer
62-
kubectl get pods -n cert-manager
63-
```
13+
```sh
14+
kubectl get pods -n aws-privateca-issuer
15+
kubectl get pods -n cert-manager
16+
```
6417

65-
4. View the `certificate` status in the `default` Namespace. It should be in `Ready` state, and be pointing to a `secret` created in the same Namespace.
18+
2. View the `certificate` status in the `default` Namespace. It should be in a `Ready` state and point to a `secret` created in the same Namespace.
6619

67-
```sh
68-
kubectl get certificate -o wide
69-
NAME READY SECRET ISSUER STATUS AGE
70-
example True example-clusterissuer tls-with-aws-pca-issuer Certificate is up to date and has not expired 41m
20+
```sh
21+
kubectl get certificate -o wide
22+
```
7123

72-
kubectl get secret example-clusterissuer
73-
NAME TYPE DATA AGE
74-
example-clusterissuer kubernetes.io/tls 3 43m
75-
```
24+
```text
25+
NAME READY SECRET ISSUER STATUS AGE
26+
example True example-clusterissuer tls-with-aws-pca-issuer Certificate is up to date and has not expired 41m
27+
```
7628

77-
## Cleanup
29+
```sh
30+
kubectl get secret example-clusterissuer
31+
```
7832

79-
To clean up your environment, destroy the Terraform modules in reverse order.
33+
```text
34+
NAME TYPE DATA AGE
35+
example-clusterissuer kubernetes.io/tls 3 43m
36+
```
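Optionally, as a hedged sketch (assumes `openssl` is available locally), decode the issued certificate from the secret and inspect it:

```sh
# Print the subject, issuer and validity period of the issued certificate
kubectl get secret example-clusterissuer -o jsonpath='{.data.tls\.crt}' \
  | base64 -d \
  | openssl x509 -noout -subject -issuer -dates
```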
8037

81-
Destroy the Kubernetes Add-ons, EKS cluster with Node groups and VPC
38+
## Destroy
8239

83-
```sh
84-
terraform destroy -target module.eks_blueprints_kubernetes_addons -auto-approve
85-
terraform destroy -target module.eks -auto-approve
86-
terraform destroy -auto-approve
87-
```
40+
{%
41+
include-markdown "../../docs/_partials/destroy.md"
42+
%}
+43-83
Original file line numberDiff line numberDiff line change
@@ -1,110 +1,70 @@
11
# Transparent Encryption with Cilium and Wireguard
22

3-
This example shows how to provision an EKS cluster with:
4-
- Managed node group based on Bottlerocket AMI
5-
- Cilium configured in CNI chaining mode with VPC CNI and with Wireguard transparent encryption enabled
6-
7-
## Reference Documentation:
3+
This pattern demonstrates Cilium configured in CNI chaining mode with VPC CNI and with Wireguard transparent encryption enabled on an Amazon EKS cluster.
84

95
- [Cilium CNI Chaining Documentation](https://docs.cilium.io/en/v1.12/gettingstarted/cni-chaining-aws-cni/)
106
- [Cilium Wireguard Encryption Documentation](https://docs.cilium.io/en/v1.12/gettingstarted/encryption-wireguard/)
117

12-
## Prerequisites:
13-
14-
Ensure that you have the following tools installed locally:
15-
16-
1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
17-
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
18-
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
19-
208
## Deploy
219

22-
To provision this example with a sample app for testing:
23-
24-
```sh
25-
terraform init
26-
terraform apply
27-
```
28-
29-
To provision this example without sample app for testing:
30-
31-
```sh
32-
terraform init
33-
terraform apply -var enable_example=false
34-
```
35-
36-
Enter `yes` at command prompt to apply
10+
See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.
3711

3812
## Validate
3913

40-
The following command will update the `kubeconfig` on your local machine and allow you to interact with your EKS Cluster using `kubectl` to validate the deployment.
41-
42-
1. Run `update-kubeconfig` command:
43-
44-
```sh
45-
aws eks --region <REGION> update-kubeconfig --name <CLUSTER_NAME>
46-
```
47-
48-
2. List the daemonsets
14+
1. List the daemonsets
4915

50-
```sh
51-
kubectl get ds -n kube-system
16+
```sh
17+
kubectl get ds -n kube-system
18+
```
5219

53-
# Output should look something similar
54-
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
55-
aws-node 2 2 2 2 2 <none> 156m
56-
cilium 2 2 2 2 2 kubernetes.io/os=linux 152m
57-
kube-proxy 2 2 2 2 2 <none> 156m
58-
```
20+
```text
21+
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
22+
aws-node 2 2 2 2 2 <none> 156m
23+
cilium 2 2 2 2 2 kubernetes.io/os=linux 152m
24+
kube-proxy 2 2 2 2 2 <none> 156m
25+
```
5926

60-
3. Open a shell inside the cilium container
27+
2. Open a shell inside the cilium container
6128

62-
```sh
63-
kubectl -n kube-system exec -ti ds/cilium -- bash
64-
```
29+
```sh
30+
kubectl -n kube-system exec -ti ds/cilium -- bash
31+
```
6532

66-
4. Verify Encryption is enabled
33+
3. Verify Encryption is enabled
6734

68-
```sh
69-
cilium status | grep Encryption
35+
```sh
36+
cilium status | grep Encryption
37+
```
7038

71-
# Output should look something similar
72-
Encryption: Wireguard [cilium_wg0 (Pubkey: b2krgbHgaCsVWALMnFLiS/RekhhcE36PXEjQ7T8+mW0=, Port: 51871, Peers: 1)]
73-
```
39+
```text
40+
Encryption: Wireguard [cilium_wg0 (Pubkey: b2krgbHgaCsVWALMnFLiS/RekhhcE36PXEjQ7T8+mW0=, Port: 51871, Peers: 1)]
41+
```
7442

75-
5. Install tcpdump
43+
4. Install [`tcpdump`](https://www.tcpdump.org/)
7644

77-
```sh
78-
apt-get update
79-
apt-get install -y tcpdump
80-
```
45+
```sh
46+
apt-get update
47+
apt-get install -y tcpdump
48+
```
8149

82-
6. Start a packet capture on `cilium_wg0` and verify you see payload in clear text, it means the traffic is encrypted with wireguard
50+
5. Start a packet capture on `cilium_wg0` and verify that you can see the payload in clear text; traffic captured on this interface is routed through the WireGuard tunnel and therefore encrypted before it leaves the node
8351

84-
```sh
85-
tcpdump -A -c 40 -i cilium_wg0 | grep "Welcome to nginx!"
52+
```sh
53+
tcpdump -A -c 40 -i cilium_wg0 | grep "Welcome to nginx!"
54+
```
8655

87-
# Output should look similar below
56+
```text
57+
<title>Welcome to nginx!</title>
58+
<h1>Welcome to nginx!</h1>
59+
...
8860
89-
<title>Welcome to nginx!</title>
90-
<h1>Welcome to nginx!</h1>
91-
...
92-
93-
40 packets captured
94-
40 packets received by filter
95-
0 packets dropped by kernel
96-
```
97-
7. Exit the container shell
98-
99-
```sh
100-
exit
101-
```
61+
40 packets captured
62+
40 packets received by filter
63+
0 packets dropped by kernel
64+
```
10265

10366
## Destroy
10467

105-
To teardown and remove the resources created in this example:
106-
107-
```sh
108-
terraform destroy -target=module.eks -auto-approve
109-
terraform destroy -auto-approve
110-
```
68+
{%
69+
include-markdown "../../docs/_partials/destroy.md"
70+
%}
