patterns/karpenter-mng/README.md (+2 -2)

@@ -54,13 +54,13 @@ See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started
2. Provision the Karpenter `EC2NodeClass` and `NodePool` resources which provide Karpenter the necessary configurations to provision EC2 resources:
```sh
- kubectl apply -f karpenter.yaml
+ kubectl apply --server-side -f karpenter.yaml
```
3. Once the Karpenter resources are in place, Karpenter will provision the necessary EC2 resources to satisfy any pending pods in the scheduler's queue. You can demonstrate this with the example deployment provided. First, deploy the example deployment, which has the initial number of replicas set to 0:
```sh
- kubectl apply -f example.yaml
+ kubectl apply --server-side -f example.yaml
```
4. When you scale the example deployment, you should see Karpenter respond by quickly provisioning EC2 resources to satisfy those pending pod requests:
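A minimal way to exercise this, assuming the deployment defined in `example.yaml` is named `inflate` (check the manifest for the actual name), is to scale it up and watch the pending pods get placed:

```sh
# Scale the example deployment up; the deployment name is an assumption taken from
# common Karpenter examples -- use the name from example.yaml
kubectl scale deployment/inflate --replicas=5

# Watch the pods move from Pending to Running as Karpenter launches capacity
kubectl get pods --watch
```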
patterns/karpenter/README.md (+2 -2)

@@ -47,13 +47,13 @@ See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started
2. Provision the Karpenter `EC2NodeClass` and `NodePool` resources which provide Karpenter the necessary configurations to provision EC2 resources:
```sh
- kubectl apply -f karpenter.yaml
+ kubectl apply --server-side -f karpenter.yaml
```
3. Once the Karpenter resources are in place, Karpenter will provision the necessary EC2 resources to satisfy any pending pods in the scheduler's queue. You can demonstrate this with the example deployment provided. First, deploy the example deployment, which has the initial number of replicas set to 0:
```sh
- kubectl apply -f example.yaml
+ kubectl apply --server-side -f example.yaml
```
4. When you scale the example deployment, you should see Karpenter respond by quickly provisioning EC2 resources to satisfy those pending pod requests:
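To observe the response from the cluster side, standard watches suffice; the sketch below assumes a Karpenter version that exposes the `NodeClaim` API (current releases do):

```sh
# Watch the NodeClaims Karpenter creates for the pending pods
kubectl get nodeclaims --watch

# In another terminal, watch the new nodes register with the cluster
kubectl get nodes --watch
```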
patterns/ml-container-cache/README.md (+2 -2)

@@ -81,13 +81,13 @@ See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started
4. Once the EKS cluster and node group have been provisioned, you can deploy the provided example pod that will use a cached image to verify the time it takes for the pod to reach a ready state.
```sh
- kubectl apply -f pod-cached.yaml
+ kubectl apply --server-side -f pod-cached.yaml
```
You can contrast this with the time it takes for a pod that is not cached on a node by using the provided `pod-uncached.yaml` file. This works by simply using a pod that doesn't have a toleration for nodes that contain NVIDIA GPUs, which is where the cached images are provided in this example.
```sh
- kubectl apply -f pod-uncached.yaml
+ kubectl apply --server-side -f pod-uncached.yaml
```
You can also perform the same steps above using the small utility CLI [ktime](https://github.com/clowdhaus/ktime), which can either collect pod events to measure how long a pod takes to reach a ready state, or deploy a pod manifest and report the same:
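The exact `ktime` invocation is not shown in this hunk; as a rough manual alternative, the pod conditions already carry the timestamps needed for the comparison. The sketch below assumes the pods are named after their manifests (`pod-cached` / `pod-uncached`); check the manifests for the actual names:

```sh
# Print each pod condition with its transition time; the gap between PodScheduled
# and Ready approximates startup time (pod names are assumptions)
for pod in pod-cached pod-uncached; do
  echo "== ${pod}"
  kubectl get pod "${pod}" \
    -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.lastTransitionTime}{"\n"}{end}'
done
```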
patterns/nvidia-gpu-efa/README.md (+14 -26)

@@ -36,8 +36,7 @@ See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started
## Validate
!!! note
-
- Desired instance type can be specified in [eks.tf](eks.tf#L36).
+ Desired instance type can be specified in [eks.tf](https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/d5ddd10afef9b4fd3e0cbba865645f0f522992ac/patterns/nvidia-gpu-efa/eks.tf#L38).
Values shown below will change based on the instance type selected (i.e. - `p5.48xlarge` has 8 GPUs and 32 EFA interfaces).
A list of EFA-enabled instance types is available [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html#efa-instance-types).
If you are using an on-demand capacity reservation (ODCR) for your instance type, please uncomment the `capacity_reservation_specification` block in `eks.tf`
@@ -66,36 +65,25 @@ See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started
This test prints a list of available EFA interfaces by using the `/opt/amazon/efa/bin/fi_info` utility.
- The script [generate-efa-info-test.sh](generate-efa-info-test.sh) creates an MPIJob manifest file named `efa-info-test.yaml`. It assumes that there are two cluster nodes with 8 GPUs per node and 32 EFA adapters. If you are not using `p5.48xlarge` instances in your cluster, you may adjust the settings in the script prior to running it.
+ The script [generate-efa-info-test.sh](https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/patterns/nvidia-gpu-efa/generate-efa-info-test.sh) creates an MPIJob manifest file named `efa-info-test.yaml`. It assumes that there are two cluster nodes with 8 GPUs per node and 32 EFA adapters. If you are not using `p5.48xlarge` instances in your cluster, you may adjust the settings in the script prior to running it.
`NUM_WORKERS` - number of nodes you want to run the test on
`GPU_PER_WORKER` - number of GPUs available on each node
@@ -108,7 +96,7 @@ See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started
To start the test apply the generated manifest to the cluster:
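The apply command itself is not shown in this hunk; a minimal sketch of the sequence, assuming the script defaults fit a two-node `p5.48xlarge` setup and following the `--server-side` convention introduced by this change:

```sh
# Generate the MPIJob manifest (adjust NUM_WORKERS and GPU_PER_WORKER in the script
# first if your node count or instance type differs)
./generate-efa-info-test.sh

# Apply the generated manifest to the cluster
kubectl apply --server-side -f efa-info-test.yaml
```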