Module 4: Scheduler
This exercise focuses on enabling you to do the following:
- Manually schedule a Pod
- Create taints and tolerations
- Define node affinities
- Work with resource limits
NOTE: All the tasks in this module are executed on the Kubernetes master node. You must SSH into the RHEL3 host in your lab, or copy the kubeconfig file to the Windows jump host, in order to run the kubectl commands.
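As a quick sanity check before starting, a minimal sketch of connecting and confirming kubectl can reach the cluster (the `student` username is a placeholder; use whatever credentials your lab provides):

```bash
# Placeholder username -- substitute your lab credentials.
ssh student@rhel3

# Confirm kubectl can reach the cluster before starting the tasks.
kubectl get nodes
```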
In this task, you will manually schedule a Pod.
Step | Action |
---|---|
1 | Navigate to the folder where you cloned the GitHub repository on the RHEL3 host: `cd ~/k8s/course/` |
2 | Instantiate a copy of the manual Pod: `kubectl create -f manual-schedule.yaml` |
3 | What is the state of the Pod? Use `kubectl get pods` |
4 | Why is the Pod's state Pending? View the content of manual-schedule.yaml: `cat manual-schedule.yaml` |
5 | Update manual-schedule.yaml so that the `nodeName` key has the name of one of the worker nodes in your environment (see the sketch after this table). |
6 | Delete the pending Pod (`kubectl delete pod manual`), then instantiate it again: `kubectl create -f manual-schedule.yaml` |
7 | What is the state of the Pod? You should have successfully scheduled the Pod manually onto a worker node. |
8 | Delete the Pod: `kubectl delete pod manual` |
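For reference, a minimal sketch of what manual-schedule.yaml likely looks like (the Pod name comes from the delete step above; the node name and image are assumptions about the course file):

```yaml
# manual-schedule.yaml -- a sketch, not the exact course file.
# Setting spec.nodeName bypasses the scheduler entirely and binds the Pod
# straight to the named node; if that node does not exist, the Pod stays Pending.
apiVersion: v1
kind: Pod
metadata:
  name: manual
spec:
  nodeName: rhel1          # assumed worker node name for this lab
  containers:
  - name: nginx
    image: nginx
```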
In this task, you will learn how to set taints on nodes and tolerations on Pods to ensure that nodes only run certain Pods.
Step | Action |
---|---|
1 | How many nodes are in the cluster? Use `kubectl get nodes` |
2 | Are there any taints on RHEL1? Use `kubectl describe node rhel1` |
3 | Create a taint on RHEL1: `kubectl taint nodes rhel1 app=blue:NoSchedule` |
4 | Are there any taints on RHEL2? Use `kubectl describe node rhel2` |
5 | Create a taint on RHEL2: `kubectl taint nodes rhel2 app=blue:NoSchedule` |
6 | Create a red Pod: `kubectl create -f toleration-red.yaml` |
7 | What is the state of the red Pod, and why? Use `kubectl get pods` |
8 | Create a blue Pod: `kubectl create -f toleration-blue.yaml` (see the toleration sketch after this table) |
9 | What is the state of the blue Pod, and why? Use `kubectl get pods` |
10 | Are there any taints on RHEL3 (the master node)? Use `kubectl describe node rhel3` |
11 | Remove the taint from the master node: `kubectl taint nodes rhel3 node-role.kubernetes.io/master:NoSchedule-` |
12 | What is the status of the red Pod? |
13 | Which node is it running on? Use `kubectl get pods -o wide` |
14 | Re-add the taint to the master node: `kubectl taint nodes rhel3 node-role.kubernetes.io/master=true:NoSchedule` |
15 | What is the status of the red Pod, and why? |
16 | Delete the red Pod: `kubectl delete pod red` |
17 | Delete the blue Pod: `kubectl delete pod blue` |
18 | Remove the taint on RHEL1: `kubectl taint nodes rhel1 app=blue:NoSchedule-` |
19 | Remove the taint on RHEL2: `kubectl taint nodes rhel2 app=blue:NoSchedule-` |
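A sketch of the toleration the blue Pod likely carries (assuming toleration-blue.yaml follows this shape; the red Pod's manifest would lack a matching toleration, which is why it cannot land on the tainted nodes):

```yaml
# toleration-blue.yaml -- a sketch of the likely shape, not the exact course file.
apiVersion: v1
kind: Pod
metadata:
  name: blue
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  # Matches the app=blue:NoSchedule taint placed on rhel1 and rhel2 above,
  # so this Pod is allowed to schedule onto those nodes.
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"
```

When checking for taints, `kubectl describe node rhel1 | grep -i taint` narrows the output to the Taints line.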
In this task, you will create a label on a node and then observe the effects of node affinities set on Pods.
Step | Action |
---|---|
1 | What labels are on RHEL1? Use `kubectl describe node rhel1` |
2 | Apply a label to the RHEL1 node: `kubectl label node rhel1 app=blue` |
3 | Create a deployment: `kubectl run blue --image=nginx --replicas=6` (on newer kubectl versions, where `kubectl run` no longer creates Deployments, use `kubectl create deployment blue --image=nginx --replicas=6` instead) |
4 | Which nodes are the Pods placed on, and why? |
5 | Delete the deployment: `kubectl delete deployment blue` |
6 | Configure affinities-1.yaml with the affinity configuration provided for this exercise (a sketch follows this table). |
7 | Deploy the object: `kubectl create -f affinities-1.yaml` |
8 | Which nodes are the Pods on, and why? |
9 | Delete the deployment: `kubectl delete deployment blue` |
10 | Configure affinities-2.yaml with the second affinity configuration provided for this exercise (a sketch follows this table). |
11 | Deploy the object: `kubectl create -f affinities-2.yaml` |
12 | What is the status of the Pods, and why? |
13 | Remove the taint from the master node: `kubectl taint nodes rhel3 node-role.kubernetes.io/master:NoSchedule-` |
14 | What is the status of the red Pods now? If they are running, which node are they running on, and why? |
15 | Re-add the taint to the master node: `kubectl taint nodes rhel3 node-role.kubernetes.io/master=true:NoSchedule` |
16 | What is the status of the red Pods, and why? |
17 | Delete the deployment: `kubectl delete deployment red` |
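The affinity configurations referenced above ship with the course repository; the sketches below show one plausible shape for each and are assumptions, not the exact files. For affinities-1.yaml, a required node affinity matching the `app=blue` label applied to RHEL1 in step 2 might look like this:

```yaml
# affinities-1.yaml -- a sketch. With this rule, all replicas should land on
# rhel1, the only node carrying the app=blue label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  replicas: 6
  selector:
    matchLabels:
      run: blue
  template:
    metadata:
      labels:
        run: blue
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: app
                operator: In
                values:
                - blue
      containers:
      - name: nginx
        image: nginx
```

For affinities-2.yaml, the later steps imply a deployment named red whose affinity matches only the master node, so its Pods stay Pending until the master taint is removed. Assuming the rule keys off the master-role label:

```yaml
# affinities-2.yaml -- a sketch; assumes the rule targets the master-role label,
# which only rhel3 carries. The Pods stay Pending while rhel3 is tainted.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: red
spec:
  replicas: 2                    # assumed replica count
  selector:
    matchLabels:
      run: red
  template:
    metadata:
      labels:
        run: red
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
      containers:
      - name: nginx
        image: nginx
```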
In this task, you will explore the resource limits on Pods.
Step | Action |
---|---|
1 | Deploy the resource-1.yaml Pod. |
2 | What is the status of the Pod? What are its resource limits? How much memory does it request? |
3 | Delete the stress-1 Pod. |
4 | Deploy the resource-2.yaml Pod. |
5 | What is the status? What are its resource limits? How much memory does it request? |
6 | Increase the memory limit of the stress-2 Pod to 20Mi and redeploy the Pod. (You might have to delete the existing Pod and redeploy it.) See the sketch after this table. |
7 | What is the status of the stress-2 Pod? It should be Running. |
8 | Delete the stress-2 Pod. |
9 | Deploy the resource-3.yaml Pod. |
10 | What is the status? What are its resource limits? How much CPU does it request? |
11 | Increase the CPU limit of the stress-3 Pod to 2 and redeploy the Pod. (You might have to delete the existing Pod and redeploy it.) |
12 | What is the status of the stress-3 Pod? It should be Running. |
13 | Delete the stress-3 Pod. |
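A sketch of the resources stanza these steps revolve around (the stress image, command arguments, and exact values are assumptions; the idea is that resource-2.yaml sets a memory limit below what the container tries to allocate, so the container is OOM-killed until the limit is raised to 20Mi):

```yaml
# resource-2.yaml -- a sketch of the likely shape, not the exact course file.
# If the container tries to allocate more memory than limits.memory allows,
# the kubelet kills it (OOMKilled) and the Pod never reaches Running.
apiVersion: v1
kind: Pod
metadata:
  name: stress-2
spec:
  containers:
  - name: stress
    image: polinux/stress          # assumed stress image
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "15M", "--vm-hang", "1"]
    resources:
      requests:
        memory: "10Mi"
      limits:
        memory: "10Mi"             # step 6: raise this to 20Mi, then redeploy
```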
End of Exercise