Module 6: Networking

Objectives

This exercise focuses on enabling you to do the following:

  • Identify the network components on a Kubernetes cluster

  • Understand pod networking

  • Investigate service networking

  • Review the DNS configuration

    NOTE: All the tasks in the following module are executed on the Kubernetes master node. You must SSH into the RHEL3 host in your lab, or copy the kubeconfig file to the Windows jump host, in order to be able to execute the kubectl commands.

Task 1: Identify the network components on a Kubernetes cluster

In this task, you will review the network components of your Kubernetes cluster.

List the available nodes in your cluster:

kubectl get nodes

What is the address assigned to the master node?

kubectl get nodes -o wide

Review the network interfaces on the master node:

ifconfig -a

Identify the network interface configured for cluster connectivity on the master node:

ip link

What is the MAC address assigned to the master node?

ip link show ens32

What is the address assigned to worker node RHEL1?

kubectl get nodes -o wide

What is the MAC address assigned to RHEL1?

arp rhel1

What is the address assigned to the worker node RHEL2?

kubectl get nodes -o wide

What is the MAC address assigned to RHEL2?

arp rhel2

What is the Docker interface/bridge on the master node, and what is its state?

ip link

What is the state of the Docker interface?

ip link show docker0

What is the default gateway?

ip route show default

On the master node, what port does the Kubernetes scheduler listen on?

netstat -nplt
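If the listener list is long, you can pull out just the port for a given program. The following is a minimal sketch; the sample line is made up for illustration (the exact port varies by Kubernetes version, so read your own netstat output rather than assuming):

```shell
# Illustrative only: extract the listening port for a program from
# "netstat -nplt"-style output. The sample line below is made up.
sample='tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 1234/kube-scheduler'
echo "$sample" | awk '/kube-scheduler/ { split($4, a, ":"); print a[2] }'
```

On the live node you would pipe the real output instead: `netstat -nplt | awk '/kube-scheduler/ { split($4, a, ":"); print a[2] }'`.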

Notice that etcd is listening on two different ports across three addresses. On which etcd port are there more client connections?

netstat -anp | grep etcd

Run netstat -nplt on a worker node. Compare this output to the one run on the master node.

Task 2: Understand pod networking

In this task, you will learn how to configure networking for the pods.

Investigate the kubelet service and identify the network plugin:

ps -aux | grep network

Identify list of possible CNI plugins:

ls /opt/cni/bin

Compare the running processes that have network connections against the list of possible CNI plugins, and identify the CNI plugin currently in use:

netstat -nplt

Review how the Weave network is configured:

cat /etc/cni/net.d/10-weave.conflist
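The file is a CNI configuration list in JSON. It looks roughly like the following (the exact contents vary by Weave Net version, so treat this as an illustrative sketch, not your file's exact contents):

```json
{
  "cniVersion": "0.3.0",
  "name": "weave",
  "plugins": [
    {
      "name": "weave",
      "type": "weave-net",
      "hairpinMode": true
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true },
      "snat": true
    }
  ]
}
```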

Identify how many Weave agents are deployed in the cluster, and on which nodes:

kubectl get pods -n kube-system

For each node in the cluster, identify the name of the Weave (bridge) network:

ip link

Identify the Pod IP address range configured for your CNI plugin on the master node and worker nodes:

ip addr show weave

Can you identify the default gateway configured on the Pods scheduled on RHEL1?

ip route

Task 3: Investigate service networking

In this task, you will investigate how networking functions with services in a Kubernetes cluster.

A service is an abstraction for pods, providing a stable, so-called virtual IP (VIP) address. While pods may come and go, and their IP addresses with them, a service allows clients to reliably connect to the containers running in the pods by using the VIP. The "virtual" in VIP means it is not an actual IP address attached to a network interface; its sole purpose is to forward traffic to one or more pods. Keeping the mapping between the VIP and the pods up to date is the job of kube-proxy, a process that runs on every node and queries the API server to learn about new services in the cluster.

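As a concrete illustration, a minimal Service definition of this kind looks roughly like the following. The name, label, and ports here are assumptions modeled on the sise example used in this task, not necessarily the exact contents of the lab's sise.yaml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: simpleservice
spec:
  selector:
    app: sise          # traffic is forwarded to any pod carrying this label
  ports:
  - port: 80           # the port clients connect to on the VIP
    targetPort: 9876   # the container port the traffic is forwarded to
```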
Make sure that you are in the ~/k8s/course/ folder on RHEL3.

Let’s create a pod supervised by a Replication Controller, and a simple service along with it:

kubectl create -f ReplicationController.yaml

kubectl create -f sise.yaml
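For reference, a ReplicationController for this pod might look roughly as follows. The image and labels are assumptions based on the well-known "simpleservice" sample application, not necessarily the lab file's exact contents:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: rcsise
spec:
  replicas: 1
  selector:
    app: sise
  template:
    metadata:
      labels:
        app: sise
    spec:
      containers:
      - name: sise
        image: mhausenblas/simpleservice:0.5.0  # assumed image
        ports:
        - containerPort: 9876
```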

Now we have the supervised pod running:

kubectl get pods -l app=sise

kubectl describe pod rcsise-XXX

You can, from within the cluster, access the pod directly via its assigned IP (use the describe operation above to get the IP address):

curl 10.44.0.x:9876/info

Sample output:

{"host": "10.44.0.2:9876", "version": "0.5.0", "from": "10.32.0.1"}

This is, however, as mentioned above, not advisable, since the IPs assigned to pods may change. That is the reason why we created the simple service:

kubectl get svc

kubectl describe svc simpleservice

This means that, from within the cluster, you can access your service using the cluster IP shown in the output of the previous step:

curl 10.106.214.101/info

Sample output:

{"host": "10.106.214.101", "version": "0.5.0", "from": "10.32.0.1"}

The cluster VIP 10.106.214.101 forwards traffic to the pod using iptables. To view the iptables rules for our simple service, use the following command:

iptables-save | grep simpleservice
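The output typically contains a per-service chain that jumps to a per-endpoint chain performing the DNAT to the pod. The lines below are illustrative only: the chain suffixes are random hashes and the addresses will differ in your cluster:

```
-A KUBE-SERVICES -d 10.106.214.101/32 -p tcp -m comment --comment "default/simpleservice:" -m tcp --dport 80 -j KUBE-SVC-XXXXXXXXXXXXXXXX
-A KUBE-SVC-XXXXXXXXXXXXXXXX -j KUBE-SEP-YYYYYYYYYYYYYYYY
-A KUBE-SEP-YYYYYYYYYYYYYYYY -p tcp -m tcp -j DNAT --to-destination 10.44.0.2:9876
```

After scaling to two replicas, expect a second KUBE-SEP chain, and a statistic match (e.g. `-m statistic --mode random --probability 0.5`) in the KUBE-SVC chain that splits traffic between the two endpoints.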

Let’s now add a second pod by scaling up the RC supervising it:

kubectl scale --replicas=2 rc/rcsise

Check the iptables rules again to display the newly added rules:

iptables-save | grep simpleservice

You can remove all the resources created by doing:

kubectl delete svc simpleservice

kubectl delete rc rcsise

Task 4: Review the DNS configuration

In this task, you will learn how DNS is configured in your cluster.

Identify which DNS system has been deployed in your cluster:

kubectl get pods -n kube-system

How many pods are being used for DNS?

What is the service being used for DNS?

kubectl get service -n kube-system

What is the IP address of the DNS service?

What would be the Fully Qualified Domain Name of the DNS service? (see lecture)

How is the DNS configuration file passed into the CoreDNS pod?

kubectl get configmap -n kube-system

What is the root domain?

kubectl describe configmap coredns -n kube-system
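The ConfigMap carries the Corefile, CoreDNS's configuration. A typical kubeadm-installed Corefile looks roughly like the sketch below (yours may differ); the kubernetes plugin line names the root domain, which is usually cluster.local:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```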

Launch the Pod defined in alpine.yaml:

kubectl create -f alpine.yaml

Verify how the Pod’s resolv.conf is configured:

kubectl exec alpine -- cat /etc/resolv.conf
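In a kubeadm cluster you will typically see something like the following (illustrative; the nameserver is the cluster DNS service IP, commonly 10.96.0.10, and the search list is what lets a short name like simpleservice resolve within the pod's namespace):

```
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```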

Delete the alpine Pod.

End of Exercise