# Remote NSMgr death

This example shows that NSM keeps working after the death of the remote NSMgr.

The NSC and NSE use the kernel mechanism to connect to their local forwarders. The forwarders use the vxlan mechanism to connect to each other.
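
To see where the forwarders run, you can list the forwarder pods per node. This is a minimal sketch assuming the default VPP forwarder and the `app=forwarder-vpp` label used by the deployments-k8s manifests:

```bash
# One forwarder pod is expected on each node (it is a DaemonSet)
kubectl get pods -l app=forwarder-vpp -n nsm-system -o wide
```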

## Requires

Make sure that you have completed the steps from the basic or memory setup.

## Run

Create test namespace:

```bash
NAMESPACE=($(kubectl create -f https://raw.githubusercontent.com/networkservicemesh/deployments-k8s/041ba2468fb8177f53926af0eab984850aa682c2/examples/heal/namespace.yaml)[0])
NAMESPACE=${NAMESPACE:10}
```
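
`kubectl create` prints `namespace/<name> created`, and `${NAMESPACE:10}` strips the 10-character `namespace/` prefix. A quick sanity check:

```bash
# Should print only the generated namespace name and confirm it exists
echo ${NAMESPACE}
kubectl get ns ${NAMESPACE}
```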

Get the nodes, excluding the control plane:

```bash
NODES=($(kubectl get nodes -o go-template='{{range .items}}{{ if not .spec.taints  }}{{index .metadata.labels "kubernetes.io/hostname"}} {{end}}{{end}}'))
```
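
The rest of the example assumes at least two untainted worker nodes, referenced as `${NODES[0]}` and `${NODES[1]}`. A quick check:

```bash
# Expect two or more hostnames in the output
echo ${NODES[@]}
```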

Create the kustomization file:

```bash
cat > kustomization.yaml <<EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: ${NAMESPACE}

bases:
- https://github.com/networkservicemesh/deployments-k8s/apps/nsc-kernel?ref=041ba2468fb8177f53926af0eab984850aa682c2
- https://github.com/networkservicemesh/deployments-k8s/apps/nse-kernel?ref=041ba2468fb8177f53926af0eab984850aa682c2

patchesStrategicMerge:
- patch-nsc.yaml
- patch-nse.yaml
EOF
```
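
Optionally, preview the manifests that Kustomize will render before applying them:

```bash
kubectl kustomize .
```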

Create NSC patch:

```bash
cat > patch-nsc.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nsc-kernel
spec:
  template:
    spec:
      containers:
        - name: nsc
          env:
            - name: NSM_NETWORK_SERVICES
              value: kernel://icmp-responder/nsm-1

      nodeSelector:
        kubernetes.io/hostname: ${NODES[0]}
EOF
```

Create NSE patch (the /31 prefix yields exactly two addresses: 172.16.1.100 for the NSE and 172.16.1.101 for the NSC, which is why the pings below target these):

```bash
cat > patch-nse.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nse-kernel
spec:
  template:
    spec:
      containers:
        - name: nse
          env:
            - name: NSM_CIDR_PREFIX
              value: 172.16.1.100/31
      nodeSelector:
        kubernetes.io/hostname: ${NODES[1]}
EOF
```

Deploy NSC and NSE:

```bash
kubectl apply -k .
```

Wait for the applications to be ready:

```bash
kubectl wait --for=condition=ready --timeout=1m pod -l app=nsc-kernel -n ${NAMESPACE}
kubectl wait --for=condition=ready --timeout=1m pod -l app=nse-kernel -n ${NAMESPACE}
```

Find NSC and NSE pods by labels:

```bash
NSC=$(kubectl get pods -l app=nsc-kernel -n ${NAMESPACE} --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
NSE=$(kubectl get pods -l app=nse-kernel -n ${NAMESPACE} --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
```
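
Optionally, inspect the kernel interface that NSM injected into the NSC; the interface name `nsm-1` comes from the `kernel://icmp-responder/nsm-1` URL above. This assumes the NSC image ships the `ip` tool:

```bash
# The nsm-1 interface should carry 172.16.1.101/31
kubectl exec ${NSC} -n ${NAMESPACE} -- ip addr show dev nsm-1
```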

Ping from NSC to NSE:

```bash
kubectl exec ${NSC} -n ${NAMESPACE} -- ping -c 4 172.16.1.100
```

Ping from NSE to NSC:

```bash
kubectl exec ${NSE} -n ${NAMESPACE} -- ping -c 4 172.16.1.101
```

Kill the remote NSMgr:

Customization file:

```bash
cat > kustomization.yaml <<EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: nsm-system

bases:
- https://github.com/networkservicemesh/deployments-k8s/apps/nsmgr?ref=041ba2468fb8177f53926af0eab984850aa682c2

patchesStrategicMerge:
- patch-nsmgr.yaml
EOF
```

NSMgr patch (the nodeSelector restricts the DaemonSet to ${NODES[0]}, so Kubernetes deletes the nsmgr pod on ${NODES[1]}, i.e. the remote NSMgr, while the OnDelete update strategy leaves the already-running local pod untouched):

```bash
cat > patch-nsmgr.yaml <<EOF
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nsmgr
spec:
  updateStrategy:
    type: OnDelete
  template:
    spec:
      containers:
        - name: nsmgr
      nodeSelector:
        kubernetes.io/hostname: ${NODES[0]}
EOF
```

Apply changes:

```bash
kubectl apply -k .
```
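
After the patch is applied, only the local nsmgr pod should remain. A hedged check, assuming the `app=nsmgr` label from the deployments-k8s manifests:

```bash
# Expect a single nsmgr pod, scheduled on ${NODES[0]}
kubectl get pods -l app=nsmgr -n nsm-system -o wide
```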

Start a local NSE instead of the remote one:

Customization file:

```bash
cat > kustomization.yaml <<EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: ${NAMESPACE}

bases:
- https://github.com/networkservicemesh/deployments-k8s/apps/nse-kernel?ref=041ba2468fb8177f53926af0eab984850aa682c2

patchesStrategicMerge:
- patch-nse.yaml
EOF
```

NSE patch (the new /31 prefix assigns 172.16.1.102 to the new NSE and 172.16.1.103 to the NSC):

```bash
cat > patch-nse.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nse-kernel
spec:
  template:
    spec:
      containers:
        - name: nse
          env:
            - name: NSM_CIDR_PREFIX
              value: 172.16.1.102/31
      nodeSelector:
        kubernetes.io/hostname: ${NODES[0]}
EOF
```

Apply changes:

```bash
kubectl apply -k .
```

Wait for the new NSE to start:

```bash
kubectl wait --for=condition=ready --timeout=1m pod -l app=nse-kernel --field-selector spec.nodeName==${NODES[0]} -n ${NAMESPACE}
```

Find the new NSE pod:

```bash
NEW_NSE=$(kubectl get pods -l app=nse-kernel --field-selector spec.nodeName==${NODES[0]} -n ${NAMESPACE} --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
```

Ping from NSC to the new NSE:

```bash
kubectl exec ${NSC} -n ${NAMESPACE} -- ping -c 4 172.16.1.102
```

Ping from the new NSE to NSC:

```bash
kubectl exec ${NEW_NSE} -n ${NAMESPACE} -- ping -c 4 172.16.1.103
```

## Cleanup

Restore the NSMgr setup:

```bash
kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/apps/nsmgr?ref=041ba2468fb8177f53926af0eab984850aa682c2 -n nsm-system
```

Delete the namespace:

```bash
kubectl delete ns ${NAMESPACE}
```