This example shows that an NSC can reach an NSE registered in a floating registry.
The NSC and NSE use the kernel mechanism to connect to their local forwarders.
The forwarders use the wireguard mechanism to connect with each other.
Important points:
- The NSC is deployed on cluster1 and requests a network service registered in cluster3.
- The NSE is deployed on cluster2 and registers itself in cluster3 with the IP payload.
Make sure that you have completed the steps from the interdomain setup.
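The steps below assume that the kubeconfig paths for the two workload clusters are exported as environment variables, as in the interdomain setup; the paths shown here are only placeholders:
export KUBECONFIG1=/path/to/cluster1.kubeconfig
export KUBECONFIG2=/path/to/cluster2.kubeconfig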
1. Prepare cluster2
Switch to cluster2:
export KUBECONFIG=$KUBECONFIG2
Create test namespace:
NAMESPACE1=($(kubectl create -f https://raw.githubusercontent.com/networkservicemesh/deployments-k8s/041ba2468fb8177f53926af0eab984850aa682c2/examples/interdomain/usecases/namespace.yaml)[0])
NAMESPACE1=${NAMESPACE1:10}
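The two commands above parse the "namespace/<name> created" output of kubectl: the array assignment captures the first word, and ${NAMESPACE1:10} strips the 10-character "namespace/" prefix. An equivalent, more explicit sketch (assuming the manifest creates a single namespace object) would be:
NAMESPACE1=$(kubectl create -f https://raw.githubusercontent.com/networkservicemesh/deployments-k8s/041ba2468fb8177f53926af0eab984850aa682c2/examples/interdomain/usecases/namespace.yaml -o jsonpath='{.metadata.name}')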
Create kustomization file:
cat > kustomization.yaml <<EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ${NAMESPACE1}
resources:
- nse.yaml
EOF
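The kustomization above references a local nse.yaml that is not shown in this example. The sketch below is only an illustration of what such a file could contain, assuming the cmd-nse-icmp-responder image and reusing the service name and addresses that appear later in this example; in practice you would configure the NSE the way the nse-kernel application from this repository does, or pull that application in via the kustomization as done for the NSC below:
cat > nse.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nse-kernel
  labels:
    app: nse-kernel
spec:
  selector:
    matchLabels:
      app: nse-kernel
  template:
    metadata:
      labels:
        app: nse-kernel
    spec:
      containers:
        - name: nse
          # Placeholder image and tag: pin these to the NSE image and version you actually use.
          image: ghcr.io/networkservicemesh/cmd-nse-icmp-responder:latest
          env:
            # Assumed service name, matching the one requested by the NSC patch below.
            - name: NSM_SERVICE_NAMES
              value: [email protected]
            # Assumed prefix, matching the 172.16.1.2/172.16.1.3 addresses pinged below.
            - name: NSM_CIDR_PREFIX
              value: 172.16.1.2/31
EOF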
Deploy NSE:
kubectl apply -k .
Find NSE pod by labels:
NSE=$(kubectl get pods -l app=nse-kernel -n ${NAMESPACE1} --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
[[ ! -z $NSE ]]
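No readiness wait is included for the NSE here; if the pod is still starting, you can add one analogous to the wait used for the NSC in the next step:
kubectl wait --for=condition=ready --timeout=5m pod -l app=nse-kernel -n ${NAMESPACE1}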
2. Prepare cluster1
Switch to cluster1:
export KUBECONFIG=$KUBECONFIG1
Create test namespace:
NAMESPACE2=($(kubectl create -f https://raw.githubusercontent.com/networkservicemesh/deployments-k8s/041ba2468fb8177f53926af0eab984850aa682c2/examples/interdomain/usecases/namespace.yaml)[0])
NAMESPACE2=${NAMESPACE2:10}
Create kustomization file:
cat > kustomization.yaml <<EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ${NAMESPACE2}
bases:
- https://github.com/networkservicemesh/deployments-k8s/apps/nsc-kernel?ref=041ba2468fb8177f53926af0eab984850aa682c2
patchesStrategicMerge:
- patch-nsc.yaml
EOF
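Recent kustomize releases deprecate the bases and patchesStrategicMerge fields; if your kubectl version complains about them, an equivalent kustomization using the newer resources and patches fields should look roughly like this:
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ${NAMESPACE2}
resources:
- https://github.com/networkservicemesh/deployments-k8s/apps/nsc-kernel?ref=041ba2468fb8177f53926af0eab984850aa682c2
patches:
- path: patch-nsc.yaml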
Create NSC patch:
cat > patch-nsc.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nsc-kernel
spec:
template:
spec:
containers:
- name: nsc
env:
- name: NSM_NETWORK_SERVICES
value: kernel://[email protected]/nsm-1
EOF
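The NSM_NETWORK_SERVICES value tells the client what to request. A rough, assumed reading of this particular URL:
# kernel://          request a kernel interface on the client side
# my-networkservice  name of the network service to connect to
# @my.cluster3       domain suffix: the service is resolved via the floating registry (cluster3)
# /nsm-1             name of the network interface created in the NSC pod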
Deploy NSC:
kubectl apply -k .
Wait for the NSC pod to be ready:
kubectl wait --for=condition=ready --timeout=5m pod -l app=nsc-kernel -n ${NAMESPACE2}
Find NSC pod by labels:
NSC=$(kubectl get pods -l app=nsc-kernel -n ${NAMESPACE2} --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
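Optionally, confirm that the requested kernel interface has appeared in the client pod (this assumes the NSC image includes the ip tool):
kubectl exec ${NSC} -n ${NAMESPACE2} -- ip addr show nsm-1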
3. Check connections
Switch to cluster1:
export KUBECONFIG=$KUBECONFIG1
Ping from NSC to NSE:
kubectl exec ${NSC} -n ${NAMESPACE2} -- ping -c 4 172.16.1.2
Switch to cluster2:
export KUBECONFIG=$KUBECONFIG2
Ping from NSE to NSC:
kubectl exec ${NSE} -n ${NAMESPACE1} -- ping -c 4 172.16.1.3
- Clean up resources in cluster2:
export KUBECONFIG=$KUBECONFIG2
kubectl delete ns ${NAMESPACE1}
- Clean up resources in cluster1:
export KUBECONFIG=$KUBECONFIG1
kubectl delete ns ${NAMESPACE2}