VCD: 10.3.2
CSE: 3.1.4
Container UI Plugin: 3.3.0
Native K8s template: ubuntu-16.04_k8-1.21_weave-2.8.1 (Revision 1)
Number of master nodes: 1
Number of worker nodes: 2
The native cluster deployment with the above configuration completes in the UI with the status "CREATE:SUCCEEDED". However, when applications are deployed, their Pods remain stuck in the Pending state.
root@mstr-vyxs:/etc/docker# kubectl get nodes
NAME STATUS ROLES AGE VERSION
mstr-vyxs NotReady control-plane,master 43h v1.21.2
node-4art NotReady <none> 43h v1.21.2
node-lm57 NotReady <none> 43h v1.21.2
root@mstr-vyxs:/etc/docker#
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 7h1m (x2 over 7h1m) default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
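For reference, the taints called out in the FailedScheduling event can be listed directly with a read-only check (node names here match the `kubectl get nodes` output above):

```shell
# Show the taint keys on each node. The node.kubernetes.io/not-ready
# taint is applied automatically while no CNI is running and should
# clear on its own once the weave-net pods come up.
kubectl get nodes -o custom-columns='NODE:.metadata.name,TAINTS:.spec.taints[*].key'
```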
This appears to be caused by a failed weave-net (CNI) deployment: the weave-net pods cannot pull their images.
root@mstr-vyxs:/etc/docker# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-558bd4d5db-2fnqq 0/1 Pending 0 42h
kube-system coredns-558bd4d5db-l9jfg 0/1 Pending 0 42h
kube-system etcd-mstr-vyxs 1/1 Running 0 42h
kube-system kube-apiserver-mstr-vyxs 1/1 Running 0 42h
kube-system kube-controller-manager-mstr-vyxs 1/1 Running 0 42h
kube-system kube-proxy-67lt4 1/1 Running 0 42h
kube-system kube-proxy-f8kck 1/1 Running 0 42h
kube-system kube-proxy-m45zl 1/1 Running 0 42h
kube-system kube-scheduler-mstr-vyxs 1/1 Running 0 42h
kube-system weave-net-lcwlm 0/2 Init:ImagePullBackOff 0 42h
kube-system weave-net-qb56f 0/2 Init:ImagePullBackOff 0 3h53m
kube-system weave-net-xnm92 0/2 Init:ImagePullBackOff 0 42h
root@mstr-vyxs:/etc/docker#
# kubectl describe pod weave-net-qb56f -n kube-system
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Failed 42m (x39 over 3h23m) kubelet (combined from similar events): Failed to pull image "ghcr.io/weaveworks/launcher/weave-kube:2.8.1": rpc error: code = Unknown desc = error pulling image configuration: Get https://pkg-containers.githubusercontent.com/ghcr1/blobs/sha256:df29c0a4002c047fe35dab1cba959a4bed6f034ab9b95b14280ea7bb158cc111?se=2022-09-12T13%3A05%3A00Z&sig=%2BJ8FnvWz427tQbi%2FNykCIb9c0BXfgBus2PFI0qJf968%3D&sp=r&spr=https&sr=b&sv=2019-12-12: net/http: TLS handshake timeout
Normal Pulling 37m (x42 over 3h57m) kubelet Pulling image "ghcr.io/weaveworks/launcher/weave-kube:2.8.1"
Normal BackOff 2m36s (x979 over 3h57m) kubelet Back-off pulling image "ghcr.io/weaveworks/launcher/weave-kube:2.8.1"
A manual docker pull of the image fails as well. All master and worker nodes have outbound internet access, so it is unclear what is causing the TLS handshake timeout. Could someone please help resolve this?
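As a troubleshooting sketch (assuming openssl and ping are available on the nodes and outbound 443 is open): the handshake can be probed directly against the registry and the blob CDN, and a too-small path MTU — a common cause of "TLS handshake timeout" on overlay networks such as those behind VCD/NSX — can be checked with a non-fragmenting ping:

```shell
# 1. Try the TLS handshake against the registry and the blob CDN the
#    error message points at; a hang here reproduces the kubelet failure.
for host in ghcr.io pkg-containers.githubusercontent.com; do
  echo "--- $host"
  timeout 15 openssl s_client -connect "$host:443" -servername "$host" </dev/null 2>&1 \
    | grep -E 'Cipher|Verify return code' || echo "handshake failed or timed out"
done

# 2. Probe the path MTU: 1472 bytes of ICMP payload plus 28 bytes of
#    IP/ICMP headers equals a full 1500-byte frame. If this fails with
#    "message too long" but a smaller size (e.g. -s 1372) succeeds, large
#    TLS records are being dropped and the node/Docker MTU needs to be
#    lowered to match the overlay network.
ping -c 2 -M do -s 1472 ghcr.io
```

If the MTU probe fails, lowering the interface MTU on the nodes (for example `ip link set dev ens192 mtu 1400` — interface name and value are illustrative, check your overlay's overhead) is worth testing before making the change persistent.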
Reproduction steps
Expected behavior
The native cluster should be capable of hosting K8s applications.
Additional context
No response