-[Creating a namespace](#creating-a-namespace)
-[Managing objects in other namespaces](#managing-objects-in-other-namespaces)
-[Understanding the isolation provided by namespaces](#understanding-the-isolation-provided-by-namespaces)
-[Stopping and removing pods](#stopping-and-removing-pods)
-[Deleting a pod by name](#deleting-a-pod-by-name)
-[Deleting pods using label selectors](#deleting-pods-using-label-selectors)
-[Deleting pods by deleting the whole namespace](#deleting-pods-by-deleting-the-whole-namespace)
-[Deleting all pods in namespace, while keeping the namespace](#deleting-all-pods-in-namespace-while-keeping-the-namespace)
-[Delete almost all resources in namespace](#delete-almost-all-resources-in-namespace)

4.[Todo](#todo)
To wrap up this section about namespaces, let me explain what namespaces don't provide, at least not out of the box. Although namespaces allow you to isolate objects into distinct groups, which lets you operate only on those belonging to the specified namespace, they don't provide any kind of isolation of running objects. For example, you may think that when different users deploy pods across different namespaces, those pods are isolated from each other and can't communicate, but that's not necessarily the case. Whether namespaces provide network isolation depends on which networking solution is deployed with Kubernetes. When the solution doesn't provide inter-namespace network isolation, if a pod in namespace foo knows the IP address of a pod in namespace bar, nothing prevents it from sending traffic, such as HTTP requests, to the other pod.
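Cross-namespace reachability is easy to demonstrate. The sketch below is hypothetical (the pod names, namespaces, and port are made up) and assumes a cluster whose networking solution does not enforce inter-namespace isolation:

```bash
# Look up the IP of a pod running in namespace bar (hypothetical pod name)
kubectl get po some-pod -n bar -o jsonpath='{.status.podIP}'

# From a pod in namespace foo, send an HTTP request straight to that IP;
# without network isolation, nothing stops this traffic
kubectl exec -n foo other-pod -- curl -s http://<pod-ip-from-above>:8080
```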
### Stopping and removing pods
We have created a number of pods, which should all be running. If you have followed along from the start, you should have five pods in the `default` namespace and one in `custom-namespace`. We are going to stop them all now, because we don't need them anymore.
#### Deleting a pod by name
Let's first delete the `kubia-gpu` pod by name:
`kubectl delete po kubia-gpu`
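By default, `kubectl delete` gives the pod's containers up to 30 seconds to shut down gracefully. If a pod is stuck terminating, you can force its removal; this is a sketch using standard kubectl flags:

```bash
# Skip the graceful-shutdown period and remove the pod object immediately
kubectl delete po kubia-gpu --grace-period=0 --force
```

Use this sparingly: a forced deletion removes the API object without waiting for the container processes to actually exit.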
#### Deleting pods using label selectors
Instead of specifying each pod to delete by name, you'll now use what you've learned about label selectors to stop both the `kubia-manual` and the `kubia-manual-v2` pods. Both pods include the `creation_method=manual` label, so you can delete them by using a label selector:
`kubectl delete po -l creation_method=manual`
> pod "kubia-manual" deleted
> pod "kubia-manual-v2" deleted
In the earlier microservices example, where you had tens (or possibly hundreds) of pods, you could, for instance, delete all canary pods at once by specifying the `rel=canary` label selector:
`kubectl delete po -l rel=canary`
#### Deleting pods by deleting the whole namespace
Okay, back to your real pods. What about the pod in `custom-namespace`? We no longer need either the pods in that namespace or the namespace itself. You can delete the whole namespace using the following command; all pods inside that namespace will be deleted automatically.
`kubectl delete ns custom-namespace`
> namespace "custom-namespace" deleted
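Namespace deletion is asynchronous: the namespace enters a `Terminating` phase while its contents are cleaned up. You can watch this happen; the commands below assume the `custom-namespace` from above:

```bash
# Shortly after deletion, the namespace may still be listed as Terminating
kubectl get ns custom-namespace

# Once it is fully gone, this returns a NotFound error,
# and the namespace's pods are gone with it
kubectl get po -n custom-namespace
```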
#### Deleting all pods in namespace, while keeping the namespace
Suppose you want to keep your namespace but delete all the pods in it; this is the approach to follow. We have now cleaned up almost everything, but some pods are still running if you ran the `kubectl run` command earlier.
This time, instead of deleting a specific pod, tell Kubernetes to delete all pods in the current namespace by using the `--all` option:
`kubectl delete po --all`
```bash
pod "kubia-pjxrs" deleted
pod "kubia-xvfxp" deleted
pod "kubia-zb95q" deleted
```
+
857
+
Now, double-check that no pods were left running:
`kubectl get po`
```bash
kubia-5gknm   1/1   Running   0   48s
kubia-h62k7   1/1   Running   0   48s
kubia-x4nsb   1/1   Running   0   48s
```
Wait, what?! All pods are terminating, but new pods that weren't there before have appeared. No matter how many times you delete all pods, a new pod called kubia-something will emerge.
You may remember you created your first pod with the `kubectl run` command. I mentioned that this doesn’t create a pod directly, but instead creates a `ReplicationController`, which then creates the pod. As soon as you delete a pod created by the ReplicationController, it immediately creates a new one. To delete the pod, you also need to **delete the ReplicationController**.
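A more targeted alternative to the namespace-wide cleanup below is to delete just that ReplicationController; by default the deletion cascades to the pods it manages. This sketch assumes the controller is named `kubia` (you can confirm with `kubectl get rc`):

```bash
# Deleting the ReplicationController also deletes its pods (cascading delete)
kubectl delete rc kubia

# On recent kubectl versions, you can instead delete only the controller
# and leave its pods running (orphaned):
kubectl delete rc kubia --cascade=orphan
```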
#### Delete almost all resources in namespace
You can delete the ReplicationController and the pods, as well as all the Services you've created, by deleting all resources in the current namespace with a single command:
`kubectl delete all --all`
```bash
pod "kubia-5gknm" deleted
pod "kubia-h62k7" deleted
pod "kubia-x4nsb" deleted
replicationcontroller "kubia" deleted
service "kubernetes" deleted
service "kubia-http" deleted
```
The first `all` in the command specifies that you're deleting resources of all types, and the `--all` option specifies that you're deleting all resource instances instead of specifying them by name (you already used this option when you ran the previous delete command).
As it deletes resources, kubectl will print the name of every resource it deletes. In the list, you should see the `kubia` ReplicationController and the `kubia-http` Service you created before.
**Note**: The `kubectl delete all --all` command also deletes the `kubernetes` Service, but it should be recreated automatically in a few moments.