# Kubernetes Deployment On Bare-metal Ubuntu Nodes
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Starting a Cluster](#starting-a-cluster)
  - [Download binaries](#download-binaries)
  - [Configure and start the kubernetes cluster](#configure-and-start-the-kubernetes-cluster)
  - [Test it out](#test-it-out)
  - [Deploy addons](#deploy-addons)
  - [Trouble shooting](#trouble-shooting)
## Introduction
This document describes how to deploy kubernetes on ubuntu nodes, with 1 master and 3 nodes involved
in the given examples. You can scale to **any number of nodes** by changing some settings with ease.
The original idea was heavily inspired by @jainvipin's ubuntu single node
work, which has been merged into this document.
[Cloud team from Zhejiang University](https://github.com/ZJU-SEL) will maintain this work.
## Prerequisites
1. The nodes have docker version 1.2+ and bridge-utils installed to manipulate the linux bridge.

2. All machines can communicate with each other; there is no need to connect to the Internet (you should use a private docker registry in this case).

3. This guide is tested OK on Ubuntu 14.04 LTS 64bit server, but it does not work with Ubuntu 15, which uses systemd instead of upstart. We are working on fixing this.

4. Dependencies of this guide: etcd-2.0.12, flannel-0.4.0, k8s-1.0.3; it may work with higher versions.

5. All the remote servers can be logged in to via ssh without a password, by using key authentication (see the sketch after this list).
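If key-based ssh login is not already set up, a minimal sketch of doing it from the machine where you will run the scripts is shown below; the user name and host are hypothetical placeholders, so repeat the copy step for every master and node machine with your own addresses:

```console
$ ssh-keygen -t rsa                 # accept the defaults; creates ~/.ssh/id_rsa and id_rsa.pub
$ ssh-copy-id vcap@10.10.103.250    # copy the public key to one remote machine
```

After this, `ssh vcap@10.10.103.250` should log in without prompting for a password.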
### Starting a Cluster
#### Download binaries
First clone the kubernetes github repo: `$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git`

Then download all the needed binaries into the given directory (cluster/ubuntu/binaries):

```console
$ cd kubernetes/cluster/ubuntu
$ ./build.sh
```

You can customize your etcd version, flannel version and k8s version by changing the corresponding variables
`ETCD_VERSION`, `FLANNEL_VERSION` and `K8S_VERSION` in build.sh; by default the etcd version is 2.0.12,
the flannel version is 0.4.0 and the k8s version is 1.0.3.

Make sure that the involved binaries are located properly in the binaries/master
or binaries/minion directory before you go ahead to the next step.

Note that we use flannel here to set up the overlay network, yet it's optional. Actually you can build up a k8s
cluster natively, or use flannel, Open vSwitch or any other SDN tool you like.
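A quick way to check that the binaries landed where the scripts expect them (a sketch, assuming you are still in the `cluster/ubuntu` directory; the exact file list may vary with the versions you chose):

```console
$ ls binaries/master binaries/minion
```

You should see the `kube-*`, `etcd`/`etcdctl` and `flannel` binaries listed across those two directories.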
#### Configure and start the kubernetes cluster

The cluster is configured in `cluster/ubuntu/config-default.sh` (a sample sketch follows the descriptions below).

The first variable `nodes` defines all your cluster nodes, the MASTER node comes first and is
separated from the others with a blank space, like `<user_1@ip_1> <user_2@ip_2> <user_3@ip_3>`

Then the `role` variable defines the role of the above machines in the same order: "ai" stands for a machine
that acts as both master and node, "a" stands for master, "i" stands for node.

The `NUM_MINIONS` variable defines the total number of nodes.

The `SERVICE_CLUSTER_IP_RANGE` variable defines the kubernetes service IP range. Please make sure
that you do have a valid private ip range defined here, because some IaaS providers may reserve private ips.
You can use one of the three private network ranges below, according to rfc1918. Also, you'd better not choose one
that conflicts with your own private network range.

    10.0.0.0 - 10.255.255.255 (10/8 prefix)

    172.16.0.0 - 172.31.255.255 (172.16/12 prefix)

    192.168.0.0 - 192.168.255.255 (192.168/16 prefix)

The `FLANNEL_NET` variable defines the IP range used for the flannel overlay network,
and should not conflict with the above `SERVICE_CLUSTER_IP_RANGE`.
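For illustration, here is a sketch of what the relevant variables in `cluster/ubuntu/config-default.sh` might look like; the user names, IPs and ranges are made-up example values, not prescriptions:

```sh
# Example only: three machines, the first one acts as both master and node ("ai").
export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"
export role="ai i i"
export NUM_MINIONS=${NUM_MINIONS:-3}

# A private range for services (must not be reserved by your IaaS provider).
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24

# Flannel overlay range; must not overlap SERVICE_CLUSTER_IP_RANGE.
export FLANNEL_NET=172.16.0.0/16
```

With `role="ai i i"`, the first machine runs as both master and node, which gives the 1 master and 3 nodes described in the introduction.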
After all the above variables have been set correctly, we can use the following command in the cluster/ directory to bring up the whole cluster.
`$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh`
The scripts automatically scp binaries and config files to all the machines and start the k8s service on them.
The only thing you need to do is to type the sudo password when prompted.
```console
Deploying minion on machine 10.10.103.223
...
[sudo] password to copy files and start minion:
```
If all things go right, you will see the below message from console indicating the k8s cluster is up.

```console
Cluster validation succeeded
```
#### Test it out
You can use the `kubectl` command to check if the newly created k8s cluster is working correctly.
The `kubectl` binary is under the `cluster/ubuntu/binaries` directory.
You can make it available via PATH, then you can use the below command smoothly.

For example, use `$ kubectl get nodes` to see if all of your nodes are ready.
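One way to do that, sketched here assuming you are in the root of the cloned repo (adjust the path if you are elsewhere):

```console
$ export PATH=$PATH:$(pwd)/cluster/ubuntu/binaries
$ kubectl get nodes
```

It may take a little while after `kube-up.sh` finishes before every node reports `Ready`.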
Also you can run Kubernetes [guest-example](../../examples/guestbook/) to build a redis backend cluster on the k8s.
#### Deploy addons
Assuming you have a starting cluster now, this section will tell you how to deploy addons like DNS
and UI onto the existing cluster.
DNS is configured in cluster/ubuntu/config-default.sh.
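A sketch of the DNS-related variables (the variable names other than `DNS_REPLICAS` and the example values are assumptions; adjust them to your cluster):

```sh
ENABLE_CLUSTER_DNS=true
# Must be an address inside SERVICE_CLUSTER_IP_RANGE.
DNS_SERVER_IP="192.168.3.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
```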
The `DNS_SERVER_IP` defines the ip of the dns server, which must be within the `SERVICE_CLUSTER_IP_RANGE`.

The `DNS_REPLICAS` describes how many dns pods will run in the cluster.
Further, if you want to deploy the UI addon, you should also modify the configuration file, as sketched below.
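A minimal sketch of that setting (the exact variable name here is an assumption, so check your copy of `cluster/ubuntu/config-default.sh`):

```sh
# Assumed flag name -- turns on deployment of the kube-ui addon.
ENABLE_CLUSTER_UI=true
```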
After all the above variables have been set, just type the following command.
```console
$ cd cluster/ubuntu
$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh
```
After some time, you can use `$ kubectl get pods --namespace=kube-system` to see that the DNS and UI pods are running in the cluster.
#### Ongoing
We are working on these features which we'd like to let everybody know:

1. Run kubernetes binaries in Docker using [kube-in-docker](https://github.com/ZJU-SEL/kube-in-docker/tree/baremetal-kube), to eliminate OS-distro differences.

2. Tearing-down scripts: clear and re-create the whole stack with one click.
#### Trouble shooting

Generally, what this approach does is quite simple:
1. Download and copy binaries and configuration files to proper directories on every node
2. Configure `etcd` using IPs based on input from the user

3. Create and start the flannel network

So if you encounter a problem, **check etcd configuration first**.
Please try:
1. Check `/var/log/upstart/etcd.log` for suspicious etcd logs

2. Check `/etc/default/etcd`, as we do not have much input validation; a correct config should look like:
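A hedged sketch of what `/etc/default/etcd` might contain on the master node (the member names and placeholder addresses are illustrative assumptions, and the flag list is not exhaustive):

```sh
ETCD_OPTS="-name infra1 \
 -initial-advertise-peer-urls http://<ip_of_master>:2380 \
 -listen-peer-urls http://<ip_of_master>:2380 \
 -initial-cluster infra1=http://<ip_of_master>:2380,infra2=http://<ip_of_node1>:2380,infra3=http://<ip_of_node2>:2380 \
 -initial-cluster-state new"
```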