
Commit a9b67d5: Add markdown CI (kubernetes-sigs#5380)
Authored by Miouge1, committed by k8s-ci-robot
1 parent b1fbead


41 files changed: +572 −512 lines

Diff for: .gitlab-ci/lint.yml (+8)

@@ -47,3 +47,11 @@ tox-inventory-builder:
     - cd contrib/inventory_builder && tox
   when: manual
   except: ['triggers', 'master']
+
+markdownlint:
+  stage: unit-tests
+  image: node
+  before_script:
+    - npm install -g markdownlint-cli
+  script:
+    - markdownlint README.md docs --ignore docs/_sidebar.md

Diff for: .markdownlint.yaml (+2)

@@ -0,0 +1,2 @@
+---
+MD013: false
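MD013 is markdownlint's line-length rule; the two-line config above turns it off repository-wide. As a rough sketch of the kind of check being disabled — the function name and the 80-character default below are illustrative, not markdownlint's actual code:

```python
# Rough sketch of a line-length check like markdownlint's MD013 rule.
# The 80-character default and the function name are illustrative only.
def md013_violations(text: str, limit: int = 80) -> list[int]:
    """Return the 1-based numbers of lines longer than `limit`."""
    return [
        n
        for n, line in enumerate(text.splitlines(), start=1)
        if len(line) > limit
    ]
```

For example, `md013_violations("short\n" + "x" * 100)` returns `[2]`; with `MD013: false`, the CI job skips this check and enforces only the remaining rules.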

Diff for: README.md (+129 −130)

Large diffs are not rendered by default.

Diff for: docs/ansible.md (+20 −19)

@@ -1,9 +1,7 @@
-Ansible variables
-===============
+# Ansible variables
 
+## Inventory
 
-Inventory
--------------
 The inventory is composed of 3 groups:
 
 * **kube-node** : list of kubernetes nodes where the pods will run.
@@ -14,7 +12,7 @@ Note: do not modify the children of _k8s-cluster_, like putting
 the _etcd_ group into the _k8s-cluster_, unless you are certain
 to do that and you have it fully contained in the latter:
 
-```
+```ShellSession
 k8s-cluster ⊂ etcd => kube-node ∩ etcd = etcd
 ```
 
@@ -32,7 +30,7 @@ There are also two special groups:
 
 Below is a complete inventory example:
 
-```
+```ini
 ## Configure 'ip' variable to bind kubernetes services on a
 ## different ip than the default iface
 node1 ansible_host=95.54.0.12 ip=10.3.0.1
@@ -63,8 +61,7 @@ kube-node
 kube-master
 ```
 
-Group vars and overriding variables precedence
-----------------------------------------------
+## Group vars and overriding variables precedence
 
 The group variables to control main deployment options are located in the directory ``inventory/sample/group_vars``.
 Optional variables are located in the `inventory/sample/group_vars/all.yml`.
@@ -73,7 +70,7 @@ Mandatory variables that are common for at least one role (or a node group) can
 There are also role vars for docker, kubernetes preinstall and master roles.
 According to the [ansible docs](http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
 those cannot be overridden from the group vars. In order to override, one should use
-the `-e ` runtime flags (most simple way) or other layers described in the docs.
+the `-e` runtime flags (most simple way) or other layers described in the docs.
 
 Kubespray uses only a few layers to override things (or expect them to
 be overridden for roles):
@@ -97,8 +94,8 @@ block vars (only for tasks in block) | Kubespray overrides for internal roles' l
 task vars (only for the task) | Unused for roles, but only for helper scripts
 **extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml``
 
-Ansible tags
-------------
+## Ansible tags
+
 The following tags are defined in playbooks:
 
 | Tag name | Used for
@@ -145,36 +142,40 @@ Note: Use the ``bash scripts/gen_tags.sh`` command to generate a list of all
 tags found in the codebase. New tags will be listed with the empty "Used for"
 field.
 
-Example commands
-----------------
+## Example commands
+
 Example command to filter and apply only DNS configuration tasks and skip
 everything else related to host OS configuration and downloading images of containers:
 
-```
+```ShellSession
 ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
 ```
+
 And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:
-```
+
+```ShellSession
 ansible-playbook -i inventory/sample/hosts.ini -e dns_mode='none' cluster.yml --tags resolvconf
 ```
+
 And this prepares all container images locally (at the ansible runner node) without installing
 or upgrading related stuff or trying to upload container to K8s cluster nodes:
-```
+
+```ShellSession
 ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
 -e download_run_once=true -e download_localhost=true \
 --tags download --skip-tags upload,upgrade
 ```
 
 Note: use `--tags` and `--skip-tags` wise and only if you're 100% sure what you're doing.
 
-Bastion host
---------------
+## Bastion host
+
 If you prefer to not make your nodes publicly accessible (nodes with private IPs only),
 you can use a so called *bastion* host to connect to your nodes. To specify and use a bastion,
 simply add a line to your inventory, where you have to replace x.x.x.x with the public IP of the
 bastion host.
 
-```
+```ShellSession
 [bastion]
 bastion ansible_host=x.x.x.x
 ```
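The inventory semantics this file documents (kube-node, kube-master, etcd, with k8s-cluster defined by its children) can be illustrated with plain Python sets; the host names below are made up, loosely following the doc's sample inventory:

```python
# Hypothetical hosts, loosely following the doc's sample inventory.
kube_node = {"node1", "node2", "node3", "node4", "node5", "node6"}
kube_master = {"node1", "node2"}
etcd = {"node1", "node2", "node3"}

# k8s-cluster is not listed host-by-host; it is the union of its children.
k8s_cluster = kube_node | kube_master

# In this layout every master and etcd member is also a kube-node,
# so the cluster group collapses to the node list.
assert k8s_cluster == kube_node
assert etcd <= k8s_cluster
```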

Diff for: docs/arch.md (+2 −1)

@@ -1,6 +1,7 @@
-## Architecture compatibility
+# Architecture compatibility
 
 The following table shows the impact of the CPU architecture on compatible features:
+
 - amd64: Cluster using only x86/amd64 CPUs
 - arm64: Cluster using only arm64 CPUs
 - amd64 + arm64: Cluster with a mix of x86/amd64 and arm64 CPUs

Diff for: docs/atomic.md (+6 −7)

@@ -1,23 +1,22 @@
-Atomic host bootstrap
-=====================
+# Atomic host bootstrap
 
 Atomic host testing has been done with the network plugin flannel. Change the inventory var `kube_network_plugin: flannel`.
 
 Note: Flannel is the only plugin that has currently been tested with atomic
 
-### Vagrant
+## Vagrant
 
-* For bootstrapping with Vagrant, use box centos/atomic-host or fedora/atomic-host
+* For bootstrapping with Vagrant, use box centos/atomic-host or fedora/atomic-host
 * Update VagrantFile variable `local_release_dir` to `/var/vagrant/temp`.
 * Update `vm_memory = 2048` and `vm_cpus = 2`
 * Networking on vagrant hosts has to be brought up manually once they are booted.
 
-```
+```ShellSession
 vagrant ssh
 sudo /sbin/ifup enp0s8
 ```
 
-* For users of vagrant-libvirt download centos/atomic-host qcow2 format from https://wiki.centos.org/SpecialInterestGroup/Atomic/Download/
-* For users of vagrant-libvirt download fedora/atomic-host qcow2 format from https://dl.fedoraproject.org/pub/alt/atomic/stable/
+* For users of vagrant-libvirt download centos/atomic-host qcow2 format from <https://wiki.centos.org/SpecialInterestGroup/Atomic/Download/>
+* For users of vagrant-libvirt download fedora/atomic-host qcow2 format from <https://dl.fedoraproject.org/pub/alt/atomic/stable/>
 
 Then you can proceed to [cluster deployment](#run-deployment)

Diff for: docs/aws.md (+9 −6)

@@ -1,5 +1,4 @@
-AWS
-===============
+# AWS
 
 To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`. Refer to the [Kubespray Configuration](#kubespray-configuration) for customizing the provider.
 
@@ -13,11 +12,13 @@ The next step is to make sure the hostnames in your `inventory` file are identic
 
 You can now create your cluster!
 
-### Dynamic Inventory ###
+## Dynamic Inventory
+
 There is also a dynamic inventory script for AWS that can be used if desired. However, be aware that it makes some certain assumptions about how you'll create your inventory. It also does not handle all use cases and groups that we may use as part of more advanced deployments. Additions welcome.
 
 This will produce an inventory that is passed into Ansible that looks like the following:
-```
+
+```json
 {
 "_meta": {
 "hostvars": {
@@ -48,15 +49,18 @@ This will produce an inventory that is passed into Ansible that looks like the f
 ```
 
 Guide:
+
 - Create instances in AWS as needed.
 - Either during or after creation, add tags to the instances with a key of `kubespray-role` and a value of `kube-master`, `etcd`, or `kube-node`. You can also share roles like `kube-master, etcd`
 - Copy the `kubespray-aws-inventory.py` script from `kubespray/contrib/aws_inventory` to the `kubespray/inventory` directory.
 - Set the following AWS credentials and info as environment variables in your terminal:
-```
+
+```ShellSession
 export AWS_ACCESS_KEY_ID="xxxxx"
 export AWS_SECRET_ACCESS_KEY="yyyyy"
 export REGION="us-east-2"
 ```
+
 - We will now create our cluster. There will be either one or two small changes. The first is that we will specify `-i inventory/kubespray-aws-inventory.py` as our inventory script. The other is conditional. If your AWS instances are public facing, you can set the `VPC_VISIBILITY` variable to `public` and that will result in public IP and DNS names being passed into the inventory. This causes your cluster.yml command to look like `VPC_VISIBILITY="public" ansible-playbook ... cluster.yml`
 
 ## Kubespray configuration
@@ -75,4 +79,3 @@ aws_kubernetes_cluster_id|string|KubernetesClusterID is the cluster id we'll use
 aws_disable_security_group_ingress|bool|The aws provider creates an inbound rule per load balancer on the node security group. However, this can run into the AWS security group rule limit of 50 if many LoadBalancers are created. This flag disables the automatic ingress creation. It requires that the user has setup a rule that allows inbound traffic on kubelet ports from the local VPC subnet (so load balancers can access it). E.g. 10.82.0.0/16 30000-32000.
 aws_elb_security_group|string|Only in Kubelet version >= 1.7 : AWS has a hard limit of 500 security groups. For large clusters creating a security group for each ELB can cause the max number of security groups to be reached. If this is set instead of creating a new Security group for each ELB this security group will be used instead.
 aws_disable_strict_zone_check|bool|During the instantiation of an new AWS cloud provider, the detected region is validated against a known set of regions. In a non-standard, AWS like environment (e.g. Eucalyptus), this check may be undesirable. Setting this to true will disable the check and provide a warning that the check was skipped. Please note that this is an experimental feature and work-in-progress for the moment.
-
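The dynamic-inventory output documented above nests per-host variables under `_meta.hostvars`. As a minimal sketch of reading such a document — the sample below is invented in the same shape; the real script emits many more groups and hostvars:

```python
import json

# Invented sample shaped like the AWS dynamic inventory output.
inventory_json = """
{
  "_meta": {
    "hostvars": {
      "ip-10-0-1-10.us-east-2.compute.internal": {"ansible_host": "10.0.1.10"}
    }
  },
  "kube-master": ["ip-10-0-1-10.us-east-2.compute.internal"]
}
"""

inventory = json.loads(inventory_json)
hostvars = inventory["_meta"]["hostvars"]

# Map each kube-master group member to its connection address.
masters = {h: hostvars[h]["ansible_host"] for h in inventory["kube-master"]}
print(masters)  # {'ip-10-0-1-10.us-east-2.compute.internal': '10.0.1.10'}
```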

Diff for: docs/azure.md (+30 −22)

@@ -1,46 +1,50 @@
-Azure
-===============
+# Azure
 
 To deploy Kubernetes on [Azure](https://azure.microsoft.com) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'azure'`.
 
 All your instances are required to run in a resource group and a routing table has to be attached to the subnet your instances are in.
 
 Not all features are supported yet though, for a list of the current status have a look [here](https://github.com/colemickens/azure-kubernetes-status)
 
-### Parameters
+## Parameters
 
 Before creating the instances you must first set the `azure_` variables in the `group_vars/all.yml` file.
 
-All of the values can be retrieved using the azure cli tool which can be downloaded here: https://docs.microsoft.com/en-gb/azure/xplat-cli-install
+All of the values can be retrieved using the azure cli tool which can be downloaded here: <https://docs.microsoft.com/en-gb/azure/xplat-cli-install>
 After installation you have to run `azure login` to get access to your account.
 
+### azure\_tenant\_id + azure\_subscription\_id
 
-#### azure\_tenant\_id + azure\_subscription\_id
 run `azure account show` to retrieve your subscription id and tenant id:
 `azure_tenant_id` -> Tenant ID field
 `azure_subscription_id` -> ID field
 
+### azure\_location
 
-#### azure\_location
 The region your instances are located, can be something like `westeurope` or `westcentralus`. A full list of region names can be retrieved via `azure location list`
 
+### azure\_resource\_group
 
-#### azure\_resource\_group
 The name of the resource group your instances are in, can be retrieved via `azure group list`
 
-#### azure\_vnet\_name
+### azure\_vnet\_name
+
 The name of the virtual network your instances are in, can be retrieved via `azure network vnet list`
 
-#### azure\_subnet\_name
+### azure\_subnet\_name
+
 The name of the subnet your instances are in, can be retrieved via `azure network vnet subnet list --resource-group RESOURCE_GROUP --vnet-name VNET_NAME`
 
-#### azure\_security\_group\_name
+### azure\_security\_group\_name
+
 The name of the network security group your instances are in, can be retrieved via `azure network nsg list`
 
-#### azure\_aad\_client\_id + azure\_aad\_client\_secret
+### azure\_aad\_client\_id + azure\_aad\_client\_secret
+
 These will have to be generated first:
+
 - Create an Azure AD Application with:
-`azure ad app create --display-name kubernetes --identifier-uris http://kubernetes --homepage http://example.com --password CLIENT_SECRET`
+  `azure ad app create --display-name kubernetes --identifier-uris http://kubernetes --homepage http://example.com --password CLIENT_SECRET`
 display name, identifier-uri, homepage and the password can be chosen
 Note the AppId in the output.
 - Create Service principal for the application with:
@@ -51,24 +55,28 @@ This is the AppId from the last command
 
 azure\_aad\_client\_id must be set to the AppId, azure\_aad\_client\_secret is your chosen secret.
 
-#### azure\_loadbalancer\_sku
+### azure\_loadbalancer\_sku
+
 Sku of Load Balancer and Public IP. Candidate values are: basic and standard.
 
-#### azure\_exclude\_master\_from\_standard\_lb
+### azure\_exclude\_master\_from\_standard\_lb
+
 azure\_exclude\_master\_from\_standard\_lb excludes master nodes from `standard` load balancer.
 
-#### azure\_disable\_outbound\_snat
+### azure\_disable\_outbound\_snat
+
 azure\_disable\_outbound\_snat disables the outbound SNAT for public load balancer rules. It should only be set when azure\_exclude\_master\_from\_standard\_lb is `standard`.
-
-#### azure\_primary\_availability\_set\_name
-(Optional) The name of the availability set that should be used as the load balancer backend .If this is set, the Azure
-cloudprovider will only add nodes from that availability set to the load balancer backend pool. If this is not set, and
-multiple agent pools (availability sets) are used, then the cloudprovider will try to add all nodes to a single backend
+
+### azure\_primary\_availability\_set\_name
+
+(Optional) The name of the availability set that should be used as the load balancer backend .If this is set, the Azure
+cloudprovider will only add nodes from that availability set to the load balancer backend pool. If this is not set, and
+multiple agent pools (availability sets) are used, then the cloudprovider will try to add all nodes to a single backend
 pool which is forbidden. In other words, if you use multiple agent pools (availability sets), you MUST set this field.
 
-#### azure\_use\_instance\_metadata
-Use instance metadata service where possible
+### azure\_use\_instance\_metadata
 
+Use instance metadata service where possible
 
 ## Provisioning Azure with Resource Group Templates
