The cluster configuration file can be generated with the `clusterctl config cluster` command. This command uses the template file and replaces the values surrounded by `${}` with environment variables. You have to set all required environment variables in advance. The following sections explain in more detail what should be configured.
Note: You can also use the template file directly, replacing the values manually.
Note: By default the command creates a highly available control plane with the internal OpenStack cloud provider. If you wish to create a highly available control plane with the external OpenStack cloud provider, or a single control plane without a load balancer, use the `external-cloud-provider` or `without-lb` flavor respectively. For example:
```bash
# Using the 'external-cloud-provider' flavor
clusterctl config cluster capi-quickstart \
  --flavor external-cloud-provider \
  --kubernetes-version v1.21.3 \
  --control-plane-machine-count=3 \
  --worker-machine-count=1 \
  > capi-quickstart.yaml

# Using the 'without-lb' flavor
clusterctl config cluster capi-quickstart \
  --flavor without-lb \
  --kubernetes-version v1.21.3 \
  --control-plane-machine-count=1 \
  --worker-machine-count=1 \
  > capi-quickstart.yaml
```
We currently depend on an up-to-date version of cloud-init; otherwise the choice of operating system is yours. The kubeadm bootstrap provider we're using also depends on some pre-installed software like a container runtime, kubelet, kubeadm, etc. For an example of how to build such an image, take a look at image-builder (openstack).
The image can be referenced by exposing it as the environment variable `OPENSTACK_IMAGE_NAME`.
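For instance, a minimal sketch, assuming an image built with image-builder (the image name below is a placeholder; use the name of your own uploaded image):

```bash
# The image name is an assumption; substitute the name of your uploaded image.
export OPENSTACK_IMAGE_NAME=ubuntu-2004-kube-v1.21.3
```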
An SSH key pair is required. You can create one using:
```bash
openstack keypair create [--public-key <file> | --private-key <file>] <name>
```
The key pair name must be exposed as the environment variable `OPENSTACK_SSH_KEY_NAME`.
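For example, a sketch that uploads an existing public key and exposes its name (the key pair name `capi-keypair` and the key path are placeholders):

```bash
# Upload an existing public key under a placeholder name, then expose it.
openstack keypair create --public-key ~/.ssh/id_rsa.pub capi-keypair
export OPENSTACK_SSH_KEY_NAME=capi-keypair
```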
If you want to log in to each machine via SSH, you can access the nodes through the bastion host. Otherwise you have to configure security groups. If `spec.managedSecurityGroups` of `OpenStackCluster` is set to true, two security groups will be created and added to the instances: one is `k8s-cluster-${NAMESPACE}-${CLUSTER_NAME}-secgroup-controlplane`, the other is `k8s-cluster-${NAMESPACE}-${CLUSTER_NAME}-secgroup-worker`. These security groups contain only the rules for the ports listed in kubeadm's "Check required ports", so the nodes cannot be logged in to via SSH by default. To allow SSH access, add a pre-existing security group that opens the SSH port to the `OpenStackMachineTemplate` spec. Here is an example:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: OpenStackMachineTemplate
metadata:
  name: ${CLUSTER_NAME}-control-plane
spec:
  template:
    spec:
      securityGroups:
        - name: allow-ssh
```
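The referenced security group must already exist. A minimal sketch of how it could be created (the group name `allow-ssh` matches the example above and is otherwise arbitrary):

```bash
# Create a security group and add an ingress rule allowing SSH (22/tcp).
openstack security group create allow-ssh
openstack security group rule create --protocol tcp --dst-port 22 allow-ssh
```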
The `env.rc` script sets the environment variables related to credentials:

```bash
source env.rc <path/to/clouds.yaml> <cloud>
```
The following variables are set:

| Variable | Meaning |
|----------|---------|
| `OPENSTACK_CLOUD` | The cloud name which is used as the second argument |
| `OPENSTACK_CLOUD_YAML_B64` | The secret used by Cluster API Provider OpenStack to access OpenStack |
| `OPENSTACK_CLOUD_PROVIDER_CONF_B64` | The content of `cloud.conf` which is used by the OpenStack cloud provider |
| `OPENSTACK_CLOUD_CACERT_B64` | (Optional) The content of your custom CA file, which can be specified in your `clouds.yaml` by `ca-file` |
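After sourcing the script, a quick way to verify that the variables were populated:

```bash
# List the OpenStack-related variables exported by env.rc.
env | grep ^OPENSTACK_
```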
Note: Only the external cloud provider supports Application Credentials.
Note: You need to set the `clusterctl.cluster.x-k8s.io/move` label on the secret created from `OPENSTACK_CLOUD_YAML_B64` in order to successfully move objects from the bootstrap cluster to the target cluster. See bug 626 for further information.
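A sketch of how the label could be applied (the secret name `${CLUSTER_NAME}-cloud-config` is an assumption; use the name of the secret in your generated manifest):

```bash
# Label the cloud-config secret so that `clusterctl move` includes it.
kubectl label secret ${CLUSTER_NAME}-cloud-config clusterctl.cluster.x-k8s.io/move=""
```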
The availability zone names must be exposed as the environment variable `OPENSTACK_FAILURE_DOMAIN`.
The DNS servers must be exposed as the environment variable `OPENSTACK_DNS_NAMESERVERS`.
The flavors for the control plane and worker node machines must be exposed as the environment variables `OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR` and `OPENSTACK_NODE_MACHINE_FLAVOR` respectively.
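Putting these together, a sketch with placeholder values (availability zone, DNS server, and flavor names all depend on your cloud):

```bash
# All values below are placeholders; adjust them for your OpenStack cloud.
export OPENSTACK_FAILURE_DOMAIN=nova
export OPENSTACK_DNS_NAMESERVERS=8.8.8.8
export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=m1.medium
export OPENSTACK_NODE_MACHINE_FLAVOR=m1.small
```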
When running CAPO with `--v=6`, the gophercloud client logs its requests to the OpenStack API. This can be helpful during debugging.
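A sketch of how the verbosity could be raised (the deployment name `capo-controller-manager` and namespace `capo-system` are assumptions based on a default clusterctl installation; adjust to yours):

```bash
# Open the manager deployment and add --v=6 to the manager container args.
kubectl -n capo-system edit deployment capo-controller-manager
```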
The external network is found automatically, but you can specify it explicitly via `spec.externalNetworkId` of `OpenStackCluster`.
The public network id can be obtained by using the command:

```bash
openstack network list --external
```
Note: If your OpenStack cluster does not already have a public network, you should contact your cloud service provider. We will not review how to troubleshoot this here.
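A sketch of setting the external network explicitly (the ID is a placeholder):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: OpenStackCluster
metadata:
  name: <cluster-name>
spec:
  externalNetworkId: <your-external-network-id>
```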
A floating IP is automatically created and associated with the load balancer or controller node, but you can specify the floating IP explicitly via `spec.apiServerFloatingIP` of `OpenStackCluster`.
The floating IP must already exist in your OpenStack before it can be referenced. You can create one using:

```bash
openstack floating ip create <public network>
```
Note: Only users with the admin role can create a floating IP with a specific IP address.
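A sketch of pinning the API server floating IP (the address is a placeholder):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: OpenStackCluster
metadata:
  name: <cluster-name>
spec:
  apiServerFloatingIP: <your-floating-ip>
```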
If you have a complex query that you want to use to look up a network, you can do this by using a network filter. More details about the filter can be found in NetworkParam. When using filters to look up a network, note that it is possible to get multiple networks as a result. This should not be a problem, but please test your filters with `openstack network list` to be certain that they return the networks you want. Please refer to the following usage example:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: OpenStackMachine
metadata:
  name: <cluster-name>-controlplane
  namespace: <cluster-name>
spec:
  networks:
    - filter:
        name: <network-name>
```
You can specify multiple networks (or subnets) to connect your server to. To do this, simply add another entry to the networks array. The following example connects the server to three different networks:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: OpenStackMachine
metadata:
  name: <cluster-name>-controlplane
  namespace: <cluster-name>
spec:
  networks:
    - filter:
        name: myNetwork
        tags: myTag
    - uuid: your_network_id
    - subnet_id: your_subnet_id
```
Rather than just using a network, you have the option of specifying a specific subnet to connect your server to. The following example shows how to specify a subnet of a network to use for your server:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: OpenStackMachine
metadata:
  name: <cluster-name>-controlplane
  namespace: <cluster-name>
spec:
  networks:
    - filter:
        name: <network-name>
      subnets:
        - filter:
            name: <subnet-name>
```
A server can also be connected to networks by describing what ports to create. Describing a server's connection with `ports` allows for finer and more advanced configuration. For example, you can specify per-port security groups, fixed IPs, or VNIC type:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: OpenStackMachine
metadata:
  name: <cluster-name>-controlplane
  namespace: <cluster-name>
spec:
  ports:
    - networkId: <your-network-id>
      description: <your-custom-port-description>
      vnicType: normal
      fixedIPs:
        - subnetId: <your-subnet-id>
          ipAddress: <your-fixed-ip>
      securityGroups:
        - <your-security-group-id>
```
Any such ports are created in addition to ports used for connections to networks or subnets.
If your cluster supports tagging servers, you have the ability to tag all resources created by the cluster in the `cluster.yaml` file. Here is an example of how to configure tagging:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: OpenStackCluster
metadata:
  name: <cluster-name>
  namespace: <cluster-name>
spec:
  tags:
    - cluster-tag
```
To tag resources specific to a machine, add a value to the `tags` field in `controlplane.yaml` and `machinedeployment.yaml` like this:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: OpenStackMachine
metadata:
  name: <cluster-name>-controlplane
  namespace: <cluster-name>
spec:
  tags:
    - machine-tag
```
Instead of tagging, you also have the option to add metadata to instances. This functionality should be more commonly available than tagging. Here is a usage example:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: OpenStackMachine
metadata:
  name: <cluster-name>-controlplane
  namespace: <cluster-name>
spec:
  serverMetadata:
    name: bob
    nickname: bobbert
```
For example, in `OpenStackMachineTemplate` set `spec.rootVolume.diskSize` to something greater than `0` to boot from volume:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: OpenStackMachineTemplate
metadata:
  name: <cluster-name>-controlplane
  namespace: <cluster-name>
spec:
  ...
  rootVolume:
    diskSize: <image size>
    sourceType: "image"
    sourceUUID: <image id>
  ...
```
If creating servers in your OpenStack cloud takes a long time, you can increase the timeout; by default it is 5 minutes. You can set it via the `CLUSTER_API_OPENSTACK_INSTANCE_CREATE_TIMEOUT` environment variable in your Cluster API Provider OpenStack controller deployment.
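A sketch of raising the timeout (the deployment name `capo-controller-manager` and namespace `capo-system` are assumptions based on a default clusterctl installation, and the value is assumed to be in minutes; verify both for your CAPO version):

```bash
# Raise the instance-creation timeout on the CAPO controller deployment.
kubectl -n capo-system set env deployment/capo-controller-manager \
  CLUSTER_API_OPENSTACK_INSTANCE_CREATE_TIMEOUT=10
```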
If `192.168.0.0/16` is already in use within your network, you must select a different pod network CIDR. You have to replace the CIDR `192.168.0.0/16` with your own in the generated file.
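For example, a sketch that swaps in a different CIDR (here `10.244.0.0/16`, a placeholder; pick one that does not collide with your network):

```bash
# Replace the default pod CIDR throughout the generated manifest.
sed -i 's|192.168.0.0/16|10.244.0.0/16|g' capi-quickstart.yaml
```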
To configure the Cluster API Provider for OpenStack to create an SSH bastion host, add these lines to the `OpenStackCluster` spec after `clusterctl config cluster` was successfully executed:
```yaml
bastion:
  enabled: true
  instance:
    flavor: <Flavor name>
    image: <Image name>
    sshKeyName: <Key pair name>
```
A floating IP is created and associated with the bastion host automatically, but you can add the IP address explicitly:
```yaml
bastion:
  ...
  floatingIP: <Floating IP address>
```
If `managedSecurityGroups: true`, a security group rule opening 22/tcp is added to the security groups for the bastion, controller, and worker nodes respectively. Otherwise, you have to add `securityGroups` to the `bastion` in the `OpenStackCluster` spec and to the `OpenStackMachineTemplate` spec template respectively.
Once the workload cluster is up and running after being configured for an SSH bastion host, you can use `kubectl get openstackcluster` to look up the floating IP address of the bastion host (make sure the kubectl context is set to the management cluster). The output will look something like this:
```bash
$ kubectl get openstackcluster
NAME    CLUSTER   READY   NETWORK                                SUBNET                                 BASTION
nonha   nonha     true    2e2a2fad-28c0-4159-8898-c0a2241a86a7   53cb77ab-86a6-4f2c-8d87-24f8411f15de   10.0.0.213
```
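With the bastion IP in hand, a sketch of SSHing to a cluster node through it (the login user depends on your image, e.g. `ubuntu` for Ubuntu-based images; the IPs are placeholders):

```bash
# Jump through the bastion host to reach a node on the private network.
ssh -J ubuntu@<bastion-floating-ip> ubuntu@<node-internal-ip>
```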