Ensure you have your `LINODE_TOKEN` set as outlined in the
[getting started prerequisites](../topics/getting-started.md#prerequisites) section.
There are no hard requirements, since development dependencies are fetched as needed by the make targets, but installing Devbox is recommended.

Clone the repository:

```sh
git clone https://github.com/linode/cluster-api-provider-linode
cd cluster-api-provider-linode
```
To enable automatic code validation on code push, execute the following commands:

```sh
PATH="$PWD/bin:$PATH" make husky && husky install
```
If you would like to temporarily disable the git hook, set the `SKIP_GIT_PUSH_HOOK` value:

```sh
SKIP_GIT_PUSH_HOOK=1 git push
```
- Install dependent packages in your project:

  ```sh
  devbox install
  ```

  This will take a while, so go and grab a drink of water.

- Use the devbox environment:

  ```sh
  devbox shell
  ```
From this point you can use the devbox shell like a regular shell. The rest of this guide assumes a devbox shell, but when running outside one, the make targets will install any missing dependencies as needed.
This provider is based on the Cluster API project. It's recommended to familiarize yourself with Cluster API resources, concepts, and conventions outlined in the [Cluster API Book](https://cluster-api.sigs.k8s.io).
This repository uses Go Modules to track and vendor dependencies.
To pin a new dependency, run:

```sh
go get <repository>@<version>
```
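For example, to pin the Linode Go client (`linodego`) to a specific release and tidy the module files afterwards (the version below is illustrative, not a recommendation):

```sh
# Pin an illustrative version of linodego; substitute the release you actually need.
go get github.com/linode/linodego@v1.30.0
go mod tidy
```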
The code in this repo is organized across the following packages:

- `/api` contains the custom resource types managed by CAPL.
- `/cmd` contains the main entrypoint for registering controllers and running the controller manager.
- `/controller` contains the various controllers that run in CAPL for reconciling the custom resource types.
- `/cloud/scope` contains all Kubernetes client interactions scoped to each resource reconciliation loop. Each "scope" object is expected to store both a Kubernetes client and a Linode client.
- `/cloud/services` contains all Linode client interactions. Functions defined in this package all expect a "scope" object which contains a Linode client to use.
- `/mock` contains gomock clients generated from `/cloud/scope/client.go`.
- `/util/` contains general-use helper functions used in other packages.
- `/util/reconciler` contains helper functions and constants used within the `/controller` package.
When adding a new controller, it is preferable that controller code only use the Kubernetes and Linode clients via functions defined in `/cloud/scope` and `/cloud/services`. This ensures each separate package can be tested in isolation using mock clients.
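Because all Linode API access is funneled through those packages, the rest of the tree can be exercised against the generated mocks. As a quick sketch, the client-facing packages can be tested on their own with the standard Go tooling:

```sh
# Run only the controller and service package tests, in isolation from the rest of the tree.
go test ./controller/... ./cloud/services/...
```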
If you want to create RKE2 and/or K3s clusters, make sure to
set the following env vars first:
```sh
export INSTALL_RKE2_PROVIDER=true
export INSTALL_K3S_PROVIDER=true
```
Additionally, if you want to skip the docker build step for CAPL and instead use the latest image from `main` on Docker Hub, set the following:
```sh
export SKIP_DOCKER_BUILD=true
```
To build a kind cluster and start Tilt, simply run:

```sh
make local-deploy
```
Once your kind management cluster is up and running, you can deploy a workload cluster.
To tear down the tilt-cluster, run:

```sh
kind delete cluster --name tilt
```
After your kind management cluster is up and running with Tilt, you should be ready to deploy your first cluster.
For local development, templates should be generated via:

```sh
make local-release
```
This creates `infrastructure-local-linode/v0.0.0/` with all the cluster templates:

```
infrastructure-local-linode/v0.0.0
├── cluster-template-clusterclass-kubeadm.yaml
├── cluster-template-etcd-backup-restore.yaml
├── cluster-template-k3s.yaml
├── cluster-template-rke2.yaml
├── cluster-template.yaml
├── clusterclass-kubeadm.yaml
├── infrastructure-components.yaml
└── metadata.yaml
```
This can then be used with `clusterctl` by adding the following to `~/.clusterctl/cluster-api.yaml` (assuming the repo exists in the `$HOME` directory):

```yaml
providers:
  - name: local-linode
    url: ${HOME}/cluster-api-provider-linode/infrastructure-local-linode/v0.0.0/infrastructure-components.yaml
    type: InfrastructureProvider
```
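To confirm `clusterctl` picks up the local provider, you can list the configured repositories (the grep filter here is just for convenience):

```sh
# The local-linode entry should appear alongside the built-in providers.
clusterctl config repositories | grep local-linode
```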
Here is a list of required configuration parameters:

```sh
# Cluster settings
export CLUSTER_NAME=capl-cluster

# Linode settings
export LINODE_REGION=us-ord
# Multi-tenancy: this may be changed for each cluster to deploy to different Linode accounts.
export LINODE_TOKEN=<your linode PAT>
export LINODE_CONTROL_PLANE_MACHINE_TYPE=g6-standard-2
export LINODE_MACHINE_TYPE=g6-standard-2
```
You can also use `clusterctl generate` to see which variables need to be set:
```sh
clusterctl generate cluster $CLUSTER_NAME --infrastructure local-linode:v0.0.0 [--flavor <flavor>] --list-variables
```
Once you have all the necessary environment variables set, you can deploy a workload cluster with the default flavor:
```sh
clusterctl generate cluster $CLUSTER_NAME \
  --kubernetes-version v1.29.1 \
  --infrastructure local-linode:v0.0.0 \
  | kubectl apply -f -
```
This will provision the cluster within a VPC, with the CNI defaulted to Cilium and the linode-ccm installed.
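While the cluster bootstraps, you can watch its progress from the management cluster, for example:

```sh
# Show the status of the cluster and its machines.
clusterctl describe cluster $CLUSTER_NAME
```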
The ClusterClass experimental feature is enabled by default in the KIND management cluster created via `make local-deploy`.
You can use the `clusterclass` flavor to create a workload cluster as well, assuming the management cluster has the ClusterTopology feature gate set:
```sh
clusterctl generate cluster $CLUSTER_NAME \
  --kubernetes-version v1.29.1 \
  --infrastructure local-linode:v0.0.0 \
  --flavor clusterclass-kubeadm \
  | kubectl apply -f -
```
To delete the cluster, simply run:

```sh
kubectl delete cluster $CLUSTER_NAME
```
VPCs are not deleted when a cluster is deleted via kubectl. Once the cluster is deleted, run `kubectl delete linodevpc <vpcname>` to clean up the leftover VPC.
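For example, to find the leftover `LinodeVPC` resource and remove it (the resource name depends on your cluster):

```sh
# List any LinodeVPC resources left behind, then delete the one for your cluster.
kubectl get linodevpcs
kubectl delete linodevpc <vpcname>
```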
For any issues, please refer to the [troubleshooting guide](../topics/troubleshooting.md).
To run E2E tests locally, run:

```sh
make e2etest
```

This command creates a KIND cluster and executes all the defined tests.
Please ensure you have [increased the maximum number of open files on your host](https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files).
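On Linux, this usually means raising the inotify limits; the linked kind known-issues page suggests, for example:

```sh
# Raise inotify limits so kind node containers can watch enough files.
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
```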