
Merge branch 'main' into main-merge-e2e
komer3 committed Apr 22, 2024
2 parents dcce13f + 3a65db7 commit 307acb5
Showing 38 changed files with 484 additions and 118 deletions.
8 changes: 6 additions & 2 deletions .github/workflows/release.yml
@@ -11,12 +11,16 @@ jobs:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version-file: 'go.mod'
check-latest: true
- name: Create Release Artifacts
run: make release
env:
RELEASE_TAG: ${{ github.ref_name }}
- name: Upload Release Artifacts
uses: softprops/action-gh-release@v2
with:
files: |
./release
files: ./infrastructure-linode/*
6 changes: 3 additions & 3 deletions Makefile
@@ -216,10 +216,10 @@ tilt-cluster: ctlptl tilt kind clusterctl

##@ Release:

RELEASE_DIR ?= release
RELEASE_DIR ?= infrastructure-linode

.PHONY: release
release: $(KUSTOMIZE) clean-release set-manifest-image release-manifests generate-flavors release-templates release-metadata clean-release-git
release: kustomize clean-release set-manifest-image release-manifests generate-flavors release-templates release-metadata clean-release-git

$(RELEASE_DIR):
mkdir -p $(RELEASE_DIR)/
@@ -243,7 +243,7 @@ release-manifests: $(KUSTOMIZE) $(RELEASE_DIR)

.PHONY: local-release
local-release:
RELEASE_DIR=infrastructure-linode/0.0.0 $(MAKE) release
RELEASE_DIR=infrastructure-linode/v0.0.0 $(MAKE) release
$(MAKE) clean-release-git

## --------------------------------------
4 changes: 4 additions & 0 deletions docs/src/SUMMARY.md
@@ -8,7 +8,9 @@
- [Flavors](./topics/flavors/flavors.md)
- [Default (kubeadm)](./topics/flavors/default.md)
- [Dual-stack (kubeadm)](./topics/flavors/dual-stack.md)
- [Etcd-disk (kubeadm)](./topics/flavors/etcd-disk.md)
- [ClusterClass kubeadm](./topics/flavors/clusterclass-kubeadm.md)
- [Cluster Autoscaler (kubeadm)](./topics/flavors/cluster-autoscaler.md)
- [k3s](./topics/flavors/k3s.md)
- [rke2](./topics/flavors/rke2.md)
- [Etcd](./topics/etcd.md)
@@ -17,6 +19,8 @@
- [Disks](./topics/disks/disks.md)
- [OS Disk](./topics/disks/os-disk.md)
- [Data Disks](./topics/disks/data-disks.md)
- [Machine Health Checks](./topics/health-checking.md)
- [Autoscaling](./topics/autoscaling.md)
- [Development](./developers/development.md)
- [Releasing](./developers/releasing.md)
- [Reference](./reference/reference.md)
19 changes: 9 additions & 10 deletions docs/src/developers/development.md
@@ -17,7 +17,7 @@
- [Customizing the cluster deployment](#customizing-the-cluster-deployment)
- [Creating the workload cluster](#creating-the-workload-cluster)
- [Using the default flavor](#using-the-default-flavor)
- [Using ClusterClass (alpha)](#using-clusterclass)
- [Using ClusterClass (alpha)](#using-clusterclass-alpha)
- [Cleaning up the workload cluster](#cleaning-up-the-workload-cluster)
- [Automated Testing](#automated-testing)
- [E2E Testing](#e2e-testing)
@@ -142,18 +142,18 @@ kind delete cluster --name tilt

After your kind management cluster is up and running with Tilt, you should be ready to deploy your first cluster.

#### Generating the cluster templates
#### Generating local cluster templates

For local development, templates should be generated via:

```sh
make local-release
```

This creates `infrastructure-linode/0.0.0/` with all the cluster templates:
This creates `infrastructure-linode/v0.0.0/` with all the cluster templates:

```sh
infrastructure-linode/0.0.0
infrastructure-linode/v0.0.0
├── cluster-template-clusterclass-kubeadm.yaml
├── cluster-template-etcd-backup-restore.yaml
├── cluster-template-k3s.yaml
@@ -169,8 +169,8 @@ This can then be used with `clusterctl` by adding the following to `~/.clusterct

```
providers:
- name: linode
url: ${HOME}/cluster-api-provider-linode/infrastructure-linode/0.0.0/infrastructure-components.yaml
- name: akamai-linode
url: ${HOME}/cluster-api-provider-linode/infrastructure-linode/v0.0.0/infrastructure-components.yaml
type: InfrastructureProvider
```

@@ -181,7 +181,6 @@ Here is a list of required configuration parameters:
```sh
## Cluster settings
export CLUSTER_NAME=capl-cluster
export KUBERNETES_VERSION=v1.29.1

## Linode settings
export LINODE_REGION=us-ord
@@ -195,7 +194,7 @@ export LINODE_MACHINE_TYPE=g6-standard-2
You can also use `clusterctl generate` to see which variables need to be set:
```
clusterctl generate cluster $CLUSTER_NAME --infrastructure linode:0.0.0 [--flavor <flavor>] --list-variables
clusterctl generate cluster $CLUSTER_NAME --infrastructure akamai-linode:v0.0.0 [--flavor <flavor>] --list-variables
```
~~~
@@ -210,7 +209,7 @@ you can deploy a workload cluster with the default flavor:
```sh
clusterctl generate cluster $CLUSTER_NAME \
--kubernetes-version v1.29.1 \
--infrastructure linode:0.0.0 \
--infrastructure akamai-linode:v0.0.0 \
| kubectl apply -f -
```

@@ -230,7 +229,7 @@ management cluster has the [ClusterTopology feature gate set](https://cluster-ap
```sh
clusterctl generate cluster $CLUSTER_NAME \
--kubernetes-version v1.29.1 \
--infrastructure linode:0.0.0 \
--infrastructure akamai-linode:v0.0.0 \
--flavor clusterclass-kubeadm \
| kubectl apply -f -
```
21 changes: 4 additions & 17 deletions docs/src/developers/releasing.md
@@ -11,7 +11,7 @@ backported to the current and previous release lines.

## Versioning Scheme

CAPL follows the [semantic versionining](https://semver.org/#semantic-versioning-200) specification.
CAPL follows the [semantic versioning](https://semver.org/#semantic-versioning-200) specification.

Example versions:

@@ -25,34 +25,21 @@ Example versions:
### Update metadata.yaml (skip for patch releases)

- Make sure [metadata.yaml](https://github.com/linode/cluster-api-provider-linode/blob/main/metadata.yaml)
is up to date and contains the new release with the correct Cluster API contract version.
is up-to-date and contains the new release with the correct Cluster API contract version.
- If not, open a PR to add it.

### Create a release branch (skip for patch releases)

- Create a release branch off of `main` named
`release-$(MAJOR_VERSION).$(MINOR_VERSION)` (e.g. release-0.1)

### Create a tag for the release branch

- After ensuring all desired changes for the release are in the release branch,
create a tag following semantic versioning (e.g. v0.1.1); a sample sequence is
sketched after this list
- Ensure the [release workflow](https://github.com/linode/cluster-api-provider-linode/actions/workflows/release.yml)
succeeds for the created tag to build and push the Docker image and generate
the [release artifacts](#expected-artifacts).
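
A minimal sketch of the tagging step, assuming the remote for linode/cluster-api-provider-linode is named `upstream` and the release is cut from the `release-0.1` branch (branch, tag, and remote names are illustrative):

```sh
# Make sure the release branch is checked out and up to date
git checkout release-0.1
git pull upstream release-0.1

# Create an annotated tag following semantic versioning and push it;
# pushing the tag is what triggers the release workflow
git tag -a v0.1.1 -m "Release v0.1.1"
git push upstream v0.1.1
```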

### Release in GitHub

- Create a [new release](https://github.com/linode/cluster-api-provider-linode/releases/new).
- Use the newly created tag
- Enter tag and select create tag on publish
- Make sure to click "Generate Release Notes"
- Review the generated Release Notes and make any necessary changes.
- If the tag is a pre-release, make sure to check the "Set as a pre-release" box

### Expected artifacts

- A `infrastructure-components.yaml` file containing the resources needed to deploy to Kubernetes
- A `cluster-templates.yaml` file for each supported flavor
- A `cluster-templates-*.yaml` file for each supported flavor
- A `metadata.yaml` file which maps release series to the Cluster API contract version

### Communication
62 changes: 62 additions & 0 deletions docs/src/topics/autoscaling.md
@@ -0,0 +1,62 @@
# Auto-scaling

This guide covers auto-scaling for CAPL clusters. The recommended tool for auto-scaling on Cluster API is [Cluster
Autoscaler](https://www.github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#cluster-autoscaler).

## Flavor

The auto-scaling feature is provided by an add-on as part of the [Cluster Autoscaler
flavor](./flavors/cluster-autoscaler.md).

## Configuration

By default, the Cluster Autoscaler add-on [runs in the management cluster, managing an external workload
cluster](https://cluster-api.sigs.k8s.io/tasks/automated-machine-management/autoscaling#autoscaler-running-in-management-cluster-using-service-account-credentials-with-separate-workload-cluster).

```
+------------+ +----------+
| mgmt | | workload |
| ---------- | kubeconfig | |
| autoscaler +------------>| |
+------------+ +----------+
```

A separate Cluster Autoscaler is deployed for each workload cluster, configured to only monitor node groups for the
specific namespace and cluster name combination.
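
For reference, the Cluster API integration of Cluster Autoscaler discovers node groups through size annotations on scalable resources such as `MachineDeployments`. A sketch of what this looks like (names and values are illustrative; the quoted `WORKER_MACHINE_MIN`/`WORKER_MACHINE_MAX` values from the flavor presumably end up in these string-valued annotations, which is why they must be quoted):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: test-cluster-md-0       # illustrative name
  namespace: default
  annotations:
    # Node-group size bounds read by Cluster Autoscaler
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
spec:
  clusterName: test-cluster
  # remaining fields (selector, template, ...) as generated by the flavor
```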

## Role-based Access Control (RBAC)

### Management Cluster

Due to constraints with the Kubernetes RBAC system (i.e. [roles cannot be subdivided beyond
namespace-granularity](https://www.github.com/kubernetes/kubernetes/issues/56582)), the Cluster Autoscaler add-on is
deployed on the management cluster to prevent leaking Cluster API data between workload clusters.
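
As a sketch of that isolation boundary, the management-side autoscaler can be bound to a namespace-scoped `Role` like the following (name, resource list, and verbs are illustrative, not the exact add-on manifest):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler-capi   # illustrative name
  namespace: default              # the namespace is the isolation boundary
rules:
  - apiGroups: ["cluster.x-k8s.io"]
    resources: ["machinedeployments", "machinesets", "machines"]
    verbs: ["get", "list", "watch", "update", "patch"]
```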

### Workload Cluster

Currently, the Cluster Autoscaler reuses the `${CLUSTER_NAME}-kubeconfig` Secret generated by the bootstrap provider to
interact with the workload cluster. The kubeconfig contents must be stored in a key named `value`. Due to this, all
Cluster Autoscaler actions in the workload cluster are performed as the `cluster-admin` role.
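
A sketch of the Secret shape the autoscaler expects (the bootstrap provider generates it; only the `value` key is read):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-cluster-kubeconfig   # ${CLUSTER_NAME}-kubeconfig
  namespace: default              # same namespace as the Cluster object
type: cluster.x-k8s.io/secret     # type used for CAPI-generated secrets
data:
  # Base64-encoded kubeconfig for the workload cluster
  value: <base64-encoded-kubeconfig>
```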

## Scale Down

> Cluster Autoscaler decreases the size of the cluster when some nodes are consistently unneeded for a significant
> amount of time. A node is unneeded when it has low utilization and all of its important pods can be moved elsewhere.

By default, Cluster Autoscaler scales down a node after it is marked as unneeded for 10 minutes. This can be adjusted
with the [`--scale-down-unneeded-time`
setting](https://www.github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-modify-cluster-autoscaler-reaction-time).
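
As a sketch, the setting is passed on the autoscaler command line, e.g. in the container args of the add-on Deployment (values illustrative):

```yaml
# Fragment of the cluster-autoscaler container spec
args:
  - --cloud-provider=clusterapi
  - --scale-down-unneeded-time=5m   # default is 10m
```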

### Kubernetes Cloud Controller Manager for Linode (CCM)

The [Kubernetes Cloud Controller Manager for
Linode](https://www.github.com/linode/linode-cloud-controller-manager?tab=readme-ov-file#the-purpose-of-the-ccm) is
deployed on workload clusters and reconciles Kubernetes Node objects with their backing Linode infrastructure. When
scaling down a node group, the Cluster Autoscaler also deletes the Kubernetes Node object on the workload cluster. This
step preempts the Node deletion in Kubernetes that would otherwise be triggered by the CCM.

## Additional Resources

- [Autoscaling - The Cluster API Book](https://cluster-api.sigs.k8s.io/tasks/automated-machine-management/autoscaling)
- [Cluster Autoscaler
FAQ](https://www.github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#cluster-autoscaler)
7 changes: 4 additions & 3 deletions docs/src/topics/backups.md
@@ -12,11 +12,11 @@ To enable backups, use the addon flag during provisioning to select the etcd-bac
```sh
clusterctl generate cluster $CLUSTER_NAME \
--kubernetes-version v1.29.1 \
--infrastructure linode:0.0.0 \
--infrastructure akamai-linode \
--flavor etcd-backup-restore \
| kubectl apply -f -
```
For more fine-grain control and to know more about etcd backups, refere [backups.md](../topics/etcd.md)
For more fine-grain control and to know more about etcd backups, refer to [the backups section of the etcd page](../topics/etcd.md#etcd-backups)

## Object Storage

@@ -49,7 +49,7 @@ The bucket label must be unique within the region across all accounts. Otherwise

### Access Keys Creation

CAPL will also create `read_write` and `read_only` access keys for the bucket and store credentials in a secret in the same namespace where the `LinodeObjectStorageBucket` was created alongwith other details about the Linode OBJ Bucket:
CAPL will also create `read_write` and `read_only` access keys for the bucket and store credentials in a secret in the same namespace where the `LinodeObjectStorageBucket` was created along with other details about the Linode OBJ Bucket:

```yaml
apiVersion: v1
Expand All @@ -62,6 +62,7 @@ metadata:
kind: LinodeObjectStorageBucket
name: <unique-bucket-label>
controller: true
uid: <unique-uid>
data:
bucket_name: <unique-bucket-label>
bucket_region: <linode-obj-bucket-region>
2 changes: 2 additions & 0 deletions docs/src/topics/disks/data-disks.md
@@ -12,8 +12,10 @@ There are a couple caveats with specifying disks for a linode instance:
Currently SDB is being used by a swap disk; replacing this disk with a data disk will slow down linode creation by
up to 90 seconds. This will be resolved when the disk creation refactor is finished in PR [#216](https://github.com/linode/cluster-api-provider-linode/pull/216)
```

## Specify a data disk
A LinodeMachine can be configured with additional data disks, keyed by the device name they will be mounted as, each with an optional label and size (see the sketch after this list).

* `size` Required field. [resource.Quantity](https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/quantity/) for the size of a disk. The sum of all data disk sizes must not exceed the size allowed by the [linode plan](https://www.linode.com/pricing/#compute-shared).
* `label` Optional field. The label for the disk; defaults to the device name.
* `diskID` Optional field used by the controller to track disk IDs; this should not be set unless a disk is created outside CAPL.
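
A sketch of what this can look like on a `LinodeMachine`, assuming the `dataDisks` map shape described above (the API version, device name, and sizes are illustrative):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: LinodeMachine
metadata:
  name: test-cluster-md-0-abc123   # illustrative name
spec:
  # ...other machine fields as generated by the template...
  dataDisks:
    sdc:                 # device the disk is attached as
      label: etcd-data   # optional; defaults to the device name
      size: 10Gi         # required; counts against the plan's disk allowance
```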
17 changes: 11 additions & 6 deletions docs/src/topics/etcd.md
@@ -4,9 +4,13 @@ This guide covers etcd configuration for the control plane of provisioned CAPL c

## Default configuration

By default, etcd is configured to be on a separate device from the root filesystem on
control plane nodes. The etcd disk is automatically sized at 10 GB with a quota backend of 8 GB per
recommendation from [the etcd documentation](https://etcd.io/docs/latest/dev-guide/limit/#storage-size-limit)
The `quota-backend-bytes` for etcd is set to `8589934592` (8 GiB) per recommendation from
[the etcd documentation](https://etcd.io/docs/latest/dev-guide/limit/#storage-size-limit).
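
As a sketch, this kind of etcd setting is typically wired through the kubeadm `ClusterConfiguration` in the control plane spec (the fragment below is illustrative, not the exact CAPL template):

```yaml
# Fragment of a KubeadmControlPlane spec
kubeadmConfigSpec:
  clusterConfiguration:
    etcd:
      local:
        extraArgs:
          quota-backend-bytes: "8589934592"   # 8 GiB
```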

By default, etcd is configured to be on the same disk as the root filesystem on
control plane nodes. If users prefer etcd to be on a separate disk, see the
[etcd-disk flavor](flavors/etcd-disk.md).


## ETCD Backups

@@ -23,8 +27,9 @@ Users can also enable SSE (Server-side encryption) by passing a SSE AES-256 Key
[here](https://github.com/linode/cluster-api-provider-linode/blob/main/templates/addons/etcd-backup-restore/etcd-backup-restore.yaml)
on the pod can be controlled during the provisioning process.

> [!WARNING]
> This is currently under development and will be available for use once the upstream [PR](https://github.com/gardener/etcd-backup-restore/pull/719) is merged and an official image is made available
```admonish warning
This is currently under development and will be available for use once the upstream [PR](https://github.com/gardener/etcd-backup-restore/pull/719) is merged and an official image is made available
```

For example:
```sh
Expand All @@ -34,7 +39,7 @@ export ETCDBR_IMAGE=docker.io/username/your-custom-image:version
export SSE_KEY=cdQdZ3PrKgm5vmqxeqwQCuAWJ7pPVyHg
clusterctl generate cluster $CLUSTER_NAME \
--kubernetes-version v1.29.1 \
--infrastructure linode:0.0.0 \
--infrastructure akamai-linode \
--flavor etcd-backup-restore \
| kubectl apply -f -
```
44 changes: 44 additions & 0 deletions docs/src/topics/flavors/cluster-autoscaler.md
@@ -0,0 +1,44 @@
# Cluster Autoscaler

This flavor adds auto-scaling via [Cluster
Autoscaler](https://www.github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#cluster-autoscaler).

## Specification

| Control Plane | CNI | Default OS | Installs ClusterClass | IPv4 | IPv6 |
|---------------|--------|--------------|-----------------------|------|------|
| Kubeadm | Cilium | Ubuntu 22.04 | No | Yes | No |

## Prerequisites

[Quickstart](../getting-started.md) completed

## Usage

1. Set up autoscaling environment variables
> We recommend using Cluster Autoscaler with the Kubernetes control plane
> ... version for which it was meant.
>
> -- <cite>[Releases · kubernetes/autoscaler](https://www.github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#releases)</cite>
```sh
export CLUSTER_AUTOSCALER_VERSION=v1.29.0
# Optional: If specified, these values must be explicitly quoted!
export WORKER_MACHINE_MIN='"1"'
export WORKER_MACHINE_MAX='"10"'
```

2. Generate cluster yaml

```sh
clusterctl generate cluster test-cluster \
--kubernetes-version v1.29.1 \
--infrastructure akamai-linode \
--flavor cluster-autoscaler > test-cluster.yaml
```

3. Apply cluster yaml

```sh
kubectl apply -f test-cluster.yaml
```
4 changes: 3 additions & 1 deletion docs/src/topics/flavors/clusterclass-kubeadm.md
@@ -10,7 +10,8 @@
1. Generate the ClusterClass and cluster manifests
```bash
clusterctl generate cluster test-cluster \
--infrastructure linode:0.0.0 \
--kubernetes-version v1.29.1 \
--infrastructure akamai-linode \
--flavor clusterclass-kubeadm > test-cluster.yaml
```
2. Apply cluster manifests
@@ -21,6 +22,7 @@
1. Generate cluster manifests
```bash
clusterctl generate cluster test-cluster-2 \
--kubernetes-version v1.29.1 \
--flavor clusterclass-kubeadm > test-cluster-2.yaml
```
```yaml
3 changes: 2 additions & 1 deletion docs/src/topics/flavors/default.md
@@ -9,7 +9,8 @@
1. Generate cluster yaml
```bash
clusterctl generate cluster test-cluster \
--infrastructure linode:0.0.0 > test-cluster.yaml
--kubernetes-version v1.29.1 \
--infrastructure akamai-linode > test-cluster.yaml
```
2. Apply cluster yaml
```bash