This repository was archived by the owner on Jan 29, 2025. It is now read-only.

Commit 63ea8d3 (parent 077a6e6)
Add Cluster API deployment method
Signed-off-by: Cristiano Colangelo <[email protected]>

File tree: 4 files changed, +274 −0 lines changed
# Cluster API deployment

## Introduction

Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters. [Learn more](https://cluster-api.sigs.k8s.io/introduction.html).

This folder contains an automated and declarative way of deploying the Telemetry Aware Scheduler using Cluster API. We will make use of the [ClusterResourceSet feature](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-resource-set.html) to automatically apply a set of resources. Note that you must enable its feature gate before running `clusterctl init` (with `export EXP_CLUSTER_RESOURCE_SET=true`).

## Requirements

- A management cluster provisioned in your infrastructure of choice. See the [Cluster API Quickstart](https://cluster-api.sigs.k8s.io/user/quick-start.html).
- Kubernetes v1.22 or greater (tested on Kubernetes v1.25).

## Provision clusters with TAS installed using Cluster API

We will provision a cluster with the TAS installed using Cluster API.

1. In your management cluster, with all the environment variables needed to generate cluster definitions set, run for example:

```bash
clusterctl generate cluster scheduling-dev-wkld \
  --kubernetes-version v1.25.0 \
  --control-plane-machine-count=1 \
  --worker-machine-count=3 \
  > your-manifests.yaml
```

Be aware that you will need to install a CNI such as Calico before the cluster becomes usable. You can automate this step in the same way as the TAS resources, using ClusterResourceSets.
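For instance, the CNI step could be automated with an additional ClusterResourceSet following the same pattern used for the TAS resources below. A sketch, assuming the rendered Calico manifest has been stored in a ConfigMap named `calico-configmap` (both names here are illustrative, not part of this repository):

```yaml
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: calico  # illustrative name
spec:
  clusterSelector:
    matchLabels:
      scheduler: tas
  resources:
    - kind: ConfigMap
      name: calico-configmap  # assumed ConfigMap holding the rendered Calico manifest
```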
30+
31+
2. Merge the contents of the resources provided in `cluster-patch.yaml` and `kubeadmcontrolplane-patch.yaml` with
32+
`your-manifests.yaml`.
33+
34+
If you move `KubeadmControlPlane` in its own file, you can use the convenient `yq` utility:
35+
36+
> Note that if you are already using patches, `directory: /tmp/kubeadm/patches` must coincide, else the property will be
37+
> overwritten.
38+
39+
```bash
40+
yq eval-all '. as $item ireduce ({}; . *+ $item)' your-own-kubeadmcontrolplane.yaml kubeadmcontrolplane-patch.yaml > final-kubeadmcontrolplane.yaml
41+
```
42+
43+
The new config will:
44+
- Configure TLS certificates for the extender
45+
- Change the `dnsPolicy` of the scheduler to `ClusterFirstWithHostNet`
46+
- Place `KubeSchedulerConfiguration` into control plane nodes and pass the relative CLI flag to the scheduler.
47+
48+
You will also need to add a label to the `Cluster` resource of your new cluster to allow ClusterResourceSets to target
49+
it (see `cluster-patch.yaml`). Simply add a label `scheduler: tas` in your `Cluster` resource present in `your-manifests.yaml`.
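With the label applied, the `Cluster` resource in `your-manifests.yaml` should look along these lines (the cluster name comes from the `clusterctl generate cluster` command above):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: scheduling-dev-wkld
  labels:
    scheduler: tas
```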
3. Prepare the Helm charts of the various components and join the TAS manifests together for convenience.

First, under `telemetry-aware-scheduling/deploy/charts`, tweak the charts if you need to (e.g. additional metric scraping configurations), then render them:

```bash
helm template ../charts/prometheus_node_exporter_helm_chart/ > prometheus-node-exporter.yaml
helm template ../charts/prometheus_helm_chart/ > prometheus.yaml
helm template ../charts/prometheus_custom_metrics_helm_chart > prometheus-custom-metrics.yaml
```

You need to add Namespace resources, otherwise applying the rendered manifests will fail. Prepend the following to `prometheus.yaml`:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: monitoring
  labels:
    name: monitoring
```

Prepend the following to `prometheus-custom-metrics.yaml`:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: custom-metrics
  labels:
    name: custom-metrics
```
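The prepending can be scripted rather than done in an editor. A minimal sketch for `prometheus.yaml` (the same pattern applies to `prometheus-custom-metrics.yaml`; the stand-in first line only makes the example self-contained):

```shell
# Stand-in for the output of `helm template` above; skip this line in real use.
[ -f prometheus.yaml ] || printf 'kind: ServiceAccount\n' > prometheus.yaml

# Write the Namespace document, then prepend it to the rendered chart output.
cat > ns-monitoring.yaml <<'EOF'
kind: Namespace
apiVersion: v1
metadata:
  name: monitoring
  labels:
    name: monitoring
EOF
{ cat ns-monitoring.yaml; echo '---'; cat prometheus.yaml; } > merged.yaml
mv merged.yaml prometheus.yaml
```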
The custom metrics adapter and the TAS deployment require TLS to be configured with a certificate and key. Information on how to generate correctly signed certs in Kubernetes can be found [here](https://github.com/kubernetes-sigs/apiserver-builder-alpha/blob/master/docs/concepts/auth.md). The files `serving-ca.crt` and `serving-ca.key` should be in the current working directory.

Run the following:

```bash
kubectl -n custom-metrics create secret tls cm-adapter-serving-certs --cert=serving-ca.crt --key=serving-ca.key -o yaml --dry-run=client > custom-metrics-tls-secret.yaml
kubectl -n default create secret tls extender-secret --cert=serving-ca.crt --key=serving-ca.key -o yaml --dry-run=client > tas-tls-secret.yaml
```

**Attention: don't commit the TLS certificate and private key to any Git repository, as that is bad security practice! Make sure to wipe them from your workstation after applying the corresponding Secrets to your cluster.**
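For instance, once the ConfigMaps in step 4 below have been created from these files, the key material can be wiped like so (note that the generated Secret manifests embed the same data base64-encoded, so remove them as well):

```shell
# Remove the TLS key material and the generated Secret manifests,
# which contain the same data base64-encoded.
rm -f serving-ca.crt serving-ca.key \
   custom-metrics-tls-secret.yaml tas-tls-secret.yaml
```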
You also need the TAS manifests (Deployment, Policy CRD, and RBAC accounts) and the extender's "configmapgetter" ClusterRole. We will join the TAS manifests together, so we can have a single ConfigMap for convenience:

```bash
yq '.' ../tas-*.yaml > tas.yaml
```

4. Create and apply the ConfigMaps

```bash
kubectl create configmap custom-metrics-tls-secret-configmap --from-file=./custom-metrics-tls-secret.yaml -o yaml --dry-run=client > custom-metrics-tls-secret-configmap.yaml
kubectl create configmap custom-metrics-configmap --from-file=./prometheus-custom-metrics.yaml -o yaml --dry-run=client > custom-metrics-configmap.yaml
kubectl create configmap prometheus-configmap --from-file=./prometheus.yaml -o yaml --dry-run=client > prometheus-configmap.yaml
kubectl create configmap prometheus-node-exporter-configmap --from-file=./prometheus-node-exporter.yaml -o yaml --dry-run=client > prometheus-node-exporter-configmap.yaml
kubectl create configmap tas-configmap --from-file=./tas.yaml -o yaml --dry-run=client > tas-configmap.yaml
kubectl create configmap tas-tls-secret-configmap --from-file=./tas-tls-secret.yaml -o yaml --dry-run=client > tas-tls-secret-configmap.yaml
kubectl create configmap extender-configmap --from-file=../extender-configuration/configmap-getter.yaml -o yaml --dry-run=client > extender-configmap.yaml
```

Apply them to the management cluster. Note that `kubectl apply -f` does not expand glob patterns itself, so let the shell iterate over the files:

```bash
for f in *-configmap.yaml; do kubectl apply -f "$f"; done
```

5. Apply the ClusterResourceSets

The ClusterResourceSet resources are already given to you in `clusterresourcesets.yaml`. Apply them to the management cluster with `kubectl apply -f clusterresourcesets.yaml`.

6. Apply the cluster manifests

Finally, you can apply your manifests with `kubectl apply -f your-manifests.yaml`. The Telemetry Aware Scheduler will be running on your new cluster.

You can test whether the scheduler actually works by following this guide: [Health Metric Example](https://github.com/intel/platform-aware-scheduling/blob/25a646ece15aaf4c549d8152c4ffbbfc61f8a009/telemetry-aware-scheduling/docs/health-metric-example.md)
`cluster-patch.yaml`:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    scheduler: tas
```
`clusterresourcesets.yaml`:

```yaml
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: prometheus
spec:
  clusterSelector:
    matchLabels:
      scheduler: tas
  resources:
    - kind: ConfigMap
      name: prometheus-configmap
---
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: prometheus-node-exporter
spec:
  clusterSelector:
    matchLabels:
      scheduler: tas
  resources:
    - kind: ConfigMap
      name: prometheus-node-exporter-configmap
---
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: custom-metrics
spec:
  clusterSelector:
    matchLabels:
      scheduler: tas
  resources:
    - kind: ConfigMap
      name: custom-metrics-configmap
---
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: custom-metrics-tls-secret
spec:
  clusterSelector:
    matchLabels:
      scheduler: tas
  resources:
    - kind: ConfigMap
      name: custom-metrics-tls-secret-configmap
---
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: tas
spec:
  clusterSelector:
    matchLabels:
      scheduler: tas
  resources:
    - kind: ConfigMap
      name: tas-configmap
---
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: tas-tls-secret
spec:
  clusterSelector:
    matchLabels:
      scheduler: tas
  resources:
    - kind: ConfigMap
      name: tas-tls-secret-configmap
---
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: extender
spec:
  clusterSelector:
    matchLabels:
      scheduler: tas
  resources:
    - kind: ConfigMap
      name: extender-configmap
```
`kubeadmcontrolplane-patch.yaml`:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    files:
      - path: /etc/kubernetes/schedulerconfig/scheduler-componentconfig.yaml
        content: |
          apiVersion: kubescheduler.config.k8s.io/v1
          kind: KubeSchedulerConfiguration
          clientConnection:
            kubeconfig: /etc/kubernetes/scheduler.conf
          extenders:
            - urlPrefix: "https://tas-service.default.svc.cluster.local:9001"
              prioritizeVerb: "scheduler/prioritize"
              filterVerb: "scheduler/filter"
              weight: 1
              enableHTTPS: true
              managedResources:
                - name: "telemetry/scheduling"
                  ignoredByScheduler: true
              ignorable: true
              tlsConfig:
                insecure: false
                certFile: "/host/certs/client.crt"
                keyFile: "/host/certs/client.key"
      - path: /tmp/kubeadm/patches/kube-scheduler+json.json
        content: |-
          [
            {
              "op": "add",
              "path": "/spec/dnsPolicy",
              "value": "ClusterFirstWithHostNet"
            }
          ]
    clusterConfiguration:
      scheduler:
        extraArgs:
          config: "/etc/kubernetes/schedulerconfig/scheduler-componentconfig.yaml"
        extraVolumes:
          - hostPath: "/etc/kubernetes/schedulerconfig"
            mountPath: "/etc/kubernetes/schedulerconfig"
            name: schedulerconfig
          - hostPath: "/etc/kubernetes/pki/ca.key"
            mountPath: "/host/certs/client.key"
            name: cacert
          - hostPath: "/etc/kubernetes/pki/ca.crt"
            mountPath: "/host/certs/client.crt"
            name: clientcert
    initConfiguration:
      patches:
        directory: /tmp/kubeadm/patches
    joinConfiguration:
      patches:
        directory: /tmp/kubeadm/patches
```
