
Evaluate to switch generating local Table resources rather than reverse proxying to API Server #57

Open
prometherion opened this issue Mar 11, 2021 · 1 comment
Labels: help wanted (Extra attention is needed)

@prometherion (Member) commented:

One key point of capsule-proxy is that we're just leveraging the Kubernetes API label filtering capabilities to serve the resources a Tenant Owner (TO) expects to get.

  1. TO issues kubectl get namespaces
  2. capsule-proxy receives the request
  3. decorates it with the additional label selector (capsule.clastix.io/tenant in (oil,gas,...)), as sketched below
  4. reverse proxies the request to the real API Server
  5. returns the response

This is pretty straightforward: no rocket science, just smart thinking.
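A minimal sketch of the decoration at step 3, assuming a hypothetical decorateRequest helper (the function and its wiring are illustrative, not the actual capsule-proxy code):

package main

import (
	"net/http"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/selection"
)

// decorateRequest is a hypothetical helper: it appends the tenant label
// selector to the incoming List request before it is reverse proxied.
func decorateRequest(req *http.Request, tenants []string) error {
	// Build the requirement capsule.clastix.io/tenant in (oil,gas,...).
	requirement, err := labels.NewRequirement("capsule.clastix.io/tenant", selection.In, tenants)
	if err != nil {
		return err
	}

	// Merge it with any labelSelector the client already provided.
	query := req.URL.Query()
	selector := query.Get("labelSelector")
	if len(selector) > 0 {
		selector += ","
	}
	query.Set("labelSelector", selector+requirement.String())
	req.URL.RawQuery = query.Encode()

	return nil
}

func main() {
	req, _ := http.NewRequest(http.MethodGet, "https://capsule-proxy/api/v1/namespaces", nil)
	if err := decorateRequest(req, []string{"oil", "gas"}); err != nil {
		panic(err.Error())
	}
	println(req.URL.String())
}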

However, with newer feature requests such as the listing of Ingress and Storage classes, we need to hack the resources a bit, since field selectors don't support the In operator: this forces the Cluster Administrator to label the resources using the pattern name=<resource-name>.
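As a reminder of the limitation, field selectors only support exact matching, so the In operator is rejected at parsing time (a minimal sketch):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/fields"
)

func main() {
	// Exact match is a valid field selector...
	selector, _ := fields.ParseSelector("metadata.name=oil")
	fmt.Println(selector.String())

	// ...while the In operator is not supported and returns an error.
	_, err := fields.ParseSelector("metadata.name in (oil,gas)")
	fmt.Println(err)
}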

What we could do, instead, is serve the expected resources directly from capsule-proxy, reverse proxying only when strictly needed (e.g. when capsule-proxy sits in front of the API Server as an adapter, collecting all the requests).

We have three kinds of output to serve:

  1. JSON output
  2. YAML output (although converted by the kubectl binary)
  3. Table output

Besides the output kind, which can easily be inferred from the Accept header (e.g.: Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json), we should also support the watch feature (kubectl get resource --watch), implemented with websockets.
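For instance, a minimal sketch of sniffing the Accept header for a Table request (wantsTable is a hypothetical helper, not existing capsule-proxy code):

package main

import (
	"net/http"
	"strings"
)

// wantsTable reports whether any clause of the Accept header asks for a
// Table rendering (as=Table); otherwise we would fall back to plain JSON.
func wantsTable(r *http.Request) bool {
	for _, clause := range strings.Split(r.Header.Get("Accept"), ",") {
		for _, param := range strings.Split(clause, ";") {
			if strings.TrimSpace(param) == "as=Table" {
				return true
			}
		}
	}
	return false
}

func main() {
	req, _ := http.NewRequest(http.MethodGet, "/apis/apps/v1/deployments", nil)
	req.Header.Set("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io,application/json")
	println(wantsTable(req))
}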

Table

I had the chance to dig into the code-base of the API Server, and I was able to generate JSON for the Table struct.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/serializer"
	"k8s.io/apiserver/pkg/registry/rest"
	"k8s.io/client-go/kubernetes"
	clientrest "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	var err error

	kubeconfig := "/home/prometherion/.kube/config"

	// Build the client configuration from the local kubeconfig.
	var config *clientrest.Config
	config, err = clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err.Error())
	}

	var clientset *kubernetes.Clientset
	clientset, err = kubernetes.NewForConfig(config)
	if err != nil {
		panic(err.Error())
	}

	// List the Deployments living in the capsule-system namespace.
	var l *appsv1.DeploymentList
	l, err = clientset.AppsV1().Deployments("capsule-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err.Error())
	}

	// Convert the DeploymentList into a metav1.Table using the default
	// table convertor provided by the API Server code-base.
	tc := rest.NewDefaultTableConvertor(appsv1.Resource("deployment"))
	var table *metav1.Table
	table, err = tc.ConvertToTable(context.Background(), l, nil)
	if err != nil {
		panic(err.Error())
	}
	// Register the meta.k8s.io types and encode the Table to JSON.
	scheme := runtime.NewScheme()
	err = metav1.AddMetaToScheme(scheme)
	if err != nil {
		panic(err.Error())
	}
	codec := serializer.NewCodecFactory(scheme).LegacyCodec(metav1.SchemeGroupVersion)
	var output []byte
	output, err = runtime.Encode(codec, table)
	if err != nil {
		panic(err.Error())
	}

	println(string(output))
}

The result is the following:

{
	"kind": "Table",
	"apiVersion": "meta.k8s.io/v1",
	"metadata": {
		"selfLink": "/apis/apps/v1/namespaces/capsule-system/deployments",
		"resourceVersion": "993870"
	},
	"columnDefinitions": [{
		"name": "Name",
		"type": "string",
		"format": "name",
		"description": "Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names",
		"priority": 0
	}, {
		"name": "Created At",
		"type": "date",
		"format": "",
		"description": "CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n\nPopulated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
		"priority": 0
	}],
	"rows": [{
		"cells": ["capsule-controller-manager", "2021-03-06T15:45:00Z"],
		"object": {
			"metadata": {
				"name": "capsule-controller-manager",
				"namespace": "capsule-system",
				"selfLink": "/apis/apps/v1/namespaces/capsule-system/deployments/capsule-controller-manager",
				"uid": "a285657f-0aa0-4753-9636-44feacd36539",
				"resourceVersion": "936210",
				"generation": 2,
				"creationTimestamp": "2021-03-06T15:45:00Z",
				"labels": {
					"app.kubernetes.io/instance": "capsule",
					"app.kubernetes.io/managed-by": "Helm",
					"app.kubernetes.io/name": "capsule",
					"app.kubernetes.io/version": "0.0.4",
					"helm.sh/chart": "capsule-0.0.16"
				},
				"annotations": {
					"deployment.kubernetes.io/revision": "2",
					"meta.helm.sh/release-name": "capsule",
					"meta.helm.sh/release-namespace": "capsule-system"
				}
			},
			"spec": {
				"replicas": 1,
				"selector": {
					"matchLabels": {
						"app.kubernetes.io/instance": "capsule",
						"app.kubernetes.io/name": "capsule"
					}
				},
				"template": {
					"metadata": {
						"creationTimestamp": null,
						"labels": {
							"app.kubernetes.io/instance": "capsule",
							"app.kubernetes.io/name": "capsule"
						}
					},
					"spec": {
						"volumes": [{
							"name": "cert",
							"secret": {
								"secretName": "capsule-tls",
								"defaultMode": 420
							}
						}],
						"containers": [{
							"name": "manager",
							"image": "quay.io/clastix/capsule:v0.0.5-rc1",
							"command": ["/manager"],
							"args": ["--metrics-addr=127.0.0.1:8080", "--enable-leader-election", "--zap-log-level=4", "--allow-ingress-hostname-collision=false"],
							"ports": [{
								"name": "webhook-server",
								"containerPort": 9443,
								"protocol": "TCP"
							}],
							"env": [{
								"name": "NAMESPACE",
								"valueFrom": {
									"fieldRef": {
										"apiVersion": "v1",
										"fieldPath": "metadata.namespace"
									}
								}
							}],
							"resources": {},
							"volumeMounts": [{
								"name": "cert",
								"readOnly": true,
								"mountPath": "/tmp/k8s-webhook-server/serving-certs"
							}],
							"livenessProbe": {
								"httpGet": {
									"path": "/healthz",
									"port": 10080,
									"scheme": "HTTP"
								},
								"timeoutSeconds": 1,
								"periodSeconds": 10,
								"successThreshold": 1,
								"failureThreshold": 10
							},
							"readinessProbe": {
								"httpGet": {
									"path": "/readyz",
									"port": 10080,
									"scheme": "HTTP"
								},
								"timeoutSeconds": 1,
								"periodSeconds": 10,
								"successThreshold": 1,
								"failureThreshold": 10
							},
							"terminationMessagePath": "/dev/termination-log",
							"terminationMessagePolicy": "File",
							"imagePullPolicy": "Never",
							"securityContext": {
								"allowPrivilegeEscalation": false
							}
						}, {
							"name": "kube-rbac-proxy",
							"image": "gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0",
							"args": ["--secure-listen-address=0.0.0.0:8443", "--upstream=http://127.0.0.1:8080/", "--logtostderr=true", "--v=10"],
							"ports": [{
								"name": "https",
								"containerPort": 8443,
								"protocol": "TCP"
							}],
							"resources": {
								"limits": {
									"cpu": "100m",
									"memory": "128Mi"
								},
								"requests": {
									"cpu": "10m",
									"memory": "64Mi"
								}
							},
							"terminationMessagePath": "/dev/termination-log",
							"terminationMessagePolicy": "File",
							"imagePullPolicy": "IfNotPresent",
							"securityContext": {
								"allowPrivilegeEscalation": false
							}
						}],
						"restartPolicy": "Always",
						"terminationGracePeriodSeconds": 30,
						"dnsPolicy": "ClusterFirst",
						"serviceAccountName": "capsule",
						"serviceAccount": "capsule",
						"securityContext": {},
						"schedulerName": "default-scheduler"
					}
				},
				"strategy": {
					"type": "RollingUpdate",
					"rollingUpdate": {
						"maxUnavailable": "25%",
						"maxSurge": "25%"
					}
				},
				"revisionHistoryLimit": 10,
				"progressDeadlineSeconds": 600
			},
			"status": {
				"observedGeneration": 2,
				"replicas": 1,
				"updatedReplicas": 1,
				"readyReplicas": 1,
				"availableReplicas": 1,
				"conditions": [{
					"type": "Progressing",
					"status": "True",
					"lastUpdateTime": "2021-03-06T15:45:48Z",
					"lastTransitionTime": "2021-03-06T15:45:00Z",
					"reason": "NewReplicaSetAvailable",
					"message": "ReplicaSet \"capsule-controller-manager-6bc858b6f8\" has successfully progressed."
				}, {
					"type": "Available",
					"status": "True",
					"lastUpdateTime": "2021-03-10T19:11:34Z",
					"lastTransitionTime": "2021-03-10T19:11:34Z",
					"reason": "MinimumReplicasAvailable",
					"message": "Deployment has minimum availability."
				}]
			}
		}
	}]
}

Watch/Websocket

The versioned clientset provides a Watch function for every GVK resource, returning the watch.Interface.

That said, we just need to implement a websocket endpoint and serve the messages, along the lines of the following sketch.
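A minimal sketch, assuming the gorilla/websocket package (an assumption, not a current capsule-proxy dependency) and hard-coding a Namespaces watch; the real implementation would have to honour the tenant filtering as well:

package main

import (
	"context"
	"encoding/json"
	"net/http"

	"github.com/gorilla/websocket"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

var upgrader = websocket.Upgrader{}

// serveWatch upgrades the HTTP connection to a websocket and relays the
// watch events coming from the versioned clientset to the client.
func serveWatch(clientset *kubernetes.Clientset) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		conn, err := upgrader.Upgrade(w, r, nil)
		if err != nil {
			return
		}
		defer conn.Close()

		// Start the server-side watch through the versioned clientset.
		watcher, err := clientset.CoreV1().Namespaces().Watch(context.Background(), metav1.ListOptions{})
		if err != nil {
			return
		}
		defer watcher.Stop()

		// Relay every watch.Event as a JSON message on the websocket.
		for event := range watcher.ResultChan() {
			payload, err := json.Marshal(event)
			if err != nil {
				return
			}
			if err = conn.WriteMessage(websocket.TextMessage, payload); err != nil {
				return
			}
		}
	}
}

func main() {
	// Reuse the same kubeconfig wiring as the Table example above.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/prometherion/.kube/config")
	if err != nil {
		panic(err.Error())
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err.Error())
	}
	http.HandleFunc("/watch", serveWatch(clientset))
	panic(http.ListenAndServe(":8080", nil))
}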


The benefit of these changes would be superb, since we could get full control over the Capsule resources without the need for dirty hacks, gaining more control and flexibility, although the wise Uncle Ben's words always apply: with great power comes great responsibility.

@bsctl @MaxFedotov I'd like to get your thoughts about this! 👀

@prometherion added the help wanted label on Mar 11, 2021
@bsctl (Member) commented Mar 13, 2021:

@prometherion it sounds interesting
