customizeComponents feature #327
Also the current implementation might be updated to generate patches locally using Helm:

```diff
diff --git a/charts/piraeus/templates/operator-controller.yaml b/charts/piraeus/templates/operator-controller.yaml
index 25ac9fe..3ce0a8f 100644
--- a/charts/piraeus/templates/operator-controller.yaml
+++ b/charts/piraeus/templates/operator-controller.yaml
@@ -55,10 +55,18 @@ spec:
   {{- if .Values.operator.controller.httpsBindAddress }}
   httpsBindAddress: {{ .Values.operator.controller.httpsBindAddress | quote }}
   {{- end }}
-  {{- if .Values.operator.controller.sidecars }}
-  sidecars: {{ .Values.operator.controller.sidecars | toJson }}
-  {{- end }}
-  {{- if .Values.operator.controller.extraVolumes }}
-  extraVolumes: {{ .Values.operator.controller.extraVolumes | toJson }}
-  {{- end }}
+  customizeComponents:
+    patches:
+      {{- with .Values.operator.controller.sidecars }}
+      - resourceType: Deployment
+        resourceName: linstor-controller
+        patch: '{{ printf "{\"spec\":{\"template\":{\"spec\":{\"containers\":%s}}}}" (toJson .) }}'
+        type: strategic
+      {{- end }}
+      {{- with .Values.operator.controller.extraVolumes }}
+      - resourceType: Deployment
+        resourceName: linstor-controller
+        patch: '{{ printf "{\"spec\":{\"template\":{\"spec\":{\"volumes\":%s}}}}" (toJson .) }}'
+        type: strategic
+      {{- end }}
 {{- end }}
```

example output:

```diff
# Source: piraeus/templates/operator-controller.yaml
apiVersion: piraeus.linbit.com/v1
kind: LinstorController
metadata:
  name: linstor-piraeus-cs
  namespace: linstor
  labels:
    app.kubernetes.io/name: linstor-piraeus-cs
spec:
  priorityClassName: ""
  # TODO: switch to k8s db by default
  dbConnectionURL: etcd://linstor-etcd:2379
  luksSecret: linstor-piraeus-passphrase
  sslSecret: "linstor-piraeus-control-secret"
  dbCertSecret:
  dbUseClientCert: false
  drbdRepoCred: ""
  controllerImage: quay.io/piraeusdatastore/piraeus-server:v1.18.2
  imagePullPolicy: "IfNotPresent"
  linstorHttpsControllerSecret: "linstor-piraeus-controller-secret"
  linstorHttpsClientSecret: "linstor-piraeus-client-secret"
  tolerations: [{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","operator":"Exists"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane","operator":"Exists"}]
  resources: {}
  replicas: 1
  httpBindAddress: "127.0.0.1"
-  sidecars: [{"args":["--upstream=http://127.0.0.1:3370","--config-file=/etc/kube-rbac-proxy/linstor-controller.yaml","--secure-listen-address=$(KUBE_RBAC_PROXY_LISTEN_ADDRESS):3370","--client-ca-file=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt","--v=2","--logtostderr=true"],"env":[{"name":"KUBE_RBAC_PROXY_LISTEN_ADDRESS","valueFrom":{"fieldRef":{"fieldPath":"status.podIP"}}}],"image":"quay.io/brancz/kube-rbac-proxy:v0.11.0","name":"kube-rbac-proxy","volumeMounts":[{"mountPath":"/etc/kube-rbac-proxy","name":"rbac-proxy-config"}]}]
-  extraVolumes: [{"configMap":{"name":"piraeus-rbac-proxy"},"name":"rbac-proxy-config"}]
+  customizeComponents:
+    patches:
+      - resourceType: Deployment
+        resourceName: linstor-controller
+        patch: '{"spec":{"template":{"spec":{"containers":[{"args":["--upstream=http://127.0.0.1:3370","--config-file=/etc/kube-rbac-proxy/linstor-controller.yaml","--secure-listen-address=$(KUBE_RBAC_PROXY_LISTEN_ADDRESS):3370","--client-ca-file=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt","--v=2","--logtostderr=true"],"env":[{"name":"KUBE_RBAC_PROXY_LISTEN_ADDRESS","valueFrom":{"fieldRef":{"fieldPath":"status.podIP"}}}],"image":"quay.io/brancz/kube-rbac-proxy:v0.11.0","name":"kube-rbac-proxy","volumeMounts":[{"mountPath":"/etc/kube-rbac-proxy","name":"rbac-proxy-config"}]}]}}}}'
+        type: strategic
+      - resourceType: Deployment
+        resourceName: linstor-controller
+        patch: '{"spec":{"template":{"spec":{"volumes":[{"configMap":{"name":"piraeus-rbac-proxy"},"name":"rbac-proxy-config"}]}}}}'
+        type: strategic
```
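For context, here is a sketch of the `values.yaml` input that would drive the template above. The `operator.controller.sidecars` and `operator.controller.extraVolumes` keys are taken from the template paths in the diff; the overall layout is assumed.

```yaml
# Sketch of the values.yaml assumed to produce the rendered output above.
operator:
  controller:
    sidecars:
      - name: kube-rbac-proxy
        image: quay.io/brancz/kube-rbac-proxy:v0.11.0
        args:
          - --upstream=http://127.0.0.1:3370
          - --config-file=/etc/kube-rbac-proxy/linstor-controller.yaml
          - --secure-listen-address=$(KUBE_RBAC_PROXY_LISTEN_ADDRESS):3370
          - --client-ca-file=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          - --v=2
          - --logtostderr=true
        env:
          - name: KUBE_RBAC_PROXY_LISTEN_ADDRESS
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
        volumeMounts:
          - mountPath: /etc/kube-rbac-proxy
            name: rbac-proxy-config
    extraVolumes:
      - name: rbac-proxy-config
        configMap:
          name: piraeus-rbac-proxy
```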
I do like that approach; it seems much more manageable. I'm a bit hesitant about porting existing features to this new patch strategy. Since I'm working on v2 anyway, I'll give it a try there first. To keep development effort low, I'd like to move the v1 operator into "maintenance mode", i.e. no new features, only bug fixes. Unless, of course, a pressing issue needs a new feature while v2 is still being developed.
@kvaps perhaps you have some insight here, too. Let's assume the operator v2 works with some slightly different CRDs: a user might want to apply some customization to only a subset of nodes. The question is: how should that work? In my prototype, if you create a […] Some ideas I had: […]
Hey, I think the best idea is to try to align with well-known design patterns.
Thus I think the […] As for configuration for specific nodes, I would prefer to use a separate CRD, e.g.:

```yaml
nodeSelectorTerms:
  - matchExpressions:
      - key: "kubernetes.io/arch"
        operator: In
        values: ["amd64"]
  - matchExpressions:
      - key: "kubernetes.io/os"
        operator: In
        values: ["linux"]
```

```yaml
nodeSelectorTerms:
  - matchExpressions:
      - key: name
        operator: In
        values:
          - app-worker-node
```

```yaml
nodeSelectorTerms:
  - matchExpressions:
      - key: "failure-domain.kubernetes.io/zone"
        operator: In
        values: ["us-central1-a"]
```

```yaml
nodeSelectorTerms:
  - matchExpressions:
      - key: node-role.kubernetes.io/master
        operator: DoesNotExist
```

Similar logic is used by istio-operator to match namespaces.
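To make the separate-CRD idea concrete, a hypothetical resource pairing node selection with patches could look like the sketch below. The kind `LinstorNodeConfiguration` and its fields are invented for illustration and are not an existing API; only the `nodeSelectorTerms` semantics follow the standard Kubernetes node-affinity API.

```yaml
# Hypothetical CRD, invented for illustration: per-node-group customization.
apiVersion: piraeus.linbit.com/v1
kind: LinstorNodeConfiguration   # not an existing kind
metadata:
  name: amd64-linux-workers
spec:
  # Selects the nodes this configuration applies to,
  # using the same semantics as pod nodeAffinity.
  nodeSelectorTerms:
    - matchExpressions:
        - key: "kubernetes.io/arch"
          operator: In
          values: ["amd64"]
        - key: "kubernetes.io/os"
          operator: In
          values: ["linux"]
  # Customizations applied only to satellite pods on matching nodes.
  patches:
    - resourceType: Pod
      resourceName: linstor-satellite
      type: strategic
      patch: '{"spec":{"containers":[{"name":"linstor-satellite","resources":{"limits":{"memory":"1Gi"}}}]}}'
```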
I was also thinking about a […]
I'd really like not to have to reimplement a DaemonSet, but there are a few edge cases that I believe can't be fixed with a normal DaemonSet. Off the top of my head: […]
But that does not mean we actually need a […]
Well, the driver loader pod can be created by the DaemonSet controller as well. Since you want to use a separate driver loader image for every OS, this is a little bit tricky. But to be honest, I don't see a problem with using a common driver loader image: Debian with additional […] Having multiple kernel-loader images for every OS will make them more difficult to manage and maintain. Ref deckhouse/deckhouse#2269 and piraeusdatastore/piraeus#120.
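One way to read the "tricky" part: with separate loader images, each image needs its own DaemonSet scoped by a node label, roughly as sketched below. The label key, value, and image tag here are illustrative, not from the project.

```yaml
# Sketch: one DaemonSet per driver-loader image, scoped by node label.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: drbd-loader-bionic          # hypothetical name
spec:
  selector:
    matchLabels:
      app: drbd-loader-bionic
  template:
    metadata:
      labels:
        app: drbd-loader-bionic
    spec:
      nodeSelector:
        node.example.com/os-image: ubuntu-bionic   # hypothetical label
      containers:
        - name: drbd-loader
          image: quay.io/piraeusdatastore/drbd9-bionic:latest   # illustrative tag
          securityContext:
            privileged: true
```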
This is implemented with Operator v2.
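For readers landing here: in Operator v2 the general shape is a satellite configuration resource that pairs a node selector with patches. The sketch below is from memory; the exact kind and field names should be checked against the v2 documentation.

```yaml
# Sketch only: verify kind and field names against the Operator v2 docs.
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: example
spec:
  # Apply only to nodes carrying this label.
  nodeSelector:
    example.com/storage: "yes"
  patches:
    - target:
        kind: Pod
        name: satellite
      patch: |
        apiVersion: v1
        kind: Pod
        metadata:
          name: satellite
        spec:
          hostNetwork: true
```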
@WanzenBug following the discussion in #180 (comment), I think I found an alternative way to add customizations to the generated manifests.
The KubeVirt project has a `customizeComponents` option, which works the following way: https://kubevirtlegacy.gitbook.io/user-guide/docs/operations/customize_components
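To illustrate the shape, a sketch adapted from the linked docs (`virt-controller` is one of KubeVirt's own managed deployments):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  customizeComponents:
    patches:
      # Strategic merge patch adding an annotation to a managed Deployment.
      - resourceType: Deployment
        resourceName: virt-controller
        patch: '{"metadata":{"annotations":{"patch":"true"}}}'
        type: strategic
```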
From my point of view it can be better for a few reasons: not only is the `strategic` merge patch supported, but also `json` and `merge` patches, which can be useful to remove fields and customize in a smarter way.
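For example, removing a field outright is something a `json` patch expresses directly. A sketch against the controller Deployment (the target resource and path are illustrative):

```yaml
patches:
  # RFC 6902 json patch: the "remove" op deletes the field at the given path.
  - resourceType: Deployment
    resourceName: linstor-controller
    patch: '[{"op": "remove", "path": "/spec/template/spec/containers/0/resources"}]'
    type: json
```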