Operator ignores OperatorConfiguration changes #1315
Comments
This is the expected behavior. Changes are not dynamically loaded. Ideally you have a deployment pipeline for the operator and its configuration and run it whenever there is a change, so that the operator pod is replaced.
Thank you for the response!
So if postgresql-operator is actually the controller for the OperatorConfiguration resource, it should sync the state of the resources it controls, or at least do so in a configurable way. Alternatively, this could be implemented with the (now deprecated) ConfigMap configuration mechanism. Please consider implementing a control loop for the operator, controlled by an env variable on the operator pod (disabled by default). I can provide a PR if desired.
Within the Helm chart (https://github.com/zalando/postgres-operator/blob/master/charts/postgres-operator/templates/deployment.yaml#L23), we've observed the operator pod roll due to a combination of a config change and an operator image change, but when it starts up the […] So I would assert:
I think a fix could be an env var to enable such new behavior, as @vladimirfx suggested, but rather than going to the trouble of a full control loop, the operator would simply exit gracefully and K8s would restart it via the regular container lifecycle. On startup, the operator loads the new configuration through its regular startup procedure and does not have to account for dynamic config changes during in-flight reconciles. Would that be an acceptable solution while the maintainers consider how a control loop could/should be implemented?
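A minimal sketch of that exit-and-restart idea, assuming an in-cluster operator pod and using the dynamic client from client-go; the env variable names (`RESTART_ON_CONFIG_CHANGE`, `CONFIG_NAMESPACE`) are hypothetical and not part of the operator today:

```go
package main

import (
	"context"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	// Hypothetical opt-in flag; the behavior stays off unless explicitly enabled.
	if os.Getenv("RESTART_ON_CONFIG_CHANGE") != "true" {
		return
	}

	// The operator pod already runs with a service account, so in-cluster config works.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// GroupVersionResource of the operator's configuration CRD.
	gvr := schema.GroupVersionResource{
		Group:    "acid.zalan.do",
		Version:  "v1",
		Resource: "operatorconfigurations",
	}

	// Hypothetical env var pointing at the namespace the config object lives in.
	ns := os.Getenv("CONFIG_NAMESPACE")

	w, err := client.Resource(gvr).Namespace(ns).Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for ev := range w.ResultChan() {
		if ev.Type == watch.Modified {
			// Exit cleanly; the kubelet restarts the container and the operator
			// re-reads the OperatorConfiguration during its normal startup.
			log.Println("OperatorConfiguration changed, exiting so the pod restarts")
			os.Exit(0)
		}
	}
}
```

Something like this could run as a goroutine next to the existing controller without touching the reconcile code; depending on the existing RBAC, a `watch` verb on `operatorconfigurations` may need to be added.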
Unfortunately, I can't provide a PR even if the idea is accepted. We migrated away from the Zalando operator mainly because of this issue (it can't be used in GitOps without ugly workarounds) and because of the Spilo lock-in (it is a tough task to move away from Spilo-based deployments).
Happy to make the contribution; I will likely pose the same question in the form of a PR.
The operator successfully reads its own OperatorConfiguration resource on start but ignores subsequent updates to it. I can't find any params/env in the docs to configure this behavior.
As a workaround, we are forced to delete the operator pod.