feat!: monitoring multiple clusters #266
Conversation
Force-pushed from 3af4434 to af7786d.
Overall looks good. We will need to think about how to release this.
Force-pushed from 390dbaf to 366f6d3.
Thanks for the review, Alex. I just provided a brief overview of the feature and explained the use cases we're building on our infrastructure.
Apologies for the delay, @prometherion. If you can resolve the conflicts, we will merge this and inform the community of the change.
Force-pushed from 366f6d3 to 051b66b.
Thanks, @AlexsJones, just rebased.
Thank you for your contribution! Very nice enhancement!
@@ -273,6 +274,35 @@ func GetDeployment(config v1alpha1.K8sGPT) (*appsv1.Deployment, error) {
			},
		},
	}
	if outOfClusterMode {
Shouldn't we standardize the code based on the Spec?
if config.Spec.Kubeconfig {
Shouldn't we handle all these changes by updating the version of the k8sgpt resource?
It depends on the API governance of the project. However, sticking to the best practices of Kubernetes operator development, as agreed upon in several Cluster API projects and in Kubernetes itself, adding fields to a specification doesn't require a version bump: it's a feature addition that only requires updating the deployed Operator.
This would allow us to maintain two versions of the controller,
In operator development, it's highly discouraged to have two controllers for the same resource, even though it has multiple API versions. Rather, one version is elected as the Hub, which is the one stored in etcd and is used as the "hub" to convert back and forth to the other versions.
preventing any breaking changes and avoiding complicating the migration process
As I stated before, adding fields is not considered a breaking change. The migration process can be achieved by leveraging the Operator SDK/controller-runtime machinery, such as the conversion webhook, which translates API objects between versions transparently for the end user. Adding a conversion requires a much larger code base, as well as webhook management (which requires CA handling): it should be put in place if and only if it's definitely required, and that doesn't seem to be the case here.
Force-pushed from 051b66b to ee55ee3.
Signed-off-by: Dario Tranchitella <[email protected]>
Force-pushed from ee55ee3 to 75a0514.
@AlexsJones I'd like to do my best to get this merged, and I tested it thoroughly, including the case of a previous deployment. With that said, although there are some errors in the operator logs, we're going to preserve the previous behaviour until the cluster administrator manually deletes the k8sgpt instance Deployment, letting it be redeployed according to the new template.

Furthermore, since we're only adding a field, there's no need to bump the CRD version: it's a backward-compatible change. I'm raising the seniority flag here since it's the same review comment I received from Vince on the AWS Cluster API provider. Implementing a new CRD version would be overkill, since it would require a conversion webhook, which in turn requires a TLS certificate, also considering the state of the current version.
It looks good, and I agree @prometherion: we don't yet need a conversion webhook.
LGTM; I think we are good to merge @prometherion @AlexsJones. Apologies for the delay again!
Closes #265
📑 Description
- `k8sgpt.spec.kubeconfig`: when defined and a valid kubeconfig is provided, the `k8sgpt` server will target the configured cluster
- for each `k8sgpt` instance, a Deployment matching its name will be used
- the `Sync` function will fail unless the Deployment is manually deleted, since the `labelSelector` is an immutable field

✅ Checks
ℹ Additional Information
There's no need to bump the `k8sgpt` Custom Resource Definition version since fields are only added, not reorganized. Of course, to take full advantage of this feature, users who installed the Operator through Helm must manually update the generated Custom Resource Definition.
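For context, a `k8sgpt` resource using the new field might look roughly like the following. The API group/version and the exact shape of the `kubeconfig` field (a Secret name and key) are assumptions for illustration only; consult the generated CRD for the authoritative schema:

```yaml
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-remote
spec:
  # Hypothetical field shape: a reference to a Secret holding the
  # kubeconfig of the remote cluster to monitor (assumed names).
  kubeconfig:
    name: remote-cluster-kubeconfig
    key: kubeconfig
```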