no matches for kind "Plan" in version "upgrade.cattle.io/v1" #298
Comments
You should also apply the CRD manifest:
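For reference, a minimal sketch of what that could look like; the crd.yaml asset name is an assumption and should be verified against the v0.13.4 release page:

```sh
# Install the Plan CRD before creating any Plan objects.
# The asset name below is an assumption; check the release assets for the exact file.
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/download/v0.13.4/crd.yaml
```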
@brandond Hi Brad, thanks for the hint, that at least fixed this error.
Do you see the role in your cluster?
@SISheogorath
@SISheogorath @brandond
There should be two clusterroles and one role. When I adjusted the roles for the controller, I decided to limit secrets and job creation to the namespace of the controller (see system-upgrade-controller/manifests/clusterrole.yaml, lines 53 to 79 at 4a64353).
Maybe this was too restrictive. I just double-checked my setup; the controller is functional here with these roles. The reason I asked for its existence is that it might be related to object ordering: #296
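For illustration only, not the verbatim contents of the referenced lines: a namespace-scoped Role that limits Secret and Job access to the controller's namespace could look roughly like this (the name, resource lists, and verbs are assumptions):

```yaml
# Hypothetical sketch; the real rules live in manifests/clusterrole.yaml, lines 53-79.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: system-upgrade-controller
  namespace: system-upgrade
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "create", "delete"]
```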
Here there are only clusterroles and no role.
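A quick way to check that (the system-upgrade namespace is the controller's default and is an assumption about this setup):

```sh
# Cluster-scoped RBAC objects belonging to the controller
kubectl get clusterrole | grep system-upgrade
# Namespaced role(s), if any
kubectl get role -n system-upgrade
```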
The watch verb is also missing on one of the resources.
If you apply the release manifest a second time (now that the namespace exists), does it fix the issue?
All objects unchanged.
If you use the manifests directory from the tag in the git repository, you have to apply all manifests. The release manifest I referred to is attached to the release on GitHub: https://github.com/rancher/system-upgrade-controller/releases/download/v0.13.4/system-upgrade-controller.yaml
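A sketch of applying that release manifest directly; the URL is the one referenced above:

```sh
# Apply the complete release manifest attached to the v0.13.4 release.
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/download/v0.13.4/system-upgrade-controller.yaml
```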
That looks good. No more error messages, and a role has also been created.
Well, now we know that #296 actually fixed a problem 🙌🏻
The permissions are not correct; the upgrade pod spews errors that it is not allowed to delete pods.
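For illustration only, not a fix confirmed in this thread: deleting pods during a node drain requires RBAC rules along these lines for the service account the upgrade job runs as (all names here are assumptions):

```yaml
# Hypothetical example of the kind of permissions a drain step needs; not taken from the project manifests.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-allow-pod-deletion
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "delete"]
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]
```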
Version
v0.13.4
Platform/Architecture
openSUSE MicroOS 20240221
Describe the bug
When I create a new Plan, I get this error message:
To Reproduce
```sh
kubectl label node master-01 master-02 worker-01 worker-02 worker-03 k3s-upgrade=true
kubectl apply -f k3s-upgrade.yaml
```
k3s-upgrade.yaml:
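The original manifest was not captured here. A representative Plan for nodes labeled k3s-upgrade=true might look like this; the channel URL, namespace, service account, and image are assumptions, not the reporter's actual file:

```yaml
# Hypothetical example of a k3s upgrade Plan; all values are assumptions.
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-upgrade
  namespace: system-upgrade
spec:
  concurrency: 1
  channel: https://update.k3s.io/v1-release/channels/stable
  serviceAccountName: system-upgrade
  nodeSelector:
    matchExpressions:
      - {key: k3s-upgrade, operator: In, values: ["true"]}
  upgrade:
    image: rancher/k3s-upgrade
```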
Expected behavior
The upgrade Plan is created without an error message.
Actual behavior
Error message:
no matches for kind "Plan" in version "upgrade.cattle.io/v1"
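This error usually means the API server does not have the Plan CRD registered. A quick check, using standard kubectl commands (the CRD name follows the plural.group convention):

```sh
# Is the upgrade.cattle.io API group known to the cluster?
kubectl api-resources --api-group=upgrade.cattle.io
# Is the CRD itself installed?
kubectl get crd plans.upgrade.cattle.io
```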
Additional context
Log from the pod: