Helm ceph-csi-rbd - not enough permissions #1337
Comments
@baznikin I tried to reproduce it with a master build and am not seeing any issue. Can you retry with the v3.1.0 Helm charts? If the issue persists, can you provide the deployment and ClusterRole of rbd, and also the PVC `-oyaml` output?
@Madhu-1 sorry, I found a workaround for my issue and can live with it. My intention was to notify the maintainers (who know their software better than I do) about the missing permissions I found. If you followed my steps and it didn't reproduce with 3.0.0, then it is already fixed and this can be closed.
Closing this as not reproducible.
@XtremeOwnageDotCom Same problem with Helm chart version 3.10 on k8s v1.24. Version 3.9.0 works for me.
The solution is pretty simple: you need to edit the cluster roles for Ceph. For each of those roles, add the ability to get `nodes`. There is probably a fancy way of doing this with a patch, but I did it the simple, stupid way.
As a note, 3.10 also has this issue.
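For reference, the rule added to each affected ClusterRole could look like the following sketch (the verb list is an assumption based on the errors discussed in this thread; adjust it to what your chart version actually needs):

```yaml
# Hypothetical RBAC rule granting node reads to the provisioner ClusterRole.
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
```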
Until that PR hits a release, here is a patch command to fix the issue.
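As a sketch of such a patch (the ClusterRole name `ceph-csi-rbd-provisioner-rules` and the verb list are assumptions drawn from the rest of this thread, not a verbatim command):

```shell
# Sketch: append a rule allowing node reads to the provisioner ClusterRole.
# The role name follows this issue's chart release; adjust to your own.
kubectl patch clusterrole ceph-csi-rbd-provisioner-rules --type='json' \
  -p='[{"op":"add","path":"/rules/-","value":{"apiGroups":[""],"resources":["nodes"],"verbs":["get","list","watch"]}}]'
```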
I used Helm 3 on Kubernetes 1.25.4 and had to add this to the ClusterRole (exactly as if `.Values.topology.domainLabels` were set); otherwise I got these errors from the provisioner:

```
E0822 23:13:52.973668 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:serviceaccount:ceph-csi-rbd:ceph-csi-rbd-provisioner" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
```
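The error above is the provisioner being denied list/watch on CSINode objects, so the missing rule presumably looks something like this sketch (an assumption matched to the error message, not the chart's actual template):

```yaml
# Hypothetical rule matching the forbidden csinodes list/watch in the error:
- apiGroups: ["storage.k8s.io"]
  resources: ["csinodes"]
  verbs: ["get", "list", "watch"]
```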
@skliarie, #4798 should fix the above issue.
Just stepping in: this issue is still present in the 3.12.1 chart on Kubernetes v1.27.4+rke2r1. We hit it after upgrading from 3.9.0. It took some time to figure out, so here is a small backtrace to help anyone searching:
The log trace mentioned here can be found in the ceph-csi-rbd-provisioner pod, in the csi-provisioner container. It will not be visible unless that container is specified, since the default logs come from csi-rbdplugin, not csi-provisioner. The workarounds mentioned above solve this.
Describe the bug
It is not possible to provision with the current (3.0.0) Helm chart version. See Steps to reproduce for details.
Environment details
- Mounter used for mounting PVC (for cephfs it's `fuse` or `kernel`; for rbd it's `krbd` or `rbd-nbd`):

Steps to reproduce
I installed ceph-csi-rbd using the Helm chart:

```
helm install ceph-csi-rbd ceph-csi/ceph-csi-rbd --namespace ceph-csi-rbd -f ceph-csi-rbd.yaml
```
With `topology.enabled=false` (the default value) I got an error about the `nodes` resource (see `ceph-csi/charts/ceph-csi-rbd/templates/provisioner-rules-clusterrole.yaml`, line 54 in 23e0874). I then set `topology.enabled=true`, removed the ClusterRole `ceph-csi-rbd-provisioner-rules`, and upgraded the Helm chart; I got another error. After setting `topology.enabled` back to `false` and upgrading the chart without deleting the ClusterRole `ceph-csi-rbd-provisioner-rules` (so the permissions for `nodes` remained in place), I got working provisioning.
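For completeness, the relevant fragment of the `ceph-csi-rbd.yaml` values file used above might look like this (a sketch: only the `topology.enabled` flag and the `topology.domainLabels` key appear in this thread; the label value is a hypothetical example):

```yaml
# Hypothetical minimal values fragment for the reproduction steps above.
topology:
  enabled: false
  # domainLabels: ["topology.kubernetes.io/zone"]  # example label, assumption
```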