Message is not right when delete pv failed #5132
Comments
solution:
Hi @ecosysbin, thanks for the report! Would you be willing to post a PR that fixes this? You have a good idea of what can be done to improve the error message. Thanks!
It was added to support older versions of Ceph; if those older versions are EOL, we can remove the check as well.
Ok, I'm glad to do it. @nixpanic |
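For context on the check mentioned above, here is a minimal, hypothetical sketch of what a version-gated fallback for older Ceph releases can look like. None of these names (supportsMgrCommands, deleteViaMgr, deleteViaLegacyPath) come from ceph-csi, and the version cutoff is an assumption made purely for illustration; once the old releases are EOL, the whole fallback branch, and the confusing errors it produces, could be dropped.

```go
package main

import (
	"errors"
	"fmt"
)

// errAccessDenied stands in for the MGR permission failure seen in the csi logs.
var errAccessDenied = errors.New("access denied to Ceph MGR-based rbd commands")

// supportsMgrCommands is a hypothetical probe for whether the cluster is new
// enough to use MGR-based rbd commands.
func supportsMgrCommands(majorVersion int) bool {
	return majorVersion >= 14 // assumption: MGR-based commands exist from this release on
}

// deleteImage keeps the legacy branch only for old clusters; removing the
// check means removing deleteViaLegacyPath and its misleading errors.
func deleteImage(majorVersion int) error {
	if supportsMgrCommands(majorVersion) {
		return deleteViaMgr()
	}
	return deleteViaLegacyPath()
}

func deleteViaMgr() error        { return errAccessDenied }
func deleteViaLegacyPath() error { return errors.New("rbd: ret=-39, Directory not empty") }

func main() {
	fmt.Println(deleteImage(14)) // MGR-based path
	fmt.Println(deleteImage(12)) // legacy fallback
}
```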
Describe the bug
Deleting a PV failed with the message 'volume deletion failed: rpc error: code = Internal desc = rbd: ret=-39, Directory not empty'.
After analyzing the CSI logs, the actual failure reason is 'access denied'.
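To make the symptom concrete, here is a minimal sketch, assuming (this is not ceph-csi's actual code) that the driver first tries an MGR-based delete and then a fallback path: the original 'access denied' error is only logged, while the fallback's ENOTEMPTY (-39) error is what gets wrapped into the gRPC Internal status and ends up on the PV.

```go
package main

import (
	"errors"
	"fmt"
	"log"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

var (
	errAccessDenied = errors.New("access denied to Ceph MGR-based rbd commands on cluster ID xxxxxx")
	errNotEmpty     = errors.New("rbd: ret=-39, Directory not empty")
)

func deleteViaMgr() error      { return errAccessDenied }
func deleteViaFallback() error { return errNotEmpty }

// deleteVolume mirrors the reported behaviour: the real reason is only
// written to the plugin log, while the later fallback error is what gets
// wrapped into the gRPC status and shows up on the PV.
func deleteVolume() error {
	if err := deleteViaMgr(); err != nil {
		log.Printf("MGR-based delete failed, trying fallback: %v", err) // visible only in csi logs
		if ferr := deleteViaFallback(); ferr != nil {
			return status.Error(codes.Internal, ferr.Error()) // masks errAccessDenied
		}
	}
	return nil
}

func main() {
	// Prints: rpc error: code = Internal desc = rbd: ret=-39, Directory not empty
	fmt.Println(deleteVolume())
}
```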
Environment details
Mounter used for mounting PVC (for cephFS its fuse or kernel, for rbd its krbd or rbd-nbd):
Steps to reproduce
Steps to reproduce the behavior:
Actual results
The PV is stuck in deleting status with the error message 'volume deletion failed: rpc error: code = Internal desc = rbd: ret=-39, Directory not empty'.
Expected behavior
The PV shows the actual failure reason in its error message: 'access denied to Ceph MGR-based rbd commands on cluster ID xxxxxx'.
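Reusing the hypothetical helpers from the sketch under "Describe the bug" above, one possible way to surface the expected message, offered only as a sketch and not as ceph-csi's actual fix, is to return the permission error directly instead of letting the fallback error replace it:

```go
// deleteVolumeFixed surfaces the root cause: a permission failure on the
// MGR-based path is returned as-is, and when the fallback also fails the
// original error stays attached to the status message.
func deleteVolumeFixed() error {
	err := deleteViaMgr()
	if err == nil {
		return nil
	}
	if errors.Is(err, errAccessDenied) {
		return status.Error(codes.Internal, err.Error()) // PV now shows "access denied ..."
	}
	if ferr := deleteViaFallback(); ferr != nil {
		return status.Errorf(codes.Internal, "%v (fallback also failed: %v)", err, ferr)
	}
	return nil
}
```

With a change along these lines, the PV event would carry the same 'access denied ...' text that the csi logs already show.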
Logs
If the issue is in PVC creation, deletion, or cloning, please attach complete logs of the below containers.
provisioner pod.
If the issue is in PVC resize, please attach complete logs of the below containers.
provisioner pod.
If the issue is in snapshot creation and deletion, please attach complete logs of the below containers.
provisioner pod.
If the issue is in PVC mounting, please attach complete logs of the below containers.
csi-rbdplugin/csi-cephfsplugin and driver-registrar container logs from the
plugin pod on the node where the mount is failing.
If required, attach dmesg logs.
Note: if it's an rbd issue, please provide only rbd-related logs; if it's a
cephFS issue, please provide cephFS logs.
Additional context
Add any other context about the problem here.
For example:
Any existing bug report which describes a similar issue/behavior