
Message is not right when deleting a PV fails #5132

Open
ecosysbin opened this issue Feb 9, 2025 · 4 comments
Labels
component/rbd Issues related to RBD

Comments


ecosysbin commented Feb 9, 2025

Describe the bug

Deleting a PV failed with the message 'volume deletion failed: rpc error: code = Internal desc = rbd: ret=-39, Directory not empty'.
After analyzing the CSI logs, the actual reason for the failure is 'access denied'.


kubectl logs csi-rbdplugin-provisioner-686ddd4f4c-9f5qr -n cephfs -c csi-rbdplugin

W0126 12:46:49.107929      1 rbd_util.go:596] ID: 251312 Req-ID: 0001-0024-92ab6c78-7edc-11ee-aec4-5e807f521aec-0000000000000006-1b5a6cea-f2c2-4b62-83f3-95d0424077a3 access denied to Ceph MGR-based rbd commands on cluster ID (92ab6c78-7edc-11ee-aec4-5e807f521aec)

E0126 12:46:49.160690      1 rbd_util.go:689] ID: 251311 Req-ID: 0001-0024-92ab6c78-7edc-11ee-aec4-5e807f521aec-0000000000000006-b7dc20db-47ad-42a5-8d6d-37b40afd3af1 failed to delete rbd image: kubernetes/csi-vol-b7dc20db-47ad-42a5-8d6d-37b40afd3af1-temp, rbd: ret=-39, Directory not empty

Environment details

  • Image/version of Ceph CSI driver : v3.13.0
  • Helm chart version :
  • Kernel version :
  • Mounter used for mounting PVC (for cephFS its fuse or kernel. for rbd its
    krbd or rbd-nbd) :
  • Kubernetes cluster version : 1.25
  • Ceph cluster version :

Steps to reproduce

Steps to reproduce the behavior:

  1. Create a Ceph client key which has permission to create a PV but not to delete one.
  2. Create a PV.
  3. Delete the PV.
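One way to produce such a restricted key is to omit the `mgr` capability that the ceph-csi provisioner user normally needs for MGR-based `rbd task` commands. A sketch only (the client name `client.csi-limited` and pool name `kubernetes` are example values, not from this report):

```shell
# Restricted client key: NO `mgr` capability, so MGR-based
# `rbd task` commands (used for asynchronous image deletion)
# are denied, while image creation still works.
ceph auth get-or-create client.csi-limited \
  mon 'profile rbd' \
  osd 'profile rbd pool=kubernetes'
```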

Actual results

The PV is stuck in deleting status with the error message 'volume deletion failed: rpc error: code = Internal desc = rbd: ret=-39, Directory not empty'.

Expected behavior

The PV should be in deleting status with the error message 'access denied to Ceph MGR-based rbd commands on cluster ID xxxxxx'.

Logs

If the issue is in PVC creation, deletion, cloning please attach complete logs
of below containers.

  • csi-provisioner and csi-rbdplugin/csi-cephfsplugin container logs from the
    provisioner pod.




ecosysbin commented Feb 9, 2025

Solution:
The function 'isCephMgrSupported' should also return an error describing the failure, and its callers should propagate that error when isCephMgrSupported fails.

/ceph-csi/internal/rbd/rbd_util.go
func isCephMgrSupported(ctx context.Context, clusterID string, err error) bool {
	switch {
	case err == nil:
		return true
	case strings.Contains(err.Error(), rbdTaskRemoveCmdInvalidString):
		log.WarningLog(
			ctx,
			"cluster with cluster ID (%s) does not support Ceph manager based rbd commands"+
				"(minimum ceph version required is v14.2.3)",
			clusterID)

		return false
	case strings.Contains(err.Error(), rbdTaskRemoveCmdAccessDeniedMessage):
		log.WarningLog(ctx, "access denied to Ceph MGR-based rbd commands on cluster ID (%s)", clusterID)

		return false
	}

	return true
}
==>
func isCephMgrSupported(ctx context.Context, clusterID string, err error) (bool, error) {
	switch {
	case err == nil:
		return true, nil
	case strings.Contains(err.Error(), rbdTaskRemoveCmdInvalidString):
		log.WarningLog(
			ctx,
			"cluster with cluster ID (%s) does not support Ceph manager based rbd commands "+
				"(minimum ceph version required is v14.2.3)",
			clusterID)

		// requires "fmt" in the imports
		return false, fmt.Errorf(
			"cluster with cluster ID (%s) does not support Ceph manager based rbd commands "+
				"(minimum ceph version required is v14.2.3)", clusterID)
	case strings.Contains(err.Error(), rbdTaskRemoveCmdAccessDeniedMessage):
		log.WarningLog(ctx, "access denied to Ceph MGR-based rbd commands on cluster ID (%s)", clusterID)

		return false, fmt.Errorf("access denied to Ceph MGR-based rbd commands on cluster ID (%s)", clusterID)
	}

	return true, nil
}

@nixpanic nixpanic added the component/rbd Issues related to RBD label Feb 11, 2025
@nixpanic
Member

Hi @ecosysbin , thanks for the report!

Would you be willing to post a PR that fixes this? You have a good idea of what can be done to improve the isCephMgrSupported() function. Send the modifications once you have gone through the developer contribution workflow. If you need help, let us know.

Thanks!

@Madhu-1
Collaborator

Madhu-1 commented Feb 11, 2025

This check was added to support older versions of Ceph; once those versions are EOL, we can remove the check as well.

@ecosysbin
Author

Ok, I'm glad to do it. @nixpanic
