Add option to transfer specific PersistentVolumes #104
If I'm understanding correctly, the StorageClass in question has been updated so that new volumes will be created with the new driver, while old volumes persist with the previous driver? What sort of selector are you thinking of for PVs, then? And what sort of output would you like to see in the dry-run output? Just the list of PVs/PVCs/Pods that will be impacted, or more than that?

If not for the current stop-everything-then-migrate behavior, I'd recommend just making a new storageclass with the good settings and migrating to that - but I don't see how adding a selector would help with that, besides letting users manually batch things.
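As a minimal sketch of the "new storageclass with the good settings" alternative, assuming the AWS EBS CSI driver: the class name `gp3-csi` and the parameter values below are illustrative assumptions, not settings from this thread.

```yaml
# Hypothetical replacement StorageClass backed by the EBS CSI driver.
# New PVCs referencing this class get gp3 volumes from the CSI driver,
# while existing PVs stay on the in-tree provisioner until migrated.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-csi
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```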
Yes, you're right. We achieve the benefits of using the CSI driver and gp3 storage in just one step by using pvmigrate.

The problem arises when you want to do it in production, where there are hundreds of pods to migrate. pvmigrate would cause major downtime, as migrations start concurrently for all pods at the same time.
If there's a way to pause statefulset/replicaset self healing, it would be relatively easy to change the logic around to only remove one PVC's pods at a time. Unfortunately, I don't know of a way to do this besides inserting an admissioncontroller and blocking the recreation of pods with that, and doing that would be a large undertaking. If you know of a mechanism that would allow this, let me know! Without that behavior, you'd still be limited to stopping all of a statefulset's replicas in order to migrate any one PVC from that statefulset. (to migrate the PV backing the PVC statefulsetname-0, you need to scale down the statefulset to 0 replicas even if you aren't migrating statefulsetname-1, -2 etc.) Implementing a more general pv/pvc filter mechanism is still plausibly beneficial, though.
I agree.
I added #108 as a start of dry-run support.
The future Kubernetes architecture will provision volumes using drivers. aws-ebs-csi-driver, for example, adds a new storage class (csi-driver) and creates a proxy for old PVs created by kubernetes.io/aws-ebs, but it lacks a script that people could use to migrate. It seems that pvmigrate is the solution for that use case (maybe with some collaboration).

This is a feature request to add a granular selector for PVs. Ideally it would also implement a `--dry-run` option. See kubernetes-sigs/aws-ebs-csi-driver#1287

laverya, wdyt?
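A minimal sketch of what a label selector with dry-run output could look like, under the assumption that selection happens by PV labels; the `pv` struct and `selectPVs` function are hypothetical illustrations, not pvmigrate's actual API or the real client-go types.

```go
package main

import "fmt"

// pv is a pared-down stand-in for a PersistentVolume, carrying only
// the fields a selector would need.
type pv struct {
	name         string
	storageClass string
	labels       map[string]string
}

// selectPVs returns the PVs whose labels contain every key/value in sel.
func selectPVs(pvs []pv, sel map[string]string) []pv {
	var out []pv
	for _, p := range pvs {
		match := true
		for k, v := range sel {
			if p.labels[k] != v {
				match = false
				break
			}
		}
		if match {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	pvs := []pv{
		{name: "pv-a", storageClass: "gp2", labels: map[string]string{"app": "db"}},
		{name: "pv-b", storageClass: "gp2", labels: map[string]string{"app": "cache"}},
	}
	// Dry run: report what would be migrated, change nothing.
	for _, p := range selectPVs(pvs, map[string]string{"app": "db"}) {
		fmt.Printf("would migrate %s (storageclass %s)\n", p.name, p.storageClass)
	}
}
```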