diff --git a/Documentation/CRDs/Cluster/ceph-cluster-crd.md b/Documentation/CRDs/Cluster/ceph-cluster-crd.md
index ae3c2deed90a..206696dafdd0 100755
--- a/Documentation/CRDs/Cluster/ceph-cluster-crd.md
+++ b/Documentation/CRDs/Cluster/ceph-cluster-crd.md
@@ -38,6 +38,7 @@ Settings can be specified at the global level to apply to the cluster as a whole
   If this value is empty, each pod will get an ephemeral directory to store their config files that is tied to the lifetime of the pod running on that node. More details can be found in the Kubernetes [empty dir docs](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir).
 * `skipUpgradeChecks`: if set to true Rook won't perform any upgrade checks on Ceph daemons during an upgrade. Use this at **YOUR OWN RISK**, only if you know what you're doing. To understand Rook's upgrade process of Ceph, read the [upgrade doc](../../Upgrade/rook-upgrade.md#ceph-version-upgrades).
 * `continueUpgradeAfterChecksEvenIfNotHealthy`: if set to true Rook will continue the OSD daemon upgrade process even if the PGs are not clean, or continue with the MDS upgrade even if the file system is not healthy.
+* `upgradeOSDRequiresHealthyPGs`: if set to true, the OSD upgrade process won't start until PGs are healthy.
 * `dashboard`: Settings for the Ceph dashboard. To view the dashboard in your browser see the [dashboard guide](../../Storage-Configuration/Monitoring/ceph-dashboard.md).
     * `enabled`: Whether to enable the dashboard to view cluster status
     * `urlPrefix`: Allows to serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
diff --git a/Documentation/CRDs/Shared-Filesystem/ceph-fs-subvolumegroup-crd.md b/Documentation/CRDs/Shared-Filesystem/ceph-fs-subvolumegroup-crd.md
index 8f8941c4f49b..4016ee495517 100644
--- a/Documentation/CRDs/Shared-Filesystem/ceph-fs-subvolumegroup-crd.md
+++ b/Documentation/CRDs/Shared-Filesystem/ceph-fs-subvolumegroup-crd.md
@@ -32,6 +32,10 @@ spec:
     distributed: 1 # distributed=<0, 1> (disabled=0)
     # export: # export=<0-256> (disabled=-1)
     # random: # random=[0.0, 1.0] (disabled=0.0)
+  # Quota size of the subvolume group.
+  #quota: 10G
+  # data pool name for the subvolume group layout instead of the default data pool.
+  #dataPoolName: myfs-replicated
 ```

 ## Settings
@@ -48,7 +52,11 @@ If any setting is unspecified, a suitable default will be used automatically.

 * `filesystemName`: The metadata name of the CephFilesystem CR where the subvolume group will be created.

-* `pinning`: To distribute load across MDS ranks in predictable and stable ways. Reference: https://docs.ceph.com/en/latest/cephfs/fs-volumes/#pinning-subvolumes-and-subvolume-groups.
+* `quota`: Quota size of the Ceph Filesystem subvolume group.
+
+* `dataPoolName`: The data pool name for the subvolume group layout instead of the default data pool.
+
+* `pinning`: To distribute load across MDS ranks in predictable and stable ways. See the Ceph doc for [Pinning subvolume groups](https://docs.ceph.com/en/latest/cephfs/fs-volumes/#pinning-subvolumes-and-subvolume-groups).
     * `distributed`: Range: <0, 1>, for disabling it set to 0
     * `export`: Range: <0-256>, for disabling it set to -1
     * `random`: Range: [0.0, 1.0], for disabling it set to 0.0
diff --git a/Documentation/CRDs/specification.md b/Documentation/CRDs/specification.md
index b84be542f70d..64197c517dcf 100644
--- a/Documentation/CRDs/specification.md
+++ b/Documentation/CRDs/specification.md
@@ -945,6 +945,20 @@ The default wait timeout is 10 minutes.
+`upgradeOSDRequiresHealthyPGs`
+UpgradeOSDRequiresHealthyPGs defines if OSD upgrade requires PGs to be clean. If set to true,
+the OSD upgrade process won't start until PGs are healthy.
+This configuration will be ignored if `skipUpgradeChecks` is `true`.
+Default is false.
 `disruptionManagement`
+`quota`
+Quota size of the Ceph Filesystem subvolume group.
+`dataPoolName`
+The data pool name for the Ceph Filesystem subvolume group layout, if the default CephFS pool is not desired.
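For context, a CephFilesystemSubVolumeGroup manifest using the new fields might look like the sketch below. The metadata name `group-a` and namespace `rook-ceph` are illustrative, and `myfs`/`myfs-replicated` follow the example values used in the CRD doc above; substitute your own filesystem and pool names.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystemSubVolumeGroup
metadata:
  # illustrative name and namespace
  name: group-a
  namespace: rook-ceph
spec:
  # metadata name of the CephFilesystem CR hosting this subvolume group
  filesystemName: myfs
  # quota size of the subvolume group
  quota: 10G
  # data pool for the subvolume group layout instead of the default data pool
  dataPoolName: myfs-replicated
```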
+`upgradeOSDRequiresHealthyPGs`
+UpgradeOSDRequiresHealthyPGs defines if OSD upgrade requires PGs to be clean. If set to true,
+the OSD upgrade process won't start until PGs are healthy.
+This configuration will be ignored if `skipUpgradeChecks` is `true`.
+Default is false.
 `disruptionManagement`
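To show how the upgrade gates compose, here is a rough sketch of the relevant portion of a CephCluster spec. The image tag, `dataDirHostPath`, and metadata are illustrative placeholders; only the three upgrade-related settings are the subject of this change.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18   # illustrative image tag
  dataDirHostPath: /var/lib/rook   # illustrative host path
  # Skip all upgrade checks entirely (use at YOUR OWN RISK);
  # when true, upgradeOSDRequiresHealthyPGs is ignored.
  skipUpgradeChecks: false
  # Continue OSD/MDS upgrades even if PGs are not clean or the filesystem is not healthy.
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  # Do not start the OSD upgrade process until PGs are healthy.
  upgradeOSDRequiresHealthyPGs: true
```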