From 6c8ba0136ca97ba9ae1e13764b8270f34a448b3a Mon Sep 17 00:00:00 2001
From: sp98 
Date: Mon, 29 Jan 2024 15:54:34 +0530
Subject: [PATCH 01/65] doc: add support for using azure kms

Users on Microsoft Azure can make use of the Azure key vault service
rather than relying on any third-party service for KMS.

Signed-off-by: sp98 
---
 .../Advanced/key-management-system.md | 55 +++++++++++++++----
 1 file changed, 45 insertions(+), 10 deletions(-)

diff --git a/Documentation/Storage-Configuration/Advanced/key-management-system.md b/Documentation/Storage-Configuration/Advanced/key-management-system.md
index 9877c91ce79d..a5640694623c 100644
--- a/Documentation/Storage-Configuration/Advanced/key-management-system.md
+++ b/Documentation/Storage-Configuration/Advanced/key-management-system.md
@@ -22,16 +22,18 @@ The `security` section contains settings related to encryption of the cluster.
 
 Supported KMS providers:
 
-- [Vault](#vault)
-    - [Authentication methods](#authentication-methods)
-    - [Token-based authentication](#token-based-authentication)
-    - [Kubernetes-based authentication](#kubernetes-based-authentication)
-    - [General Vault configuration](#general-vault-configuration)
-    - [TLS configuration](#tls-configuration)
-- [IBM Key Protect](#ibm-key-protect)
-    - [Configuration](#configuration)
-- [Key Management Interoperability Protocol](#key-management-interoperability-protocol)
-    - [Configuration](#configuration-1)
+* [Vault](#vault)
+    * [Authentication methods](#authentication-methods)
+    * [Token-based authentication](#token-based-authentication)
+    * [Kubernetes-based authentication](#kubernetes-based-authentication)
+    * [General Vault configuration](#general-vault-configuration)
+    * [TLS configuration](#tls-configuration)
+* [IBM Key Protect](#ibm-key-protect)
+    * [Configuration](#configuration)
+* [Key Management Interoperability Protocol](#key-management-interoperability-protocol)
+    * [Configuration](#configuration-1)
+* [Azure Key Vault](#azure-key-vault)
+    * [Client 
Authentication](#client-authentication)
 
 ## Vault
 
@@ -334,3 +336,36 @@ security:
       # name of the k8s secret containing the credentials.
       tokenSecretName: kmip-credentials
 ```
+
+## Azure Key Vault
+
+Rook supports storing OSD encryption keys in [Azure Key Vault](https://learn.microsoft.com/en-us/azure/key-vault/general/quick-create-portal).
+
+### Client Authentication
+
+Different methods are available in Azure to authenticate a client. Rook supports Azure's recommended method of authentication: a service principal with a certificate. Refer to the following Azure documentation to set up the key vault and authenticate to it via a service principal and certificate:
+
+* [Create Azure Key Vault](https://learn.microsoft.com/en-us/azure/key-vault/general/quick-create-portal)
+    * `AZURE_VAULT_URL` can be retrieved at this step
+
+* [Create Service Principal](https://learn.microsoft.com/en-us/entra/identity-platform/howto-create-service-principal)
+    * `AZURE_CLIENT_ID` and `AZURE_TENANT_ID` can be obtained after creating the service principal
+    * Ensure that the service principal is authenticated with a certificate and not with a client secret.

+* [Set Azure Key Vault RBAC](https://learn.microsoft.com/en-us/azure/key-vault/general/rbac-guide?tabs=azure-cli#enable-azure-rbac-permissions-on-key-vault)
+    * Ensure that the role assigned to the key vault is able to create, retrieve, and delete secrets in the key vault.
+
+Provide the following KMS connection details in order to connect with Azure Key Vault.
+
+```yaml
+security:
+  kms:
+    connectionDetails:
+      KMS_PROVIDER: azure-kv
+      AZURE_VAULT_URL: https://.vault.azure.net
+      AZURE_CLIENT_ID: Application ID of an Azure service principal
+      AZURE_TENANT_ID: ID of the application's Microsoft Entra tenant
+      AZURE_CERT_SECRET_NAME: 
+```
+
+* `AZURE_CERT_SECRET_NAME` should hold the name of the k8s secret. 
The secret data should be the base64-encoded certificate along with the private key (without password protection).

From f7a9d8ff7b63150dd88f374a196a1764f69c0b70 Mon Sep 17 00:00:00 2001
From: parth-gr 
Date: Thu, 14 Dec 2023 18:30:11 +0530
Subject: [PATCH 02/65] core: added rook-ceph-default service account

When a private docker registry is used and an image pull secret is
specified in the chart, the pods with the default Service Account fail
to pull the image due to authentication issues.

Added the rook-ceph-default service account and modified the pod
specifications by adding the serviceAccountName.

closes: https://github.com/rook/rook/issues/12786
Closes: https://github.com/rook/rook/issues/6673

Co-authored-by: Tareq Sharafy 
Signed-off-by: parth-gr 
(cherry picked from commit 737fb099feafa01489b233cb64f889c61b3b6016)
Signed-off-by: parth-gr 
---
 .../Prerequisites/authenticated-registry.md | 13 ++++---------
 PendingReleaseNotes.md | 1 +
 build/csv/csv-gen.sh | 2 +-
 .../library/templates/_cluster-serviceaccount.tpl | 11 +++++++++++
 .../templates/securityContextConstraints.yaml | 1 +
 deploy/examples/common-second-cluster.yaml | 12 ++++++++++++
 deploy/examples/common.yaml | 12 ++++++++++++
 pkg/apis/ceph.rook.io/v1/scc.go | 2 +-
 pkg/operator/ceph/cluster/cleanup.go | 7 ++++---
 pkg/operator/ceph/cluster/mon/spec.go | 7 ++++---
 pkg/operator/ceph/cluster/mon/spec_test.go | 1 +
 pkg/operator/ceph/cluster/nodedaemon/crash.go | 11 ++++++-----
 pkg/operator/ceph/cluster/nodedaemon/exporter.go | 1 +
 .../ceph/cluster/nodedaemon/exporter_test.go | 1 +
 pkg/operator/ceph/cluster/nodedaemon/pruner.go | 7 ++++---
 pkg/operator/ceph/cluster/rbd/spec.go | 9 +++++----
 pkg/operator/ceph/cluster/rbd/spec_test.go | 3 ++-
 pkg/operator/ceph/file/mds/spec.go | 9 +++++----
 pkg/operator/ceph/file/mds/spec_test.go | 2 +-
 pkg/operator/ceph/file/mirror/spec.go | 9 +++++----
 pkg/operator/ceph/file/mirror/spec_test.go | 2 ++
 pkg/operator/ceph/nfs/spec.go | 3 ++-
 pkg/operator/ceph/nfs/spec_test.go | 2 ++
 
pkg/operator/k8sutil/cmdreporter/cmdreporter.go | 3 ++- pkg/operator/k8sutil/k8sutil.go | 3 ++- tests/framework/installer/ceph_settings.go | 1 + 26 files changed, 93 insertions(+), 42 deletions(-) diff --git a/Documentation/Getting-Started/Prerequisites/authenticated-registry.md b/Documentation/Getting-Started/Prerequisites/authenticated-registry.md index 503f9234d5ae..e9f346dadcdc 100644 --- a/Documentation/Getting-Started/Prerequisites/authenticated-registry.md +++ b/Documentation/Getting-Started/Prerequisites/authenticated-registry.md @@ -3,9 +3,7 @@ title: Authenticated Container Registries --- If you want to use an image from authenticated docker registry (e.g. for image cache/mirror), you'll need to -add an `imagePullSecret` to all relevant service accounts. This way all pods created by the operator (for service account: -`rook-ceph-system`) or all new pods in the namespace (for service account: `default`) will have the `imagePullSecret` added -to their spec. +add an `imagePullSecret` to all relevant service accounts. See the next section for the required service accounts. The whole process is described in the [official kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account). @@ -29,25 +27,22 @@ imagePullSecrets: The service accounts are: * `rook-ceph-system` (namespace: `rook-ceph`): Will affect all pods created by the rook operator in the `rook-ceph` namespace. -* `default` (namespace: `rook-ceph`): Will affect most pods in the `rook-ceph` namespace. +* `rook-ceph-default` (namespace: `rook-ceph`): Will affect most pods in the `rook-ceph` namespace. * `rook-ceph-mgr` (namespace: `rook-ceph`): Will affect the MGR pods in the `rook-ceph` namespace. * `rook-ceph-osd` (namespace: `rook-ceph`): Will affect the OSD pods in the `rook-ceph` namespace. * `rook-ceph-rgw` (namespace: `rook-ceph`): Will affect the RGW pods in the `rook-ceph` namespace. 
-You can do it either via e.g. `kubectl -n edit serviceaccount default` or by modifying the [`operator.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/operator.yaml) -and [`cluster.yaml`](https://github.com/rook/rook/blob/master/deploy/examples/cluster.yaml) before deploying them. - Since it's the same procedure for all service accounts, here is just one example: ```console -kubectl -n rook-ceph edit serviceaccount default +kubectl -n rook-ceph edit serviceaccount rook-ceph-default ``` ```yaml hl_lines="9-10" apiVersion: v1 kind: ServiceAccount metadata: - name: default + name: rook-ceph-default namespace: rook-ceph secrets: - name: default-token-12345 diff --git a/PendingReleaseNotes.md b/PendingReleaseNotes.md index 3c0f7a94bdfe..76426906d391 100644 --- a/PendingReleaseNotes.md +++ b/PendingReleaseNotes.md @@ -9,3 +9,4 @@ read affinity setting in cephCluster CR (CSIDriverOptions section) in [PR](https ## Features - Kubernetes versions **v1.24** through **v1.29** are supported. +- Ceph daemon pods using the `default` service account now use a new `rook-ceph-default` service account. 
diff --git a/build/csv/csv-gen.sh b/build/csv/csv-gen.sh index e55d3f448563..cab017e01dc4 100755 --- a/build/csv/csv-gen.sh +++ b/build/csv/csv-gen.sh @@ -23,7 +23,7 @@ ASSEMBLE_FILE_OCP="../../deploy/olm/assemble/metadata-ocp.yaml" ############# function generate_csv() { - kubectl kustomize ../../deploy/examples/ | "$operator_sdk" generate bundle --package="rook-ceph" --output-dir="../../build/csv/ceph/$PLATFORM" --extra-service-accounts=rook-ceph-system,rook-csi-rbd-provisioner-sa,rook-csi-rbd-plugin-sa,rook-csi-cephfs-provisioner-sa,rook-csi-nfs-provisioner-sa,rook-csi-nfs-plugin-sa,rook-csi-cephfs-plugin-sa,rook-ceph-system,rook-ceph-rgw,rook-ceph-purge-osd,rook-ceph-osd,rook-ceph-mgr,rook-ceph-cmd-reporter + kubectl kustomize ../../deploy/examples/ | "$operator_sdk" generate bundle --package="rook-ceph" --output-dir="../../build/csv/ceph/$PLATFORM" --extra-service-accounts=rook-ceph-default,rook-csi-rbd-provisioner-sa,rook-csi-rbd-plugin-sa,rook-csi-cephfs-provisioner-sa,rook-csi-nfs-provisioner-sa,rook-csi-nfs-plugin-sa,rook-csi-cephfs-plugin-sa,rook-ceph-system,rook-ceph-rgw,rook-ceph-purge-osd,rook-ceph-osd,rook-ceph-mgr,rook-ceph-cmd-reporter # cleanup to get the expected state before merging the real data from assembles "${YQ_CMD_DELETE[@]}" "$CSV_FILE_NAME" 'spec.icon[*]' diff --git a/deploy/charts/library/templates/_cluster-serviceaccount.tpl b/deploy/charts/library/templates/_cluster-serviceaccount.tpl index fcc9932f3871..c6709f370972 100644 --- a/deploy/charts/library/templates/_cluster-serviceaccount.tpl +++ b/deploy/charts/library/templates/_cluster-serviceaccount.tpl @@ -57,4 +57,15 @@ metadata: storage-backend: ceph {{- include "library.rook-ceph.labels" . | nindent 4 }} {{ include "library.imagePullSecrets" . 
}} +--- +# Service account for other components +apiVersion: v1 +kind: ServiceAccount +metadata: + name: rook-ceph-default + namespace: {{ .Release.Namespace }} # namespace:cluster + labels: + operator: rook + storage-backend: ceph +{{ include "library.imagePullSecrets" . }} {{ end }} diff --git a/deploy/charts/rook-ceph-cluster/templates/securityContextConstraints.yaml b/deploy/charts/rook-ceph-cluster/templates/securityContextConstraints.yaml index 893350a9b205..f79bcef07f79 100644 --- a/deploy/charts/rook-ceph-cluster/templates/securityContextConstraints.yaml +++ b/deploy/charts/rook-ceph-cluster/templates/securityContextConstraints.yaml @@ -41,4 +41,5 @@ users: - system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-mgr - system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-osd - system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-rgw + - system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-default {{- end }} diff --git a/deploy/examples/common-second-cluster.yaml b/deploy/examples/common-second-cluster.yaml index a7b7ff72d01a..c19a618c2d55 100644 --- a/deploy/examples/common-second-cluster.yaml +++ b/deploy/examples/common-second-cluster.yaml @@ -224,6 +224,18 @@ metadata: name: rook-ceph-mgr namespace: rook-ceph-secondary # namespace:cluster --- +# Service account for other components +apiVersion: v1 +kind: ServiceAccount +metadata: + name: rook-ceph-default + namespace: rook-ceph-secondary # namespace:cluster + labels: + operator: rook + storage-backend: ceph +# imagePullSecrets: +# - name: my-registry-secret +--- apiVersion: v1 kind: ServiceAccount metadata: diff --git a/deploy/examples/common.yaml b/deploy/examples/common.yaml index c344860c1b04..5495bf631ab3 100644 --- a/deploy/examples/common.yaml +++ b/deploy/examples/common.yaml @@ -1154,6 +1154,18 @@ metadata: # imagePullSecrets: # - name: my-registry-secret --- +# Service account for other components +apiVersion: v1 +kind: ServiceAccount +metadata: + name: rook-ceph-default + 
namespace: rook-ceph # namespace:cluster + labels: + operator: rook + storage-backend: ceph +# imagePullSecrets: +# - name: my-registry-secret +--- # Service account for Ceph mgrs apiVersion: v1 kind: ServiceAccount diff --git a/pkg/apis/ceph.rook.io/v1/scc.go b/pkg/apis/ceph.rook.io/v1/scc.go index 8db6efb7454b..8a76c1566903 100644 --- a/pkg/apis/ceph.rook.io/v1/scc.go +++ b/pkg/apis/ceph.rook.io/v1/scc.go @@ -69,7 +69,7 @@ func NewSecurityContextConstraints(name string, namespaces ...string) *secv1.Sec for _, ns := range namespaces { users = append(users, []string{ fmt.Sprintf("system:serviceaccount:%s:rook-ceph-system", ns), - fmt.Sprintf("system:serviceaccount:%s:default", ns), + fmt.Sprintf("system:serviceaccount:%s:rook-ceph-default", ns), fmt.Sprintf("system:serviceaccount:%s:rook-ceph-mgr", ns), fmt.Sprintf("system:serviceaccount:%s:rook-ceph-osd", ns), fmt.Sprintf("system:serviceaccount:%s:rook-ceph-rgw", ns), diff --git a/pkg/operator/ceph/cluster/cleanup.go b/pkg/operator/ceph/cluster/cleanup.go index 23ece72b1eda..6c87353b09c4 100644 --- a/pkg/operator/ceph/cluster/cleanup.go +++ b/pkg/operator/ceph/cluster/cleanup.go @@ -158,9 +158,10 @@ func (c *ClusterController) cleanUpJobTemplateSpec(cluster *cephv1.CephCluster, Containers: []v1.Container{ c.cleanUpJobContainer(cluster, monSecret, clusterFSID), }, - Volumes: volumes, - RestartPolicy: v1.RestartPolicyOnFailure, - PriorityClassName: cephv1.GetCleanupPriorityClassName(cluster.Spec.PriorityClassNames), + Volumes: volumes, + RestartPolicy: v1.RestartPolicyOnFailure, + PriorityClassName: cephv1.GetCleanupPriorityClassName(cluster.Spec.PriorityClassNames), + ServiceAccountName: k8sutil.DefaultServiceAccount, }, } diff --git a/pkg/operator/ceph/cluster/mon/spec.go b/pkg/operator/ceph/cluster/mon/spec.go index af8db024e8bb..69ea83d8888c 100644 --- a/pkg/operator/ceph/cluster/mon/spec.go +++ b/pkg/operator/ceph/cluster/mon/spec.go @@ -186,9 +186,10 @@ func (c *Cluster) makeMonPod(monConfig *monConfig, canary 
bool) (*corev1.Pod, er RestartPolicy: corev1.RestartPolicyAlways, // we decide later whether to use a PVC volume or host volumes for mons, so only populate // the base volumes at this point. - Volumes: controller.DaemonVolumesBase(monConfig.DataPathMap, keyringStoreName, c.spec.DataDirHostPath), - HostNetwork: monConfig.UseHostNetwork, - PriorityClassName: cephv1.GetMonPriorityClassName(c.spec.PriorityClassNames), + Volumes: controller.DaemonVolumesBase(monConfig.DataPathMap, keyringStoreName, c.spec.DataDirHostPath), + HostNetwork: monConfig.UseHostNetwork, + PriorityClassName: cephv1.GetMonPriorityClassName(c.spec.PriorityClassNames), + ServiceAccountName: k8sutil.DefaultServiceAccount, } // If the log collector is enabled we add the side-car container diff --git a/pkg/operator/ceph/cluster/mon/spec_test.go b/pkg/operator/ceph/cluster/mon/spec_test.go index 3c5d0b43280f..d336654a313d 100644 --- a/pkg/operator/ceph/cluster/mon/spec_test.go +++ b/pkg/operator/ceph/cluster/mon/spec_test.go @@ -72,6 +72,7 @@ func testPodSpec(t *testing.T, monID string, pvc bool) { d, err := c.makeDeployment(monConfig, false) assert.NoError(t, err) assert.NotNil(t, d) + assert.Equal(t, k8sutil.DefaultServiceAccount, d.Spec.Template.Spec.ServiceAccountName) if pvc { d.Spec.Template.Spec.Volumes = append( diff --git a/pkg/operator/ceph/cluster/nodedaemon/crash.go b/pkg/operator/ceph/cluster/nodedaemon/crash.go index c6b9e7e09412..97abb34a3f9d 100644 --- a/pkg/operator/ceph/cluster/nodedaemon/crash.go +++ b/pkg/operator/ceph/cluster/nodedaemon/crash.go @@ -116,11 +116,12 @@ func (r *ReconcileNode) createOrUpdateCephCrash(node corev1.Node, tolerations [] Containers: []corev1.Container{ getCrashDaemonContainer(cephCluster, *cephVersion), }, - Tolerations: tolerations, - RestartPolicy: corev1.RestartPolicyAlways, - HostNetwork: cephCluster.Spec.Network.IsHost(), - Volumes: volumes, - PriorityClassName: cephv1.GetCrashCollectorPriorityClassName(cephCluster.Spec.PriorityClassNames), + 
Tolerations: tolerations, + RestartPolicy: corev1.RestartPolicyAlways, + HostNetwork: cephCluster.Spec.Network.IsHost(), + Volumes: volumes, + PriorityClassName: cephv1.GetCrashCollectorPriorityClassName(cephCluster.Spec.PriorityClassNames), + ServiceAccountName: k8sutil.DefaultServiceAccount, }, } diff --git a/pkg/operator/ceph/cluster/nodedaemon/exporter.go b/pkg/operator/ceph/cluster/nodedaemon/exporter.go index f0a196384eb6..ca66fb540ce5 100644 --- a/pkg/operator/ceph/cluster/nodedaemon/exporter.go +++ b/pkg/operator/ceph/cluster/nodedaemon/exporter.go @@ -143,6 +143,7 @@ func (r *ReconcileNode) createOrUpdateCephExporter(node corev1.Node, tolerations Volumes: volumes, PriorityClassName: cephv1.GetCephExporterPriorityClassName(cephCluster.Spec.PriorityClassNames), TerminationGracePeriodSeconds: &terminationGracePeriodSeconds, + ServiceAccountName: k8sutil.DefaultServiceAccount, }, } cephv1.GetCephExporterAnnotations(cephCluster.Spec.Annotations).ApplyToObjectMeta(&deploy.Spec.Template.ObjectMeta) diff --git a/pkg/operator/ceph/cluster/nodedaemon/exporter_test.go b/pkg/operator/ceph/cluster/nodedaemon/exporter_test.go index fa3e635a0d35..6a72b1776bf9 100644 --- a/pkg/operator/ceph/cluster/nodedaemon/exporter_test.go +++ b/pkg/operator/ceph/cluster/nodedaemon/exporter_test.go @@ -103,6 +103,7 @@ func TestCreateOrUpdateCephExporter(t *testing.T) { assert.Equal(t, tolerations, podSpec.Spec.Tolerations) assert.Equal(t, false, podSpec.Spec.HostNetwork) assert.Equal(t, "", podSpec.Spec.PriorityClassName) + assert.Equal(t, k8sutil.DefaultServiceAccount, podSpec.Spec.ServiceAccountName) assertCephExporterArgs(t, podSpec.Spec.Containers[0].Args, cephCluster.Spec.Network.DualStack || cephCluster.Spec.Network.IPFamily == "IPv6") diff --git a/pkg/operator/ceph/cluster/nodedaemon/pruner.go b/pkg/operator/ceph/cluster/nodedaemon/pruner.go index 25e19d22845a..bb8e3966bf92 100644 --- a/pkg/operator/ceph/cluster/nodedaemon/pruner.go +++ 
b/pkg/operator/ceph/cluster/nodedaemon/pruner.go @@ -107,9 +107,10 @@ func (r *ReconcileNode) createOrUpdateCephCron(cephCluster cephv1.CephCluster, c Containers: []corev1.Container{ getCrashPruneContainer(cephCluster, *cephVersion), }, - RestartPolicy: corev1.RestartPolicyNever, - HostNetwork: cephCluster.Spec.Network.IsHost(), - Volumes: volumes, + RestartPolicy: corev1.RestartPolicyNever, + HostNetwork: cephCluster.Spec.Network.IsHost(), + Volumes: volumes, + ServiceAccountName: k8sutil.DefaultServiceAccount, }, } diff --git a/pkg/operator/ceph/cluster/rbd/spec.go b/pkg/operator/ceph/cluster/rbd/spec.go index 35ba7bae831f..2b846eae826d 100644 --- a/pkg/operator/ceph/cluster/rbd/spec.go +++ b/pkg/operator/ceph/cluster/rbd/spec.go @@ -39,10 +39,11 @@ func (r *ReconcileCephRBDMirror) makeDeployment(daemonConfig *daemonConfig, rbdM Containers: []v1.Container{ r.makeMirroringDaemonContainer(daemonConfig, rbdMirror), }, - RestartPolicy: v1.RestartPolicyAlways, - Volumes: controller.DaemonVolumes(daemonConfig.DataPathMap, daemonConfig.ResourceName, r.cephClusterSpec.DataDirHostPath), - HostNetwork: r.cephClusterSpec.Network.IsHost(), - PriorityClassName: rbdMirror.Spec.PriorityClassName, + RestartPolicy: v1.RestartPolicyAlways, + Volumes: controller.DaemonVolumes(daemonConfig.DataPathMap, daemonConfig.ResourceName, r.cephClusterSpec.DataDirHostPath), + HostNetwork: r.cephClusterSpec.Network.IsHost(), + PriorityClassName: rbdMirror.Spec.PriorityClassName, + ServiceAccountName: k8sutil.DefaultServiceAccount, }, } diff --git a/pkg/operator/ceph/cluster/rbd/spec_test.go b/pkg/operator/ceph/cluster/rbd/spec_test.go index 981f8538a613..d03596645f52 100644 --- a/pkg/operator/ceph/cluster/rbd/spec_test.go +++ b/pkg/operator/ceph/cluster/rbd/spec_test.go @@ -23,9 +23,9 @@ import ( "github.com/rook/rook/pkg/client/clientset/versioned/scheme" cephclient "github.com/rook/rook/pkg/daemon/ceph/client" "github.com/rook/rook/pkg/operator/ceph/config" - 
"github.com/rook/rook/pkg/operator/ceph/test" cephver "github.com/rook/rook/pkg/operator/ceph/version" + "github.com/rook/rook/pkg/operator/k8sutil" "github.com/stretchr/testify/assert" v1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/api/resource" @@ -91,6 +91,7 @@ func TestPodSpec(t *testing.T) { assert.Equal(t, 5, len(d.Spec.Template.Spec.Volumes)) assert.Equal(t, 1, len(d.Spec.Template.Spec.Volumes[0].Projected.Sources)) assert.Equal(t, 5, len(d.Spec.Template.Spec.Containers[0].VolumeMounts)) + assert.Equal(t, k8sutil.DefaultServiceAccount, d.Spec.Template.Spec.ServiceAccountName) // Deployment should have Ceph labels test.AssertLabelsContainCephRequirements(t, d.ObjectMeta.Labels, diff --git a/pkg/operator/ceph/file/mds/spec.go b/pkg/operator/ceph/file/mds/spec.go index 83d38dab2843..426957a2409c 100644 --- a/pkg/operator/ceph/file/mds/spec.go +++ b/pkg/operator/ceph/file/mds/spec.go @@ -61,10 +61,11 @@ func (c *Cluster) makeDeployment(mdsConfig *mdsConfig, fsNamespacedname types.Na Containers: []v1.Container{ mdsContainer, }, - RestartPolicy: v1.RestartPolicyAlways, - Volumes: controller.DaemonVolumes(mdsConfig.DataPathMap, mdsConfig.ResourceName, c.clusterSpec.DataDirHostPath), - HostNetwork: c.clusterSpec.Network.IsHost(), - PriorityClassName: c.fs.Spec.MetadataServer.PriorityClassName, + RestartPolicy: v1.RestartPolicyAlways, + Volumes: controller.DaemonVolumes(mdsConfig.DataPathMap, mdsConfig.ResourceName, c.clusterSpec.DataDirHostPath), + HostNetwork: c.clusterSpec.Network.IsHost(), + PriorityClassName: c.fs.Spec.MetadataServer.PriorityClassName, + ServiceAccountName: k8sutil.DefaultServiceAccount, }, } diff --git a/pkg/operator/ceph/file/mds/spec_test.go b/pkg/operator/ceph/file/mds/spec_test.go index 18445edc14b7..803c3b6c019e 100644 --- a/pkg/operator/ceph/file/mds/spec_test.go +++ b/pkg/operator/ceph/file/mds/spec_test.go @@ -28,7 +28,6 @@ import ( "github.com/rook/rook/pkg/clusterd" cephclient "github.com/rook/rook/pkg/daemon/ceph/client" cephver 
"github.com/rook/rook/pkg/operator/ceph/version" - testop "github.com/rook/rook/pkg/operator/test" "github.com/stretchr/testify/assert" apps "k8s.io/api/apps/v1" @@ -104,6 +103,7 @@ func TestPodSpecs(t *testing.T) { assert.NotNil(t, d) assert.Equal(t, v1.RestartPolicyAlways, d.Spec.Template.Spec.RestartPolicy) + assert.Equal(t, k8sutil.DefaultServiceAccount, d.Spec.Template.Spec.ServiceAccountName) // Deployment should have Ceph labels test.AssertLabelsContainCephRequirements(t, d.ObjectMeta.Labels, diff --git a/pkg/operator/ceph/file/mirror/spec.go b/pkg/operator/ceph/file/mirror/spec.go index 4b197e039956..8e9153b5bc28 100644 --- a/pkg/operator/ceph/file/mirror/spec.go +++ b/pkg/operator/ceph/file/mirror/spec.go @@ -42,10 +42,11 @@ func (r *ReconcileFilesystemMirror) makeDeployment(daemonConfig *daemonConfig, f Containers: []v1.Container{ r.makeFsMirroringDaemonContainer(daemonConfig, fsMirror), }, - RestartPolicy: v1.RestartPolicyAlways, - Volumes: controller.DaemonVolumes(daemonConfig.DataPathMap, daemonConfig.ResourceName, r.cephClusterSpec.DataDirHostPath), - HostNetwork: r.cephClusterSpec.Network.IsHost(), - PriorityClassName: fsMirror.Spec.PriorityClassName, + RestartPolicy: v1.RestartPolicyAlways, + Volumes: controller.DaemonVolumes(daemonConfig.DataPathMap, daemonConfig.ResourceName, r.cephClusterSpec.DataDirHostPath), + HostNetwork: r.cephClusterSpec.Network.IsHost(), + PriorityClassName: fsMirror.Spec.PriorityClassName, + ServiceAccountName: k8sutil.DefaultServiceAccount, }, } diff --git a/pkg/operator/ceph/file/mirror/spec_test.go b/pkg/operator/ceph/file/mirror/spec_test.go index 0bf8cc1dd0e1..256705fab3c9 100644 --- a/pkg/operator/ceph/file/mirror/spec_test.go +++ b/pkg/operator/ceph/file/mirror/spec_test.go @@ -25,6 +25,7 @@ import ( "github.com/rook/rook/pkg/operator/ceph/config" "github.com/rook/rook/pkg/operator/ceph/test" cephver "github.com/rook/rook/pkg/operator/ceph/version" + "github.com/rook/rook/pkg/operator/k8sutil" 
"github.com/stretchr/testify/assert" v1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/api/resource" @@ -88,6 +89,7 @@ func TestPodSpec(t *testing.T) { assert.Equal(t, 5, len(d.Spec.Template.Spec.Volumes)) assert.Equal(t, 1, len(d.Spec.Template.Spec.Volumes[0].Projected.Sources)) assert.Equal(t, 5, len(d.Spec.Template.Spec.Containers[0].VolumeMounts)) + assert.Equal(t, k8sutil.DefaultServiceAccount, d.Spec.Template.Spec.ServiceAccountName) // Deployment should have Ceph labels test.AssertLabelsContainCephRequirements(t, d.ObjectMeta.Labels, diff --git a/pkg/operator/ceph/nfs/spec.go b/pkg/operator/ceph/nfs/spec.go index 4c4bcbf45e8d..10edf9399f6c 100644 --- a/pkg/operator/ceph/nfs/spec.go +++ b/pkg/operator/ceph/nfs/spec.go @@ -148,7 +148,8 @@ func (r *ReconcileCephNFS) makeDeployment(nfs *cephv1.CephNFS, cfg daemonConfig) // for kerberos, nfs-ganesha uses the hostname via getaddrinfo() and uses that when // connecting to the krb server. give all ganesha servers the same hostname so they can all // use the same krb credentials to auth - Hostname: fmt.Sprintf("%s-%s", nfs.Namespace, nfs.Name), + Hostname: fmt.Sprintf("%s-%s", nfs.Namespace, nfs.Name), + ServiceAccountName: k8sutil.DefaultServiceAccount, } // Replace default unreachable node toleration k8sutil.AddUnreachableNodeToleration(&podSpec) diff --git a/pkg/operator/ceph/nfs/spec_test.go b/pkg/operator/ceph/nfs/spec_test.go index 548faeb830fd..870321581a39 100644 --- a/pkg/operator/ceph/nfs/spec_test.go +++ b/pkg/operator/ceph/nfs/spec_test.go @@ -26,6 +26,7 @@ import ( cephclient "github.com/rook/rook/pkg/daemon/ceph/client" "github.com/rook/rook/pkg/operator/ceph/config" cephver "github.com/rook/rook/pkg/operator/ceph/version" + "github.com/rook/rook/pkg/operator/k8sutil" optest "github.com/rook/rook/pkg/operator/test" exectest "github.com/rook/rook/pkg/util/exec/test" "github.com/stretchr/testify/assert" @@ -145,6 +146,7 @@ func TestDeploymentSpec(t *testing.T) { }, ) assert.Equal(t, "my-priority-class", 
d.Spec.Template.Spec.PriorityClassName) + assert.Equal(t, k8sutil.DefaultServiceAccount, d.Spec.Template.Spec.ServiceAccountName) }) t.Run("with sssd sidecar", func(t *testing.T) { diff --git a/pkg/operator/k8sutil/cmdreporter/cmdreporter.go b/pkg/operator/k8sutil/cmdreporter/cmdreporter.go index 11aa47f11f6f..affc87558f9e 100644 --- a/pkg/operator/k8sutil/cmdreporter/cmdreporter.go +++ b/pkg/operator/k8sutil/cmdreporter/cmdreporter.go @@ -300,7 +300,8 @@ func (cr *cmdReporterCfg) initJobSpec() (*batch.Job, error) { Containers: []v1.Container{ *cmdReporterContainer, }, - RestartPolicy: v1.RestartPolicyOnFailure, + RestartPolicy: v1.RestartPolicyOnFailure, + ServiceAccountName: k8sutil.DefaultServiceAccount, } copyBinsVol, _ := copyBinariesVolAndMount() podSpec.Volumes = []v1.Volume{copyBinsVol} diff --git a/pkg/operator/k8sutil/k8sutil.go b/pkg/operator/k8sutil/k8sutil.go index 32b8fbbbd8e2..e816f97b980e 100644 --- a/pkg/operator/k8sutil/k8sutil.go +++ b/pkg/operator/k8sutil/k8sutil.go @@ -54,10 +54,11 @@ const ( PodNamespaceEnvVar = "POD_NAMESPACE" // NodeNameEnvVar is the env variable for getting the node via downward api NodeNameEnvVar = "NODE_NAME" - // RookVersionLabelKey is the key used for reporting the Rook version which last created or // modified a resource. RookVersionLabelKey = "rook-version" + // DefaultServiceAccount is a service-account used for components that do not specify a dedicated service-account. 
+ DefaultServiceAccount = "rook-ceph-default" ) // GetK8SVersion gets the version of the running K8S cluster diff --git a/tests/framework/installer/ceph_settings.go b/tests/framework/installer/ceph_settings.go index 41d1d01dcb76..3fb0e7cb1501 100644 --- a/tests/framework/installer/ceph_settings.go +++ b/tests/framework/installer/ceph_settings.go @@ -99,6 +99,7 @@ func replaceNamespaces(name, manifest, operatorNamespace, clusterNamespace strin // SCC namespaces for operator and Ceph daemons manifest = strings.ReplaceAll(manifest, "rook-ceph:rook-ceph-system # serviceaccount:namespace:operator", operatorNamespace+":rook-ceph-system") + manifest = strings.ReplaceAll(manifest, "rook-ceph:rook-ceph-default # serviceaccount:namespace:cluster", clusterNamespace+":rook-ceph-default") manifest = strings.ReplaceAll(manifest, "rook-ceph:rook-ceph-mgr # serviceaccount:namespace:cluster", clusterNamespace+":rook-ceph-mgr") manifest = strings.ReplaceAll(manifest, "rook-ceph:rook-ceph-osd # serviceaccount:namespace:cluster", clusterNamespace+":rook-ceph-osd") manifest = strings.ReplaceAll(manifest, "rook-ceph:rook-ceph-rgw # serviceaccount:namespace:cluster", clusterNamespace+":rook-ceph-rgw") From 10dea459ec32f21b190dec0e3bcc6a9bce3db30b Mon Sep 17 00:00:00 2001 From: Praveen M Date: Thu, 22 Feb 2024 14:58:12 +0530 Subject: [PATCH 03/65] doc: pending release notes for update netNamespaceFilePath PR Signed-off-by: Praveen M --- PendingReleaseNotes.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/PendingReleaseNotes.md b/PendingReleaseNotes.md index 3c0f7a94bdfe..6fdb6afdb181 100644 --- a/PendingReleaseNotes.md +++ b/PendingReleaseNotes.md @@ -5,7 +5,9 @@ - The removal of `CSI_ENABLE_READ_AFFINITY` option and its replacement with per-cluster read affinity setting in cephCluster CR (CSIDriverOptions section) in [PR](https://github.com/rook/rook/pull/13665) - Allow setting the Ceph `application` on a pool - +- updating `netNamespaceFilePath` for all clusterIDs 
in rook-ceph-csi-config configMap in [PR](https://github.com/rook/rook/pull/13613)
+  - Issue: The `netNamespaceFilePath` isn't updated in the CSI config map for all clusterIDs when `CSI_ENABLE_HOST_NETWORK` is set to false in `operator.yaml`
+  - Impact: This results in unintended network configurations, with pods using host networking instead of pod networking.
 
 ## Features
 
 - Kubernetes versions **v1.24** through **v1.29** are supported.

From ea700bcf9a5449784f15adb66c7e118438185c22 Mon Sep 17 00:00:00 2001
From: karthik-us 
Date: Thu, 29 Feb 2024 11:22:12 +0530
Subject: [PATCH 04/65] doc: fix broken links

Fixing the broken links in the docs.

Signed-off-by: karthik-us 
---
 Documentation/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/README.md b/Documentation/README.md
index ede5c852315c..55f9e3b288ed 100644
--- a/Documentation/README.md
+++ b/Documentation/README.md
@@ -18,11 +18,11 @@ Rook is hosted by the [Cloud Native Computing Foundation](https://cncf.io) (CNCF
 ## Quick Start Guide
 
 Starting Ceph in your cluster is as simple as a few `kubectl` commands.
-See our [Quickstart](quickstart.md) guide to get started with the Ceph operator!
+See our [Quickstart](https://github.com/rook/rook/tree/master/Documentation/Getting-Started/quickstart.md) guide to get started with the Ceph operator!
 
 ## Designs
 
-[Ceph](https://docs.ceph.com/en/latest/) is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. See the [Ceph overview](storage-architecture.md).
+[Ceph](https://docs.ceph.com/en/latest/) is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. See the [Ceph overview](https://github.com/rook/rook/tree/master/Documentation/Getting-Started/storage-architecture.md). 
For detailed design documentation, see also the [design docs](https://github.com/rook/rook/tree/master/design). From 532e865ba2a4be55fab0c8c3feef2e71b5f38aee Mon Sep 17 00:00:00 2001 From: karthik-us Date: Fri, 1 Mar 2024 00:10:39 +0530 Subject: [PATCH 05/65] Revert "doc: fix broken links" This reverts commit ea700bcf9a5449784f15adb66c7e118438185c22. Signed-off-by: karthik-us --- Documentation/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/Documentation/README.md b/Documentation/README.md index 55f9e3b288ed..ede5c852315c 100644 --- a/Documentation/README.md +++ b/Documentation/README.md @@ -18,11 +18,11 @@ Rook is hosted by the [Cloud Native Computing Foundation](https://cncf.io) (CNCF ## Quick Start Guide Starting Ceph in your cluster is as simple as a few `kubectl` commands. -See our [Quickstart](https://github.com/rook/rook/tree/master/Documentation/Getting-Started/quickstart.md) guide to get started with the Ceph operator! +See our [Quickstart](quickstart.md) guide to get started with the Ceph operator! ## Designs -[Ceph](https://docs.ceph.com/en/latest/) is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. See the [Ceph overview](https://github.com/rook/rook/tree/master/Documentation/Getting-Started/storage-architecture.md). +[Ceph](https://docs.ceph.com/en/latest/) is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. See the [Ceph overview](storage-architecture.md). For detailed design documentation, see also the [design docs](https://github.com/rook/rook/tree/master/design). 
From d12478beb353c586cccdb9a00c799457e562fe51 Mon Sep 17 00:00:00 2001 From: Scott Miller Date: Mon, 29 Jan 2024 14:18:49 -0500 Subject: [PATCH 06/65] build: add ability to stash docker build context This commit gives builders the necessary tooling to save off a docker build context for use with other tools that dont follow the same command format as $DOCKERCMD. Signed-off-by: Scott Miller --- images/ceph/Makefile | 35 ++++++++++++++++++++--------------- 1 file changed, 20 insertions(+), 15 deletions(-) diff --git a/images/ceph/Makefile b/images/ceph/Makefile index 495fc037f87e..e93dab7d9fc8 100755 --- a/images/ceph/Makefile +++ b/images/ceph/Makefile @@ -31,7 +31,9 @@ YQv3_VERSION = 3.4.1 GOHOST := GOOS=$(GOHOSTOS) GOARCH=$(GOHOSTARCH) go MANIFESTS_DIR=../../deploy/examples -TEMP := $(shell mktemp -d) +ifeq ($(BUILD_CONTEXT_DIR),) +BUILD_CONTEXT_DIR := $(shell mktemp -d) +endif # Note: as of version 1.3 of operator-sdk, the url format changed to: # ${OPERATOR_SDK_DL_URL}/operator-sdk_${OS}_${ARCH} @@ -68,33 +70,36 @@ export OPERATOR_SDK YQv3 do.build: @echo === container build $(CEPH_IMAGE) - @cp Dockerfile $(TEMP) - @cp toolbox.sh $(TEMP) - @cp set-ceph-debug-level $(TEMP) - @cp $(OUTPUT_DIR)/bin/linux_$(GOARCH)/rook $(TEMP) - @cp -r $(MANIFESTS_DIR)/monitoring $(TEMP)/ceph-monitoring - @mkdir -p $(TEMP)/rook-external/test-data - @cp $(MANIFESTS_DIR)/create-external-cluster-resources.* $(TEMP)/rook-external/ - @cp ../../tests/ceph-status-out $(TEMP)/rook-external/test-data/ + @mkdir -p $(BUILD_CONTEXT_DIR) + @cp Dockerfile $(BUILD_CONTEXT_DIR) + @cp toolbox.sh $(BUILD_CONTEXT_DIR) + @cp set-ceph-debug-level $(BUILD_CONTEXT_DIR) + @cp $(OUTPUT_DIR)/bin/linux_$(GOARCH)/rook $(BUILD_CONTEXT_DIR) + @cp -r $(MANIFESTS_DIR)/monitoring $(BUILD_CONTEXT_DIR)/ceph-monitoring + @mkdir -p $(BUILD_CONTEXT_DIR)/rook-external/test-data + @cp $(MANIFESTS_DIR)/create-external-cluster-resources.* $(BUILD_CONTEXT_DIR)/rook-external/ + @cp ../../tests/ceph-status-out 
$(BUILD_CONTEXT_DIR)/rook-external/test-data/ ifeq ($(INCLUDE_CSV_TEMPLATES),true) @$(MAKE) csv - @cp -r ../../build/csv $(TEMP)/ceph-csv-templates - @rm $(TEMP)/ceph-csv-templates/csv-gen.sh + @cp -r ../../build/csv $(BUILD_CONTEXT_DIR)/ceph-csv-templates + @rm $(BUILD_CONTEXT_DIR)/ceph-csv-templates/csv-gen.sh @$(MAKE) csv-clean else - mkdir $(TEMP)/ceph-csv-templates + mkdir $(BUILD_CONTEXT_DIR)/ceph-csv-templates endif - @cd $(TEMP) && $(SED_IN_PLACE) 's|BASEIMAGE|$(BASEIMAGE)|g' Dockerfile + @cd $(BUILD_CONTEXT_DIR) && $(SED_IN_PLACE) 's|BASEIMAGE|$(BASEIMAGE)|g' Dockerfile @if [ -z "$(BUILD_CONTAINER_IMAGE)" ]; then\ $(DOCKERCMD) build $(BUILD_ARGS) \ --build-arg S5CMD_VERSION=$(S5CMD_VERSION) \ --build-arg S5CMD_ARCH=$(S5CMD_ARCH) \ -t $(CEPH_IMAGE) \ - $(TEMP);\ + $(BUILD_CONTEXT_DIR);\ + fi + @if [ -z "$(SAVE_BUILD_CONTEXT_DIR)" ]; then\ + rm -fr $(BUILD_CONTEXT_DIR);\ fi - @rm -fr $(TEMP) # call this before building multiple arches in parallel to prevent parallel build processes from # conflicting From 77bfcd46e77e9b4f5be0a3edf7cec0e568d243cd Mon Sep 17 00:00:00 2001 From: Madhu Rajanna Date: Wed, 28 Feb 2024 14:04:13 +0100 Subject: [PATCH 07/65] csi: add rbac required for vgs Added the required RBAC rules for the volumegroupsnapshot feature.
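For reference, the new permissions this patch appends to the existing CSI provisioner ClusterRoles can be read as a standalone rules fragment (illustrative excerpt only; the actual change is applied to the charts and `common.yaml` below):

```yaml
# Rules granting the CSI provisioner access to the VolumeGroupSnapshot APIs
- apiGroups: ["groupsnapshot.storage.k8s.io"]
  resources: ["volumegroupsnapshotclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["groupsnapshot.storage.k8s.io"]
  resources: ["volumegroupsnapshotcontents"]
  verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: ["groupsnapshot.storage.k8s.io"]
  resources: ["volumegroupsnapshotcontents/status"]
  verbs: ["update", "patch"]
```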
Signed-off-by: Madhu Rajanna --- .../rook-ceph/templates/clusterrole.yaml | 26 ++++++++++++++++--- deploy/examples/common.yaml | 26 ++++++++++++++++--- 2 files changed, 44 insertions(+), 8 deletions(-) diff --git a/deploy/charts/rook-ceph/templates/clusterrole.yaml b/deploy/charts/rook-ceph/templates/clusterrole.yaml index 12c2ad02e105..e99f0c0c10f7 100644 --- a/deploy/charts/rook-ceph/templates/clusterrole.yaml +++ b/deploy/charts/rook-ceph/templates/clusterrole.yaml @@ -500,16 +500,25 @@ rules: verbs: ["patch"] - apiGroups: ["snapshot.storage.k8s.io"] resources: ["volumesnapshots"] - verbs: ["get", "list"] + verbs: ["get", "list", "watch", "update", "patch", "create"] - apiGroups: ["snapshot.storage.k8s.io"] resources: ["volumesnapshotclasses"] verbs: ["get", "list", "watch"] - apiGroups: ["snapshot.storage.k8s.io"] resources: ["volumesnapshotcontents"] - verbs: ["get", "list", "watch", "patch", "update"] + verbs: ["get", "list", "watch", "patch", "update", "create"] - apiGroups: ["snapshot.storage.k8s.io"] resources: ["volumesnapshotcontents/status"] verbs: ["update", "patch"] + - apiGroups: ["groupsnapshot.storage.k8s.io"] + resources: ["volumegroupsnapshotclasses"] + verbs: ["get", "list", "watch"] + - apiGroups: ["groupsnapshot.storage.k8s.io"] + resources: ["volumegroupsnapshotcontents"] + verbs: ["get", "list", "watch", "update", "patch"] + - apiGroups: ["groupsnapshot.storage.k8s.io"] + resources: ["volumegroupsnapshotcontents/status"] + verbs: ["update", "patch"] --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 @@ -579,16 +588,25 @@ rules: verbs: ["patch"] - apiGroups: ["snapshot.storage.k8s.io"] resources: ["volumesnapshots"] - verbs: ["get", "list", "watch"] + verbs: ["get", "list", "watch", "update", "patch", "create"] - apiGroups: ["snapshot.storage.k8s.io"] resources: ["volumesnapshotclasses"] verbs: ["get", "list", "watch"] - apiGroups: ["snapshot.storage.k8s.io"] resources: ["volumesnapshotcontents"] - verbs: ["get", "list", "watch", 
"patch", "update"] + verbs: ["get", "list", "watch", "patch", "update", "create"] - apiGroups: ["snapshot.storage.k8s.io"] resources: ["volumesnapshotcontents/status"] verbs: ["update", "patch"] + - apiGroups: ["groupsnapshot.storage.k8s.io"] + resources: ["volumegroupsnapshotclasses"] + verbs: ["get", "list", "watch"] + - apiGroups: ["groupsnapshot.storage.k8s.io"] + resources: ["volumegroupsnapshotcontents"] + verbs: ["get", "list", "watch", "update", "patch"] + - apiGroups: ["groupsnapshot.storage.k8s.io"] + resources: ["volumegroupsnapshotcontents/status"] + verbs: ["update", "patch"] - apiGroups: [""] resources: ["configmaps"] verbs: ["get"] diff --git a/deploy/examples/common.yaml b/deploy/examples/common.yaml index 5495bf631ab3..ed523e8cb051 100644 --- a/deploy/examples/common.yaml +++ b/deploy/examples/common.yaml @@ -54,16 +54,25 @@ rules: verbs: ["patch"] - apiGroups: ["snapshot.storage.k8s.io"] resources: ["volumesnapshots"] - verbs: ["get", "list"] + verbs: ["get", "list", "watch", "update", "patch", "create"] - apiGroups: ["snapshot.storage.k8s.io"] resources: ["volumesnapshotclasses"] verbs: ["get", "list", "watch"] - apiGroups: ["snapshot.storage.k8s.io"] resources: ["volumesnapshotcontents"] - verbs: ["get", "list", "watch", "patch", "update"] + verbs: ["get", "list", "watch", "patch", "update", "create"] - apiGroups: ["snapshot.storage.k8s.io"] resources: ["volumesnapshotcontents/status"] verbs: ["update", "patch"] + - apiGroups: ["groupsnapshot.storage.k8s.io"] + resources: ["volumegroupsnapshotclasses"] + verbs: ["get", "list", "watch"] + - apiGroups: ["groupsnapshot.storage.k8s.io"] + resources: ["volumegroupsnapshotcontents"] + verbs: ["get", "list", "watch", "update", "patch"] + - apiGroups: ["groupsnapshot.storage.k8s.io"] + resources: ["volumegroupsnapshotcontents/status"] + verbs: ["update", "patch"] --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 @@ -152,16 +161,25 @@ rules: verbs: ["patch"] - apiGroups: 
["snapshot.storage.k8s.io"] resources: ["volumesnapshots"] - verbs: ["get", "list", "watch"] + verbs: ["get", "list", "watch", "update", "patch", "create"] - apiGroups: ["snapshot.storage.k8s.io"] resources: ["volumesnapshotclasses"] verbs: ["get", "list", "watch"] - apiGroups: ["snapshot.storage.k8s.io"] resources: ["volumesnapshotcontents"] - verbs: ["get", "list", "watch", "patch", "update"] + verbs: ["get", "list", "watch", "patch", "update", "create"] - apiGroups: ["snapshot.storage.k8s.io"] resources: ["volumesnapshotcontents/status"] verbs: ["update", "patch"] + - apiGroups: ["groupsnapshot.storage.k8s.io"] + resources: ["volumegroupsnapshotclasses"] + verbs: ["get", "list", "watch"] + - apiGroups: ["groupsnapshot.storage.k8s.io"] + resources: ["volumegroupsnapshotcontents"] + verbs: ["get", "list", "watch", "update", "patch"] + - apiGroups: ["groupsnapshot.storage.k8s.io"] + resources: ["volumegroupsnapshotcontents/status"] + verbs: ["update", "patch"] - apiGroups: [""] resources: ["configmaps"] verbs: ["get"] From 0d5bd70194bd9edd0d0c7c11718fac1ec9673e7c Mon Sep 17 00:00:00 2001 From: Madhu Rajanna Date: Wed, 28 Feb 2024 14:12:00 +0100 Subject: [PATCH 08/65] csi: install vgs CRD in tests update the snapshot controller to 7.0.1 and install new Volumegroup CRD's Signed-off-by: Madhu Rajanna --- tests/framework/utils/snapshot.go | 26 ++++++++++++++++++++++++-- 1 file changed, 24 insertions(+), 2 deletions(-) diff --git a/tests/framework/utils/snapshot.go b/tests/framework/utils/snapshot.go index 1a20d6144876..254c1b6c77c0 100644 --- a/tests/framework/utils/snapshot.go +++ b/tests/framework/utils/snapshot.go @@ -27,14 +27,18 @@ import ( const ( // snapshotterVersion from which the snapshotcontroller and CRD will be // installed - snapshotterVersion = "v5.0.1" + snapshotterVersion = "v7.0.1" repoURL = "https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter" rbacPath = "deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml" 
controllerPath = "deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml" - // snapshot CRD path + // snapshot CRD path snapshotClassCRDPath = "client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml" volumeSnapshotContentsCRDPath = "client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml" volumeSnapshotCRDPath = "client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml" + // volumegroupsnapshot CRD path + volumeGroupSnapshotClassCRDPath = "client/config/crd/groupsnapshot.storage.k8s.io_volumegroupsnapshotclasses.yaml" + volumeGroupSnapshotContentsCRDPath = "client/config/crd/groupsnapshot.storage.k8s.io_volumegroupsnapshotcontents.yaml" + volumeGroupSnapshotCRDPath = "client/config/crd/groupsnapshot.storage.k8s.io_volumegroupsnapshots.yaml" ) // CheckSnapshotISReadyToUse checks snapshot is ready to use @@ -143,6 +147,24 @@ func (k8sh *K8sHelper) snapshotCRD(action string) error { if err != nil { return err } + vgsClassCRD := fmt.Sprintf("%s/%s/%s", repoURL, snapshotterVersion, volumeGroupSnapshotClassCRDPath) + _, err = k8sh.Kubectl(args(vgsClassCRD)...) + if err != nil { + return err + } + + vgsContentsCRD := fmt.Sprintf("%s/%s/%s", repoURL, snapshotterVersion, volumeGroupSnapshotContentsCRDPath) + _, err = k8sh.Kubectl(args(vgsContentsCRD)...) + if err != nil { + return err + } + + vgsCRD := fmt.Sprintf("%s/%s/%s", repoURL, snapshotterVersion, volumeGroupSnapshotCRDPath) + _, err = k8sh.Kubectl(args(vgsCRD)...) + if err != nil { + return err + } + return nil } From ebc812c4ede094195ebe369e58bce3cb6604d09b Mon Sep 17 00:00:00 2001 From: karthik-us Date: Fri, 1 Mar 2024 19:44:15 +0530 Subject: [PATCH 09/65] doc: fix broken links This patch fixes the broken links in the non-rendered documents on GitHub and keeps the structure of the rendered documents on the website intact as well.
Signed-off-by: karthik-us --- Documentation/.pages | 12 ++++++++++++ Documentation/Getting-Started/.pages | 2 +- Documentation/Getting-Started/intro.md | 1 - Documentation/README.md | 4 ++-- Documentation/intro.md | 1 + mkdocs.yml | 2 +- 6 files changed, 17 insertions(+), 5 deletions(-) delete mode 120000 Documentation/Getting-Started/intro.md create mode 120000 Documentation/intro.md diff --git a/Documentation/.pages b/Documentation/.pages index c6dc6332dbe2..078a205892c1 100644 --- a/Documentation/.pages +++ b/Documentation/.pages @@ -6,3 +6,15 @@ nav: - Troubleshooting - Upgrade - Contributing + - Getting Started: + - Rook: intro.md + - Glossary: Getting-Started/glossary + - Prerequisites: + - Prerequisites: Getting-Started/Prerequisites/prerequisites.md + - Authenticated Container Registries: Getting-Started/Prerequisites/authenticated-registry + - Quick Start: Getting-Started/quickstart + - Storage Architecture: Getting-Started/storage-architecture + - Example Configurations: Getting-Started/example-configurations + - OpenShift: Getting-Started/ceph-openshift + - Cleanup: Getting-Started/ceph-teardown + - Release: Getting-Started/release-cycle diff --git a/Documentation/Getting-Started/.pages b/Documentation/Getting-Started/.pages index 165242426575..29bb29b3d8a8 100644 --- a/Documentation/Getting-Started/.pages +++ b/Documentation/Getting-Started/.pages @@ -1,5 +1,5 @@ nav: - - intro.md + - Rook: ../intro - glossary.md - Prerequisites - quickstart.md diff --git a/Documentation/Getting-Started/intro.md b/Documentation/Getting-Started/intro.md deleted file mode 120000 index 32d46ee883b5..000000000000 --- a/Documentation/Getting-Started/intro.md +++ /dev/null @@ -1 +0,0 @@ -../README.md \ No newline at end of file diff --git a/Documentation/README.md b/Documentation/README.md index ede5c852315c..986a87b1bfb4 100644 --- a/Documentation/README.md +++ b/Documentation/README.md @@ -18,11 +18,11 @@ Rook is hosted by the [Cloud Native Computing 
Foundation](https://cncf.io) (CNCF ## Quick Start Guide Starting Ceph in your cluster is as simple as a few `kubectl` commands. -See our [Quickstart](quickstart.md) guide to get started with the Ceph operator! +See our [Quickstart](Getting-Started/quickstart.md) guide to get started with the Ceph operator! ## Designs -[Ceph](https://docs.ceph.com/en/latest/) is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. See the [Ceph overview](storage-architecture.md). +[Ceph](https://docs.ceph.com/en/latest/) is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. See the [Ceph overview](Getting-Started/storage-architecture.md). For detailed design documentation, see also the [design docs](https://github.com/rook/rook/tree/master/design). diff --git a/Documentation/intro.md b/Documentation/intro.md new file mode 120000 index 000000000000..42061c01a1c7 --- /dev/null +++ b/Documentation/intro.md @@ -0,0 +1 @@ +README.md \ No newline at end of file diff --git a/mkdocs.yml b/mkdocs.yml index 6b7b20db610f..b0588903e8ad 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -72,7 +72,7 @@ plugins: #js_files: [] - redirects: redirect_maps: - README.md: Getting-Started/intro.md + README.md: intro.md - mike: # these fields are all optional; the defaults are as below... version_selector: true # set to false to leave out the version selector From 2611e924a2ad474fe9a54761729eefb30c199f6a Mon Sep 17 00:00:00 2001 From: Madhu Rajanna Date: Wed, 28 Feb 2024 14:43:47 +0100 Subject: [PATCH 10/65] csi: provide option to configure VGS The volumegroupsnapshot feature is enabled by default when the required CRDs are present; otherwise it is disabled. Users can also disable it explicitly if they do not require this feature.
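As a sketch of how a user would opt out of the feature introduced here, the new operator setting can be flipped to "false" (excerpt, assuming the standard `rook-ceph-operator-config` ConfigMap in the operator namespace):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph # namespace:operator
data:
  # Set to "false" to disable the volume group snapshot feature even when
  # the groupsnapshot.storage.k8s.io CRDs are installed in the cluster.
  CSI_ENABLE_VOLUME_GROUP_SNAPSHOT: "false"
```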
Signed-off-by: Madhu Rajanna --- Documentation/Helm-Charts/operator-chart.md | 1 + PendingReleaseNotes.md | 1 + deploy/charts/rook-ceph/templates/configmap.yaml | 1 + deploy/charts/rook-ceph/values.yaml | 3 +++ deploy/examples/operator-openshift.yaml | 4 ++++ deploy/examples/operator.yaml | 3 +++ pkg/operator/ceph/csi/csi.go | 13 +++++++++++++ pkg/operator/ceph/csi/spec.go | 2 ++ .../cephfs/csi-cephfsplugin-provisioner-dep.yaml | 3 +++ .../template/rbd/csi-rbdplugin-provisioner-dep.yaml | 3 +++ 10 files changed, 34 insertions(+) diff --git a/Documentation/Helm-Charts/operator-chart.md b/Documentation/Helm-Charts/operator-chart.md index d0b7d43af8f5..88b3aff03e9b 100644 --- a/Documentation/Helm-Charts/operator-chart.md +++ b/Documentation/Helm-Charts/operator-chart.md @@ -91,6 +91,7 @@ The following table lists the configurable parameters of the rook-operator chart | `csi.enablePluginSelinuxHostMount` | Enable Host mount for `/etc/selinux` directory for Ceph CSI nodeplugins | `false` | | `csi.enableRBDSnapshotter` | Enable Snapshotter in RBD provisioner pod | `true` | | `csi.enableRbdDriver` | Enable Ceph CSI RBD driver | `true` | +| `csi.enableVolumeGroupSnapshot` | Enable volume group snapshot feature. This feature is enabled by default as long as the necessary CRDs are available in the cluster. | `true` | | `csi.forceCephFSKernelClient` | Enable Ceph Kernel clients on kernel < 4.17. If your kernel does not support quotas for CephFS you may want to disable this setting. However, this will cause an issue during upgrades with the FUSE client. See the [upgrade guide](https://rook.io/docs/rook/v1.2/ceph-upgrade.html) | `true` | | `csi.grpcTimeoutInSeconds` | Set GRPC timeout for csi containers (in seconds). It should be >= 120. 
If this value is not set or is invalid, it defaults to 150 | `150` | | `csi.imagePullPolicy` | Image pull policy | `"IfNotPresent"` | diff --git a/PendingReleaseNotes.md b/PendingReleaseNotes.md index b04354c0beab..bf11c3fb0e90 100644 --- a/PendingReleaseNotes.md +++ b/PendingReleaseNotes.md @@ -12,3 +12,4 @@ read affinity setting in cephCluster CR (CSIDriverOptions section) in [PR](https - Kubernetes versions **v1.24** through **v1.29** are supported. - Ceph daemon pods using the `default` service account now use a new `rook-ceph-default` service account. +- The feature support for VolumeSnapshotGroup has been added to the RBD and CephFS CSI driver. diff --git a/deploy/charts/rook-ceph/templates/configmap.yaml b/deploy/charts/rook-ceph/templates/configmap.yaml index d6af5cc798ad..4ce7b75dc278 100644 --- a/deploy/charts/rook-ceph/templates/configmap.yaml +++ b/deploy/charts/rook-ceph/templates/configmap.yaml @@ -25,6 +25,7 @@ data: CSI_ENABLE_OMAP_GENERATOR: {{ .Values.csi.enableOMAPGenerator | quote }} CSI_ENABLE_HOST_NETWORK: {{ .Values.csi.enableCSIHostNetwork | quote }} CSI_ENABLE_METADATA: {{ .Values.csi.enableMetadata | quote }} + CSI_ENABLE_VOLUME_GROUP_SNAPSHOT: {{ .Values.csi.enableVolumeGroupSnapshot | quote }} {{- if .Values.csi.csiDriverNamePrefix }} CSI_DRIVER_NAME_PREFIX: {{ .Values.csi.csiDriverNamePrefix | quote }} {{- end }} diff --git a/deploy/charts/rook-ceph/values.yaml b/deploy/charts/rook-ceph/values.yaml index 0781cf3b6fe1..de890690f8b0 100644 --- a/deploy/charts/rook-ceph/values.yaml +++ b/deploy/charts/rook-ceph/values.yaml @@ -96,6 +96,9 @@ csi: # -- Enable Ceph CSI PVC encryption support enableCSIEncryption: false + # -- Enable volume group snapshot feature. This feature is + # enabled by default as long as the necessary CRDs are available in the cluster. 
+ enableVolumeGroupSnapshot: true # -- PriorityClassName to be set on csi driver plugin pods pluginPriorityClassName: system-node-critical diff --git a/deploy/examples/operator-openshift.yaml b/deploy/examples/operator-openshift.yaml index 3d6f48049b4a..306ce45a2c3b 100644 --- a/deploy/examples/operator-openshift.yaml +++ b/deploy/examples/operator-openshift.yaml @@ -552,6 +552,10 @@ data: # The GCSI RPC timeout value (in seconds). It should be >= 120. If this variable is not set or is an invalid value, it's default to 150. CSI_GRPC_TIMEOUT_SECONDS: "150" + # set to false to disable volume group snapshot feature. This feature is + # enabled by default as long as the necessary CRDs are available in the cluster. + CSI_ENABLE_VOLUME_GROUP_SNAPSHOT: "true" + # Enable topology based provisioning. CSI_ENABLE_TOPOLOGY: "false" # Domain labels define which node labels to use as domains diff --git a/deploy/examples/operator.yaml b/deploy/examples/operator.yaml index b5f16fee87aa..97d550328a9c 100644 --- a/deploy/examples/operator.yaml +++ b/deploy/examples/operator.yaml @@ -85,6 +85,9 @@ data: # set to false to disable deployment of snapshotter container in RBD provisioner pod. CSI_ENABLE_RBD_SNAPSHOTTER: "true" + # set to false to disable volume group snapshot feature. This feature is + # enabled by default as long as the necessary CRDs are available in the cluster. + CSI_ENABLE_VOLUME_GROUP_SNAPSHOT: "true" # Enable cephfs kernel driver instead of ceph-fuse. # If you disable the kernel client, your application may be disrupted during upgrade. # See the upgrade guide: https://rook.io/docs/rook/latest/ceph-upgrade.html diff --git a/pkg/operator/ceph/csi/csi.go b/pkg/operator/ceph/csi/csi.go index 26d34d4a052e..65520405d9f3 100644 --- a/pkg/operator/ceph/csi/csi.go +++ b/pkg/operator/ceph/csi/csi.go @@ -17,6 +17,7 @@ limitations under the License. 
package csi import ( + "context" "strconv" "strings" "time" @@ -24,6 +25,7 @@ import ( "github.com/rook/rook/pkg/operator/k8sutil" "github.com/pkg/errors" + kerrors "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/version" ) @@ -317,5 +319,16 @@ func (r *ReconcileCSI) setParams(ver *version.Info) error { CSIParam.DriverNamePrefix = k8sutil.GetValue(r.opConfig.Parameters, "CSI_DRIVER_NAME_PREFIX", r.opConfig.OperatorNamespace) + _, err = r.context.ApiExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Get(context.TODO(), "volumegroupsnapshotclasses.groupsnapshot.storage.k8s.io", metav1.GetOptions{}) + if err != nil && !kerrors.IsNotFound(err) { + return errors.Wrapf(err, "failed to get volumegroupsnapshotclasses.groupsnapshot.storage.k8s.io CRD") + } + CSIParam.VolumeGroupSnapshotSupported = (err == nil) + + CSIParam.EnableVolumeGroupSnapshot = true + if strings.EqualFold(k8sutil.GetValue(r.opConfig.Parameters, "CSI_ENABLE_VOLUME_GROUP_SNAPSHOT", "true"), "false") { + CSIParam.EnableVolumeGroupSnapshot = false + } + return nil } diff --git a/pkg/operator/ceph/csi/spec.go b/pkg/operator/ceph/csi/spec.go index 24fb4224432c..a150c81f7794 100644 --- a/pkg/operator/ceph/csi/spec.go +++ b/pkg/operator/ceph/csi/spec.go @@ -82,6 +82,8 @@ type Param struct { CephFSAttachRequired bool RBDAttachRequired bool NFSAttachRequired bool + VolumeGroupSnapshotSupported bool + EnableVolumeGroupSnapshot bool LogLevel uint8 SidecarLogLevel uint8 CephFSLivenessMetricsPort uint16 diff --git a/pkg/operator/ceph/csi/template/cephfs/csi-cephfsplugin-provisioner-dep.yaml b/pkg/operator/ceph/csi/template/cephfs/csi-cephfsplugin-provisioner-dep.yaml index d280f11f9f08..65780bca6fad 100644 --- a/pkg/operator/ceph/csi/template/cephfs/csi-cephfsplugin-provisioner-dep.yaml +++ b/pkg/operator/ceph/csi/template/cephfs/csi-cephfsplugin-provisioner-dep.yaml @@ -55,6 +55,9 @@ spec: - "--leader-election-renew-deadline={{ 
.LeaderElectionRenewDeadline }}" - "--leader-election-retry-period={{ .LeaderElectionRetryPeriod }}" - "--extra-create-metadata=true" + {{ if .VolumeGroupSnapshotSupported }} + - "--enable-volume-group-snapshots={{ .EnableVolumeGroupSnapshot }}" + {{ end }} env: - name: ADDRESS value: unix:///csi/csi-provisioner.sock diff --git a/pkg/operator/ceph/csi/template/rbd/csi-rbdplugin-provisioner-dep.yaml b/pkg/operator/ceph/csi/template/rbd/csi-rbdplugin-provisioner-dep.yaml index a564063f139e..05dc8bfc443c 100644 --- a/pkg/operator/ceph/csi/template/rbd/csi-rbdplugin-provisioner-dep.yaml +++ b/pkg/operator/ceph/csi/template/rbd/csi-rbdplugin-provisioner-dep.yaml @@ -102,6 +102,9 @@ spec: - "--leader-election-renew-deadline={{ .LeaderElectionRenewDeadline }}" - "--leader-election-retry-period={{ .LeaderElectionRetryPeriod }}" - "--extra-create-metadata=true" + {{ if .VolumeGroupSnapshotSupported }} + - "--enable-volume-group-snapshots={{ .EnableVolumeGroupSnapshot }}" + {{ end }} env: - name: ADDRESS value: unix:///csi/csi-provisioner.sock From b47bd35ddc7afc16288ce39cc2a947dff345f51c Mon Sep 17 00:00:00 2001 From: Madhu Rajanna Date: Wed, 28 Feb 2024 15:17:09 +0100 Subject: [PATCH 11/65] csi: use different variable name for replicas r is the variable name for the CSI reconciler, and the same name was used for a local variable as well; changing it avoids confusion and variable shadowing.
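The shadowing pattern this commit removes can be sketched in isolation (illustrative only; the real logic lives in `pkg/operator/ceph/csi/csi.go`, and `parseReplicas` is a hypothetical helper name):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseReplicas parses a replica count from a string and falls back to a
// default on invalid input. Naming the parsed value `replicas` (not `r`)
// avoids shadowing an outer identifier such as the reconciler receiver `r`.
func parseReplicas(replicaStr string, def int32) int32 {
	replicas, err := strconv.ParseInt(replicaStr, 10, 32)
	if err != nil {
		// Keep the default when the value cannot be parsed.
		return def
	}
	return int32(replicas)
}

func main() {
	fmt.Println(parseReplicas("3", 2))    // prints 3
	fmt.Println(parseReplicas("oops", 2)) // prints 2 (falls back to default)
}
```

Before the fix, writing `r, err := strconv.ParseInt(...)` inside a method with receiver `r` compiled fine but made the receiver unreachable in that scope, which is exactly the confusion the rename addresses.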
Signed-off-by: Madhu Rajanna --- pkg/operator/ceph/csi/csi.go | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pkg/operator/ceph/csi/csi.go b/pkg/operator/ceph/csi/csi.go index 65520405d9f3..36690285e0bf 100644 --- a/pkg/operator/ceph/csi/csi.go +++ b/pkg/operator/ceph/csi/csi.go @@ -273,12 +273,12 @@ func (r *ReconcileCSI) setParams(ver *version.Info) error { if len(nodes.Items) == 1 { CSIParam.ProvisionerReplicas = 1 } else { - replicas := k8sutil.GetValue(r.opConfig.Parameters, "CSI_PROVISIONER_REPLICAS", "2") - r, err := strconv.ParseInt(replicas, 10, 32) + replicaStr := k8sutil.GetValue(r.opConfig.Parameters, "CSI_PROVISIONER_REPLICAS", "2") + replicas, err := strconv.ParseInt(replicaStr, 10, 32) if err != nil { logger.Errorf("failed to parse CSI_PROVISIONER_REPLICAS. Defaulting to %d. %v", defaultProvisionerReplicas, err) } else { - CSIParam.ProvisionerReplicas = int32(r) + CSIParam.ProvisionerReplicas = int32(replicas) } } } else { From 43fa57fa57d4e8cc2dbbbf26b0bba56e3e9095c7 Mon Sep 17 00:00:00 2001 From: Madhu Rajanna Date: Wed, 28 Feb 2024 15:09:33 +0100 Subject: [PATCH 12/65] csi: update sidecars to latest release updating all the csi sidecars to the latest release. 
Signed-off-by: Madhu Rajanna --- Documentation/Helm-Charts/operator-chart.md | 10 +++++----- .../Storage-Configuration/Ceph-CSI/custom-images.md | 10 +++++----- deploy/charts/rook-ceph/values.yaml | 10 +++++----- deploy/examples/images.txt | 10 +++++----- deploy/examples/operator-openshift.yaml | 10 +++++----- deploy/examples/operator.yaml | 10 +++++----- pkg/operator/ceph/csi/spec.go | 10 +++++----- 7 files changed, 35 insertions(+), 35 deletions(-) diff --git a/Documentation/Helm-Charts/operator-chart.md b/Documentation/Helm-Charts/operator-chart.md index d0b7d43af8f5..09a0781bf4d4 100644 --- a/Documentation/Helm-Charts/operator-chart.md +++ b/Documentation/Helm-Charts/operator-chart.md @@ -53,7 +53,7 @@ The following table lists the configurable parameters of the rook-operator chart | `containerSecurityContext` | Set the container security context for the operator | `{"capabilities":{"drop":["ALL"]},"runAsGroup":2016,"runAsNonRoot":true,"runAsUser":2016}` | | `crds.enabled` | Whether the helm chart should create and update the CRDs. If false, the CRDs must be managed independently with deploy/examples/crds.yaml. **WARNING** Only set during first deployment. If later disabled the cluster may be DESTROYED. If the CRDs are deleted in this case, see [the disaster recovery guide](https://rook.io/docs/rook/latest/Troubleshooting/disaster-recovery/#restoring-crds-after-deletion) to restore them. | `true` | | `csi.allowUnsupportedVersion` | Allow starting an unsupported ceph-csi image | `false` | -| `csi.attacher.image` | Kubernetes CSI Attacher image | `registry.k8s.io/sig-storage/csi-attacher:v4.4.2` | +| `csi.attacher.image` | Kubernetes CSI Attacher image | `registry.k8s.io/sig-storage/csi-attacher:v4.5.0` | | `csi.cephFSAttachRequired` | Whether to skip any attach operation altogether for CephFS PVCs. See more details [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object). 
If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation of pods using the CephFS PVC fast. **WARNING** It's highly discouraged to use this for CephFS RWO volumes. Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details. | `true` | | `csi.cephFSFSGroupPolicy` | Policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted. supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html | `"File"` | | `csi.cephFSKernelMountOptions` | Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options. Set to "ms_mode=secure" when connections.encrypted is enabled in CephCluster CR | `nil` | @@ -104,7 +104,7 @@ The following table lists the configurable parameters of the rook-operator chart | `csi.pluginNodeAffinity` | The node labels for affinity of the CephCSI RBD plugin DaemonSet [^1] | `nil` | | `csi.pluginPriorityClassName` | PriorityClassName to be set on csi driver plugin pods | `"system-node-critical"` | | `csi.pluginTolerations` | Array of tolerations in YAML format which will be added to CephCSI plugin DaemonSet | `nil` | -| `csi.provisioner.image` | Kubernetes CSI provisioner image | `registry.k8s.io/sig-storage/csi-provisioner:v3.6.3` | +| `csi.provisioner.image` | Kubernetes CSI provisioner image | `registry.k8s.io/sig-storage/csi-provisioner:v4.0.0` | | `csi.provisionerNodeAffinity` | The node labels for affinity of the CSI provisioner deployment [^1] | `nil` | | `csi.provisionerPriorityClassName` | PriorityClassName to be set on csi driver provisioner pods | `"system-cluster-critical"` | | `csi.provisionerReplicas` | Set replicas for csi provisioner deployment | `2` | @@ -115,14 +115,14 @@ The following table lists the configurable parameters of the rook-operator chart | `csi.rbdPluginUpdateStrategy` | CSI RBD plugin daemonset update strategy, supported values are OnDelete and 
RollingUpdate | `RollingUpdate` | | `csi.rbdPluginUpdateStrategyMaxUnavailable` | A maxUnavailable parameter of CSI RBD plugin daemonset update strategy. | `1` | | `csi.rbdPodLabels` | Labels to add to the CSI RBD Deployments and DaemonSets Pods | `nil` | -| `csi.registrar.image` | Kubernetes CSI registrar image | `registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1` | -| `csi.resizer.image` | Kubernetes CSI resizer image | `registry.k8s.io/sig-storage/csi-resizer:v1.9.2` | +| `csi.registrar.image` | Kubernetes CSI registrar image | `registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0` | +| `csi.resizer.image` | Kubernetes CSI resizer image | `registry.k8s.io/sig-storage/csi-resizer:v1.10.0` | | `csi.serviceMonitor.enabled` | Enable ServiceMonitor for Ceph CSI drivers | `false` | | `csi.serviceMonitor.interval` | Service monitor scrape interval | `"5s"` | | `csi.serviceMonitor.labels` | ServiceMonitor additional labels | `{}` | | `csi.serviceMonitor.namespace` | Use a different namespace for the ServiceMonitor | `nil` | | `csi.sidecarLogLevel` | Set logging level for Kubernetes-csi sidecar containers. Supported values from 0 to 5. 0 for general useful logs (the default), 5 for trace level verbosity. 
| `0` | -| `csi.snapshotter.image` | Kubernetes CSI snapshotter image | `registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2` | +| `csi.snapshotter.image` | Kubernetes CSI snapshotter image | `registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1` | | `csi.topology.domainLabels` | domainLabels define which node labels to use as domains for CSI nodeplugins to advertise their domains | `nil` | | `csi.topology.enabled` | Enable topology based provisioning | `false` | | `currentNamespaceOnly` | Whether the operator should watch cluster CRD in its own namespace or not | `false` | diff --git a/Documentation/Storage-Configuration/Ceph-CSI/custom-images.md b/Documentation/Storage-Configuration/Ceph-CSI/custom-images.md index 703f85b453dd..b63ddb0732bd 100644 --- a/Documentation/Storage-Configuration/Ceph-CSI/custom-images.md +++ b/Documentation/Storage-Configuration/Ceph-CSI/custom-images.md @@ -19,11 +19,11 @@ The default upstream images are included below, which you can change to your des ```yaml ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.10.2" -ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1" -ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v3.6.3" -ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.4.2" -ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.9.2" -ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2" +ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0" +ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v4.0.0" +ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.5.0" +ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.10.0" +ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1" ROOK_CSIADDONS_IMAGE: "quay.io/csiaddons/k8s-sidecar:v0.8.0" ``` diff --git a/deploy/charts/rook-ceph/values.yaml 
b/deploy/charts/rook-ceph/values.yaml index 0781cf3b6fe1..98da16c1ad54 100644 --- a/deploy/charts/rook-ceph/values.yaml +++ b/deploy/charts/rook-ceph/values.yaml @@ -474,27 +474,27 @@ csi: registrar: # -- Kubernetes CSI registrar image - # @default -- `registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1` + # @default -- `registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0` image: provisioner: # -- Kubernetes CSI provisioner image - # @default -- `registry.k8s.io/sig-storage/csi-provisioner:v3.6.3` + # @default -- `registry.k8s.io/sig-storage/csi-provisioner:v4.0.0` image: snapshotter: # -- Kubernetes CSI snapshotter image - # @default -- `registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2` + # @default -- `registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1` image: attacher: # -- Kubernetes CSI Attacher image - # @default -- `registry.k8s.io/sig-storage/csi-attacher:v4.4.2` + # @default -- `registry.k8s.io/sig-storage/csi-attacher:v4.5.0` image: resizer: # -- Kubernetes CSI resizer image - # @default -- `registry.k8s.io/sig-storage/csi-resizer:v1.9.2` + # @default -- `registry.k8s.io/sig-storage/csi-resizer:v1.10.0` image: # -- Image pull policy diff --git a/deploy/examples/images.txt b/deploy/examples/images.txt index 741f75b738c1..03353b01dcb8 100644 --- a/deploy/examples/images.txt +++ b/deploy/examples/images.txt @@ -3,9 +3,9 @@ quay.io/ceph/cosi:v0.1.1 quay.io/cephcsi/cephcsi:v3.10.2 quay.io/csiaddons/k8s-sidecar:v0.8.0 - registry.k8s.io/sig-storage/csi-attacher:v4.4.2 - registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1 - registry.k8s.io/sig-storage/csi-provisioner:v3.6.3 - registry.k8s.io/sig-storage/csi-resizer:v1.9.2 - registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2 + registry.k8s.io/sig-storage/csi-attacher:v4.5.0 + registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0 + registry.k8s.io/sig-storage/csi-provisioner:v4.0.0 + registry.k8s.io/sig-storage/csi-resizer:v1.10.0 + 
registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1 rook/ceph:master diff --git a/deploy/examples/operator-openshift.yaml b/deploy/examples/operator-openshift.yaml index 3d6f48049b4a..37f0c8f3a1c3 100644 --- a/deploy/examples/operator-openshift.yaml +++ b/deploy/examples/operator-openshift.yaml @@ -191,11 +191,11 @@ data: # of the CSI driver to something other than what is officially supported, change # these images to the desired release of the CSI driver. # ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.10.2" - # ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1" - # ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.9.2" - # ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v3.6.3" - # ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2" - # ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.4.2" + # ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0" + # ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.10.0" + # ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v4.0.0" + # ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1" + # ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.5.0" # (Optional) set user created priorityclassName for csi plugin pods. CSI_PLUGIN_PRIORITY_CLASSNAME: "system-node-critical" diff --git a/deploy/examples/operator.yaml b/deploy/examples/operator.yaml index b5f16fee87aa..e50bbda866df 100644 --- a/deploy/examples/operator.yaml +++ b/deploy/examples/operator.yaml @@ -113,11 +113,11 @@ data: # of the CSI driver to something other than what is officially supported, change # these images to the desired release of the CSI driver. 
# ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.10.2" - # ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1" - # ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.9.2" - # ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v3.6.3" - # ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2" - # ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.4.2" + # ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0" + # ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.10.0" + # ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v4.0.0" + # ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1" + # ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.5.0" # To indicate the image pull policy to be applied to all the containers in the csi driver pods. # ROOK_CSI_IMAGE_PULL_POLICY: "IfNotPresent" diff --git a/pkg/operator/ceph/csi/spec.go b/pkg/operator/ceph/csi/spec.go index 24fb4224432c..b72530601b65 100644 --- a/pkg/operator/ceph/csi/spec.go +++ b/pkg/operator/ceph/csi/spec.go @@ -137,11 +137,11 @@ var ( var ( // image names DefaultCSIPluginImage = "quay.io/cephcsi/cephcsi:v3.10.2" - DefaultRegistrarImage = "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1" - DefaultProvisionerImage = "registry.k8s.io/sig-storage/csi-provisioner:v3.6.3" - DefaultAttacherImage = "registry.k8s.io/sig-storage/csi-attacher:v4.4.2" - DefaultSnapshotterImage = "registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2" - DefaultResizerImage = "registry.k8s.io/sig-storage/csi-resizer:v1.9.2" + DefaultRegistrarImage = "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0" + DefaultProvisionerImage = "registry.k8s.io/sig-storage/csi-provisioner:v4.0.0" + DefaultAttacherImage = "registry.k8s.io/sig-storage/csi-attacher:v4.5.0" + 
DefaultSnapshotterImage = "registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1" + DefaultResizerImage = "registry.k8s.io/sig-storage/csi-resizer:v1.10.0" DefaultCSIAddonsImage = "quay.io/csiaddons/k8s-sidecar:v0.8.0" // image pull policy From 330f604cca1384b763bce629626b75fce6f73bd4 Mon Sep 17 00:00:00 2001 From: Rakshith R Date: Thu, 29 Feb 2024 17:45:58 +0530 Subject: [PATCH 13/65] csi: update CSIDriverOption params during saving cluster config During cluster creation, the csi config map was first filled with mon IPs and without CSIDriverOptions. This commit makes sure CSIDriverOptions are added at the beginning, when the entry is first created. Signed-off-by: Rakshith R --- pkg/operator/ceph/csi/cluster_config.go | 32 ++++++++++++------ pkg/operator/ceph/csi/cluster_config_test.go | 34 ++++++++++++++++++++ 2 files changed, 56 insertions(+), 10 deletions(-) diff --git a/pkg/operator/ceph/csi/cluster_config.go b/pkg/operator/ceph/csi/cluster_config.go index 7775b0e9c8e7..6b9a9ee59d2d 100644 --- a/pkg/operator/ceph/csi/cluster_config.go +++ b/pkg/operator/ceph/csi/cluster_config.go @@ -170,6 +170,9 @@ func updateCsiClusterConfig(curr, clusterKey string, newCsiClusterConfigEntry *C // update default clusterID's entry if clusterKey == centry.Namespace { centry.Monitors = newCsiClusterConfigEntry.Monitors + centry.ReadAffinity = newCsiClusterConfigEntry.ReadAffinity + centry.CephFS.KernelMountOptions = newCsiClusterConfigEntry.CephFS.KernelMountOptions + centry.CephFS.FuseMountOptions = newCsiClusterConfigEntry.CephFS.FuseMountOptions cc[i] = centry } } @@ -183,12 +186,19 @@ func updateCsiClusterConfig(curr, clusterKey string, newCsiClusterConfigEntry *C break } centry.Monitors = newCsiClusterConfigEntry.Monitors + // update subvolumegroup and cephfs netNamespaceFilePath only when either is specified + // while always updating kernel and fuse mount options.
if newCsiClusterConfigEntry.CephFS.SubvolumeGroup != "" || newCsiClusterConfigEntry.CephFS.NetNamespaceFilePath != "" { centry.CephFS = newCsiClusterConfigEntry.CephFS + } else { + centry.CephFS.KernelMountOptions = newCsiClusterConfigEntry.CephFS.KernelMountOptions + centry.CephFS.FuseMountOptions = newCsiClusterConfigEntry.CephFS.FuseMountOptions } + // update nfs netNamespaceFilePath only when specified. if newCsiClusterConfigEntry.NFS.NetNamespaceFilePath != "" { centry.NFS = newCsiClusterConfigEntry.NFS } + // update radosNamespace and rbd netNamespaceFilePath only when either is specified. if newCsiClusterConfigEntry.RBD.RadosNamespace != "" || newCsiClusterConfigEntry.RBD.NetNamespaceFilePath != "" { centry.RBD = newCsiClusterConfigEntry.RBD } @@ -207,16 +217,9 @@ func updateCsiClusterConfig(curr, clusterKey string, newCsiClusterConfigEntry *C centry.ClusterID = clusterKey centry.Namespace = newCsiClusterConfigEntry.Namespace centry.Monitors = newCsiClusterConfigEntry.Monitors - if newCsiClusterConfigEntry.RBD.RadosNamespace != "" || newCsiClusterConfigEntry.RBD.NetNamespaceFilePath != "" { - centry.RBD = newCsiClusterConfigEntry.RBD - } - // Add a condition not to fill with empty values - if newCsiClusterConfigEntry.CephFS.SubvolumeGroup != "" || newCsiClusterConfigEntry.CephFS.NetNamespaceFilePath != "" { - centry.CephFS = newCsiClusterConfigEntry.CephFS - } - if newCsiClusterConfigEntry.NFS.NetNamespaceFilePath != "" { - centry.NFS = newCsiClusterConfigEntry.NFS - } + centry.RBD = newCsiClusterConfigEntry.RBD + centry.CephFS = newCsiClusterConfigEntry.CephFS + centry.NFS = newCsiClusterConfigEntry.NFS if len(newCsiClusterConfigEntry.ReadAffinity.CrushLocationLabels) != 0 { centry.ReadAffinity = newCsiClusterConfigEntry.ReadAffinity } @@ -273,6 +276,15 @@ func SaveClusterConfig(clientset kubernetes.Interface, clusterNamespace string, } logger.Debugf("using %q for csi configmap namespace", csiNamespace) + if newCsiClusterConfigEntry != nil { + // set 
CSIDriverOptions + newCsiClusterConfigEntry.ReadAffinity.Enabled = clusterInfo.CSIDriverSpec.ReadAffinity.Enabled + newCsiClusterConfigEntry.ReadAffinity.CrushLocationLabels = clusterInfo.CSIDriverSpec.ReadAffinity.CrushLocationLabels + + newCsiClusterConfigEntry.CephFS.KernelMountOptions = clusterInfo.CSIDriverSpec.CephFS.KernelMountOptions + newCsiClusterConfigEntry.CephFS.FuseMountOptions = clusterInfo.CSIDriverSpec.CephFS.FuseMountOptions + } + configMutex.Lock() defer configMutex.Unlock() diff --git a/pkg/operator/ceph/csi/cluster_config_test.go b/pkg/operator/ceph/csi/cluster_config_test.go index 9a87c39fc91d..3698e0fb1b0d 100644 --- a/pkg/operator/ceph/csi/cluster_config_test.go +++ b/pkg/operator/ceph/csi/cluster_config_test.go @@ -74,6 +74,16 @@ func TestUpdateCsiClusterConfig(t *testing.T) { }, }, } + csiClusterConfigEntryMountOptions := CSIClusterConfigEntry{ + Namespace: "rook-ceph-1", + ClusterInfo: cephcsi.ClusterInfo{ + Monitors: []string{"1.2.3.4:5000"}, + CephFS: cephcsi.CephFS{ + KernelMountOptions: "ms_mode=crc", + FuseMountOptions: "debug", + }, + }, + } csiClusterConfigEntry2 := CSIClusterConfigEntry{ Namespace: "rook-ceph-2", ClusterInfo: cephcsi.ClusterInfo{ @@ -123,6 +133,17 @@ func TestUpdateCsiClusterConfig(t *testing.T) { assert.Equal(t, 2, len(cc[0].Monitors)) }) + t.Run("add mount options to the current cluster", func(t *testing.T) { + configWithMountOptions, err := updateCsiClusterConfig(s, "rook-ceph-1", &csiClusterConfigEntryMountOptions) + assert.NoError(t, err) + cc, err := parseCsiClusterConfig(configWithMountOptions) + assert.NoError(t, err) + assert.Equal(t, 1, len(cc)) + assert.Equal(t, "rook-ceph-1", cc[0].ClusterID) + assert.Equal(t, csiClusterConfigEntryMountOptions.CephFS.KernelMountOptions, cc[0].CephFS.KernelMountOptions) + assert.Equal(t, csiClusterConfigEntryMountOptions.CephFS.FuseMountOptions, cc[0].CephFS.FuseMountOptions) + }) + t.Run("add a 2nd cluster with 3 mons", func(t *testing.T) { s, err = 
updateCsiClusterConfig(s, "beta", &csiClusterConfigEntry2) assert.NoError(t, err) @@ -178,6 +199,19 @@ func TestUpdateCsiClusterConfig(t *testing.T) { assert.Equal(t, "my-group", cc[2].CephFS.SubvolumeGroup) }) + t.Run("update mount options in presence of subvolumegroup", func(t *testing.T) { + sMntOptionUpdate, err := updateCsiClusterConfig(s, "baba", &csiClusterConfigEntryMountOptions) + assert.NoError(t, err) + cc, err := parseCsiClusterConfig(sMntOptionUpdate) + assert.NoError(t, err) + assert.Equal(t, 3, len(cc)) + assert.Equal(t, "baba", cc[2].ClusterID) + assert.Equal(t, "my-group", cc[2].CephFS.SubvolumeGroup) + assert.Equal(t, csiClusterConfigEntryMountOptions.CephFS.KernelMountOptions, cc[2].CephFS.KernelMountOptions) + assert.Equal(t, csiClusterConfigEntryMountOptions.CephFS.FuseMountOptions, cc[2].CephFS.FuseMountOptions) + + }) + t.Run("add a 4th mon to the 3rd cluster and subvolumegroup is preserved", func(t *testing.T) { csiClusterConfigEntry3.Monitors = append(csiClusterConfigEntry3.Monitors, "10.11.12.13:5000") s, err = updateCsiClusterConfig(s, "baba", &csiClusterConfigEntry3) From 117bc76f20c6e2a7610bf57572bca367a81639b6 Mon Sep 17 00:00:00 2001 From: parth-gr Date: Mon, 4 Mar 2024 17:08:41 +0530 Subject: [PATCH 14/65] external: enable the use of only v2 mon port Currently the script requires both the v2 and v1 ports to enable the v2 port, but that is not a necessary condition, so remove the check and enable v2 when only the v2 port is present, to successfully configure a v2-only setup. part-of: https://github.com/rook/rook/issues/13827 Signed-off-by: parth-gr --- deploy/examples/create-external-cluster-resources.py | 8 -------- 1 file changed, 8 deletions(-) diff --git a/deploy/examples/create-external-cluster-resources.py b/deploy/examples/create-external-cluster-resources.py index 5ffef28d3183..61039c9eb1bd 100644 --- a/deploy/examples/create-external-cluster-resources.py +++ b/deploy/examples/create-external-cluster-resources.py @@ -699,14 +699,6 @@ def
get_ceph_external_mon_data(self): q_leader_details = q_leader_matching_list[0] # get the address vector of the quorum-leader q_leader_addrvec = q_leader_details.get("public_addrs", {}).get("addrvec", []) - # if the quorum-leader has only one address in the address-vector - # and it is of type 'v2' (ie; with :3300), - # raise an exception to make user aware that - # they have to enable 'v1' (ie; with :6789) type as well - if len(q_leader_addrvec) == 1 and q_leader_addrvec[0]["type"] == "v2": - raise ExecutionFailureException( - "Only 'v2' address type is enabled, user should also enable 'v1' type as well" - ) ip_addr = str(q_leader_details["public_addr"].split("/")[0]) if self._arg_parser.v2_port_enable: From 3cdb79a73da48838ec6391f39069b3c866a9d056 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 4 Mar 2024 12:22:14 +0000 Subject: [PATCH 15/65] build(deps): bump azure/setup-helm from 3 to 4 Bumps [azure/setup-helm](https://github.com/azure/setup-helm) from 3 to 4. - [Release notes](https://github.com/azure/setup-helm/releases) - [Changelog](https://github.com/Azure/setup-helm/blob/main/CHANGELOG.md) - [Commits](https://github.com/azure/setup-helm/compare/v3...v4) --- updated-dependencies: - dependency-name: azure/setup-helm dependency-type: direct:production update-type: version-update:semver-major ... 
Signed-off-by: dependabot[bot] --- .github/workflows/build.yml | 2 +- .github/workflows/helm-lint.yaml | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index fe4301cef870..f10d02689f7b 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -30,7 +30,7 @@ jobs: go-version: "1.21" - name: Set up Helm - uses: azure/setup-helm@v3 + uses: azure/setup-helm@v4 with: version: v3.6.2 diff --git a/.github/workflows/helm-lint.yaml b/.github/workflows/helm-lint.yaml index b0dc6b644485..b07c06b219bb 100644 --- a/.github/workflows/helm-lint.yaml +++ b/.github/workflows/helm-lint.yaml @@ -31,7 +31,7 @@ jobs: fetch-depth: 0 - name: Set up Helm - uses: azure/setup-helm@v3 + uses: azure/setup-helm@v4 with: version: v3.6.2 From 072884f4512e5be28c6c929e46faa4a8e1f4c7ae Mon Sep 17 00:00:00 2001 From: Blaine Gardner Date: Mon, 4 Mar 2024 11:08:48 -0700 Subject: [PATCH 16/65] build: use 'baseos' as repo for iproute install The rook/ceph Dockerfile uses dnf to ensure iproute (containing the 'ip' CLI tool) is installed in the Rook image for Multus usage. This comes from the 'baseos' repo, but if any other repos are temporarily unavailable, it can cause the container build to fail. Use the '--repo baseos' flag to help prevent these kinds of failures. Additionally, this will speed up the build slightly since it does not attempt to load any unnecessary repos. This change may make the container build slightly fragile in the future if CentOS changes the name of its baseos repo, or if the Ceph image switches to a non-CentOS base image.
Signed-off-by: Blaine Gardner --- images/ceph/Dockerfile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/images/ceph/Dockerfile b/images/ceph/Dockerfile index 26f6dbb052be..268926856e95 100644 --- a/images/ceph/Dockerfile +++ b/images/ceph/Dockerfile @@ -20,7 +20,7 @@ ARG S5CMD_VERSION ARG S5CMD_ARCH # install 'ip' tool for Multus -RUN dnf install -y --setopt=install_weak_deps=False iproute && dnf clean all +RUN dnf install -y --repo baseos --setopt=install_weak_deps=False iproute && dnf clean all # Install the s5cmd package to interact with s3 gateway RUN curl --fail -sSL -o /s5cmd.tar.gz https://github.com/peak/s5cmd/releases/download/v${S5CMD_VERSION}/s5cmd_${S5CMD_VERSION}_${S5CMD_ARCH}.tar.gz && \ From 580c62e5e643ea4c1358306895503511fbe32552 Mon Sep 17 00:00:00 2001 From: subhamkrai Date: Tue, 5 Mar 2024 18:26:45 +0530 Subject: [PATCH 17/65] build: update go mod to latest version updating go mod to latest version. Signed-off-by: subhamkrai --- go.mod | 42 ++++++++++---------- go.sum | 102 ++++++++++++++++++++++++++++-------------------- pkg/apis/go.mod | 38 +++++++++--------- pkg/apis/go.sum | 98 +++++++++++++++++++++++++++------------------- 4 files changed, 157 insertions(+), 123 deletions(-) diff --git a/go.mod b/go.mod index 38986a120ab6..5f304048926f 100644 --- a/go.mod +++ b/go.mod @@ -6,7 +6,7 @@ replace github.com/rook/rook/pkg/apis => ./pkg/apis require ( github.com/IBM/keyprotect-go-client v0.12.2 - github.com/aws/aws-sdk-go v1.50.25 + github.com/aws/aws-sdk-go v1.50.31 github.com/banzaicloud/k8s-objectmatcher v1.8.0 github.com/ceph/go-ceph v0.26.0 github.com/coreos/pkg v0.0.0-20230601102743-20bbbf26f4d8 @@ -25,7 +25,7 @@ require ( github.com/rook/rook/pkg/apis v0.0.0-20231204200402-5287527732f7 github.com/spf13/cobra v1.8.0 github.com/spf13/pflag v1.0.5 - github.com/stretchr/testify v1.8.4 + github.com/stretchr/testify v1.9.0 github.com/sykesm/zap-logfmt v0.0.4 go.uber.org/automaxprocs v1.5.3 go.uber.org/zap v1.27.0 @@ -39,7 +39,7 
@@ require ( k8s.io/cli-runtime v0.29.2 k8s.io/client-go v0.29.2 k8s.io/cloud-provider v0.29.2 - k8s.io/utils v0.0.0-20231127182322-b307cd553661 + k8s.io/utils v0.0.0-20240102154912-e7106e64919e sigs.k8s.io/controller-runtime v0.17.2 sigs.k8s.io/mcs-api v0.1.0 sigs.k8s.io/yaml v1.4.0 @@ -57,18 +57,18 @@ require ( github.com/containernetworking/cni v1.1.2 // indirect github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf // indirect github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect - github.com/emicklei/go-restful/v3 v3.11.0 // indirect - github.com/evanphx/json-patch v5.7.0+incompatible // indirect + github.com/emicklei/go-restful/v3 v3.11.3 // indirect + github.com/evanphx/json-patch v5.9.0+incompatible // indirect github.com/evanphx/json-patch/v5 v5.8.0 // indirect github.com/fsnotify/fsnotify v1.7.0 // indirect github.com/gemalto/flume v0.13.1 // indirect github.com/go-errors/errors v1.5.1 // indirect - github.com/go-jose/go-jose/v3 v3.0.1 // indirect + github.com/go-jose/go-jose/v3 v3.0.2 // indirect github.com/go-logr/logr v1.4.1 // indirect github.com/go-logr/zapr v1.3.0 // indirect - github.com/go-openapi/jsonpointer v0.20.0 // indirect - github.com/go-openapi/jsonreference v0.20.2 // indirect - github.com/go-openapi/swag v0.22.4 // indirect + github.com/go-openapi/jsonpointer v0.20.3 // indirect + github.com/go-openapi/jsonreference v0.20.5 // indirect + github.com/go-openapi/swag v0.22.10 // indirect github.com/gogo/protobuf v1.3.2 // indirect github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect github.com/golang/protobuf v1.5.3 // indirect @@ -87,8 +87,8 @@ require ( github.com/hashicorp/go-secure-stdlib/strutil v0.1.2 // indirect github.com/hashicorp/go-sockaddr v1.0.6 // indirect github.com/hashicorp/hcl v1.0.1-vault-5 // indirect - github.com/hashicorp/vault/api/auth/approle v0.5.0 // indirect - github.com/hashicorp/vault/api/auth/kubernetes v0.5.0 // indirect + 
github.com/hashicorp/vault/api/auth/approle v0.6.0 // indirect + github.com/hashicorp/vault/api/auth/kubernetes v0.6.0 // indirect github.com/imdario/mergo v0.3.16 // indirect github.com/inconshreveable/mousetrap v1.1.0 // indirect github.com/jmespath/go-jmespath v0.4.0 // indirect @@ -109,7 +109,7 @@ require ( github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect - github.com/openshift/api v0.0.0-20231204192004-bfea29e5e6c4 // indirect + github.com/openshift/api v0.0.0-20240301093301-ce10821dc999 // indirect github.com/peterbourgon/diskv v2.0.1+incompatible // indirect github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect github.com/prometheus/client_golang v1.18.0 // indirect @@ -118,26 +118,26 @@ require ( github.com/prometheus/procfs v0.12.0 // indirect github.com/ryanuber/go-glob v1.0.0 // indirect github.com/sirupsen/logrus v1.9.3 // indirect - github.com/stretchr/objx v0.5.0 // indirect + github.com/stretchr/objx v0.5.2 // indirect github.com/xlab/treeprint v1.2.0 // indirect go.starlark.net v0.0.0-20231121155337-90ade8b19d09 // indirect go.uber.org/multierr v1.11.0 // indirect - golang.org/x/crypto v0.17.0 // indirect - golang.org/x/net v0.19.0 // indirect - golang.org/x/oauth2 v0.15.0 // indirect - golang.org/x/sys v0.16.0 // indirect - golang.org/x/term v0.15.0 // indirect + golang.org/x/crypto v0.21.0 // indirect + golang.org/x/net v0.22.0 // indirect + golang.org/x/oauth2 v0.18.0 // indirect + golang.org/x/sys v0.18.0 // indirect + golang.org/x/term v0.18.0 // indirect golang.org/x/text v0.14.0 // indirect golang.org/x/time v0.5.0 // indirect gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect google.golang.org/appengine v1.6.8 // indirect - google.golang.org/protobuf v1.31.0 // indirect + google.golang.org/protobuf v1.32.0 // indirect 
gopkg.in/evanphx/json-patch.v5 v5.7.0 // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect k8s.io/component-base v0.29.2 // indirect - k8s.io/klog/v2 v2.110.1 // indirect - k8s.io/kube-openapi v0.0.0-20231129212854-f0671cc7e66a // indirect + k8s.io/klog/v2 v2.120.1 // indirect + k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect sigs.k8s.io/kustomize/api v0.15.0 // indirect sigs.k8s.io/kustomize/kyaml v0.15.0 // indirect diff --git a/go.sum b/go.sum index dc1a688f5ad1..71c5db58266e 100644 --- a/go.sum +++ b/go.sum @@ -111,8 +111,8 @@ github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkY github.com/asaskevich/govalidator v0.0.0-20180720115003-f9ffefc3facf/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY= github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY= github.com/aws/aws-sdk-go v1.44.164/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI= -github.com/aws/aws-sdk-go v1.50.25 h1:vhiHtLYybv1Nhx3Kv18BBC6L0aPJHaG9aeEsr92W99c= -github.com/aws/aws-sdk-go v1.50.25/go.mod h1:LF8svs817+Nz+DmiMQKTO3ubZ/6IaTpq3TjupRn3Eqk= +github.com/aws/aws-sdk-go v1.50.31 h1:gx2NRLLEDUmQFC4YUsfMUKkGCwpXVO8ijUecq/nOQGA= +github.com/aws/aws-sdk-go v1.50.31/go.mod h1:LF8svs817+Nz+DmiMQKTO3ubZ/6IaTpq3TjupRn3Eqk= github.com/banzaicloud/k8s-objectmatcher v1.8.0 h1:Nugn25elKtPMTA2br+JgHNeSQ04sc05MDPmpJnd1N2A= github.com/banzaicloud/k8s-objectmatcher v1.8.0/go.mod h1:p2LSNAjlECf07fbhDyebTkPUIYnU05G+WfGgkTmgeMg= github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA= @@ -192,8 +192,8 @@ github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs= github.com/emicklei/go-restful/v3 v3.8.0/go.mod 
h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= github.com/emicklei/go-restful/v3 v3.9.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= -github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g= -github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/emicklei/go-restful/v3 v3.11.3 h1:yagOQz/38xJmcNeZJtrUcKjkHRltIaIFXKWeG1SkWGE= +github.com/emicklei/go-restful/v3 v3.11.3/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= @@ -208,8 +208,8 @@ github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLi github.com/evanphx/json-patch v4.5.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= -github.com/evanphx/json-patch v5.7.0+incompatible h1:vgGkfT/9f8zE6tvSCe74nfpAVDQ2tG6yudJd8LBksgI= -github.com/evanphx/json-patch v5.7.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= +github.com/evanphx/json-patch v5.9.0+incompatible h1:fBXyNpNMuTTDdquAq/uisOr2lShz4oaXpDTX2bLe7ls= +github.com/evanphx/json-patch v5.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/evanphx/json-patch/v5 v5.0.0/go.mod h1:G79N1coSVB93tBe7j6PhzjmR3/2VvlbKOFpnXhI9Bw4= github.com/evanphx/json-patch/v5 v5.8.0 h1:lRj6N9Nci7MvzrXuX6HFzU8XjmhPiXPlsKEy1u0KQro= github.com/evanphx/json-patch/v5 v5.8.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ= @@ -240,8 +240,9 @@ 
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2 github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A= github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8= github.com/go-jose/go-jose/v3 v3.0.0/go.mod h1:RNkWWRld676jZEYoV3+XK8L2ZnNSvIsxFMht0mSX+u8= -github.com/go-jose/go-jose/v3 v3.0.1 h1:pWmKFVtt+Jl0vBZTIpz/eAKwsm6LkIxDVVbFHKkchhA= github.com/go-jose/go-jose/v3 v3.0.1/go.mod h1:RNkWWRld676jZEYoV3+XK8L2ZnNSvIsxFMht0mSX+u8= +github.com/go-jose/go-jose/v3 v3.0.2 h1:2Edjn8Nrb44UvTdp84KU0bBPs1cO7noRCybtS3eJEUQ= +github.com/go-jose/go-jose/v3 v3.0.2/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= @@ -249,7 +250,6 @@ github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7 github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU= github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= github.com/go-logr/logr v1.2.3/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= -github.com/go-logr/logr v1.3.0/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ= github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/zapr v0.1.0/go.mod h1:tabnROwaDl0UNxkVeFRbY8bwB37GwRv0P8lg6aAiEnk= @@ -269,17 +269,16 @@ github.com/go-openapi/jsonpointer v0.18.0/go.mod h1:cOnomiV+CVVwFLk0A/MExoFMjwds github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg= github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= github.com/go-openapi/jsonpointer v0.19.5/go.mod 
h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= -github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs= -github.com/go-openapi/jsonpointer v0.20.0 h1:ESKJdU9ASRfaPNOPRx12IUyA1vn3R9GiE3KYD14BXdQ= -github.com/go-openapi/jsonpointer v0.20.0/go.mod h1:6PGzBjjIIumbLYysB73Klnms1mwnU4G3YHOECG3CedA= +github.com/go-openapi/jsonpointer v0.20.3 h1:jykzYWS/kyGtsHfRt6aV8JTB9pcQAXPIA7qlZ5aRlyk= +github.com/go-openapi/jsonpointer v0.20.3/go.mod h1:c7l0rjoouAuIxCm8v/JWKRgMjDG/+/7UBWsXMrv6PsM= github.com/go-openapi/jsonreference v0.0.0-20160704190145-13c6e3589ad9/go.mod h1:W3Z9FmVs9qj+KR4zFKmDPGiLdk1D9Rlm7cyMvf57TTg= github.com/go-openapi/jsonreference v0.17.0/go.mod h1:g4xxGn04lDIRh0GJb5QlpE3HfopLOL6uZrK/VgnsK9I= github.com/go-openapi/jsonreference v0.18.0/go.mod h1:g4xxGn04lDIRh0GJb5QlpE3HfopLOL6uZrK/VgnsK9I= github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc= github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8= github.com/go-openapi/jsonreference v0.20.0/go.mod h1:Ag74Ico3lPc+zR+qjn4XBUmXymS4zJbYVCZmcgkasdo= -github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE= -github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k= +github.com/go-openapi/jsonreference v0.20.5 h1:hutI+cQI+HbSQaIGSfsBsYI0pHk+CATf8Fk5gCSj0yI= +github.com/go-openapi/jsonreference v0.20.5/go.mod h1:thAqAp31UABtI+FQGKAQfmv7DbFpKNUlva2UPCxKu2Y= github.com/go-openapi/loads v0.17.0/go.mod h1:72tmFy5wsWx89uEVddd0RjRWPZm92WRLhf7AC+0+OOU= github.com/go-openapi/loads v0.18.0/go.mod h1:72tmFy5wsWx89uEVddd0RjRWPZm92WRLhf7AC+0+OOU= github.com/go-openapi/loads v0.19.0/go.mod h1:72tmFy5wsWx89uEVddd0RjRWPZm92WRLhf7AC+0+OOU= @@ -302,9 +301,8 @@ github.com/go-openapi/swag v0.17.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/ github.com/go-openapi/swag v0.18.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg= 
github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= -github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14= -github.com/go-openapi/swag v0.22.4 h1:QLMzNJnMGPRNDCbySlcj1x01tzU8/9LTTL9hZZZogBU= -github.com/go-openapi/swag v0.22.4/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14= +github.com/go-openapi/swag v0.22.10 h1:4y86NVn7Z2yYd6pfS4Z+Nyh3aAUL3Nul+LMbhFKy0gA= +github.com/go-openapi/swag v0.22.10/go.mod h1:Cnn8BYtRlx6BNE3DPN86f/xkapGIcLWzh3CLEb4C1jI= github.com/go-openapi/validate v0.18.0/go.mod h1:Uh4HdOzKt19xGIGm1qHf/ofbX1YQ4Y+MYsct2VUrAJ4= github.com/go-openapi/validate v0.19.2/go.mod h1:1tRCw7m3jtI8eNWEEliiAqUIcBztB2KDnRCRMUi7GTA= github.com/go-openapi/validate v0.19.5/go.mod h1:8DJv2CVJQ6kGNpFW6eV9N3JviE1C85nY1c2z52x1Gk4= @@ -489,10 +487,12 @@ github.com/hashicorp/hcl v1.0.1-vault-5/go.mod h1:XYhtn6ijBSAj6n4YqAaf7RBPS4I06A github.com/hashicorp/vault/api v1.10.0/go.mod h1:jo5Y/ET+hNyz+JnKDt8XLAdKs+AM0G5W0Vp1IrFI8N8= github.com/hashicorp/vault/api v1.12.0 h1:meCpJSesvzQyao8FCOgk2fGdoADAnbDu2WPJN1lDLJ4= github.com/hashicorp/vault/api v1.12.0/go.mod h1:si+lJCYO7oGkIoNPAN8j3azBLTn9SjMGS+jFaHd1Cck= -github.com/hashicorp/vault/api/auth/approle v0.5.0 h1:a1TK6VGwYqSAfkmX4y4dJ4WBxMU5dStIZqScW4EPXR8= github.com/hashicorp/vault/api/auth/approle v0.5.0/go.mod h1:CHOQIA1AZACfjTzHggmyfiOZ+xCSKNRFqe48FTCzH0k= -github.com/hashicorp/vault/api/auth/kubernetes v0.5.0 h1:CXO0fD7M3iCGovP/UApeHhPcH4paDFKcu7AjEXi94rI= +github.com/hashicorp/vault/api/auth/approle v0.6.0 h1:ELfFFQlTM/e97WJKu1HvNFa7lQ3tlTwwzrR1NJE1V7Y= +github.com/hashicorp/vault/api/auth/approle v0.6.0/go.mod h1:CCoIl1xBC3lAWpd1HV+0ovk76Z8b8Mdepyk21h3pGk0= github.com/hashicorp/vault/api/auth/kubernetes v0.5.0/go.mod h1:afrElBIO9Q4sHFVuVWgNevG4uAs1bT2AZFA9aEiI608= +github.com/hashicorp/vault/api/auth/kubernetes v0.6.0 
h1:K8sKGhtTAqGKfzaaYvUSIOAqTOIn3Gk1EsCEAMzZHtM= +github.com/hashicorp/vault/api/auth/kubernetes v0.6.0/go.mod h1:Htwcjez5J9PwAHaZ1EYMBlgGq3/in5ajUV4+WCPihPE= github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= @@ -645,8 +645,8 @@ github.com/onsi/gomega v1.24.0/go.mod h1:Z/NWtiqwBrwUt4/2loMmHL63EDLnYHmVbuBpDr2 github.com/onsi/gomega v1.24.1/go.mod h1:3AOiACssS3/MajrniINInwbfOOtfZvplPzuRSmvt1jM= github.com/onsi/gomega v1.30.0 h1:hvMK7xYz4D3HapigLTeGdId/NcfQx1VHMJc60ew99+8= github.com/onsi/gomega v1.30.0/go.mod h1:9sxs+SwGrKI0+PWe4Fxa9tFQQBG5xSsSbMXOI8PPpoQ= -github.com/openshift/api v0.0.0-20231204192004-bfea29e5e6c4 h1:5RyeLvTSZEn/fDQA6e6+qIvFPssWjreY8pbwfg4/EEQ= -github.com/openshift/api v0.0.0-20231204192004-bfea29e5e6c4/go.mod h1:qNtV0315F+f8ld52TLtPvrfivZpdimOzTi3kn9IVbtU= +github.com/openshift/api v0.0.0-20240301093301-ce10821dc999 h1:+S998xHiJApsJZjRAO8wyedU9GfqFd8mtwWly6LqHDo= +github.com/openshift/api v0.0.0-20240301093301-ce10821dc999/go.mod h1:CxgbWAlvu2iQB0UmKTtRu1YfepRg1/vJ64n2DlIEVz4= github.com/pborman/uuid v1.2.0 h1:J7Q5mO4ysT1dv8hyrUGHb9+ooztCXu1D8MY8DZYsu3g= github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k= github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= @@ -699,8 +699,8 @@ github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6L github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc= github.com/rogpeppe/go-internal v1.8.1/go.mod h1:JeRgkft04UBgHMgCIwADu4Pn6Mtm5d4nPKWu0nJ5d+o= -github.com/rogpeppe/go-internal v1.10.0 
h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ= -github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog= +github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M= +github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA= github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= @@ -739,8 +739,9 @@ github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+ github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= -github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c= github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= @@ -749,9 +750,9 @@ github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/ github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= -github.com/stretchr/testify v1.8.1/go.mod 
h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= -github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk= github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg= +github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= github.com/sykesm/zap-logfmt v0.0.4 h1:U2WzRvmIWG1wDLCFY3sz8UeEmsdHQjHFNlIdmroVFaI= github.com/sykesm/zap-logfmt v0.0.4/go.mod h1:AuBd9xQjAe3URrWT1BBDk2v2onAZHkZkWRMiYZXiZWA= github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk= @@ -829,8 +830,11 @@ golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5y golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw= golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58= -golang.org/x/crypto v0.17.0 h1:r8bRNjWL3GshPW3gkd+RpvzWrZAwPS49OmTGZ/uhM4k= +golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4= golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4= +golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU= +golang.org/x/crypto v0.21.0 h1:X31++rzVUdKhX5sWmSOFZxx8UW/ldWx55cbf08iNAMA= +golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= @@ -935,8 +939,10 @@ golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= golang.org/x/net 
v0.3.1-0.20221206200815-1e63c2f08a10/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE= golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= -golang.org/x/net v0.19.0 h1:zTwKpTd2XuCqf8huc7Fo2iSy+4RHPd10s4KzeTnVr1c= -golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U= +golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= +golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= +golang.org/x/net v0.22.0 h1:9sGLhx7iRIHEiX0oAJ3MRZMUCElJgy7Br1nO+AMN3Tc= +golang.org/x/net v0.22.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -957,8 +963,8 @@ golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc= golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc= golang.org/x/oauth2 v0.0.0-20220524215830-622c5d57e401/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc= -golang.org/x/oauth2 v0.15.0 h1:s8pnnxNVzjWyrvYdFUQq5llS1PX2zhPXmccZv99h7uQ= -golang.org/x/oauth2 v0.15.0/go.mod h1:q48ptWNTY5XWf+JNten23lcvHpLJ0ZSxF5ttTHKVCAM= +golang.org/x/oauth2 v0.18.0 h1:09qnuIAgzdx1XplqJvW6CQqMCtGZykZWcXzPMPUusvI= +golang.org/x/oauth2 v0.18.0/go.mod h1:Wf7knwG0MPoWIMMBgFlEaSUDaKskp0dCfrlJRJXbBi8= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod 
h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= @@ -1066,8 +1072,12 @@ golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.16.0 h1:xWw16ngr6ZMtmxDyKyIgsE93KNKz5HKmMa3b8ALHidU= -golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4= +golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= @@ -1075,8 +1085,12 @@ golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= golang.org/x/term v0.3.0/go.mod h1:q750SLmJuPmVoN1blW3UFBPREJfb1KmY3vwxfr+nFDA= golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= -golang.org/x/term v0.15.0 h1:y/Oo/a/q3IXu26lQgl04j/gjuBDOBlx7X6Om1j2CPW4= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= +golang.org/x/term v0.13.0/go.mod 
h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0= +golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk= +golang.org/x/term v0.18.0 h1:FcHjZXDMxI8mM3nwhX9HlKop4C0YQvCVCdwYl2wOtE8= +golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58= golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -1091,6 +1105,8 @@ golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ= golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ= golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= @@ -1172,8 +1188,8 @@ golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= golang.org/x/tools v0.2.0/go.mod h1:y4OqIKeOV/fWJetJ8bXPU1sEVniLMIyDAZWeHdV+NTA= golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= -golang.org/x/tools v0.16.1 h1:TLyB3WofjdOEepBHAU20JdNC1Zbg87elYofWYAY5oZA= -golang.org/x/tools v0.16.1/go.mod h1:kYVVN6I1mBNoB1OX+noeBjbRk4IUEPa7JJ+TJMEooJ0= +golang.org/x/tools v0.18.0 
h1:k8NLag8AGHnn+PHbl7g43CtqZAwG60vZkLqgyZgIHgQ= +golang.org/x/tools v0.18.0/go.mod h1:GL7B4CwcLLeo59yx/9UWWuNOW1n3VZ4f5axWfML7Lcg= golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= @@ -1356,8 +1372,8 @@ google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQ google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= -google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8= -google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= +google.golang.org/protobuf v1.32.0 h1:pPC6BG5ex8PDFnkbrGU3EixyhKcQ2aDuBS36lqK/C7I= +google.golang.org/protobuf v1.32.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= @@ -1456,24 +1472,24 @@ k8s.io/klog/v2 v2.4.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y= k8s.io/klog/v2 v2.30.0/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= k8s.io/klog/v2 v2.60.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= k8s.io/klog/v2 v2.80.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= -k8s.io/klog/v2 v2.110.1 h1:U/Af64HJf7FcwMcXyKm2RPM22WZzyR7OSpYj5tg3cL0= -k8s.io/klog/v2 v2.110.1/go.mod h1:YGtd1984u+GgbuZ7e08/yBuAfKLSO0+uR1Fhi6ExXjo= +k8s.io/klog/v2 
v2.120.1 h1:QXU6cPEOIslTGvZaXvFWiP9VKyeet3sawzTOvdXb4Vw= +k8s.io/klog/v2 v2.120.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= k8s.io/kube-openapi v0.0.0-20200121204235-bf4fb3bd569c/go.mod h1:GRQhZsXIAJ1xR0C9bd8UpWHZ5plfAS9fzPjJuQ6JL3E= k8s.io/kube-openapi v0.0.0-20200410145947-61e04a5be9a6/go.mod h1:GRQhZsXIAJ1xR0C9bd8UpWHZ5plfAS9fzPjJuQ6JL3E= k8s.io/kube-openapi v0.0.0-20200805222855-6aeccd4b50c6/go.mod h1:UuqjUnNftUyPE5H64/qeyjQoUZhGpeFDVdxjTeEVN2o= k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd/go.mod h1:WOJ3KddDSol4tAGcJo0Tvi+dK12EcqSLqcWsryKMpfM= k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65/go.mod h1:sX9MT8g7NVZM5lVL/j8QyCCJe8YSMW30QvGZWaCIDIk= k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280/go.mod h1:+Axhij7bCpeqhklhUTe3xmOn6bWxolyZEeyaFpjGtl4= -k8s.io/kube-openapi v0.0.0-20231129212854-f0671cc7e66a h1:ZeIPbyHHqahGIbeyLJJjAUhnxCKqXaDY+n89Ms8szyA= -k8s.io/kube-openapi v0.0.0-20231129212854-f0671cc7e66a/go.mod h1:AsvuZPBlUDVuCdzJ87iajxtXuR9oktsTctW/R9wwouA= +k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag= +k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98= k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew= k8s.io/utils v0.0.0-20200603063816-c1c6865ac451/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils v0.0.0-20211116205334-6203023598ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils v0.0.0-20221107191617-1a15be271d1d/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= k8s.io/utils v0.0.0-20221128185143-99ec85e7a448/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= -k8s.io/utils v0.0.0-20231127182322-b307cd553661 h1:FepOBzJ0GXm8t0su67ln2wAZjbQ6RxQGZDnzuLcrUTI= -k8s.io/utils v0.0.0-20231127182322-b307cd553661/go.mod 
h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +k8s.io/utils v0.0.0-20240102154912-e7106e64919e h1:eQ/4ljkx21sObifjzXwlPKpdGLrCfRziVtos3ofG/sQ= +k8s.io/utils v0.0.0-20240102154912-e7106e64919e/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0= rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA= diff --git a/pkg/apis/go.mod b/pkg/apis/go.mod index 7e780be93047..4c971374f3c6 100644 --- a/pkg/apis/go.mod +++ b/pkg/apis/go.mod @@ -8,7 +8,7 @@ require ( github.com/kube-object-storage/lib-bucket-provisioner v0.0.0-20221122204822-d1a8c34382f1 github.com/libopenstorage/secrets v0.0.0-20231011182615-5f4b25ceede1 github.com/pkg/errors v0.9.1 - github.com/stretchr/testify v1.8.4 + github.com/stretchr/testify v1.9.0 k8s.io/api v0.29.2 k8s.io/apimachinery v0.29.2 ) @@ -19,7 +19,7 @@ require ( github.com/onsi/gomega v1.30.0 // indirect gopkg.in/yaml.v2 v2.4.0 // indirect k8s.io/client-go v0.29.2 // indirect - k8s.io/utils v0.0.0-20231127182322-b307cd553661 // indirect + k8s.io/utils v0.0.0-20240102154912-e7106e64919e // indirect sigs.k8s.io/yaml v1.4.0 // indirect ) @@ -27,14 +27,14 @@ require ( github.com/cenkalti/backoff/v3 v3.2.2 // indirect github.com/containernetworking/cni v1.1.2 // indirect github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect - github.com/emicklei/go-restful/v3 v3.11.0 // indirect - github.com/evanphx/json-patch v5.7.0+incompatible // indirect + github.com/emicklei/go-restful/v3 v3.11.3 // indirect + github.com/evanphx/json-patch v5.9.0+incompatible // indirect github.com/fsnotify/fsnotify v1.7.0 // indirect - github.com/go-jose/go-jose/v3 v3.0.1 // indirect + github.com/go-jose/go-jose/v3 v3.0.2 // indirect github.com/go-logr/logr v1.4.1 // indirect - github.com/go-openapi/jsonpointer v0.20.0 // indirect - github.com/go-openapi/jsonreference 
v0.20.2 // indirect - github.com/go-openapi/swag v0.22.4 // indirect + github.com/go-openapi/jsonpointer v0.20.3 // indirect + github.com/go-openapi/jsonreference v0.20.5 // indirect + github.com/go-openapi/swag v0.22.10 // indirect github.com/gogo/protobuf v1.3.2 // indirect github.com/golang/protobuf v1.5.3 // indirect github.com/google/gnostic-models v0.6.8 // indirect @@ -48,8 +48,8 @@ require ( github.com/hashicorp/go-secure-stdlib/strutil v0.1.2 // indirect github.com/hashicorp/go-sockaddr v1.0.6 // indirect github.com/hashicorp/hcl v1.0.1-vault-5 // indirect - github.com/hashicorp/vault/api/auth/approle v0.5.0 // indirect - github.com/hashicorp/vault/api/auth/kubernetes v0.5.0 // indirect + github.com/hashicorp/vault/api/auth/approle v0.6.0 // indirect + github.com/hashicorp/vault/api/auth/kubernetes v0.6.0 // indirect github.com/josharian/intern v1.0.0 // indirect github.com/json-iterator/go v1.1.12 // indirect github.com/mailru/easyjson v0.7.7 // indirect @@ -60,23 +60,23 @@ require ( github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect github.com/modern-go/reflect2 v1.0.2 // indirect github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect - github.com/openshift/api v0.0.0-20231204192004-bfea29e5e6c4 + github.com/openshift/api v0.0.0-20240301093301-ce10821dc999 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect github.com/ryanuber/go-glob v1.0.0 // indirect github.com/sirupsen/logrus v1.9.3 // indirect - golang.org/x/crypto v0.17.0 // indirect - golang.org/x/net v0.19.0 // indirect - golang.org/x/oauth2 v0.15.0 // indirect - golang.org/x/sys v0.16.0 // indirect - golang.org/x/term v0.15.0 // indirect + golang.org/x/crypto v0.21.0 // indirect + golang.org/x/net v0.22.0 // indirect + golang.org/x/oauth2 v0.18.0 // indirect + golang.org/x/sys v0.18.0 // indirect + golang.org/x/term v0.18.0 // indirect golang.org/x/text v0.14.0 // indirect golang.org/x/time v0.5.0 // indirect 
google.golang.org/appengine v1.6.8 // indirect - google.golang.org/protobuf v1.31.0 // indirect + google.golang.org/protobuf v1.32.0 // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect - k8s.io/klog/v2 v2.110.1 // indirect - k8s.io/kube-openapi v0.0.0-20231129212854-f0671cc7e66a // indirect + k8s.io/klog/v2 v2.120.1 // indirect + k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect ) diff --git a/pkg/apis/go.sum b/pkg/apis/go.sum index 6ba0b2fd6945..e78f92dc4725 100644 --- a/pkg/apis/go.sum +++ b/pkg/apis/go.sum @@ -109,8 +109,8 @@ github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs= github.com/emicklei/go-restful/v3 v3.8.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= github.com/emicklei/go-restful/v3 v3.9.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= -github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g= -github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/emicklei/go-restful/v3 v3.11.3 h1:yagOQz/38xJmcNeZJtrUcKjkHRltIaIFXKWeG1SkWGE= +github.com/emicklei/go-restful/v3 v3.11.3/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= @@ -122,8 +122,8 @@ github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go. 
github.com/envoyproxy/go-control-plane v0.10.2-0.20220325020618-49ff273808a1/go.mod h1:KJwIaB5Mv44NWtYuAOFCVOjcI94vtpEz2JU/D2v6IjE= github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= -github.com/evanphx/json-patch v5.7.0+incompatible h1:vgGkfT/9f8zE6tvSCe74nfpAVDQ2tG6yudJd8LBksgI= -github.com/evanphx/json-patch v5.7.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= +github.com/evanphx/json-patch v5.9.0+incompatible h1:fBXyNpNMuTTDdquAq/uisOr2lShz4oaXpDTX2bLe7ls= +github.com/evanphx/json-patch v5.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys= github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k= @@ -139,32 +139,30 @@ github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9 github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= github.com/go-jose/go-jose/v3 v3.0.0/go.mod h1:RNkWWRld676jZEYoV3+XK8L2ZnNSvIsxFMht0mSX+u8= -github.com/go-jose/go-jose/v3 v3.0.1 h1:pWmKFVtt+Jl0vBZTIpz/eAKwsm6LkIxDVVbFHKkchhA= github.com/go-jose/go-jose/v3 v3.0.1/go.mod h1:RNkWWRld676jZEYoV3+XK8L2ZnNSvIsxFMht0mSX+u8= +github.com/go-jose/go-jose/v3 v3.0.2 h1:2Edjn8Nrb44UvTdp84KU0bBPs1cO7noRCybtS3eJEUQ= +github.com/go-jose/go-jose/v3 v3.0.2/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ= github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas= github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU= 
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= github.com/go-logr/logr v1.2.3/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= -github.com/go-logr/logr v1.3.0/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ= github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg= github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= -github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs= -github.com/go-openapi/jsonpointer v0.20.0 h1:ESKJdU9ASRfaPNOPRx12IUyA1vn3R9GiE3KYD14BXdQ= -github.com/go-openapi/jsonpointer v0.20.0/go.mod h1:6PGzBjjIIumbLYysB73Klnms1mwnU4G3YHOECG3CedA= +github.com/go-openapi/jsonpointer v0.20.3 h1:jykzYWS/kyGtsHfRt6aV8JTB9pcQAXPIA7qlZ5aRlyk= +github.com/go-openapi/jsonpointer v0.20.3/go.mod h1:c7l0rjoouAuIxCm8v/JWKRgMjDG/+/7UBWsXMrv6PsM= github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc= github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8= github.com/go-openapi/jsonreference v0.20.0/go.mod h1:Ag74Ico3lPc+zR+qjn4XBUmXymS4zJbYVCZmcgkasdo= -github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE= -github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k= +github.com/go-openapi/jsonreference v0.20.5 h1:hutI+cQI+HbSQaIGSfsBsYI0pHk+CATf8Fk5gCSj0yI= +github.com/go-openapi/jsonreference v0.20.5/go.mod h1:thAqAp31UABtI+FQGKAQfmv7DbFpKNUlva2UPCxKu2Y= github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo= github.com/go-openapi/swag 
v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= -github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14= -github.com/go-openapi/swag v0.22.4 h1:QLMzNJnMGPRNDCbySlcj1x01tzU8/9LTTL9hZZZogBU= -github.com/go-openapi/swag v0.22.4/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14= +github.com/go-openapi/swag v0.22.10 h1:4y86NVn7Z2yYd6pfS4Z+Nyh3aAUL3Nul+LMbhFKy0gA= +github.com/go-openapi/swag v0.22.10/go.mod h1:Cnn8BYtRlx6BNE3DPN86f/xkapGIcLWzh3CLEb4C1jI= github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE= github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI= github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls= @@ -316,10 +314,12 @@ github.com/hashicorp/hcl v1.0.1-vault-5/go.mod h1:XYhtn6ijBSAj6n4YqAaf7RBPS4I06A github.com/hashicorp/vault/api v1.10.0/go.mod h1:jo5Y/ET+hNyz+JnKDt8XLAdKs+AM0G5W0Vp1IrFI8N8= github.com/hashicorp/vault/api v1.12.0 h1:meCpJSesvzQyao8FCOgk2fGdoADAnbDu2WPJN1lDLJ4= github.com/hashicorp/vault/api v1.12.0/go.mod h1:si+lJCYO7oGkIoNPAN8j3azBLTn9SjMGS+jFaHd1Cck= -github.com/hashicorp/vault/api/auth/approle v0.5.0 h1:a1TK6VGwYqSAfkmX4y4dJ4WBxMU5dStIZqScW4EPXR8= github.com/hashicorp/vault/api/auth/approle v0.5.0/go.mod h1:CHOQIA1AZACfjTzHggmyfiOZ+xCSKNRFqe48FTCzH0k= -github.com/hashicorp/vault/api/auth/kubernetes v0.5.0 h1:CXO0fD7M3iCGovP/UApeHhPcH4paDFKcu7AjEXi94rI= +github.com/hashicorp/vault/api/auth/approle v0.6.0 h1:ELfFFQlTM/e97WJKu1HvNFa7lQ3tlTwwzrR1NJE1V7Y= +github.com/hashicorp/vault/api/auth/approle v0.6.0/go.mod h1:CCoIl1xBC3lAWpd1HV+0ovk76Z8b8Mdepyk21h3pGk0= github.com/hashicorp/vault/api/auth/kubernetes v0.5.0/go.mod h1:afrElBIO9Q4sHFVuVWgNevG4uAs1bT2AZFA9aEiI608= +github.com/hashicorp/vault/api/auth/kubernetes 
v0.6.0 h1:K8sKGhtTAqGKfzaaYvUSIOAqTOIn3Gk1EsCEAMzZHtM= +github.com/hashicorp/vault/api/auth/kubernetes v0.6.0/go.mod h1:Htwcjez5J9PwAHaZ1EYMBlgGq3/in5ajUV4+WCPihPE= github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= @@ -426,8 +426,8 @@ github.com/onsi/gomega v1.24.0/go.mod h1:Z/NWtiqwBrwUt4/2loMmHL63EDLnYHmVbuBpDr2 github.com/onsi/gomega v1.24.1/go.mod h1:3AOiACssS3/MajrniINInwbfOOtfZvplPzuRSmvt1jM= github.com/onsi/gomega v1.30.0 h1:hvMK7xYz4D3HapigLTeGdId/NcfQx1VHMJc60ew99+8= github.com/onsi/gomega v1.30.0/go.mod h1:9sxs+SwGrKI0+PWe4Fxa9tFQQBG5xSsSbMXOI8PPpoQ= -github.com/openshift/api v0.0.0-20231204192004-bfea29e5e6c4 h1:5RyeLvTSZEn/fDQA6e6+qIvFPssWjreY8pbwfg4/EEQ= -github.com/openshift/api v0.0.0-20231204192004-bfea29e5e6c4/go.mod h1:qNtV0315F+f8ld52TLtPvrfivZpdimOzTi3kn9IVbtU= +github.com/openshift/api v0.0.0-20240301093301-ce10821dc999 h1:+S998xHiJApsJZjRAO8wyedU9GfqFd8mtwWly6LqHDo= +github.com/openshift/api v0.0.0-20240301093301-ce10821dc999/go.mod h1:CxgbWAlvu2iQB0UmKTtRu1YfepRg1/vJ64n2DlIEVz4= github.com/pborman/uuid v1.2.0 h1:J7Q5mO4ysT1dv8hyrUGHb9+ooztCXu1D8MY8DZYsu3g= github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k= github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU= @@ -445,8 +445,8 @@ github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6L github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc= github.com/rogpeppe/go-internal v1.8.1/go.mod h1:JeRgkft04UBgHMgCIwADu4Pn6Mtm5d4nPKWu0nJ5d+o= -github.com/rogpeppe/go-internal v1.10.0 
h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ= -github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog= +github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M= +github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA= github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts= github.com/ryanuber/go-glob v1.0.0 h1:iQh3xXAumdQ+4Ufa5b25cRpC5TYKlno6hsv6Cb3pkBk= github.com/ryanuber/go-glob v1.0.0/go.mod h1:807d1WSdnB0XRJzKNil9Om6lcp/3a0v4qIHxIXzX/Yc= @@ -471,9 +471,9 @@ github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/ github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= -github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= -github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk= github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg= +github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= @@ -502,8 +502,11 @@ golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5y golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw= golang.org/x/crypto 
v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58= -golang.org/x/crypto v0.17.0 h1:r8bRNjWL3GshPW3gkd+RpvzWrZAwPS49OmTGZ/uhM4k= +golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4= golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4= +golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU= +golang.org/x/crypto v0.21.0 h1:X31++rzVUdKhX5sWmSOFZxx8UW/ldWx55cbf08iNAMA= +golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= @@ -542,6 +545,7 @@ golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= golang.org/x/mod v0.6.0/go.mod h1:4mET923SAdbXp2ki8ey+zGs1SLqsuM2Y0uvdZR/fUNI= +golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= @@ -597,8 +601,10 @@ golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= golang.org/x/net v0.3.1-0.20221206200815-1e63c2f08a10/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE= golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= golang.org/x/net v0.7.0/go.mod 
h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= -golang.org/x/net v0.19.0 h1:zTwKpTd2XuCqf8huc7Fo2iSy+4RHPd10s4KzeTnVr1c= -golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U= +golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= +golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= +golang.org/x/net v0.22.0 h1:9sGLhx7iRIHEiX0oAJ3MRZMUCElJgy7Br1nO+AMN3Tc= +golang.org/x/net v0.22.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -619,8 +625,8 @@ golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc= golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc= golang.org/x/oauth2 v0.0.0-20220524215830-622c5d57e401/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc= -golang.org/x/oauth2 v0.15.0 h1:s8pnnxNVzjWyrvYdFUQq5llS1PX2zhPXmccZv99h7uQ= -golang.org/x/oauth2 v0.15.0/go.mod h1:q48ptWNTY5XWf+JNten23lcvHpLJ0ZSxF5ttTHKVCAM= +golang.org/x/oauth2 v0.18.0 h1:09qnuIAgzdx1XplqJvW6CQqMCtGZykZWcXzPMPUusvI= +golang.org/x/oauth2 v0.18.0/go.mod h1:Wf7knwG0MPoWIMMBgFlEaSUDaKskp0dCfrlJRJXbBi8= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= @@ -634,6 +640,7 @@ golang.org/x/sync 
v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -713,8 +720,12 @@ golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.16.0 h1:xWw16ngr6ZMtmxDyKyIgsE93KNKz5HKmMa3b8ALHidU= -golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4= +golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term 
v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= @@ -722,8 +733,12 @@ golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= golang.org/x/term v0.3.0/go.mod h1:q750SLmJuPmVoN1blW3UFBPREJfb1KmY3vwxfr+nFDA= golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= -golang.org/x/term v0.15.0 h1:y/Oo/a/q3IXu26lQgl04j/gjuBDOBlx7X6Om1j2CPW4= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= +golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0= +golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk= +golang.org/x/term v0.18.0 h1:FcHjZXDMxI8mM3nwhX9HlKop4C0YQvCVCdwYl2wOtE8= +golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -737,6 +752,8 @@ golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ= golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ= golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= golang.org/x/time 
v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= @@ -808,8 +825,9 @@ golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= golang.org/x/tools v0.2.0/go.mod h1:y4OqIKeOV/fWJetJ8bXPU1sEVniLMIyDAZWeHdV+NTA= -golang.org/x/tools v0.16.1 h1:TLyB3WofjdOEepBHAU20JdNC1Zbg87elYofWYAY5oZA= -golang.org/x/tools v0.16.1/go.mod h1:kYVVN6I1mBNoB1OX+noeBjbRk4IUEPa7JJ+TJMEooJ0= +golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= +golang.org/x/tools v0.18.0 h1:k8NLag8AGHnn+PHbl7g43CtqZAwG60vZkLqgyZgIHgQ= +golang.org/x/tools v0.18.0/go.mod h1:GL7B4CwcLLeo59yx/9UWWuNOW1n3VZ4f5axWfML7Lcg= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= @@ -985,8 +1003,8 @@ google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQ google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= -google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8= -google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= +google.golang.org/protobuf v1.32.0 h1:pPC6BG5ex8PDFnkbrGU3EixyhKcQ2aDuBS36lqK/C7I= +google.golang.org/protobuf v1.32.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= @@ -1042,19 +1060,19 @@ k8s.io/klog/v2 v2.4.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y= k8s.io/klog/v2 v2.30.0/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= k8s.io/klog/v2 v2.60.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= k8s.io/klog/v2 v2.80.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= -k8s.io/klog/v2 v2.110.1 h1:U/Af64HJf7FcwMcXyKm2RPM22WZzyR7OSpYj5tg3cL0= -k8s.io/klog/v2 v2.110.1/go.mod h1:YGtd1984u+GgbuZ7e08/yBuAfKLSO0+uR1Fhi6ExXjo= +k8s.io/klog/v2 v2.120.1 h1:QXU6cPEOIslTGvZaXvFWiP9VKyeet3sawzTOvdXb4Vw= +k8s.io/klog/v2 v2.120.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd/go.mod h1:WOJ3KddDSol4tAGcJo0Tvi+dK12EcqSLqcWsryKMpfM= k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65/go.mod h1:sX9MT8g7NVZM5lVL/j8QyCCJe8YSMW30QvGZWaCIDIk= k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280/go.mod h1:+Axhij7bCpeqhklhUTe3xmOn6bWxolyZEeyaFpjGtl4= -k8s.io/kube-openapi v0.0.0-20231129212854-f0671cc7e66a h1:ZeIPbyHHqahGIbeyLJJjAUhnxCKqXaDY+n89Ms8szyA= -k8s.io/kube-openapi v0.0.0-20231129212854-f0671cc7e66a/go.mod h1:AsvuZPBlUDVuCdzJ87iajxtXuR9oktsTctW/R9wwouA= +k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag= +k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98= k8s.io/utils v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils v0.0.0-20211116205334-6203023598ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils v0.0.0-20221107191617-1a15be271d1d/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= k8s.io/utils v0.0.0-20221128185143-99ec85e7a448/go.mod 
h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= -k8s.io/utils v0.0.0-20231127182322-b307cd553661 h1:FepOBzJ0GXm8t0su67ln2wAZjbQ6RxQGZDnzuLcrUTI= -k8s.io/utils v0.0.0-20231127182322-b307cd553661/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +k8s.io/utils v0.0.0-20240102154912-e7106e64919e h1:eQ/4ljkx21sObifjzXwlPKpdGLrCfRziVtos3ofG/sQ= +k8s.io/utils v0.0.0-20240102154912-e7106e64919e/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0= rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA= From bbae0644902b21c35e86ba3524dfb975e922ec9e Mon Sep 17 00:00:00 2001 From: Blaine Gardner Date: Mon, 4 Mar 2024 11:08:11 -0700 Subject: [PATCH 18/65] docs: separate network providers into sub-doc Some users are still confused by Ceph networking, especially involving Multus. Continuing to add network provider-specific information -- a pretty advanced concept -- to the already-heavy CephCluster CRD doc is becoming cumbersome. Separate network provider settings into a separate child doc that notes it is an advanced topic. This separation will also help with work to remove the holder pod design, which requires additional Multus prerequisites. 
See: https://github.com/rook/rook/issues/13055 Signed-off-by: Blaine Gardner --- .../CRDs/Cluster/ceph-cluster-crd.md | 150 +------------- .../CRDs/Cluster/network-providers.md | 191 ++++++++++++++++++ 2 files changed, 194 insertions(+), 147 deletions(-) create mode 100644 Documentation/CRDs/Cluster/network-providers.md diff --git a/Documentation/CRDs/Cluster/ceph-cluster-crd.md b/Documentation/CRDs/Cluster/ceph-cluster-crd.md index c88e09478e18..e9df0f3f8edb 100755 --- a/Documentation/CRDs/Cluster/ceph-cluster-crd.md +++ b/Documentation/CRDs/Cluster/ceph-cluster-crd.md @@ -223,154 +223,10 @@ Configure the network that will be enabled for the cluster and services. Changing networking configuration after a Ceph cluster has been deployed is NOT supported and will result in a non-functioning cluster. -#### Ceph public and cluster networks +#### Provider -Ceph daemons can operate on up to two distinct networks: public, and cluster. - -Ceph daemons always use the public network, which is the Kubernetes pod network by default. The -public network is used for client communications with the Ceph cluster (reads/writes). - -If specified, the cluster network is used to isolate internal Ceph replication traffic. This includes -additional copies of data replicated between OSDs during client reads/writes. This also includes OSD -data recovery (re-replication) when OSDs or nodes go offline. If the cluster network is unspecified, -the public network is used for this traffic instead. - -Some Rook network providers allow manually specifying the public and network interfaces that Ceph -will use for data traffic. Use `addressRanges` to specify this. For example: - -```yaml - network: - provider: host - addressRanges: - public: - - "192.168.100.0/24" - - "192.168.101.0/24" - cluster: - - "192.168.200.0/24" -``` - -This spec translates directly to Ceph's `public_network` and `host_network` configurations. 
-Refer to [Ceph networking documentation](https://docs.ceph.com/docs/master/rados/configuration/network-config-ref/) -for more details. - -The default, unspecified network provider cannot make use of these configurations. - -Ceph public and cluster network configurations are allowed to change, but this should be done with -great care. When updating underlying networks or Ceph network settings, Rook assumes that the -current network configuration used by Ceph daemons will continue to operate as intended. Network -changes are not applied to Ceph daemon pods (like OSDs and MDSes) until the pod is restarted. When -making network changes, ensure that restarted pods will not lose connectivity to existing pods, and -vice versa. - -#### Host Networking - -To use host networking, set `provider: host`. - -To instruct Ceph to operate on specific host interfaces or networks, use `addressRanges` to select -the network CIDRs Ceph will bind to on the host. - -If the host networking setting is changed in a cluster where mons are already running, the existing mons will -remain running with the same network settings with which they were created. To complete the conversion -to or from host networking after you update this setting, you will need to -[failover the mons](../../Storage-Configuration/Advanced/ceph-mon-health.md#failing-over-a-monitor) -in order to have mons on the desired network configuration. - -#### Multus - -Rook supports using Multus NetworkAttachmentDefinitions for Ceph public and cluster networks. - -Refer to [Multus documentation](https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md) -for details about how to set up and select Multus networks. - -Rook will attempt to auto-discover the network CIDRs for selected public and/or cluster networks. -This process is not guaranteed to succeed. Furthermore, this process will get a new network lease -for each CephCluster reconcile. 
Specify `addressRanges` manually if the auto-detection process -fails or if the selected network configuration cannot automatically recycle released network leases. - -Only OSD pods will have both public and cluster networks attached (if specified). The rest of the -Ceph component pods and CSI pods will only have the public network attached. The Rook operator will -not have any networks attached; it proxies Ceph commands via a sidecar container in the mgr pod. - -A NetworkAttachmentDefinition must exist before it can be used by Multus for a Ceph network. A -recommended definition will look like the following: - -```yaml -apiVersion: "k8s.cni.cncf.io/v1" -kind: NetworkAttachmentDefinition -metadata: - name: ceph-multus-net - namespace: rook-ceph -spec: - config: '{ - "cniVersion": "0.3.0", - "type": "macvlan", - "master": "eth0", - "mode": "bridge", - "ipam": { - "type": "whereabouts", - "range": "192.168.200.0/24" - } - }' -``` - -* Ensure that `master` matches the network interface on hosts that you want to use. - It must be the same across all hosts. -* CNI type `macvlan` is highly recommended. - It has less CPU and memory overhead compared to traditional Linux `bridge` configurations. -* IPAM type `whereabouts` is recommended because it ensures each pod gets an IP address unique - within the Kubernetes cluster. No DHCP server is required. If a DHCP server is present on the - network, ensure the IP range does not overlap with the DHCP server's range. - -NetworkAttachmentDefinitions are selected for the desired Ceph network using `selectors`. Selector -values should include the namespace in which the NAD is present. `public` and `cluster` may be -selected independently. If `public` is left unspecified, Rook will configure Ceph to use the -Kubernetes pod network for Ceph client traffic. 
- -Consider the example below which selects a hypothetical Kubernetes-wide Multus network in the -default namespace for Ceph's public network and selects a Ceph-specific network in the `rook-ceph` -namespace for Ceph's cluster network. The commented-out portion shows an example of how address -ranges could be manually specified for the networks if needed. - -```yaml - network: - provider: multus - selectors: - public: default/kube-multus-net - cluster: rook-ceph/ceph-multus-net - # addressRanges: - # public: - # - "192.168.100.0/24" - # - "192.168.101.0/24" - # cluster: - # - "192.168.200.0/24" -``` - -##### Validating Multus configuration - -We **highly** recommend validating your Multus configuration before you install Rook. A tool exists -to facilitate validating the Multus configuration. After installing the Rook operator and before -installing any Custom Resources, run the tool from the operator pod. - -The tool's CLI is designed to be as helpful as possible. Get help text for the multus validation -tool like so: -```console -kubectl --namespace rook-ceph exec -it deploy/rook-ceph-operator -- rook multus validation run --help -``` - -Then, update the args in the [multus-validation](https://github.com/rook/rook/blob/master/deploy/examples/multus-validation.yaml) job template. Minimally, add the NAD names(s) for public and/or cluster as needed and and then, create the job to validate the Multus configuration. - -If the tool fails, it will suggest what things may be preventing Multus networks from working -properly, and it will request the logs and outputs that will help debug issues. - -Check the logs of the pod created by the job to know the status of the validation test. - -##### Known limitations with Multus - -Daemons leveraging Kubernetes service IPs (Monitors, Managers, Rados Gateways) are not listening on the NAD specified in the `selectors`. 
-Instead the daemon listens on the default network, however the NAD is attached to the container, -allowing the daemon to communicate with the rest of the cluster. There is work in progress to fix -this issue in the [multus-service](https://github.com/k8snetworkplumbingwg/multus-service) -repository. At the time of writing it's unclear when this will be supported. +Selecting a non-default network provider is an advanced topic. Read more in the +[Network Providers](./network-providers.md) documentation. #### IPFamily diff --git a/Documentation/CRDs/Cluster/network-providers.md b/Documentation/CRDs/Cluster/network-providers.md new file mode 100644 index 000000000000..a74e6c7a02fc --- /dev/null +++ b/Documentation/CRDs/Cluster/network-providers.md @@ -0,0 +1,191 @@ +--- +title: Network Providers +--- + +Rook deploys CephClusters using Kubernetes' software-defined networks by default. This is simple for +users, provides necessary connectivity, and has good node-level security. However, this comes at the +expense of additional latency, and the storage network must contend with Kubernetes applications for +network bandwidth. It also means that Kubernetes applications coexist on the same network as Ceph +daemons and can reach the Ceph cluster easily via network scanning. + +Rook allows selecting alternative network providers to address some of these downsides, sometimes at +the expense of others. Selecting alternative network providers is an advanced topic. + +!!! Note + This is an advanced networking topic. + See also the [CephCluster general networking settings.](./ceph-cluster-crd.md#network-configuration-settings) + +## Ceph Networking Fundamentals + +Ceph daemons can operate on up to two distinct networks: public, and cluster. + +Ceph daemons always use the public network. The public network is used for client communications +with the Ceph cluster (reads/writes). Rook configures this as the Kubernetes pod network by default. +Ceph-CSI uses this network for PVCs. 
+ +The cluster network is optional and is used to isolate internal Ceph replication traffic. This +includes additional copies of data replicated between OSDs during client reads/writes. This also +includes OSD data recovery (re-replication) when OSDs or nodes go offline. If the cluster network is +unspecified, the public network is used for this traffic instead. + +Refer to [Ceph networking documentation](https://docs.ceph.com/docs/master/rados/configuration/network-config-ref/) +for deeper explanations of these topics. + +## Specifying Ceph Network Selections +`network.addressRanges` + +This configuration is always optional but is important to understand. + +Some Rook network providers allow specifying the public and cluster network interfaces that Ceph +will use for data traffic. Use `addressRanges` to specify this. For example: + +```yaml + network: + provider: host + addressRanges: + public: + - "192.168.100.0/24" + - "192.168.101.0/24" + cluster: + - "192.168.200.0/24" +``` + +These `public` and `cluster` ranges translate directly to Ceph's `public_network` and +`cluster_network` configurations. + +The default network provider cannot make use of these configurations. + +Ceph public and cluster network configurations are allowed to change, but this should be done with +great care. When updating underlying networks or Ceph network settings, Rook assumes that the +current network configuration used by Ceph daemons will continue to operate as intended. Network +changes are not applied to Ceph daemon pods (like OSDs and MDSes) until the pod is restarted. When +making network changes, ensure that restarted pods will not lose connectivity to existing pods, and +vice versa. + +## Host Networking +`network.provider: host` + +Host networking allows the Ceph cluster to use network interfaces on Kubernetes hosts for +communication. This eliminates latency from the software-defined pod network, but it provides no +host-level security isolation.
+ +Ceph daemons will use any network available on the host for communication. To restrict Ceph to using +only specific host interfaces or networks, use `addressRanges` to select the network +CIDRs Ceph will bind to on the host. + +If the host networking setting is changed in a cluster where mons are already running, the existing mons will +remain running with the same network settings with which they were created. To complete the conversion +to or from host networking after you update this setting, you will need to +[failover the mons](../../Storage-Configuration/Advanced/ceph-mon-health.md#failing-over-a-monitor) +in order to have mons on the desired network configuration. + +## Multus +`network.provider: multus` + +Rook supports using Multus NetworkAttachmentDefinitions for Ceph public and cluster networks. + +This allows Rook to attach any CNI to Ceph as a public and/or cluster network. This provides strong +isolation between Kubernetes applications and Ceph cluster daemons. + +While any CNI may be used, the intent is to allow use of CNIs which allow Ceph to be connected to +specific host interfaces. This improves latency and bandwidth while preserving host-level network +isolation. + +### Multus Configuration + +Refer to [Multus documentation](https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md) +for details about how to set up and select Multus networks. + +Rook will attempt to auto-discover the network CIDRs for selected public and/or cluster networks. +This process is not guaranteed to succeed. Furthermore, this process will get a new network lease +for each CephCluster reconcile. Specify `addressRanges` manually if the auto-detection process +fails or if the selected network configuration cannot automatically recycle released network leases. + +Only OSD pods will have both public and cluster networks attached (if specified). The rest of the +Ceph component pods and CSI pods will only have the public network attached.
The Rook operator will +not have any networks attached; it proxies Ceph commands via a sidecar container in the mgr pod. + +A NetworkAttachmentDefinition must exist before it can be used by Multus for a Ceph network. A +recommended definition will look like the following: + +```yaml +apiVersion: "k8s.cni.cncf.io/v1" +kind: NetworkAttachmentDefinition +metadata: + name: ceph-multus-net + namespace: rook-ceph +spec: + config: '{ + "cniVersion": "0.3.0", + "type": "macvlan", + "master": "eth0", + "mode": "bridge", + "ipam": { + "type": "whereabouts", + "range": "192.168.200.0/24" + } + }' +``` + +* Ensure that `master` matches the network interface on hosts that you want to use. + It must be the same across all hosts. +* CNI type `macvlan` is highly recommended. + It has less CPU and memory overhead compared to traditional Linux `bridge` configurations. +* IPAM type `whereabouts` is recommended because it ensures each pod gets an IP address unique + within the Kubernetes cluster. No DHCP server is required. If a DHCP server is present on the + network, ensure the IP range does not overlap with the DHCP server's range. + +NetworkAttachmentDefinitions are selected for the desired Ceph network using `selectors`. Selector +values should include the namespace in which the NAD is present. `public` and `cluster` may be +selected independently. If `public` is left unspecified, Rook will configure Ceph to use the +Kubernetes pod network for Ceph client traffic. + +Consider the example below which selects a hypothetical Kubernetes-wide Multus network in the +default namespace for Ceph's public network and selects a Ceph-specific network in the `rook-ceph` +namespace for Ceph's cluster network. The commented-out portion shows an example of how address +ranges could be manually specified for the networks if needed. 
+ +```yaml + network: + provider: multus + selectors: + public: default/kube-multus-net + cluster: rook-ceph/ceph-multus-net + # addressRanges: + # public: + # - "192.168.100.0/24" + # - "192.168.101.0/24" + # cluster: + # - "192.168.200.0/24" +``` + +### Validating Multus configuration + +We **highly** recommend validating your Multus configuration before you install Rook. A tool exists +to facilitate validating the Multus configuration. After installing the Rook operator and before +installing any Custom Resources, run the tool from the operator pod. + +The tool's CLI is designed to be as helpful as possible. Get help text for the multus validation +tool like so: +```console +kubectl --namespace rook-ceph exec -it deploy/rook-ceph-operator -- rook multus validation run --help +``` + +Then, update the args in the +[multus-validation](https://github.com/rook/rook/blob/master/deploy/examples/multus-validation.yaml) +job template. Minimally, add the NAD name(s) for public and/or cluster as needed, and then +create the job to validate the Multus configuration. + +If the tool fails, it will suggest what things may be preventing Multus networks from working +properly, and it will request the logs and outputs that will help debug issues. + +Check the logs of the pod created by the job to know the status of the validation test. + +### Known limitations with Multus + +Daemons leveraging Kubernetes service IPs (Monitors, Managers, Rados Gateways) are not listening on +the NAD specified in the `selectors`. Instead, the daemon listens on the default network; however, the +NAD is attached to the container, allowing the daemon to communicate with the rest of the cluster. +There is work in progress to fix this issue in the +[multus-service](https://github.com/k8snetworkplumbingwg/multus-service) repository. At the time of +writing, it's unclear when this will be supported.
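The NAD guidance in the patch above warns that the whereabouts `range` must not overlap a DHCP server's pool. Whereabouts can express that directly with its `exclude` option; the following is a hypothetical variant of the recommended NetworkAttachmentDefinition, where the excluded CIDR `192.168.200.0/26` is an assumption standing in for whatever range the DHCP server actually hands out:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ceph-multus-net
  namespace: rook-ceph
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "master": "eth0",
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts",
      "range": "192.168.200.0/24",
      "exclude": ["192.168.200.0/26"]
    }
  }'
```

`exclude` removes the listed CIDRs from whereabouts' assignment pool, so pod IPs never collide with DHCP-managed addresses on the same segment.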
From 6f61f4600edb7f4169ad94728ff2e42fcfc6e6c0 Mon Sep 17 00:00:00 2001 From: Alexander Trost Date: Tue, 5 Mar 2024 16:51:09 +0100 Subject: [PATCH 19/65] build: fix master version helm values replacement Signed-off-by: Alexander Trost --- deploy/charts/rook-ceph-cluster/values.yaml | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/deploy/charts/rook-ceph-cluster/values.yaml b/deploy/charts/rook-ceph-cluster/values.yaml index 9dd02d154717..ea139b662a57 100644 --- a/deploy/charts/rook-ceph-cluster/values.yaml +++ b/deploy/charts/rook-ceph-cluster/values.yaml @@ -440,7 +440,7 @@ cephBlockPools: replicated: size: 3 # Enables collecting RBD per-image IO statistics by enabling dynamic OSD performance counters. Defaults to false. - # For reference: https://docs.ceph.com/docs/master/mgr/prometheus/#rbd-io-statistics + # For reference: https://docs.ceph.com/docs/latest/mgr/prometheus/#rbd-io-statistics # enableRBDStats: true storageClass: enabled: true @@ -460,16 +460,16 @@ cephBlockPools: parameters: # (optional) mapOptions is a comma-separated list of map options. # For krbd options refer - # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options + # https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options # For nbd options refer - # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options + # https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options # mapOptions: lock_on_read,queue_depth=1024 # (optional) unmapOptions is a comma-separated list of unmap options. # For krbd options refer - # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options + # https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options # For nbd options refer - # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options + # https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options # unmapOptions: force # RBD image format. Defaults to "2". 
@@ -637,16 +637,16 @@ cephObjectStores: # clusterID: rook-ceph # namespace:cluster # # (optional) mapOptions is a comma-separated list of map options. # # For krbd options refer -# # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options +# # https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options # # For nbd options refer -# # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options +# # https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options # # mapOptions: lock_on_read,queue_depth=1024 # # # (optional) unmapOptions is a comma-separated list of unmap options. # # For krbd options refer -# # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options +# # https://docs.ceph.com/docs/latest/man/8/rbd/#kernel-rbd-krbd-options # # For nbd options refer -# # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options +# # https://docs.ceph.com/docs/latest/man/8/rbd-nbd/#options # # unmapOptions: force # # # RBD image format. Defaults to "2". From 9061bffd86e41c1614d611e72a1f53ed39adb81b Mon Sep 17 00:00:00 2001 From: parth-gr Date: Tue, 5 Mar 2024 19:03:57 +0530 Subject: [PATCH 20/65] doc: update objectuser example in the external install differentiate between two different ways of S3 storage consumption Signed-off-by: parth-gr --- .../CRDs/Cluster/external-cluster.md | 21 +++++++++++++------ 1 file changed, 15 insertions(+), 6 deletions(-) diff --git a/Documentation/CRDs/Cluster/external-cluster.md b/Documentation/CRDs/Cluster/external-cluster.md index 8a803fe33cab..3cd431838e18 100644 --- a/Documentation/CRDs/Cluster/external-cluster.md +++ b/Documentation/CRDs/Cluster/external-cluster.md @@ -191,17 +191,26 @@ If not installing with Helm, here are the steps to install with manifests. ### Connect to an External Object Store -Create the object store resources: - -1. Create the [external object store CR](https://github.com/rook/rook/blob/master/deploy/examples/object-external.yaml) to configure connection to external gateways. -2.
Create an [Object store user](https://github.com/rook/rook/blob/master/deploy/examples/object-user.yaml) for credentials to access the S3 endpoint. -3. Create a [bucket storage class](https://github.com/rook/rook/blob/master/deploy/examples/storageclass-bucket-delete.yaml) where a client can request creating buckets. -4. Create the [Object Bucket Claim](https://github.com/rook/rook/blob/master/deploy/examples/object-bucket-claim-delete.yaml), which will create an individual bucket for reading and writing objects. +Create the [external object store CR](https://github.com/rook/rook/blob/master/deploy/examples/object-external.yaml) to configure the connection to the external gateways. ```console cd deploy/examples kubectl create -f object-external.yaml ``` + +Consume the S3 storage in one of two ways: + +1. Create an [Object store user](https://github.com/rook/rook/blob/master/deploy/examples/object-user.yaml) for credentials to access the S3 endpoint. + +```console + cd deploy/examples kubectl create -f object-user.yaml ``` + +2. Create a [bucket storage class](https://github.com/rook/rook/blob/master/deploy/examples/storageclass-bucket-delete.yaml) where a client can request creating buckets, and then create the [Object Bucket Claim](https://github.com/rook/rook/blob/master/deploy/examples/object-bucket-claim-delete.yaml), which will create an individual bucket for reading and writing objects. + +```console + cd deploy/examples kubectl create -f storageclass-bucket-delete.yaml kubectl create -f object-bucket-claim-delete.yaml ``` From 2c922c28798e16d9f4bfc6f4272ca2c0c3842685 Mon Sep 17 00:00:00 2001 From: parth-gr Date: Wed, 6 Mar 2024 20:57:05 +0530 Subject: [PATCH 21/65] doc: remove legacy rgw service documentation The rgw service documentation was removed because users reported certificate issues when Rook created Services for RGWs that had TLS (HTTPS) enabled. It is now up to the user to create a Service if they want to use one.
part-of: https://github.com/rook/rook/discussions/13863 Signed-off-by: parth-gr --- Documentation/CRDs/Object-Storage/ceph-object-store-crd.md | 3 --- 1 file changed, 3 deletions(-) diff --git a/Documentation/CRDs/Object-Storage/ceph-object-store-crd.md b/Documentation/CRDs/Object-Storage/ceph-object-store-crd.md index 37dc1b2da4a1..83c6fffc2ca1 100644 --- a/Documentation/CRDs/Object-Storage/ceph-object-store-crd.md +++ b/Documentation/CRDs/Object-Storage/ceph-object-store-crd.md @@ -143,9 +143,6 @@ gateway: # hostname: example.com ``` -This will create a service with the endpoint `192.168.39.182` on port `80`, pointing to the Ceph object external gateway. -All the other settings from the gateway section will be ignored, except for `securePort`. - ## Zone Settings The [zone](../../Storage-Configuration/Object-Storage-RGW/ceph-object-multisite.md) settings allow the object store to join custom created [ceph-object-zone](ceph-object-zone-crd.md). From 6867adaf291830ca7cc951155dbea4707352365e Mon Sep 17 00:00:00 2001 From: karthik-us Date: Wed, 6 Mar 2024 23:09:39 +0530 Subject: [PATCH 22/65] Revert "doc: fix broken links" This reverts commit ebc812c4ede094195ebe369e58bce3cb6604d09b. 
Signed-off-by: karthik-us --- Documentation/.pages | 12 ------------ Documentation/Getting-Started/.pages | 2 +- Documentation/Getting-Started/intro.md | 1 + Documentation/README.md | 4 ++-- Documentation/intro.md | 1 - mkdocs.yml | 2 +- 6 files changed, 5 insertions(+), 17 deletions(-) create mode 120000 Documentation/Getting-Started/intro.md delete mode 120000 Documentation/intro.md diff --git a/Documentation/.pages b/Documentation/.pages index 078a205892c1..c6dc6332dbe2 100644 --- a/Documentation/.pages +++ b/Documentation/.pages @@ -6,15 +6,3 @@ nav: - Troubleshooting - Upgrade - Contributing - - Getting Started: - - Rook: intro.md - - Glossary: Getting-Started/glossary - - Prerequisites: - - Prerequisites: Getting-Started/Prerequisites/prerequisites.md - - Authenticated Container Registries: Getting-Started/Prerequisites/authenticated-registry - - Quick Start: Getting-Started/quickstart - - Storage Architecture: Getting-Started/storage-architecture - - Example Configurations: Getting-Started/example-configurations - - OpenShift: Getting-Started/ceph-openshift - - Cleanup: Getting-Started/ceph-teardown - - Release: Getting-Started/release-cycle diff --git a/Documentation/Getting-Started/.pages b/Documentation/Getting-Started/.pages index 29bb29b3d8a8..165242426575 100644 --- a/Documentation/Getting-Started/.pages +++ b/Documentation/Getting-Started/.pages @@ -1,5 +1,5 @@ nav: - - Rook: ../intro + - intro.md - glossary.md - Prerequisites - quickstart.md diff --git a/Documentation/Getting-Started/intro.md b/Documentation/Getting-Started/intro.md new file mode 120000 index 000000000000..32d46ee883b5 --- /dev/null +++ b/Documentation/Getting-Started/intro.md @@ -0,0 +1 @@ +../README.md \ No newline at end of file diff --git a/Documentation/README.md b/Documentation/README.md index 986a87b1bfb4..ede5c852315c 100644 --- a/Documentation/README.md +++ b/Documentation/README.md @@ -18,11 +18,11 @@ Rook is hosted by the [Cloud Native Computing 
Foundation](https://cncf.io) (CNCF ## Quick Start Guide Starting Ceph in your cluster is as simple as a few `kubectl` commands. -See our [Quickstart](Getting-Started/quickstart.md) guide to get started with the Ceph operator! +See our [Quickstart](quickstart.md) guide to get started with the Ceph operator! ## Designs -[Ceph](https://docs.ceph.com/en/latest/) is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. See the [Ceph overview](Getting-Started/storage-architecture.md). +[Ceph](https://docs.ceph.com/en/latest/) is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. See the [Ceph overview](storage-architecture.md). For detailed design documentation, see also the [design docs](https://github.com/rook/rook/tree/master/design). diff --git a/Documentation/intro.md b/Documentation/intro.md deleted file mode 120000 index 42061c01a1c7..000000000000 --- a/Documentation/intro.md +++ /dev/null @@ -1 +0,0 @@ -README.md \ No newline at end of file diff --git a/mkdocs.yml b/mkdocs.yml index b0588903e8ad..6b7b20db610f 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -72,7 +72,7 @@ plugins: #js_files: [] - redirects: redirect_maps: - README.md: intro.md + README.md: Getting-Started/intro.md - mike: # these fields are all optional; the defaults are as below... version_selector: true # set to false to leave out the version selector From 9ceb8aa7e08427a05ffe4db4579cd58b11587ecc Mon Sep 17 00:00:00 2001 From: ESASHIKA Kaoru Date: Tue, 27 Feb 2024 07:16:00 +0000 Subject: [PATCH 23/65] docs: note that changing encryption option settings for existing clusters is not dangerous We have prohibited changing all network-related configurations after cluster creation. 
However, as a result of https://github.com/rook/rook/discussions/13584, we found that the connections.encryption flag can be regarded as an exception. Signed-off-by: ESASHIKA Kaoru --- Documentation/CRDs/Cluster/ceph-cluster-crd.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/Documentation/CRDs/Cluster/ceph-cluster-crd.md b/Documentation/CRDs/Cluster/ceph-cluster-crd.md index e9df0f3f8edb..c6efd41fb3e1 100755 --- a/Documentation/CRDs/Cluster/ceph-cluster-crd.md +++ b/Documentation/CRDs/Cluster/ceph-cluster-crd.md @@ -220,8 +220,9 @@ Configure the network that will be enabled for the cluster and services. See the kernel requirements above for encryption. !!! caution - Changing networking configuration after a Ceph cluster has been deployed is NOT - supported and will result in a non-functioning cluster. + Changing networking configuration after a Ceph cluster has been deployed is only supported for + the network encryption settings. Changing other network settings is *NOT* supported and will + likely result in a non-functioning cluster. #### Provider From fdacd1e587d2349665c509e5a44217dd0023554e Mon Sep 17 00:00:00 2001 From: NymanRobin Date: Thu, 7 Mar 2024 11:09:45 +0200 Subject: [PATCH 24/65] ci: remove minor version from setup-go in govulncheck The patch version was pinned which caused problems with security patches. 
Unpin it and always check for a new patch version Signed-off-by: NymanRobin --- .github/workflows/golangci-lint.yaml | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/.github/workflows/golangci-lint.yaml b/.github/workflows/golangci-lint.yaml index 5bf7d673d18a..d45881701c9e 100644 --- a/.github/workflows/golangci-lint.yaml +++ b/.github/workflows/golangci-lint.yaml @@ -54,6 +54,7 @@ jobs: steps: - uses: actions/setup-go@v5 with: - go-version: "1.21.5" + go-version: "1.21" + check-latest: true - name: govulncheck uses: golang/govulncheck-action@v1 From ed6fb58bd19f7171abc311d09fc3bb8fa7ab00c5 Mon Sep 17 00:00:00 2001 From: parth-gr Date: Thu, 7 Mar 2024 20:31:45 +0530 Subject: [PATCH 25/65] ci: upgrade min k8s supported version to 1.25 combined upgrade of the minimum supported Kubernetes version to v1.25.16 and the max to k8s v1.29.2 Signed-off-by: parth-gr --- .github/workflows/canary-test-config/action.yaml | 2 +- .github/workflows/integration-test-helm-suite.yaml | 2 +- .github/workflows/integration-test-mgr-suite.yaml | 3 +-- .../integration-test-multi-cluster-suite.yaml | 2 +- .github/workflows/integration-test-object-suite.yaml | 2 +- .github/workflows/integration-test-smoke-suite.yaml | 2 +- .../workflows/integration-test-upgrade-suite.yaml | 4 ++-- .github/workflows/integration-tests-on-release.yaml | 12 ++++++------ .../Getting-Started/Prerequisites/prerequisites.md | 2 +- Documentation/Getting-Started/quickstart.md | 2 +- PendingReleaseNotes.md | 2 +- 11 files changed, 17 insertions(+), 18 deletions(-) diff --git a/.github/workflows/canary-test-config/action.yaml b/.github/workflows/canary-test-config/action.yaml index 49c032d9c417..cb1ed4d961e4 100644 --- a/.github/workflows/canary-test-config/action.yaml +++ b/.github/workflows/canary-test-config/action.yaml @@ -19,7 +19,7 @@ runs: - name: Setup Minikube shell: bash --noprofile --norc -eo pipefail -x {0} run: | - tests/scripts/github-action-helper.sh install_minikube_with_none_driver v1.29.1 +
tests/scripts/github-action-helper.sh install_minikube_with_none_driver v1.29.2 - name: install deps shell: bash --noprofile --norc -eo pipefail -x {0} diff --git a/.github/workflows/integration-test-helm-suite.yaml b/.github/workflows/integration-test-helm-suite.yaml index ab7d731c54ff..37b626daca64 100644 --- a/.github/workflows/integration-test-helm-suite.yaml +++ b/.github/workflows/integration-test-helm-suite.yaml @@ -25,7 +25,7 @@ jobs: strategy: fail-fast: false matrix: - kubernetes-versions: ["v1.24.17", "v1.29.1"] + kubernetes-versions: ["v1.25.16", "v1.29.2"] steps: - name: checkout uses: actions/checkout@v4 diff --git a/.github/workflows/integration-test-mgr-suite.yaml b/.github/workflows/integration-test-mgr-suite.yaml index 0b94b916aeea..c843add54988 100644 --- a/.github/workflows/integration-test-mgr-suite.yaml +++ b/.github/workflows/integration-test-mgr-suite.yaml @@ -24,7 +24,7 @@ jobs: strategy: fail-fast: false matrix: - kubernetes-versions: ["v1.29.1"] + kubernetes-versions: ["v1.29.2"] steps: - name: checkout uses: actions/checkout@v4 @@ -36,7 +36,6 @@ jobs: with: use-tmate: ${{ secrets.USE_TMATE }} - - name: setup cluster resources uses: ./.github/workflows/integration-test-config-latest-k8s with: diff --git a/.github/workflows/integration-test-multi-cluster-suite.yaml b/.github/workflows/integration-test-multi-cluster-suite.yaml index 8ce883ece1c2..fc16eaa2a9fc 100644 --- a/.github/workflows/integration-test-multi-cluster-suite.yaml +++ b/.github/workflows/integration-test-multi-cluster-suite.yaml @@ -25,7 +25,7 @@ jobs: strategy: fail-fast: false matrix: - kubernetes-versions: ["v1.29.1"] + kubernetes-versions: ["v1.29.2"] steps: - name: checkout uses: actions/checkout@v4 diff --git a/.github/workflows/integration-test-object-suite.yaml b/.github/workflows/integration-test-object-suite.yaml index 11a394f3620b..6190dc4f9c2e 100644 --- a/.github/workflows/integration-test-object-suite.yaml +++ 
b/.github/workflows/integration-test-object-suite.yaml @@ -25,7 +25,7 @@ jobs: strategy: fail-fast: false matrix: - kubernetes-versions: ["v1.24.17", "v1.29.1"] + kubernetes-versions: ["v1.25.16", "v1.29.2"] steps: - name: checkout uses: actions/checkout@v4 diff --git a/.github/workflows/integration-test-smoke-suite.yaml b/.github/workflows/integration-test-smoke-suite.yaml index a2acb9653b94..9366960cd296 100644 --- a/.github/workflows/integration-test-smoke-suite.yaml +++ b/.github/workflows/integration-test-smoke-suite.yaml @@ -25,7 +25,7 @@ jobs: strategy: fail-fast: false matrix: - kubernetes-versions: ["v1.24.17", "v1.29.1"] + kubernetes-versions: ["v1.25.16", "v1.29.2"] steps: - name: checkout uses: actions/checkout@v4 diff --git a/.github/workflows/integration-test-upgrade-suite.yaml b/.github/workflows/integration-test-upgrade-suite.yaml index e006af209144..3e55f368f109 100644 --- a/.github/workflows/integration-test-upgrade-suite.yaml +++ b/.github/workflows/integration-test-upgrade-suite.yaml @@ -25,7 +25,7 @@ jobs: strategy: fail-fast: false matrix: - kubernetes-versions: ["v1.24.17", "v1.29.1"] + kubernetes-versions: ["v1.25.16", "v1.29.2"] steps: - name: checkout uses: actions/checkout@v4 @@ -69,7 +69,7 @@ jobs: strategy: fail-fast: false matrix: - kubernetes-versions: ["v1.24.17", "v1.29.1"] + kubernetes-versions: ["v1.25.16", "v1.29.2"] steps: - name: checkout uses: actions/checkout@v4 diff --git a/.github/workflows/integration-tests-on-release.yaml b/.github/workflows/integration-tests-on-release.yaml index 0f15ed334bbe..567a2864e3f4 100644 --- a/.github/workflows/integration-tests-on-release.yaml +++ b/.github/workflows/integration-tests-on-release.yaml @@ -18,7 +18,7 @@ jobs: strategy: fail-fast: false matrix: - kubernetes-versions: ["v1.24.17", "v1.25.16", "v1.27.10", "v1.29.1"] + kubernetes-versions: ["v1.25.16", "v1.27.11", "v1.28.7", "v1.29.2"] steps: - name: checkout uses: actions/checkout@v4 @@ -58,7 +58,7 @@ jobs: strategy: fail-fast: 
false matrix: - kubernetes-versions: ["v1.24.17", "v1.25.16", "v1.27.10", "v1.29.1"] + kubernetes-versions: ["v1.25.16", "v1.27.11", "v1.28.7", "v1.29.2"] steps: - name: checkout uses: actions/checkout@v4 @@ -99,7 +99,7 @@ jobs: strategy: fail-fast: false matrix: - kubernetes-versions: ["v1.24.17", "v1.25.16", "v1.27.10", "v1.29.1"] + kubernetes-versions: ["v1.25.16", "v1.27.11", "v1.28.7", "v1.29.2"] steps: - name: checkout uses: actions/checkout@v4 @@ -137,7 +137,7 @@ jobs: strategy: fail-fast: false matrix: - kubernetes-versions: ["v1.24.17", "v1.25.16", "v1.27.10", "v1.29.1"] + kubernetes-versions: ["v1.25.16", "v1.27.11", "v1.28.7", "v1.29.2"] steps: - name: checkout uses: actions/checkout@v4 @@ -175,7 +175,7 @@ jobs: strategy: fail-fast: false matrix: - kubernetes-versions: ["v1.24.17", "v1.26.11", "v1.29.1"] + kubernetes-versions: ["v1.25.16", "v1.27.11", "v1.28.7", "v1.29.2"] steps: - name: checkout uses: actions/checkout@v4 @@ -216,7 +216,7 @@ jobs: strategy: fail-fast: false matrix: - kubernetes-versions: ["v1.24.17", "v1.29.1"] + kubernetes-versions: ["v1.25.16", "v1.29.2"] steps: - name: checkout uses: actions/checkout@v4 diff --git a/Documentation/Getting-Started/Prerequisites/prerequisites.md b/Documentation/Getting-Started/Prerequisites/prerequisites.md index 9a34d3cf9fbb..9281e6338b00 100644 --- a/Documentation/Getting-Started/Prerequisites/prerequisites.md +++ b/Documentation/Getting-Started/Prerequisites/prerequisites.md @@ -7,7 +7,7 @@ and Rook is granted the required privileges (see below for more information). ## Kubernetes Version -Kubernetes versions **v1.24** through **v1.29** are supported. +Kubernetes versions **v1.25** through **v1.29** are supported. 
## CPU Architecture diff --git a/Documentation/Getting-Started/quickstart.md b/Documentation/Getting-Started/quickstart.md index 380f3d0ef063..f71939cc54a8 100644 --- a/Documentation/Getting-Started/quickstart.md +++ b/Documentation/Getting-Started/quickstart.md @@ -12,7 +12,7 @@ This guide will walk through the basic setup of a Ceph cluster and enable K8s ap ## Kubernetes Version -Kubernetes versions **v1.24** through **v1.29** are supported. +Kubernetes versions **v1.25** through **v1.29** are supported. ## CPU Architecture diff --git a/PendingReleaseNotes.md b/PendingReleaseNotes.md index bf11c3fb0e90..4ca3fbd82e6c 100644 --- a/PendingReleaseNotes.md +++ b/PendingReleaseNotes.md @@ -10,6 +10,6 @@ read affinity setting in cephCluster CR (CSIDriverOptions section) in [PR](https - Impact: This results in the unintended network configurations, with pods using the host networking instead of pod networking. ## Features -- Kubernetes versions **v1.24** through **v1.29** are supported. +- Kubernetes versions **v1.25** through **v1.29** are supported. - Ceph daemon pods using the `default` service account now use a new `rook-ceph-default` service account. - The feature support for VolumeSnapshotGroup has been added to the RBD and CephFS CSI driver. From 2e8468a47c183f2794326bcb8d27ef4d6e153c6b Mon Sep 17 00:00:00 2001 From: Travis Nielsen Date: Thu, 7 Mar 2024 16:31:09 -0700 Subject: [PATCH 26/65] core: upgrade test from 1.13 to master The upgrade test should always upgrade from the previous minor release to the latest master. With 1.14 releasing soon, now we upgrade from 1.13 to master, to confirm if there are any upgrade issues to 1.14. 
Signed-off-by: Travis Nielsen --- tests/framework/installer/ceph_manifests.go | 2 +- tests/framework/installer/ceph_manifests_previous.go | 2 +- tests/integration/ceph_upgrade_test.go | 12 ++++++------ 3 files changed, 8 insertions(+), 8 deletions(-) diff --git a/tests/framework/installer/ceph_manifests.go b/tests/framework/installer/ceph_manifests.go index 96becfc95705..471d79e763ac 100644 --- a/tests/framework/installer/ceph_manifests.go +++ b/tests/framework/installer/ceph_manifests.go @@ -71,7 +71,7 @@ func NewCephManifests(settings *TestCephSettings) CephManifests { switch settings.RookVersion { case LocalBuildTag: return &CephManifestsMaster{settings} - case Version1_12: + case Version1_13: return &CephManifestsPreviousVersion{settings, &CephManifestsMaster{settings}} } panic(fmt.Errorf("unrecognized ceph manifest version: %s", settings.RookVersion)) diff --git a/tests/framework/installer/ceph_manifests_previous.go b/tests/framework/installer/ceph_manifests_previous.go index ff41757d7a47..3656659cea99 100644 --- a/tests/framework/installer/ceph_manifests_previous.go +++ b/tests/framework/installer/ceph_manifests_previous.go @@ -24,7 +24,7 @@ import ( const ( // The version from which the upgrade test will start - Version1_12 = "v1.12.7" + Version1_13 = "v1.13.6" ) // CephManifestsPreviousVersion wraps rook yaml definitions diff --git a/tests/integration/ceph_upgrade_test.go b/tests/integration/ceph_upgrade_test.go index 0383b65b389a..98db0ee65e86 100644 --- a/tests/integration/ceph_upgrade_test.go +++ b/tests/integration/ceph_upgrade_test.go @@ -88,7 +88,7 @@ func (s *UpgradeSuite) baseSetup(useHelm bool, initialCephVersion v1.CephVersion Mons: 1, EnableDiscovery: true, SkipClusterCleanup: true, - RookVersion: installer.Version1_12, + RookVersion: installer.Version1_13, CephVersion: initialCephVersion, } @@ -127,9 +127,9 @@ func (s *UpgradeSuite) testUpgrade(useHelm bool, initialCephVersion v1.CephVersi _ = 
s.helper.BucketClient.DeleteBucketStorageClass(s.namespace, installer.ObjectStoreName, installer.ObjectStoreSCName, "Delete") // - // Upgrade Rook from v1.12 to master + // Upgrade Rook from v1.13 to master // - logger.Infof("*** UPGRADING ROOK FROM %s to master ***", installer.Version1_12) + logger.Infof("*** UPGRADING ROOK FROM %s to master ***", installer.Version1_13) s.gatherLogs(s.settings.OperatorNamespace, "_before_master_upgrade") s.upgradeToMaster() @@ -138,7 +138,7 @@ func (s *UpgradeSuite) testUpgrade(useHelm bool, initialCephVersion v1.CephVersi err := s.installer.WaitForToolbox(s.namespace) assert.NoError(s.T(), err) - logger.Infof("Done with automatic upgrade from %s to master", installer.Version1_12) + logger.Infof("Done with automatic upgrade from %s to master", installer.Version1_13) newFile := "post-upgrade-previous-to-master-file" s.verifyFilesAfterUpgrade(newFile, rbdFilesToRead, cephfsFilesToRead) rbdFilesToRead = append(rbdFilesToRead, newFile) @@ -150,7 +150,7 @@ func (s *UpgradeSuite) testUpgrade(useHelm bool, initialCephVersion v1.CephVersi // do not need retry b/c the OBC controller runs parallel to Rook-Ceph orchestration assert.True(s.T(), s.helper.BucketClient.CheckOBC(obcName, "bound")) - logger.Infof("Verified upgrade from %s to master", installer.Version1_12) + logger.Infof("Verified upgrade from %s to master", installer.Version1_13) // SKIP the Ceph version upgrades for the helm test if s.settings.UseHelm { @@ -286,7 +286,7 @@ func (s *UpgradeSuite) deployClusterforUpgrade(objectUserID, preFilename string) require.True(s.T(), created) // verify that we're actually running the right pre-upgrade image - s.verifyOperatorImage(installer.Version1_12) + s.verifyOperatorImage(installer.Version1_13) assert.NoError(s.T(), s.k8sh.WriteToPod("", rbdPodName, preFilename, simpleTestMessage)) assert.NoError(s.T(), s.k8sh.ReadFromPod("", rbdPodName, preFilename, simpleTestMessage)) From b0989ee1d281508232d7bde987903dde4c14126c Mon Sep 17 00:00:00 
2001 From: Jiffin Tony Thottan Date: Mon, 4 Dec 2023 19:17:43 +0530 Subject: [PATCH 27/65] object: add rgw dns names The virtual hosting for bucket is provided with help of `rgw_dns_name` Signed-off-by: Jiffin Tony Thottan --- .../Object-Storage/ceph-object-store-crd.md | 10 ++ Documentation/CRDs/specification.md | 63 +++++++ .../Object-Storage-RGW/object-storage.md | 3 + PendingReleaseNotes.md | 1 + .../charts/rook-ceph/templates/resources.yaml | 9 + deploy/examples/crds.yaml | 9 + deploy/examples/object.yaml | 5 + pkg/apis/ceph.rook.io/v1/types.go | 16 ++ .../ceph.rook.io/v1/zz_generated.deepcopy.go | 26 +++ pkg/operator/ceph/object/rgw.go | 19 ++- pkg/operator/ceph/object/spec.go | 96 +++++++++++ pkg/operator/ceph/object/spec_test.go | 158 ++++++++++++++++++ 12 files changed, 409 insertions(+), 6 deletions(-) diff --git a/Documentation/CRDs/Object-Storage/ceph-object-store-crd.md b/Documentation/CRDs/Object-Storage/ceph-object-store-crd.md index 83c6fffc2ca1..1575ad3020b5 100644 --- a/Documentation/CRDs/Object-Storage/ceph-object-store-crd.md +++ b/Documentation/CRDs/Object-Storage/ceph-object-store-crd.md @@ -64,6 +64,10 @@ spec: # memory: "1024Mi" #zone: #name: zone-a + #hosting: + # dnsName: + # - "mystore.example.com" + # - "mystore.example.org" ``` ## Object Store Settings @@ -149,6 +153,12 @@ The [zone](../../Storage-Configuration/Object-Storage-RGW/ceph-object-multisite. * `name`: the name of the ceph-object-zone the object store will be in. +## Hosting Settings + +The hosting settings allow you to host buckets in the object store on a custom DNS name, enabling virtual-hosted-style access to buckets similar to AWS S3 (https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html). + +* `dnsNames`: a list of DNS names to host buckets on. These names need to valid according RFC-1123. Otherwise it will fail. 
Each endpoint requires wildcard support like [ingress loadbalancer](https://kubernetes.io/docs/concepts/services-networking/ingress/#hostname-wildcards). Do not include the wildcard itself in the list of hostnames (e.g., use "mystore.example.com" instead of "*.mystore.example.com"). Add all of the hostnames, such as OpenShift routes, otherwise access will be denied. If a hostname does not support wildcards, virtual-host-style access will not work for that hostname. By default, the CephObjectStore service endpoint and any custom endpoints from the CephObjectZone are included. The feature is supported only for Ceph v18 and later versions. + ## Runtime settings ### MIME types diff --git a/Documentation/CRDs/specification.md b/Documentation/CRDs/specification.md index c88f883d1172..f3795b8225c8 100644 --- a/Documentation/CRDs/specification.md +++ b/Documentation/CRDs/specification.md @@ -1955,6 +1955,20 @@ to be created in their own namespace, where neither OBCs nor COSI is being used to create buckets. The default is empty.

+&lt;tr&gt;
+&lt;td&gt;
+&lt;code&gt;hosting&lt;/code&gt;&lt;br/&gt;
+&lt;em&gt;
+&lt;a href="#ceph.rook.io/v1.ObjectStoreHostingSpec"&gt;
+ObjectStoreHostingSpec
+&lt;/a&gt;
+&lt;/em&gt;
+&lt;/td&gt;
+&lt;td&gt;
+&lt;em&gt;(Optional)&lt;/em&gt;
+&lt;p&gt;Hosting settings for the object store&lt;/p&gt;
+&lt;/td&gt;
+&lt;/tr&gt;
@@ -8938,6 +8952,41 @@ PullSpec
+&lt;h3 id="ceph.rook.io/v1.ObjectStoreHostingSpec"&gt;ObjectStoreHostingSpec
+&lt;/h3&gt;
+&lt;p&gt;
+(&lt;em&gt;Appears on:&lt;/em&gt;
+&lt;a href="#ceph.rook.io/v1.ObjectStoreSpec"&gt;ObjectStoreSpec&lt;/a&gt;)
+&lt;/p&gt;
+&lt;div&gt;
+&lt;p&gt;ObjectStoreHostingSpec represents the hosting settings for the object store&lt;/p&gt;
+&lt;/div&gt;
+&lt;table&gt;
+&lt;thead&gt;
+&lt;tr&gt;
+&lt;th&gt;Field&lt;/th&gt;
+&lt;th&gt;Description&lt;/th&gt;
+&lt;/tr&gt;
+&lt;/thead&gt;
+&lt;tbody&gt;
+&lt;tr&gt;
+&lt;td&gt;
+&lt;code&gt;dnsNames&lt;/code&gt;&lt;br/&gt;
+&lt;em&gt;
+[]string
+&lt;/em&gt;
+&lt;/td&gt;
+&lt;td&gt;
+&lt;em&gt;(Optional)&lt;/em&gt;
+&lt;p&gt;A list of DNS names in which bucket can be accessed via virtual host path. These names need to valid according RFC-1123.
+Each domain requires wildcard support like ingress loadbalancer.
+Do not include the wildcard itself in the list of hostnames (e.g. use “mystore.example.com” instead of “*.mystore.example.com”).
+Add all hostnames including user-created Kubernetes Service endpoints to the list.
+CephObjectStore Service Endpoints and CephObjectZone customEndpoints are automatically added to the list.
+The feature is supported only for Ceph v18 and later versions.&lt;/p&gt;
+&lt;/td&gt;
+&lt;/tr&gt;
+&lt;/tbody&gt;
+&lt;/table&gt;
&lt;h3 id="ceph.rook.io/v1.ObjectStoreSecuritySpec"&gt;ObjectStoreSecuritySpec
&lt;/h3&gt;
@@ -9112,6 +9161,20 @@ to be created in their own namespace, where neither OBCs nor COSI is being used to create buckets. The default is empty.
+&lt;tr&gt;
+&lt;td&gt;
+&lt;code&gt;hosting&lt;/code&gt;&lt;br/&gt;
+&lt;em&gt;
+&lt;a href="#ceph.rook.io/v1.ObjectStoreHostingSpec"&gt;
+ObjectStoreHostingSpec
+&lt;/a&gt;
+&lt;/em&gt;
+&lt;/td&gt;
+&lt;td&gt;
+&lt;em&gt;(Optional)&lt;/em&gt;
+&lt;p&gt;Hosting settings for the object store&lt;/p&gt;
+&lt;/td&gt;
+&lt;/tr&gt;
&lt;h3 id="ceph.rook.io/v1.ObjectStoreStatus"&gt;ObjectStoreStatus&lt;/h3&gt; diff --git a/Documentation/Storage-Configuration/Object-Storage-RGW/object-storage.md b/Documentation/Storage-Configuration/Object-Storage-RGW/object-storage.md index 01b2bd6df9c5..41b120fcb310 100644 --- a/Documentation/Storage-Configuration/Object-Storage-RGW/object-storage.md +++ b/Documentation/Storage-Configuration/Object-Storage-RGW/object-storage.md @@ -180,6 +180,9 @@ export AWS_ACCESS_KEY_ID=$(kubectl -n default get secret ceph-bucket -o jsonpath export AWS_SECRET_ACCESS_KEY=$(kubectl -n default get secret ceph-bucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode) ``` +If any `hosting.dnsNames` are set in the `CephObjectStore` CRD, S3 clients can access buckets in [virtual-host-style](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html). +Otherwise, S3 clients must be configured to use path-style access. + ## Consume the Object Storage Now that you have the object store configured and a bucket created, you can consume the diff --git a/PendingReleaseNotes.md b/PendingReleaseNotes.md index 4ca3fbd82e6c..8379ded52360 100644 --- a/PendingReleaseNotes.md +++ b/PendingReleaseNotes.md @@ -13,3 +13,4 @@ read affinity setting in cephCluster CR (CSIDriverOptions section) in [PR](https - Kubernetes versions **v1.25** through **v1.29** are supported. - Ceph daemon pods using the `default` service account now use a new `rook-ceph-default` service account. - The feature support for VolumeSnapshotGroup has been added to the RBD and CephFS CSI driver. +- Support for virtual-host-style access to S3 buckets in the CephObjectStore.
diff --git a/deploy/charts/rook-ceph/templates/resources.yaml b/deploy/charts/rook-ceph/templates/resources.yaml index 572c5e57a6e4..77336f3d1475 100644 --- a/deploy/charts/rook-ceph/templates/resources.yaml +++ b/deploy/charts/rook-ceph/templates/resources.yaml @@ -9880,6 +9880,15 @@ spec: type: object type: object type: object + hosting: + description: Hosting settings for the object store + properties: + dnsNames: + description: A list of DNS names in which bucket can be accessed via virtual host path. These names need to valid according RFC-1123. Each domain requires wildcard support like ingress loadbalancer. Do not include the wildcard itself in the list of hostnames (e.g. use "mystore.example.com" instead of "*.mystore.example.com"). Add all hostnames including user-created Kubernetes Service endpoints to the list. CephObjectStore Service Endpoints and CephObjectZone customEndpoints are automatically added to the list. The feature is supported only for Ceph v18 and later versions. + items: + type: string + type: array + type: object metadataPool: description: The metadata pool settings nullable: true diff --git a/deploy/examples/crds.yaml b/deploy/examples/crds.yaml index e08ce43bb3ba..6889c429ed8c 100644 --- a/deploy/examples/crds.yaml +++ b/deploy/examples/crds.yaml @@ -9871,6 +9871,15 @@ spec: type: object type: object type: object + hosting: + description: Hosting settings for the object store + properties: + dnsNames: + description: A list of DNS names in which bucket can be accessed via virtual host path. These names need to valid according RFC-1123. Each domain requires wildcard support like ingress loadbalancer. Do not include the wildcard itself in the list of hostnames (e.g. use "mystore.example.com" instead of "*.mystore.example.com"). Add all hostnames including user-created Kubernetes Service endpoints to the list. CephObjectStore Service Endpoints and CephObjectZone customEndpoints are automatically added to the list. 
The feature is supported only for Ceph v18 and later versions. + items: + type: string + type: array + type: object metadataPool: description: The metadata pool settings nullable: true diff --git a/deploy/examples/object.yaml b/deploy/examples/object.yaml index 3755fb0690c4..ba22f3a94f15 100644 --- a/deploy/examples/object.yaml +++ b/deploy/examples/object.yaml @@ -108,6 +108,11 @@ spec: disabled: false readinessProbe: disabled: false + # hosting: + # The list of subdomain names for virtual hosting of buckets. + # dnsNames: + # - "mystore.example.com" + # If a CephObjectStoreUser is created in a namespace other than the Rook cluster namespace, # the namespace must be added to the list of allowed namespaces, or specify "*" to allow all namespaces. # allowUsersInNamespaces: diff --git a/pkg/apis/ceph.rook.io/v1/types.go b/pkg/apis/ceph.rook.io/v1/types.go index 3b7d93c97d36..d0c661d5a3f2 100755 --- a/pkg/apis/ceph.rook.io/v1/types.go +++ b/pkg/apis/ceph.rook.io/v1/types.go @@ -1463,6 +1463,10 @@ type ObjectStoreSpec struct { // is being used to create buckets. The default is empty. // +optional AllowUsersInNamespaces []string `json:"allowUsersInNamespaces,omitempty"` + + // Hosting settings for the object store + // +optional + Hosting *ObjectStoreHostingSpec `json:"hosting,omitempty"` } // ObjectHealthCheckSpec represents the health check of an object store @@ -1621,6 +1625,18 @@ type ObjectEndpoints struct { Secure []string `json:"secure"` } +// ObjectStoreHostingSpec represents the hosting settings for the object store +type ObjectStoreHostingSpec struct { + // A list of DNS names in which bucket can be accessed via virtual host path. These names need to valid according RFC-1123. + // Each domain requires wildcard support like ingress loadbalancer. + // Do not include the wildcard itself in the list of hostnames (e.g. use "mystore.example.com" instead of "*.mystore.example.com"). 
+ // Add all hostnames including user-created Kubernetes Service endpoints to the list. + // CephObjectStore Service Endpoints and CephObjectZone customEndpoints are automatically added to the list. + // The feature is supported only for Ceph v18 and later versions. + // +optional + DNSNames []string `json:"dnsNames,omitempty"` +} + // +genclient // +genclient:noStatus // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object diff --git a/pkg/apis/ceph.rook.io/v1/zz_generated.deepcopy.go b/pkg/apis/ceph.rook.io/v1/zz_generated.deepcopy.go index d70a304f2044..fee2022a0bf4 100644 --- a/pkg/apis/ceph.rook.io/v1/zz_generated.deepcopy.go +++ b/pkg/apis/ceph.rook.io/v1/zz_generated.deepcopy.go @@ -3474,6 +3474,27 @@ func (in *ObjectRealmSpec) DeepCopy() *ObjectRealmSpec { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ObjectStoreHostingSpec) DeepCopyInto(out *ObjectStoreHostingSpec) { + *out = *in + if in.DNSNames != nil { + in, out := &in.DNSNames, &out.DNSNames + *out = make([]string, len(*in)) + copy(*out, *in) + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ObjectStoreHostingSpec. +func (in *ObjectStoreHostingSpec) DeepCopy() *ObjectStoreHostingSpec { + if in == nil { + return nil + } + out := new(ObjectStoreHostingSpec) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *ObjectStoreSecuritySpec) DeepCopyInto(out *ObjectStoreSecuritySpec) { *out = *in @@ -3510,6 +3531,11 @@ func (in *ObjectStoreSpec) DeepCopyInto(out *ObjectStoreSpec) { *out = make([]string, len(*in)) copy(*out, *in) } + if in.Hosting != nil { + in, out := &in.Hosting, &out.Hosting + *out = new(ObjectStoreHostingSpec) + (*in).DeepCopyInto(*out) + } return } diff --git a/pkg/operator/ceph/object/rgw.go b/pkg/operator/ceph/object/rgw.go index 5e2f3d637167..f41ecff1f012 100644 --- a/pkg/operator/ceph/object/rgw.go +++ b/pkg/operator/ceph/object/rgw.go @@ -341,18 +341,25 @@ func GetStableDomainName(s *cephv1.CephObjectStore) string { } func getDomainName(s *cephv1.CephObjectStore, returnRandomDomainIfMultiple bool) string { + endpoints := []string{} if s.Spec.IsExternal() { // if the store is external, pick a random endpoint to use. if the endpoint is down, this // reconcile may fail, but a future reconcile will eventually pick a different endpoint to try - endpoints := s.Spec.Gateway.ExternalRgwEndpoints - idx := 0 - if returnRandomDomainIfMultiple { - idx = rand.Intn(len(endpoints)) //nolint:gosec // G404: cryptographically weak RNG is fine here + for _, e := range s.Spec.Gateway.ExternalRgwEndpoints { + endpoints = append(endpoints, e.String()) } - return endpoints[idx].String() + } else if s.Spec.Hosting != nil && len(s.Spec.Hosting.DNSNames) > 0 { + // if the store is internal and has DNS names, pick a random DNS name to use + endpoints = s.Spec.Hosting.DNSNames + } else { + return domainNameOfService(s) } - return domainNameOfService(s) + idx := 0 + if returnRandomDomainIfMultiple { + idx = rand.Intn(len(endpoints)) //nolint:gosec // G404: cryptographically weak RNG is fine here + } + return endpoints[idx] } func domainNameOfService(s *cephv1.CephObjectStore) string { diff --git a/pkg/operator/ceph/object/spec.go b/pkg/operator/ceph/object/spec.go index 95e3f81ea896..60cab390f175 100644 --- a/pkg/operator/ceph/object/spec.go +++ 
b/pkg/operator/ceph/object/spec.go @@ -20,6 +20,7 @@ import ( "bytes" _ "embed" "fmt" + "net/url" "path" "strings" "text/template" @@ -37,6 +38,7 @@ import ( v1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/util/intstr" + "k8s.io/apimachinery/pkg/util/validation" ) const ( @@ -418,6 +420,14 @@ func (c *clusterConfig) makeDaemonContainer(rgwConfig *rgwConfig) (v1.Container, vaultVolMount := v1.VolumeMount{Name: rgwVaultVolumeName, MountPath: rgwVaultDirName} container.VolumeMounts = append(container.VolumeMounts, vaultVolMount) } + + hostingOptions, err := c.addDNSNamesToRGWServer() + if err != nil { + return v1.Container{}, err + } + if hostingOptions != "" { + container.Args = append(container.Args, hostingOptions) + } return container, nil } @@ -912,3 +922,89 @@ func renderProbe(cfg rgwProbeConfig) (string, error) { return writer.String(), nil } + +func (c *clusterConfig) addDNSNamesToRGWServer() (string, error) { + if (c.store.Spec.Hosting == nil) || len(c.store.Spec.Hosting.DNSNames) <= 0 { + return "", nil + } + if !c.clusterInfo.CephVersion.IsAtLeastReef() { + return "", errors.New("rgw dns names are supported from ceph v18 onwards") + } + + // add default RGW service name to dns names + dnsNames := c.store.Spec.Hosting.DNSNames + dnsNames = append(dnsNames, domainNameOfService(c.store)) + + // add custom endpoints from zone spec if exists + if c.store.Spec.Zone.Name != "" { + zone, err := c.context.RookClientset.CephV1().CephObjectZones(c.store.Namespace).Get(c.clusterInfo.Context, c.store.Spec.Zone.Name, metav1.GetOptions{}) + if err != nil { + return "", err + } + dnsNames = append(dnsNames, zone.Spec.CustomEndpoints...) 
+ } + + // validate dns names + var hostNames []string + for _, dnsName := range dnsNames { + hostName, err := GetHostnameFromEndpoint(dnsName) + if err != nil { + return "", errors.Wrap(err, + "failed to interpret endpoint from one of the following sources: CephObjectStore.spec.hosting.dnsNames, CephObjectZone.spec.customEndpoints") + } + hostNames = append(hostNames, hostName) + } + + // remove duplicate host names + checkDuplicate := make(map[string]bool) + removeDuplicateHostNames := []string{} + for _, hostName := range hostNames { + if _, ok := checkDuplicate[hostName]; !ok { + checkDuplicate[hostName] = true + removeDuplicateHostNames = append(removeDuplicateHostNames, hostName) + } + } + + return cephconfig.NewFlag("rgw dns name", strings.Join(removeDuplicateHostNames, ",")), nil +} + +func GetHostnameFromEndpoint(endpoint string) (string, error) { + if len(endpoint) == 0 { + return "", fmt.Errorf("endpoint cannot be empty string") + } + + // if endpoint doesn't end in '/', Ceph adds it + // Rook can do this also to get more accurate error results from this function + if !strings.HasSuffix(endpoint, "/") { + endpoint = endpoint + "/" + } + + // url.Parse() requires a protocol to parse the host name correctly + if !strings.Contains(endpoint, "://") { + endpoint = "http://" + endpoint + } + + // "net/url".Parse() assumes that a URL is a "path" with optional things surrounding it. + // For Ceph RGWs, we assume an endpoint is a "hostname" with optional things surrounding it. + // Because of this difference in fundamental assumption, Rook needs to adjust some endpoints + // used as input to url.Parse() to allow the function to extract a hostname reliably. 
Also, + // Rook needs to look at several parts of the url.Parse() output to identify more failure scenarios + parsedURL, err := url.Parse(endpoint) + if err != nil { + return "", err + } + + // error in this case: url.Parse("https://http://hostname") parses without error with `Host = "http:"` + // also catches issue where user adds colon but no port number after + if strings.HasSuffix(parsedURL.Host, ":") { + return "", fmt.Errorf("host %q parsed from endpoint %q has invalid colon (:) placement", parsedURL.Host, endpoint) + } + + hostname := parsedURL.Hostname() + dnsErrs := validation.IsDNS1123Subdomain(parsedURL.Hostname()) + if len(dnsErrs) > 0 { + return "", fmt.Errorf("hostname %q parsed from endpoint %q is not a valid DNS-1123 subdomain: %v", hostname, endpoint, strings.Join(dnsErrs, ", ")) + } + + return hostname, nil +} diff --git a/pkg/operator/ceph/object/spec_test.go b/pkg/operator/ceph/object/spec_test.go index d9583624c03f..5665fc0902f2 100644 --- a/pkg/operator/ceph/object/spec_test.go +++ b/pkg/operator/ceph/object/spec_test.go @@ -23,6 +23,7 @@ import ( "github.com/pkg/errors" cephv1 "github.com/rook/rook/pkg/apis/ceph.rook.io/v1" + rookclient "github.com/rook/rook/pkg/client/clientset/versioned/fake" "github.com/rook/rook/pkg/clusterd" "github.com/rook/rook/pkg/daemon/ceph/client" clienttest "github.com/rook/rook/pkg/daemon/ceph/client/test" @@ -876,3 +877,160 @@ func TestAWSServerSideEncryption(t *testing.T) { assert.True(t, checkRGWOptions(rgwContainer.Args, c.sseS3VaultTLSOptions(true))) }) } + +func TestAddDNSNamesToRGWPodSpec(t *testing.T) { + setupTest := func(zoneName string, cephvers cephver.CephVersion, dnsNames, customEndpoints []string) *clusterConfig { + store := simpleStore() + info := clienttest.CreateTestClusterInfo(1) + data := cephconfig.NewStatelessDaemonDataPathMap(cephconfig.RgwType, "default", "rook-ceph", "/var/lib/rook/") + ctx := &clusterd.Context{RookClientset: rookclient.NewSimpleClientset()} + info.CephVersion = cephvers 
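For readers following the parsing rules spelled out in the `GetHostnameFromEndpoint` comments above (append `/`, default the scheme to `http://`, then extract and validate the host), here is a rough Python cross-check of the same normalization. It is an illustrative sketch, not code Rook ships: `hostname_from_endpoint` and its error strings are invented for the example, and Python's `urllib.parse` differs from Go's `net/url` in some corner cases, so ports and case are checked by hand here.

```python
import re
from urllib.parse import urlparse

# RFC 1123 subdomain shape (lowercase alphanumerics and '-', dot-separated),
# mirroring k8s.io/apimachinery's validation.IsDNS1123Subdomain
DNS1123_SUBDOMAIN = re.compile(
    r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$"
)


def hostname_from_endpoint(endpoint: str) -> str:
    if not endpoint:
        raise ValueError("endpoint cannot be empty string")
    # if the endpoint doesn't end in '/', append it (Ceph does the same)
    if not endpoint.endswith("/"):
        endpoint += "/"
    # a scheme is required for the URL parser to treat the input as a host
    if "://" not in endpoint:
        endpoint = "http://" + endpoint
    netloc = urlparse(endpoint).netloc
    hostinfo = netloc.rsplit("@", 1)[-1]  # drop any user:password@ prefix
    host, colon, port = hostinfo.partition(":")
    # catches a trailing ':' with no port and malformed ports like "80:80"
    if colon and not port.isdigit():
        raise ValueError(f"invalid port {port!r} parsed from endpoint {endpoint!r}")
    if not DNS1123_SUBDOMAIN.match(host):
        raise ValueError(f"hostname {host!r} is not a valid DNS-1123 subdomain")
    return host
```

The uppercase and multi-port rejections match the behavior the Go test table below exercises.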
+ store.Spec.Hosting = &cephv1.ObjectStoreHostingSpec{ + DNSNames: dnsNames, + } + if zoneName != "" { + store.Spec.Zone.Name = zoneName + zone := &cephv1.CephObjectZone{ + ObjectMeta: metav1.ObjectMeta{ + Name: zoneName, + Namespace: store.Namespace, + }, + Spec: cephv1.ObjectZoneSpec{}, + } + if len(customEndpoints) > 0 { + zone.Spec.CustomEndpoints = customEndpoints + } + _, err := ctx.RookClientset.CephV1().CephObjectZones(store.Namespace).Create(context.TODO(), zone, metav1.CreateOptions{}) + assert.NoError(t, err) + } + return &clusterConfig{ + clusterInfo: info, + store: store, + context: ctx, + rookVersion: "rook/rook:myversion", + clusterSpec: &cephv1.ClusterSpec{ + CephVersion: cephv1.CephVersionSpec{Image: "quay.io/ceph/ceph:v18"}, + }, + DataPathMap: data, + } + } + tests := []struct { + name string + dnsNames []string + expectedDNSArg string + cephvers cephver.CephVersion + zoneName string + CustomEndpoints []string + wantErr bool + }{ + {"no dns names ceph v18", []string{}, "", cephver.CephVersion{Major: 18, Minor: 0, Extra: 0}, "", []string{}, false}, + {"no dns names with zone ceph v18", []string{}, "", cephver.CephVersion{Major: 18, Minor: 0, Extra: 0}, "myzone", []string{}, false}, + {"no dns names with zone and custom endpoints ceph v18", []string{}, "", cephver.CephVersion{Major: 18, Minor: 0, Extra: 0}, "myzone", []string{"http://my.custom.endpoint1:80", "http://my.custom.endpoint2:80"}, false}, + {"one dns name ceph v18", []string{"my.dns.name"}, "--rgw-dns-name=my.dns.name,rook-ceph-rgw-default.mycluster.svc", cephver.CephVersion{Major: 18, Minor: 0, Extra: 0}, "", []string{}, false}, + {"multiple dns names ceph v18", []string{"my.dns.name1", "my.dns.name2"}, "--rgw-dns-name=my.dns.name1,my.dns.name2,rook-ceph-rgw-default.mycluster.svc", cephver.CephVersion{Major: 18, Minor: 0, Extra: 0}, "", []string{}, false}, + {"duplicate dns names ceph v18", []string{"my.dns.name1", "my.dns.name2", "my.dns.name2"}, 
"--rgw-dns-name=my.dns.name1,my.dns.name2,rook-ceph-rgw-default.mycluster.svc", cephver.CephVersion{Major: 18, Minor: 0, Extra: 0}, "", []string{}, false}, + {"invalid dns name ceph v18", []string{"!my.invalid-dns.com"}, "", cephver.CephVersion{Major: 18, Minor: 0, Extra: 0}, "", []string{}, true}, + {"mixed invalid and valid dns names ceph v18", []string{"my.dns.name", "!my.invalid-dns.name"}, "", cephver.CephVersion{Major: 18, Minor: 0, Extra: 0}, "", []string{}, true}, + {"dns name with zone without custom endpoints ceph v18", []string{"my.dns.name1", "my.dns.name2"}, "--rgw-dns-name=my.dns.name1,my.dns.name2,rook-ceph-rgw-default.mycluster.svc", cephver.CephVersion{Major: 18, Minor: 0, Extra: 0}, "myzone", []string{}, false}, + {"dns name with zone with custom endpoints ceph v18", []string{"my.dns.name1", "my.dns.name2"}, "--rgw-dns-name=my.dns.name1,my.dns.name2,rook-ceph-rgw-default.mycluster.svc,my.custom.endpoint1,my.custom.endpoint2", cephver.CephVersion{Major: 18, Minor: 0, Extra: 0}, "myzone", []string{"http://my.custom.endpoint1:80", "http://my.custom.endpoint2:80"}, false}, + {"dns name with zone with custom invalid endpoints ceph v18", []string{"my.dns.name1", "my.dns.name2"}, "", cephver.CephVersion{Major: 18, Minor: 0, Extra: 0}, "myzone", []string{"http://my.custom.endpoint:80", "http://!my.invalid-custom.endpoint:80"}, true}, + {"dns name with zone with mixed invalid and valid dnsnames/custom endpoint ceph v18", []string{"my.dns.name", "!my.dns.name"}, "", cephver.CephVersion{Major: 18, Minor: 0, Extra: 0}, "myzone", []string{"http://my.custom.endpoint1:80", "http://my.custom.endpoint2:80:80"}, true}, + {"no dns names ceph v17", []string{}, "", cephver.CephVersion{Major: 17, Minor: 0, Extra: 0}, "", []string{}, false}, + {"one dns name ceph v17", []string{"my.dns.name"}, "", cephver.CephVersion{Major: 17, Minor: 0, Extra: 0}, "", []string{}, true}, + {"multiple dns names ceph v17", []string{"my.dns.name1", "my.dns.name2"}, "", 
cephver.CephVersion{Major: 17, Minor: 0, Extra: 0}, "", []string{}, true}, + {"duplicate dns names ceph v17", []string{"my.dns.name1", "my.dns.name2", "my.dns.name2"}, "", cephver.CephVersion{Major: 17, Minor: 0, Extra: 0}, "", []string{}, true}, + {"invalid dns name ceph v17", []string{"!my.invalid-dns.name"}, "", cephver.CephVersion{Major: 17, Minor: 0, Extra: 0}, "", []string{}, true}, + {"mixed invalid and valid dns names ceph v17", []string{"my.dns.name", "!my.invalid-dns.name"}, "", cephver.CephVersion{Major: 17, Minor: 0, Extra: 0}, "", []string{}, true}, + {"no dns names ceph v19", []string{}, "", cephver.CephVersion{Major: 19, Minor: 0, Extra: 0}, "", []string{}, false}, + {"no dns names with zone ceph v19", []string{}, "", cephver.CephVersion{Major: 19, Minor: 0, Extra: 0}, "myzone", []string{}, false}, + {"no dns names with zone and custom endpoints ceph v19", []string{}, "", cephver.CephVersion{Major: 19, Minor: 0, Extra: 0}, "myzone", []string{"http://my.custom.endpoint1:80", "http://my.custom.endpoint2:80"}, false}, + {"one dns name ceph v19", []string{"my.dns.name"}, "--rgw-dns-name=my.dns.name,rook-ceph-rgw-default.mycluster.svc", cephver.CephVersion{Major: 19, Minor: 0, Extra: 0}, "", []string{}, false}, + {"multiple dns names ceph v19", []string{"my.dns.name1", "my.dns.name2"}, "--rgw-dns-name=my.dns.name1,my.dns.name2,rook-ceph-rgw-default.mycluster.svc", cephver.CephVersion{Major: 19, Minor: 0, Extra: 0}, "", []string{}, false}, + {"duplicate dns names ceph v19", []string{"my.dns.name1", "my.dns.name2", "my.dns.name2"}, "--rgw-dns-name=my.dns.name1,my.dns.name2,rook-ceph-rgw-default.mycluster.svc", cephver.CephVersion{Major: 19, Minor: 0, Extra: 0}, "", []string{}, false}, + {"invalid dns name ceph v19", []string{"!my.invalid-dns.name"}, "", cephver.CephVersion{Major: 19, Minor: 0, Extra: 0}, "", []string{}, true}, + {"mixed invalid and valid dns names ceph v19", []string{"my.dns.name", "!my.invalid-dns.name"}, "", cephver.CephVersion{Major: 19, 
Minor: 0, Extra: 0}, "", []string{}, true}, + {"dns name with zone without custom endpoints ceph v19", []string{"my.dns.name1", "my.dns.name2"}, "--rgw-dns-name=my.dns.name1,my.dns.name2,rook-ceph-rgw-default.mycluster.svc", cephver.CephVersion{Major: 19, Minor: 0, Extra: 0}, "myzone", []string{}, false}, + {"dns name with zone with custom endpoints ceph v19", []string{"my.dns.name1", "my.dns.name2"}, "--rgw-dns-name=my.dns.name1,my.dns.name2,rook-ceph-rgw-default.mycluster.svc,my.custom.endpoint1,my.custom.endpoint2", cephver.CephVersion{Major: 19, Minor: 0, Extra: 0}, "myzone", []string{"http://my.custom.endpoint1:80", "http://my.custom.endpoint2:80"}, false}, + {"dns name with zone with custom invalid endpoints ceph v19", []string{"my.dns.name1", "my.dns.name2"}, "", cephver.CephVersion{Major: 19, Minor: 0, Extra: 0}, "myzone", []string{"http://my.custom.endpoint:80", "http://!my.custom.invalid-endpoint:80"}, true}, + {"dns with zone with mixed invalid and valid dnsnames/custom endpoint ceph v19", []string{"my.dns.name", "!my.invalid-dns.name"}, "", cephver.CephVersion{Major: 19, Minor: 0, Extra: 0}, "myzone", []string{"http://my.custom.endpoint1:80", "http://!my.custom.invalid-endpoint:80"}, true}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + c := setupTest(tt.zoneName, tt.cephvers, tt.dnsNames, tt.CustomEndpoints) + res, err := c.addDNSNamesToRGWServer() + if tt.wantErr { + assert.Error(t, err) + } else { + assert.NoError(t, err) + } + assert.Equal(t, tt.expectedDNSArg, res) + + }) + } +} + +func TestGetHostnameFromEndpoint(t *testing.T) { + tests := []struct { + name string + endpoint string + expected string + wantErr bool + }{ + {"empty endpoint", "", "", true}, + {"http endpoint", "http://my.custom.endpoint1:80", "my.custom.endpoint1", false}, + {"https endpoint", "https://my.custom.endpoint1:80", "my.custom.endpoint1", false}, + {"http endpoint without port", "http://my.custom.endpoint1", "my.custom.endpoint1", false}, + 
{"https endpoint without port", "https://my.custom.endpoint1", "my.custom.endpoint1", false},
+		{"multiple protocol endpoint case 1", "http://https://my.custom.endpoint1:80", "", true},
+		{"multiple protocol endpoint case 2", "https://http://my.custom.endpoint1:80", "", true},
+		{"ftp endpoint", "ftp://my.custom.endpoint1:80", "my.custom.endpoint1", false},
+		{"custom protocol endpoint", "custom://my.custom.endpoint1:80", "my.custom.endpoint1", false},
+		{"custom protocol endpoint without port", "custom://my.custom.endpoint1", "my.custom.endpoint1", false},
+		{"invalid endpoint", "http://!my.custom.endpoint1:80", "", true},
+		{"invalid endpoint multiple ports", "http://my.custom.endpoint1:80:80", "", true},
+		{"invalid http endpoint with upper case", "http://MY.CUSTOM.ENDPOINT1:80", "", true},
+		{"invalid http endpoint with upper case without port", "http://MY.CUSTOM.ENDPOINT1", "", true},
+		{"invalid https endpoint with upper case", "https://MY.CUSTOM.ENDPOINT1:80", "", true},
+		{"invalid https endpoint with upper case without port", "https://MY.CUSTOM.ENDPOINT1", "", true},
+		{"invalid http endpoint with camel case", "http://myCustomEndpoint1:80", "", true},
+		{"invalid http endpoint with camel case without port", "http://myCustomEndpoint1", "", true},
+		{"invalid https endpoint with camel case", "https://myCustomEndpoint1:80", "", true},
+		{"invalid https endpoint with camel case without port", "https://myCustomEndpoint1", "", true},
+		{"endpoint without protocol", "my.custom.endpoint1:80", "my.custom.endpoint1", false},
+		{"endpoint without protocol without port", "my.custom.endpoint1", "my.custom.endpoint1", false},
+		{"invalid endpoint without protocol with upper case", "MY.CUSTOM.ENDPOINT1:80", "", true},
+		{"invalid endpoint without protocol with upper case without port", "MY.CUSTOM.ENDPOINT1", "", true},
+		{"invalid endpoint without protocol with camel case", "myCustomEndpoint1:80", "", true},
+		{"invalid endpoint without protocol with camel case without
port", "myCustomEndpoint1", "", true},
+		{"invalid endpoint without protocol ending :", "my.custom.endpoint1:", "", true},
+		{"invalid endpoint without protocol and multiple ports", "my.custom.endpoint1:80:80", "", true},
+		{"http endpoint containing ip address", "http://1.1.1.1:80", "1.1.1.1", false},
+		{"http endpoint containing ip address without port", "http://1.1.1.1", "1.1.1.1", false},
+		{"https endpoint containing ip address", "https://1.1.1.1:80", "1.1.1.1", false},
+		{"https endpoint containing ip address without port", "https://1.1.1.1", "1.1.1.1", false},
+		{"invalid endpoint ending ://", "my.custom.endpoint1://", "", true},
+		{"invalid endpoint ending : without port", "http://my.custom.endpoint1:", "", true},
+		{"http endpoint with user and password", "http://user:password@mycustomendpoint1:80", "mycustomendpoint1", false},
+		{"http endpoint with user and password without port", "http://user:password@mycustomendpoint1", "mycustomendpoint1", false},
+		{"https endpoint with user and password", "https://user:password@mycustomendpoint1:80", "mycustomendpoint1", false},
+		{"https endpoint with user and password without port", "https://user:password@mycustomendpoint1", "mycustomendpoint1", false},
+		{"endpoint with user and password without protocol", "user:password@mycustomendpoint:80", "mycustomendpoint", false},
+		{"endpoint with user and password without protocol without port", "user:password@mycustomendpoint", "mycustomendpoint", false},
+		{"invalid endpoint with user and password ending ://", "user:password@mycustomendpoint1://", "", true},
+		{"invalid endpoint with user and password ending : without port", "user:password@mycustomendpoint1:", "", true},
+		{"invalid endpoint with user and password ending with multiple ports", "user:password@mycustomendpoint1:80:80", "", true},
+	}
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			res, err := GetHostnameFromEndpoint(tt.endpoint)
+			if tt.wantErr {
+				assert.Error(t, err)
+			} else {
assert.NoError(t, err)
+			}
+			assert.Equal(t, tt.expected, res)
+		})
+	}
+}

From 51302c9dcbd28566d2019fff33397df8f8de2ccf Mon Sep 17 00:00:00 2001
From: parth-gr
Date: Mon, 4 Mar 2024 17:27:10 +0530
Subject: [PATCH 28/65] doc: add steps to configure external mode with admin
 privileges

Sometimes users want to use the admin power to create some resources in the
external ceph cluster, so adding a way to use the admin privileges.

part-of: https://github.com/rook/rook/issues/13827

Signed-off-by: parth-gr
---
 Documentation/CRDs/Cluster/external-cluster.md | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/Documentation/CRDs/Cluster/external-cluster.md b/Documentation/CRDs/Cluster/external-cluster.md
index 2dd19fedeab5..81e422020ed7 100644
--- a/Documentation/CRDs/Cluster/external-cluster.md
+++ b/Documentation/CRDs/Cluster/external-cluster.md
@@ -103,6 +103,24 @@ python3 create-external-cluster-resources.py --upgrade --rbd-data-pool-name repl
 An existing non-restricted user cannot be converted to a restricted user by upgrading.
 The upgrade flag should only be used to append new permissions to users. It shouldn't be used to change the permissions already applied to a csi user. For example, you shouldn't change the pool(s) a user has access to.

+### Admin privileges
+
+If the cluster requires the admin keyring for configuration, update the `rook-ceph-mon` secret with the client.admin keyring.
+
+!!! note
+    Sharing the admin key with the external cluster is not generally recommended
+
+1. Get the `client.admin` keyring from the ceph cluster
+
+    ```console
+    ceph auth get client.admin
+    ```
+
+2. Update two values in the `rook-ceph-mon` secret:
+    - `ceph-username`: Set to `client.admin`
+    - `ceph-secret`: Set the client.admin keyring
+
+After restarting the rook operator (and the toolbox if in use), rook will configure ceph with admin privileges.
+
 ### 2.
Copy the bash output Example Output: From c00982ae48f1e55836285162c6e7f57c08914c77 Mon Sep 17 00:00:00 2001 From: parth-gr Date: Wed, 6 Mar 2024 20:52:30 +0530 Subject: [PATCH 29/65] external: restructure the external install steps Updating the installation steps of external cluster to make it more user friendly part-of: https://github.com/rook/rook/discussions/13831 Signed-off-by: parth-gr --- .../CRDs/Cluster/external-cluster.md | 46 +++++++++---------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/Documentation/CRDs/Cluster/external-cluster.md b/Documentation/CRDs/Cluster/external-cluster.md index 8a803fe33cab..5fc0a88eef0a 100644 --- a/Documentation/CRDs/Cluster/external-cluster.md +++ b/Documentation/CRDs/Cluster/external-cluster.md @@ -123,29 +123,6 @@ export RGW_POOL_PREFIX=default ## Commands on the K8s consumer cluster -### Import the Source Data - -1. Paste the above output from `create-external-cluster-resources.py` into your current shell to allow importing the source data. - -1. The import script in the next step uses the current kubeconfig context by - default. If you want to specify the kubernetes cluster to use without - changing the current context, you can specify the cluster name by setting - the KUBECONTEXT environment variable. - - ```console - export KUBECONTEXT=hub-cluster - ``` - -1. Run the [import](https://github.com/rook/rook/blob/master/deploy/examples/import-external-cluster.sh) script. - - !!! note - If your Rook cluster nodes are running a kernel earlier than or equivalent to 5.4, remove - `fast-diff,object-map,deep-flatten,exclusive-lock` from the `imageFeatures` line. - - ```console - . import-external-cluster.sh - ``` - ### Helm Installation To install with Helm, the rook cluster helm chart will configure the necessary resources for the external cluster with the example `values-external.yaml`. @@ -170,6 +147,29 @@ If not installing with Helm, here are the steps to install with manifests. 2. 
Create [common-external.yaml](https://github.com/rook/rook/blob/master/deploy/examples/common-external.yaml) and [cluster-external.yaml](https://github.com/rook/rook/blob/master/deploy/examples/cluster-external.yaml) +### Import the Source Data + +1. Paste the above output from `create-external-cluster-resources.py` into your current shell to allow importing the source data. + +1. The import script in the next step uses the current kubeconfig context by + default. If you want to specify the kubernetes cluster to use without + changing the current context, you can specify the cluster name by setting + the KUBECONTEXT environment variable. + + ```console + export KUBECONTEXT= + ``` + +1. Run the [import](https://github.com/rook/rook/blob/master/deploy/examples/import-external-cluster.sh) script. + + !!! note + If your Rook cluster nodes are running a kernel earlier than or equivalent to 5.4, remove + `fast-diff,object-map,deep-flatten,exclusive-lock` from the `imageFeatures` line. + + ```console + . import-external-cluster.sh + ``` + ### Cluster Verification 1. Verify the consumer cluster is connected to the source ceph cluster: From 10e26709c45441352ee9a36a9faa4e5e95fade52 Mon Sep 17 00:00:00 2001 From: Redouane Kachach Date: Fri, 8 Mar 2024 14:38:27 +0100 Subject: [PATCH 30/65] exporter: applying labels from monitoring section to ceph-exporter the labels listed under the 'monitoring' section are currently only being applied to the rook-ceph-mgr ServiceMonitor. This change extends those labels to also include the rook-ceph-exporter ServiceMonitor. 
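The commit message above describes extending the `monitoring` labels to the exporter's ServiceMonitor. The merge semantics of `OverwriteApplyToObjectMeta` can be approximated as a plain dictionary merge in which the CephCluster's monitoring labels win on key conflicts; this is a sketch with made-up label sets, not Rook's actual types:

```python
def overwrite_apply(monitoring_labels: dict, object_meta_labels: dict) -> dict:
    """Merge labels onto a ServiceMonitor's metadata.

    Existing ServiceMonitor labels are kept unless the CephCluster's
    'monitoring' label section defines the same key, which then wins.
    """
    merged = dict(object_meta_labels)
    merged.update(monitoring_labels)
    return merged
```

With the one-line fix below, this merge now runs for the rook-ceph-exporter ServiceMonitor as well as the rook-ceph-mgr one.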
Fixes: https://github.com/rook/rook/issues/13774 Signed-off-by: Redouane Kachach --- pkg/operator/ceph/cluster/nodedaemon/exporter.go | 1 + 1 file changed, 1 insertion(+) diff --git a/pkg/operator/ceph/cluster/nodedaemon/exporter.go b/pkg/operator/ceph/cluster/nodedaemon/exporter.go index ca66fb540ce5..a92834ccb702 100644 --- a/pkg/operator/ceph/cluster/nodedaemon/exporter.go +++ b/pkg/operator/ceph/cluster/nodedaemon/exporter.go @@ -262,6 +262,7 @@ func EnableCephExporterServiceMonitor(context *clusterd.Context, cephCluster cep func applyCephExporterLabels(cephCluster cephv1.CephCluster, serviceMonitor *monitoringv1.ServiceMonitor) { if cephCluster.Spec.Labels != nil { + cephv1.GetMonitoringLabels(cephCluster.Spec.Labels).OverwriteApplyToObjectMeta(&serviceMonitor.ObjectMeta) if cephExporterLabels, ok := cephCluster.Spec.Labels["exporter"]; ok { if managedBy, ok := cephExporterLabels["rook.io/managedBy"]; ok { relabelConfig := monitoringv1.RelabelConfig{ From 04b71cf63e33fdfc57acb93ea9ede3d600c76786 Mon Sep 17 00:00:00 2001 From: parth-gr Date: Mon, 26 Feb 2024 20:39:55 +0530 Subject: [PATCH 31/65] external: add toplogyconstraintpool support for rbd storageclass with topologyconstrain pool we can enable any replica pool usage and also can store data at any topology Signed-off-by: parth-gr --- .../CRDs/Cluster/external-cluster.md | 14 ++ .../create-external-cluster-resources.py | 134 +++++++++++++++++- deploy/examples/import-external-cluster.sh | 63 +++++++- 3 files changed, 203 insertions(+), 8 deletions(-) diff --git a/Documentation/CRDs/Cluster/external-cluster.md b/Documentation/CRDs/Cluster/external-cluster.md index 7f1155946100..417ff4400c31 100644 --- a/Documentation/CRDs/Cluster/external-cluster.md +++ b/Documentation/CRDs/Cluster/external-cluster.md @@ -60,6 +60,9 @@ python3 create-external-cluster-resources.py --rbd-data-pool-name -- * `--upgrade`: (optional) Upgrades the cephCSIKeyrings(For example: client.csi-cephfs-provisioner) and client.healthchecker 
ceph users with new permissions needed for the new cluster version, and the older permissions will still be applied.
* `--restricted-auth-permission`: (optional) Restrict cephCSIKeyrings auth permissions to specific pools, and cluster. Mandatory flags that need to be set are `--rbd-data-pool-name`, and `--k8s-cluster-name`. The `--cephfs-filesystem-name` flag can also be passed in case of CephFS user restriction, so it can restrict users to a particular CephFS filesystem.
* `--v2-port-enable`: (optional) Enables the v2 mon port (3300) for mons.
+* `--topology-pools`: (optional) comma-separated list of topology-constrained rbd pools
+* `--topology-failure-domain-label`: (optional) k8s cluster failure domain label (example: zone, rack, host, etc.) for the topology-pools that match the ceph domain
+* `--topology-failure-domain-values`: (optional) comma-separated list of the k8s cluster failure domain values corresponding to each of the pools in the topology-pools list

 ### Multi-tenancy

@@ -84,6 +87,17 @@ See the [Multisite doc](https://docs.ceph.com/en/quincy/radosgw/multisite/#confi
 python3 create-external-cluster-resources.py --rbd-data-pool-name --format bash --rgw-endpoint --rgw-realm-name > --rgw-zonegroup-name --rgw-zone-name >
 ```

+### Topology Based Provisioning
+
+Enable Topology Based Provisioning for RBD pools by passing the `--topology-pools`, `--topology-failure-domain-label` and `--topology-failure-domain-values` flags.
+The import script will create a new storageclass named `ceph-rbd-topology` with `volumeBindingMode: WaitForFirstConsumer`
+and will configure topologyConstrainedPools according to the input provided.
+Then use the storageclass to create a volume in the pool matching the topology of the pod scheduling.
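The three `--topology-*` flags described above are all-or-none, and the pools list must line up one-to-one with the failure-domain values. That contract, which `create-external-cluster-resources.py` enforces in its own validation helpers, can be sketched as follows; `parse_topology_flags` is an invented name for illustration, not a function in the script:

```python
def parse_topology_flags(pools: str, label: str, values: str):
    """Validate the comma-separated topology flags and pair each pool
    with its failure-domain value."""
    provided = [arg != "" for arg in (pools, label, values)]
    if any(provided) and not all(provided):
        raise ValueError(
            "provide all the topology flags --topology-pools, "
            "--topology-failure-domain-label, --topology-failure-domain-values"
        )
    if not any(provided):
        return []  # topology-based provisioning not requested
    pool_list = pools.split(",")
    value_list = values.split(",")
    if len(pool_list) != len(value_list):
        raise ValueError(
            f"{len(pool_list)} topology pools but {len(value_list)} failure-domain values"
        )
    return [(pool, label, value) for pool, value in zip(pool_list, value_list)]
```

For example, `--topology-pools p,q,r --topology-failure-domain-label zone --topology-failure-domain-values x,y,z` yields the pairs (p, zone, x), (q, zone, y), (r, zone, z).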
+ +```console +python3 create-external-cluster-resources.py --rbd-data-pool-name pool_name --topology-pools p,q,r --topology-failure-domain-label labelName --topology-failure-domain-values x,y,z --format bash +``` + ### Upgrade Example 1) If consumer cluster doesn't have restricted caps, this will upgrade all the default csi-users (non-restricted): diff --git a/deploy/examples/create-external-cluster-resources.py b/deploy/examples/create-external-cluster-resources.py index 61039c9eb1bd..a27da02f5178 100644 --- a/deploy/examples/create-external-cluster-resources.py +++ b/deploy/examples/create-external-cluster-resources.py @@ -474,6 +474,24 @@ def gen_arg_parser(cls, args_to_parse=None): required=False, help="provides the name of the rgw-zonegroup", ) + output_group.add_argument( + "--topology-pools", + default="", + required=False, + help="comma-separated list of topology-constrained rbd pools", + ) + output_group.add_argument( + "--topology-failure-domain-label", + default="", + required=False, + help="k8s cluster failure domain label (example: zone,rack,host,etc) for the topology-pools that are matching the ceph domain", + ) + output_group.add_argument( + "--topology-failure-domain-values", + default="", + required=False, + help="comma-separated list of the k8s cluster failure domain values corresponding to each of the pools in the topology-pools list", + ) upgrade_group = argP.add_argument_group("upgrade") upgrade_group.add_argument( @@ -1321,16 +1339,15 @@ def create_rgw_admin_ops_user(self): "", ) - def validate_rbd_pool(self): - if not self.cluster.pool_exists(self._arg_parser.rbd_data_pool_name): + def validate_rbd_pool(self, pool_name): + if not self.cluster.pool_exists(pool_name): raise ExecutionFailureException( - f"The provided pool, '{self._arg_parser.rbd_data_pool_name}', does not exist" + f"The provided pool, '{pool_name}', does not exist" ) - def init_rbd_pool(self): + def init_rbd_pool(self, rbd_pool_name): if isinstance(self.cluster, DummyRados): 
return
-        rbd_pool_name = self._arg_parser.rbd_data_pool_name
         ioctx = self.cluster.open_ioctx(rbd_pool_name)
         rbd_inst = rbd.RBD()
         rbd_inst.pool_init(ioctx, True)
@@ -1501,6 +1518,54 @@ def validate_rgw_multisite(self, rgw_multisite_config_name, rgw_multisite_config
             return "-1"
         return ""

+    def convert_comma_seprated_to_array(self, value):
+        return value.split(",")
+
+    def raise_exception_if_any_topology_flag_is_missing(self):
+        if (
+            (
+                self._arg_parser.topology_pools != ""
+                and (
+                    self._arg_parser.topology_failure_domain_label == ""
+                    or self._arg_parser.topology_failure_domain_values == ""
+                )
+            )
+            or (
+                self._arg_parser.topology_failure_domain_label != ""
+                and (
+                    self._arg_parser.topology_pools == ""
+                    or self._arg_parser.topology_failure_domain_values == ""
+                )
+            )
+            or (
+                self._arg_parser.topology_failure_domain_values != ""
+                and (
+                    self._arg_parser.topology_pools == ""
+                    or self._arg_parser.topology_failure_domain_label == ""
+                )
+            )
+        ):
+            raise ExecutionFailureException(
+                "provide all the topology flags --topology-pools, --topology-failure-domain-label, --topology-failure-domain-values"
+            )
+
+    def validate_topology_values(self, topology_pools, topology_fd):
+        if len(topology_pools) != len(topology_fd):
+            raise ExecutionFailureException(
+                f"The provided topology pools, '{topology_pools}', and "
+                f"topology failure domain, '{topology_fd}', "
+                f"are of different length, '{len(topology_pools)}' and '{len(topology_fd)}' respectively"
+            )
+        return
+
+    def validate_topology_rbd_pools(self, topology_rbd_pools):
+        for pool in topology_rbd_pools:
+            self.validate_rbd_pool(pool)
+
+    def init_topology_rbd_pools(self, topology_rbd_pools):
+        for pool in topology_rbd_pools:
+            self.init_rbd_pool(pool)
+
     def _gen_output_map(self):
         if self.out_map:
             return
@@ -1510,8 +1575,8 @@
         self._arg_parser.k8s_cluster_name = (
             self._arg_parser.k8s_cluster_name.lower()
         )  # always convert cluster name to lowercase characters
-        self.validate_rbd_pool()
- self.init_rbd_pool() + self.validate_rbd_pool(self._arg_parser.rbd_data_pool_name) + self.init_rbd_pool(self._arg_parser.rbd_data_pool_name) self.validate_rados_namespace() self._excluded_keys.add("K8S_CLUSTER_NAME") self.get_cephfs_data_pool_details() @@ -1585,6 +1650,33 @@ def _gen_output_map(self): self.out_map["RBD_METADATA_EC_POOL_NAME"] = ( self.validate_rbd_metadata_ec_pool_name() ) + self.out_map["TOPOLOGY_POOLS"] = self._arg_parser.topology_pools + self.out_map["TOPOLOGY_FAILURE_DOMAIN_LABEL"] = ( + self._arg_parser.topology_failure_domain_label + ) + self.out_map["TOPOLOGY_FAILURE_DOMAIN_VALUES"] = ( + self._arg_parser.topology_failure_domain_values + ) + if ( + self._arg_parser.topology_pools != "" + and self._arg_parser.topology_failure_domain_label != "" + and self._arg_parser.topology_failure_domain_values != "" + ): + self.validate_topology_values( + self.convert_comma_seprated_to_array(self.out_map["TOPOLOGY_POOLS"]), + self.convert_comma_seprated_to_array( + self.out_map["TOPOLOGY_FAILURE_DOMAIN_VALUES"] + ), + ) + self.validate_topology_rbd_pools( + self.convert_comma_seprated_to_array(self.out_map["TOPOLOGY_POOLS"]) + ) + self.init_topology_rbd_pools( + self.convert_comma_seprated_to_array(self.out_map["TOPOLOGY_POOLS"]) + ) + else: + self.raise_exception_if_any_topology_flag_is_missing() + self.out_map["RGW_POOL_PREFIX"] = self._arg_parser.rgw_pool_prefix self.out_map["RGW_ENDPOINT"] = "" if self._arg_parser.rgw_endpoint: @@ -1821,6 +1913,34 @@ def gen_json_out(self): } ) + # if 'TOPOLOGY_POOLS', 'TOPOLOGY_FAILURE_DOMAIN_LABEL', 'TOPOLOGY_FAILURE_DOMAIN_VALUES' exists, + # then only add 'topology' StorageClass + if ( + self.out_map["TOPOLOGY_POOLS"] + and self.out_map["TOPOLOGY_FAILURE_DOMAIN_LABEL"] + and self.out_map["TOPOLOGY_FAILURE_DOMAIN_VALUES"] + ): + json_out.append( + { + "name": "ceph-rbd-topology", + "kind": "StorageClass", + "data": { + "topologyFailureDomainLabel": self.out_map[ + "TOPOLOGY_FAILURE_DOMAIN_LABEL" + ], + 
"topologyFailureDomainValues": self.convert_comma_seprated_to_array( + self.out_map["TOPOLOGY_FAILURE_DOMAIN_VALUES"] + ), + "topologyPools": self.convert_comma_seprated_to_array( + self.out_map["TOPOLOGY_POOLS"] + ), + "csi.storage.k8s.io/provisioner-secret-name": f"rook-{self.out_map['CSI_RBD_PROVISIONER_SECRET_NAME']}", + "csi.storage.k8s.io/controller-expand-secret-name": f"rook-{self.out_map['CSI_RBD_PROVISIONER_SECRET_NAME']}", + "csi.storage.k8s.io/node-stage-secret-name": f"rook-{self.out_map['CSI_RBD_NODE_SECRET_NAME']}", + }, + } + ) + # if 'CEPHFS_FS_NAME' exists, then only add 'cephfs' StorageClass if self.out_map["CEPHFS_FS_NAME"]: json_out.append( diff --git a/deploy/examples/import-external-cluster.sh b/deploy/examples/import-external-cluster.sh index 77381e715a29..ed17a7abc950 100644 --- a/deploy/examples/import-external-cluster.sh +++ b/deploy/examples/import-external-cluster.sh @@ -19,9 +19,10 @@ ROOK_RBD_FEATURES=${ROOK_RBD_FEATURES:-"layering"} ROOK_EXTERNAL_MAX_MON_ID=2 ROOK_EXTERNAL_MAPPING={} RBD_STORAGE_CLASS_NAME=ceph-rbd +RBD_TOPOLOGY_STORAGE_CLASS_NAME=ceph-rbd-topology CEPHFS_STORAGE_CLASS_NAME=cephfs ROOK_EXTERNAL_MONITOR_SECRET=mon-secret -OPERATOR_NAMESPACE=rook-ceph # default set to rook-ceph +OPERATOR_NAMESPACE=rook-ceph # default set to rook-ceph CSI_DRIVER_NAME_PREFIX=${CSI_DRIVER_NAME_PREFIX:-$OPERATOR_NAMESPACE} RBD_PROVISIONER=$CSI_DRIVER_NAME_PREFIX".rbd.csi.ceph.com" # csi-provisioner-name CEPHFS_PROVISIONER=$CSI_DRIVER_NAME_PREFIX".cephfs.csi.ceph.com" # csi-provisioner-name @@ -298,6 +299,62 @@ eof fi } +function getTopologyTemplate() { + topology=$( + cat <<-END + {"poolName":"$1", + "domainSegments":[ + {"domainLabel":"$2","value":"$3"}]}, +END + ) +} + +function createTopology() { + TOPOLOGY="" + declare -a topology_failure_domain_values_array=() + declare -a topology_pools_array=() + topology_pools=("$(echo "$TOPOLOGY_POOLS" | tr "," "\n")") + for i in ${topology_pools[0]}; do topology_pools_array+=("$i"); done + 
topology_failure_domain_values=("$(echo "$TOPOLOGY_FAILURE_DOMAIN_VALUES" | tr "," "\n")") + for i in ${topology_failure_domain_values[0]}; do topology_failure_domain_values_array+=("$i"); done + for ((i = 0; i < ${#topology_failure_domain_values_array[@]}; i++)); do + getTopologyTemplate "${topology_pools_array[$i]}" "$TOPOLOGY_FAILURE_DOMAIN_LABEL" "${topology_failure_domain_values_array[$i]}" + TOPOLOGY="$TOPOLOGY"$'\n'"$topology" + topology="" + done +} + +function createRBDTopologyStorageClass() { + if ! kubectl -n "$NAMESPACE" get storageclass $RBD_TOPOLOGY_STORAGE_CLASS_NAME &>/dev/null; then + cat </dev/null; then cat < Date: Tue, 5 Mar 2024 18:38:24 +0530 Subject: [PATCH 32/65] external: add replicated 3 pool for the rbd sc Until the ceph-csi fix https://github.com/ceph/ceph-csi/pull/4459 is available, we can still make use of topology by adding the replicated-3 pool. Once that PR is merged, this commit will be reverted. Signed-off-by: parth-gr --- deploy/examples/create-external-cluster-resources.py | 1 + deploy/examples/import-external-cluster.sh | 1 + 2 files changed, 2 insertions(+) diff --git a/deploy/examples/create-external-cluster-resources.py b/deploy/examples/create-external-cluster-resources.py index a27da02f5178..acd265dcd1f0 100644 --- a/deploy/examples/create-external-cluster-resources.py +++ b/deploy/examples/create-external-cluster-resources.py @@ -1934,6 +1934,7 @@ def gen_json_out(self): "topologyPools": self.convert_comma_seprated_to_array( self.out_map["TOPOLOGY_POOLS"] ), + "pool": self.out_map["RBD_POOL_NAME"], "csi.storage.k8s.io/provisioner-secret-name": f"rook-{self.out_map['CSI_RBD_PROVISIONER_SECRET_NAME']}", "csi.storage.k8s.io/controller-expand-secret-name": f"rook-{self.out_map['CSI_RBD_PROVISIONER_SECRET_NAME']}", "csi.storage.k8s.io/node-stage-secret-name": f"rook-{self.out_map['CSI_RBD_NODE_SECRET_NAME']}", diff --git a/deploy/examples/import-external-cluster.sh b/deploy/examples/import-external-cluster.sh index ed17a7abc950..2da209316974 
100644 --- a/deploy/examples/import-external-cluster.sh +++ b/deploy/examples/import-external-cluster.sh @@ -334,6 +334,7 @@ metadata: provisioner: $RBD_PROVISIONER parameters: clusterID: $CLUSTER_ID_RBD + pool: $RBD_POOL_NAME imageFormat: "2" imageFeatures: $ROOK_RBD_FEATURES topologyConstrainedPools: | From e0768f5e5f1c92885e554d07c92009e1aec099b0 Mon Sep 17 00:00:00 2001 From: Jiffin Tony Thottan Date: Mon, 5 Feb 2024 17:15:59 +0530 Subject: [PATCH 33/65] object: provisioner prefix support Add an option to set a prefix for the name of the OBC provisioner instead of the Ceph cluster namespace. Signed-off-by: Jiffin Tony Thottan --- Documentation/Helm-Charts/operator-chart.md | 1 + .../ceph-object-bucket-claim.md | 5 ++++ PendingReleaseNotes.md | 1 + .../charts/rook-ceph/templates/configmap.yaml | 3 ++ deploy/charts/rook-ceph/values.yaml | 4 +++ deploy/examples/operator-openshift.yaml | 3 ++ deploy/examples/operator.yaml | 5 +++- pkg/operator/ceph/object/bucket/util.go | 5 +++- pkg/operator/ceph/object/objectstore.go | 14 +++++++-- pkg/operator/ceph/object/objectstore_test.go | 29 +++++++++++++++++-- 10 files changed, 62 insertions(+), 8 deletions(-) diff --git a/Documentation/Helm-Charts/operator-chart.md b/Documentation/Helm-Charts/operator-chart.md index 352c5498e785..45f930cbf8fa 100644 --- a/Documentation/Helm-Charts/operator-chart.md +++ b/Documentation/Helm-Charts/operator-chart.md @@ -146,6 +146,7 @@ The following table lists the configurable parameters of the rook-operator chart | `logLevel` | Global log level for the operator. Options: `ERROR`, `WARNING`, `INFO`, `DEBUG` | `"INFO"` | | `monitoring.enabled` | Enable monitoring. Requires Prometheus to be pre-installed. Enabling will also create RBAC rules to allow Operator to create ServiceMonitors | `false` | | `nodeSelector` | Kubernetes [`nodeSelector`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) to add to the Deployment. 
| `{}` | +| `obcProvisionerNamePrefix` | Specify the prefix for the OBC provisioner in place of the cluster namespace | `ceph cluster namespace` | | `priorityClassName` | Set the priority class for the rook operator deployment if desired | `nil` | | `pspEnable` | If true, create & use PSP resources | `false` | | `rbacAggregate.enableOBCs` | If true, create a ClusterRole aggregated to [user facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) for objectbucketclaims | `false` | diff --git a/Documentation/Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-claim.md b/Documentation/Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-claim.md index f95664a3bd41..4e80e6eeb782 100644 --- a/Documentation/Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-claim.md +++ b/Documentation/Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-claim.md @@ -10,6 +10,11 @@ Rook supports the creation of new buckets and access to existing buckets via two An OBC references a storage class which is created by an administrator. The storage class defines whether the bucket requested is a new bucket or an existing bucket. It also defines the bucket retention policy. Users request a new or existing bucket by creating an OBC which is shown below. The ceph provisioner detects the OBC and creates a new bucket or grants access to an existing bucket, depending the storage class referenced in the OBC. It also generates a Secret which provides credentials to access the bucket, and a ConfigMap which contains the bucket's endpoint. Application pods consume the information in the Secret and ConfigMap to access the bucket. Please note that to make provisioner watch the cluster namespace only you need to set `ROOK_OBC_WATCH_OPERATOR_NAMESPACE` to `true` in the operator manifest, otherwise it watches all namespaces. +The OBC provisioner name found in the storage class by default includes the operator namespace as a prefix. 
A custom prefix can be applied by setting `ROOK_OBC_PROVISIONER_NAME_PREFIX` in the `rook-ceph-operator-config` ConfigMap. + +!!! Note + Changing the prefix is not supported on existing clusters. This may impact the function of existing OBCs. + ## Example ### OBC Custom Resource diff --git a/PendingReleaseNotes.md b/PendingReleaseNotes.md index 8379ded52360..76067b3b8716 100644 --- a/PendingReleaseNotes.md +++ b/PendingReleaseNotes.md @@ -14,3 +14,4 @@ read affinity setting in cephCluster CR (CSIDriverOptions section) in [PR](https - Ceph daemon pods using the `default` service account now use a new `rook-ceph-default` service account. - The feature support for VolumeSnapshotGroup has been added to the RBD and CephFS CSI driver. - Support for virtual style hosting for s3 buckets in the CephObjectStore. +- Added an option to specify a prefix for the OBC provisioner name. diff --git a/deploy/charts/rook-ceph/templates/configmap.yaml b/deploy/charts/rook-ceph/templates/configmap.yaml index 4ce7b75dc278..60a143010418 100644 --- a/deploy/charts/rook-ceph/templates/configmap.yaml +++ b/deploy/charts/rook-ceph/templates/configmap.yaml @@ -9,6 +9,9 @@ data: ROOK_LOG_LEVEL: {{ .Values.logLevel | quote }} ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS: {{ .Values.cephCommandsTimeoutSeconds | quote }} ROOK_OBC_WATCH_OPERATOR_NAMESPACE: {{ .Values.enableOBCWatchOperatorNamespace | quote }} +{{- if .Values.obcProvisionerNamePrefix }} + ROOK_OBC_PROVISIONER_NAME_PREFIX: {{ .Values.obcProvisionerNamePrefix | quote }} +{{- end }} ROOK_CEPH_ALLOW_LOOP_DEVICES: {{ .Values.allowLoopDevices | quote }} ROOK_ENABLE_DISCOVERY_DAEMON: {{ .Values.enableDiscoveryDaemon | quote }} {{- if .Values.discoverDaemonUdev }} diff --git a/deploy/charts/rook-ceph/values.yaml b/deploy/charts/rook-ceph/values.yaml index c01b4016dc3d..66c4b1687ec1 100644 --- a/deploy/charts/rook-ceph/values.yaml +++ b/deploy/charts/rook-ceph/values.yaml @@ -616,6 +616,10 @@ imagePullSecrets: # -- Whether the OBC provisioner should 
watch on the operator namespace or not, if not the namespace of the cluster will be used enableOBCWatchOperatorNamespace: true +# -- Specify the prefix for the OBC provisioner in place of the cluster namespace +# @default -- `ceph cluster namespace` +obcProvisionerNamePrefix: + monitoring: # -- Enable monitoring. Requires Prometheus to be pre-installed. # Enabling will also create RBAC rules to allow Operator to create ServiceMonitors diff --git a/deploy/examples/operator-openshift.yaml b/deploy/examples/operator-openshift.yaml index d269206da14f..0a2128edb12e 100644 --- a/deploy/examples/operator-openshift.yaml +++ b/deploy/examples/operator-openshift.yaml @@ -539,6 +539,9 @@ data: # Whether the OBC provisioner should watch on the operator namespace or not, if not the namespace of the cluster will be used ROOK_OBC_WATCH_OPERATOR_NAMESPACE: "true" + # Custom prefix value for the OBC provisioner instead of ceph cluster namespace, do not set on existing cluster + # ROOK_OBC_PROVISIONER_NAME_PREFIX: "custom-prefix" + # Whether to start the discovery daemon to watch for raw storage devices on nodes in the cluster. # This daemon does not need to run if you are only going to create your OSDs based on StorageClassDeviceSets with PVCs. ROOK_ENABLE_DISCOVERY_DAEMON: "false" diff --git a/deploy/examples/operator.yaml b/deploy/examples/operator.yaml index 7bb91ac1f065..96eb0fc8c57e 100644 --- a/deploy/examples/operator.yaml +++ b/deploy/examples/operator.yaml @@ -481,9 +481,12 @@ data: # (Optional) Retry Period in seconds the LeaderElector clients should wait between tries of actions. Defaults to 26 seconds. 
# CSI_LEADER_ELECTION_RETRY_PERIOD: "26s" - # Whether the OBC provisioner should watch on the operator namespace or not, if not the namespace of the cluster will be used + # Whether the OBC provisioner should watch the Ceph cluster namespace or not; if not, the default provisioner name is used ROOK_OBC_WATCH_OPERATOR_NAMESPACE: "true" + # Custom prefix value for the OBC provisioner instead of ceph cluster namespace, do not set on existing cluster + # ROOK_OBC_PROVISIONER_NAME_PREFIX: "custom-prefix" + # Whether to start the discovery daemon to watch for raw storage devices on nodes in the cluster. # This daemon does not need to run if you are only going to create your OSDs based on StorageClassDeviceSets with PVCs. ROOK_ENABLE_DISCOVERY_DAEMON: "false" diff --git a/pkg/operator/ceph/object/bucket/util.go b/pkg/operator/ceph/object/bucket/util.go index 32c07308ff6a..56afa0910db8 100644 --- a/pkg/operator/ceph/object/bucket/util.go +++ b/pkg/operator/ceph/object/bucket/util.go @@ -43,7 +43,10 @@ const ( func NewBucketController(cfg *rest.Config, p *Provisioner, data map[string]string) (*provisioner.Provisioner, error) { const allNamespaces = "" - provName := cephObject.GetObjectBucketProvisioner(data, p.clusterInfo.Namespace) + provName, err := cephObject.GetObjectBucketProvisioner(data, p.clusterInfo.Namespace) + if err != nil { + return nil, errors.Wrap(err, "failed to get provisioner name") + } logger.Infof("ceph bucket provisioner launched watching for provisioner %q", provName) return provisioner.NewProvisioner(cfg, provName, p, allNamespaces) diff --git a/pkg/operator/ceph/object/objectstore.go b/pkg/operator/ceph/object/objectstore.go index 2781d716330e..279925552487 100644 --- a/pkg/operator/ceph/object/objectstore.go +++ b/pkg/operator/ceph/object/objectstore.go @@ -41,6 +41,7 @@ import ( metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/types" "k8s.io/apimachinery/pkg/util/sets" + validation "k8s.io/apimachinery/pkg/util/validation" ) 
const ( @@ -845,13 +846,20 @@ func poolName(poolPrefix, poolName string) string { } // GetObjectBucketProvisioner returns the bucket provisioner name appended with operator namespace if OBC is watching on it -func GetObjectBucketProvisioner(data map[string]string, namespace string) string { +func GetObjectBucketProvisioner(data map[string]string, namespace string) (string, error) { provName := bucketProvisionerName obcWatchOnNamespace := k8sutil.GetValue(data, "ROOK_OBC_WATCH_OPERATOR_NAMESPACE", "false") - if strings.EqualFold(obcWatchOnNamespace, "true") { + obcProvisionerNamePrefix := k8sutil.GetValue(data, "ROOK_OBC_PROVISIONER_NAME_PREFIX", "") + if obcProvisionerNamePrefix != "" { + errList := validation.IsDNS1123Label(obcProvisionerNamePrefix) + if len(errList) > 0 { + return "", errors.Errorf("invalid OBC provisioner name prefix %q. %v", obcProvisionerNamePrefix, errList) + } + provName = fmt.Sprintf("%s.%s", obcProvisionerNamePrefix, bucketProvisionerName) + } else if obcWatchOnNamespace == "true" { provName = fmt.Sprintf("%s.%s", namespace, bucketProvisionerName) } - return provName + return provName, nil } // CheckDashboardUser returns true if the dashboard user exists and has the same credentials as the given user, else return false diff --git a/pkg/operator/ceph/object/objectstore_test.go b/pkg/operator/ceph/object/objectstore_test.go index 625083ed78bd..39484991a578 100644 --- a/pkg/operator/ceph/object/objectstore_test.go +++ b/pkg/operator/ceph/object/objectstore_test.go @@ -227,17 +227,40 @@ func TestGetObjectBucketProvisioner(t *testing.T) { testNamespace := "test-namespace" t.Setenv(k8sutil.PodNamespaceEnvVar, testNamespace) - t.Run("watch single namespace", func(t *testing.T) { + t.Run("watch ceph cluster namespace", func(t *testing.T) { data := map[string]string{"ROOK_OBC_WATCH_OPERATOR_NAMESPACE": "true"} - bktprovisioner := GetObjectBucketProvisioner(data, testNamespace) + bktprovisioner, err := GetObjectBucketProvisioner(data, testNamespace) 
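The precedence implemented by `GetObjectBucketProvisioner` above (an explicit prefix wins, then the cluster namespace when namespace-watching is enabled, then the bare default name) can be sketched in Python. The default provisioner name and the DNS-1123 label regex are assumptions mirroring the Go code, not guaranteed values:

```python
import re

# Assumed default; the Go code uses its own bucketProvisionerName constant.
BUCKET_PROVISIONER_NAME = "ceph.rook.io/bucket"
# RFC 1123 label: lowercase alphanumerics and '-', 1-63 chars, alphanumeric at both ends.
DNS1123_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$")

def provisioner_name(data: dict, namespace: str) -> str:
    """Derive the OBC provisioner name: prefix > namespace > default."""
    prefix = data.get("ROOK_OBC_PROVISIONER_NAME_PREFIX", "")
    if prefix:
        if not DNS1123_LABEL.match(prefix):
            raise ValueError(f"invalid OBC provisioner name prefix {prefix!r}")
        return f"{prefix}.{BUCKET_PROVISIONER_NAME}"
    if data.get("ROOK_OBC_WATCH_OPERATOR_NAMESPACE", "false") == "true":
        return f"{namespace}.{BUCKET_PROVISIONER_NAME}"
    return BUCKET_PROVISIONER_NAME
```

Note that when both settings are present the prefix takes precedence, which is exactly what the "watch ceph cluster namespace and prefix object provisioner" test below asserts.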
assert.Equal(t, fmt.Sprintf("%s.%s", testNamespace, bucketProvisionerName), bktprovisioner) + assert.NoError(t, err) }) t.Run("watch all namespaces", func(t *testing.T) { data := map[string]string{"ROOK_OBC_WATCH_OPERATOR_NAMESPACE": "false"} - bktprovisioner := GetObjectBucketProvisioner(data, testNamespace) + bktprovisioner, err := GetObjectBucketProvisioner(data, testNamespace) assert.Equal(t, bucketProvisionerName, bktprovisioner) + assert.NoError(t, err) + }) + + t.Run("prefix object provisioner", func(t *testing.T) { + data := map[string]string{"ROOK_OBC_PROVISIONER_NAME_PREFIX": "my-prefix"} + bktprovisioner, err := GetObjectBucketProvisioner(data, testNamespace) + assert.Equal(t, "my-prefix."+bucketProvisionerName, bktprovisioner) + assert.NoError(t, err) + }) + + t.Run("watch ceph cluster namespace and prefix object provisioner", func(t *testing.T) { + data := map[string]string{"ROOK_OBC_WATCH_OPERATOR_NAMESPACE": "true", "ROOK_OBC_PROVISIONER_NAME_PREFIX": "my-prefix"} + bktprovisioner, err := GetObjectBucketProvisioner(data, testNamespace) + assert.Equal(t, "my-prefix."+bucketProvisionerName, bktprovisioner) + assert.NoError(t, err) + }) + + t.Run("invalid prefix value for object provisioner", func(t *testing.T) { + data := map[string]string{"ROOK_OBC_PROVISIONER_NAME_PREFIX": "my-prefix."} + _, err := GetObjectBucketProvisioner(data, testNamespace) + assert.Error(t, err) }) + } func TestRGWPGNumVersion(t *testing.T) { From eb7acca57bcfd6f2b791b32cc522a694f4bebc68 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 11 Mar 2024 12:08:24 +0000 Subject: [PATCH 34/65] build(deps): bump the github-dependencies group with 3 updates Bumps the github-dependencies group with 3 updates: [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go), [github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring](https://github.com/prometheus-operator/prometheus-operator) and 
[github.com/prometheus-operator/prometheus-operator/pkg/client](https://github.com/prometheus-operator/prometheus-operator). Updates `github.com/aws/aws-sdk-go` from 1.50.31 to 1.50.35 - [Release notes](https://github.com/aws/aws-sdk-go/releases) - [Commits](https://github.com/aws/aws-sdk-go/compare/v1.50.31...v1.50.35) Updates `github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring` from 0.71.2 to 0.72.0 - [Release notes](https://github.com/prometheus-operator/prometheus-operator/releases) - [Changelog](https://github.com/prometheus-operator/prometheus-operator/blob/main/CHANGELOG.md) - [Commits](https://github.com/prometheus-operator/prometheus-operator/compare/v0.71.2...v0.72.0) Updates `github.com/prometheus-operator/prometheus-operator/pkg/client` from 0.71.2 to 0.72.0 - [Release notes](https://github.com/prometheus-operator/prometheus-operator/releases) - [Changelog](https://github.com/prometheus-operator/prometheus-operator/blob/main/CHANGELOG.md) - [Commits](https://github.com/prometheus-operator/prometheus-operator/compare/v0.71.2...v0.72.0) --- updated-dependencies: - dependency-name: github.com/aws/aws-sdk-go dependency-type: direct:production update-type: version-update:semver-patch dependency-group: github-dependencies - dependency-name: github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring dependency-type: direct:production update-type: version-update:semver-minor dependency-group: github-dependencies - dependency-name: github.com/prometheus-operator/prometheus-operator/pkg/client dependency-type: direct:production update-type: version-update:semver-minor dependency-group: github-dependencies ... 
Signed-off-by: dependabot[bot] --- go.mod | 6 +++--- go.sum | 12 ++++++------ 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/go.mod b/go.mod index 5f304048926f..cc3fb0f9f8e4 100644 --- a/go.mod +++ b/go.mod @@ -6,7 +6,7 @@ replace github.com/rook/rook/pkg/apis => ./pkg/apis require ( github.com/IBM/keyprotect-go-client v0.12.2 - github.com/aws/aws-sdk-go v1.50.31 + github.com/aws/aws-sdk-go v1.50.35 github.com/banzaicloud/k8s-objectmatcher v1.8.0 github.com/ceph/go-ceph v0.26.0 github.com/coreos/pkg v0.0.0-20230601102743-20bbbf26f4d8 @@ -20,8 +20,8 @@ require ( github.com/kube-object-storage/lib-bucket-provisioner v0.0.0-20221122204822-d1a8c34382f1 github.com/libopenstorage/secrets v0.0.0-20231011182615-5f4b25ceede1 github.com/pkg/errors v0.9.1 - github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.71.2 - github.com/prometheus-operator/prometheus-operator/pkg/client v0.71.2 + github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.72.0 + github.com/prometheus-operator/prometheus-operator/pkg/client v0.72.0 github.com/rook/rook/pkg/apis v0.0.0-20231204200402-5287527732f7 github.com/spf13/cobra v1.8.0 github.com/spf13/pflag v1.0.5 diff --git a/go.sum b/go.sum index 71c5db58266e..982c1d2749af 100644 --- a/go.sum +++ b/go.sum @@ -111,8 +111,8 @@ github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkY github.com/asaskevich/govalidator v0.0.0-20180720115003-f9ffefc3facf/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY= github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY= github.com/aws/aws-sdk-go v1.44.164/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI= -github.com/aws/aws-sdk-go v1.50.31 h1:gx2NRLLEDUmQFC4YUsfMUKkGCwpXVO8ijUecq/nOQGA= -github.com/aws/aws-sdk-go v1.50.31/go.mod h1:LF8svs817+Nz+DmiMQKTO3ubZ/6IaTpq3TjupRn3Eqk= +github.com/aws/aws-sdk-go v1.50.35 h1:llQnNddBI/64pK7pwUFBoWYmg8+XGQUCs214eMbSDZc= 
+github.com/aws/aws-sdk-go v1.50.35/go.mod h1:LF8svs817+Nz+DmiMQKTO3ubZ/6IaTpq3TjupRn3Eqk= github.com/banzaicloud/k8s-objectmatcher v1.8.0 h1:Nugn25elKtPMTA2br+JgHNeSQ04sc05MDPmpJnd1N2A= github.com/banzaicloud/k8s-objectmatcher v1.8.0/go.mod h1:p2LSNAjlECf07fbhDyebTkPUIYnU05G+WfGgkTmgeMg= github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA= @@ -667,10 +667,10 @@ github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndr github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA= github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g= github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U= -github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.71.2 h1:HZdPRm0ApWPg7F4sHgbqWkL+ddWfpTZsopm5HM/2g4o= -github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.71.2/go.mod h1:3RiUkFmR9kmPZi9r/8a5jw0a9yg+LMmr7qa0wjqvSiI= -github.com/prometheus-operator/prometheus-operator/pkg/client v0.71.2 h1:7eyX8MypewjShiOFj6sOX+Ad+EJUIQ5qzdvM/U76cHs= -github.com/prometheus-operator/prometheus-operator/pkg/client v0.71.2/go.mod h1:dH5cun6jo8vesIzplptAQpdXW9dL8rD2jpvWylG4B6s= +github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.72.0 h1:9h7PxMhT1S8lOdadEKJnBh3ELMdO60XkoDV98grYjuM= +github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.72.0/go.mod h1:4FiLCL664L4dNGeqZewiiD0NS7hhqi/CxyM4UOq5dfM= +github.com/prometheus-operator/prometheus-operator/pkg/client v0.72.0 h1:UQT8vi8NK8Nt/wYZXY0Asx5XcGAhiQ1SQG190Ei4Pto= +github.com/prometheus-operator/prometheus-operator/pkg/client v0.72.0/go.mod h1:AYjK2t/SjtOmdEAi2CxQ/t/TOQ0j3zzuMhJ5WgM+Ok0= github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso= 
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo= From 4836ede41accd6f4840f0c01c657bb3161e1e955 Mon Sep 17 00:00:00 2001 From: subhamkrai Date: Mon, 11 Mar 2024 18:15:34 +0530 Subject: [PATCH 35/65] build: update go dep go-jose synk security check is failing and it requires go-jose v3.0.3 or later version. Signed-off-by: subhamkrai --- go.mod | 2 +- go.sum | 4 ++-- pkg/apis/go.mod | 2 +- pkg/apis/go.sum | 4 ++-- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/go.mod b/go.mod index cc3fb0f9f8e4..45880beaf16d 100644 --- a/go.mod +++ b/go.mod @@ -63,7 +63,7 @@ require ( github.com/fsnotify/fsnotify v1.7.0 // indirect github.com/gemalto/flume v0.13.1 // indirect github.com/go-errors/errors v1.5.1 // indirect - github.com/go-jose/go-jose/v3 v3.0.2 // indirect + github.com/go-jose/go-jose/v3 v3.0.3 // indirect github.com/go-logr/logr v1.4.1 // indirect github.com/go-logr/zapr v1.3.0 // indirect github.com/go-openapi/jsonpointer v0.20.3 // indirect diff --git a/go.sum b/go.sum index 982c1d2749af..24adc1da3fb2 100644 --- a/go.sum +++ b/go.sum @@ -241,8 +241,8 @@ github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A= github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8= github.com/go-jose/go-jose/v3 v3.0.0/go.mod h1:RNkWWRld676jZEYoV3+XK8L2ZnNSvIsxFMht0mSX+u8= github.com/go-jose/go-jose/v3 v3.0.1/go.mod h1:RNkWWRld676jZEYoV3+XK8L2ZnNSvIsxFMht0mSX+u8= -github.com/go-jose/go-jose/v3 v3.0.2 h1:2Edjn8Nrb44UvTdp84KU0bBPs1cO7noRCybtS3eJEUQ= -github.com/go-jose/go-jose/v3 v3.0.2/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ= +github.com/go-jose/go-jose/v3 v3.0.3 h1:fFKWeig/irsp7XD2zBxvnmA/XaRWp5V3CBsZXJF7G7k= +github.com/go-jose/go-jose/v3 v3.0.3/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-logfmt/logfmt v0.3.0/go.mod 
h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= diff --git a/pkg/apis/go.mod b/pkg/apis/go.mod index 4c971374f3c6..9d0d013c23f4 100644 --- a/pkg/apis/go.mod +++ b/pkg/apis/go.mod @@ -30,7 +30,7 @@ require ( github.com/emicklei/go-restful/v3 v3.11.3 // indirect github.com/evanphx/json-patch v5.9.0+incompatible // indirect github.com/fsnotify/fsnotify v1.7.0 // indirect - github.com/go-jose/go-jose/v3 v3.0.2 // indirect + github.com/go-jose/go-jose/v3 v3.0.3 // indirect github.com/go-logr/logr v1.4.1 // indirect github.com/go-openapi/jsonpointer v0.20.3 // indirect github.com/go-openapi/jsonreference v0.20.5 // indirect diff --git a/pkg/apis/go.sum b/pkg/apis/go.sum index e78f92dc4725..9248d9cd3332 100644 --- a/pkg/apis/go.sum +++ b/pkg/apis/go.sum @@ -140,8 +140,8 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2 github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= github.com/go-jose/go-jose/v3 v3.0.0/go.mod h1:RNkWWRld676jZEYoV3+XK8L2ZnNSvIsxFMht0mSX+u8= github.com/go-jose/go-jose/v3 v3.0.1/go.mod h1:RNkWWRld676jZEYoV3+XK8L2ZnNSvIsxFMht0mSX+u8= -github.com/go-jose/go-jose/v3 v3.0.2 h1:2Edjn8Nrb44UvTdp84KU0bBPs1cO7noRCybtS3eJEUQ= -github.com/go-jose/go-jose/v3 v3.0.2/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ= +github.com/go-jose/go-jose/v3 v3.0.3 h1:fFKWeig/irsp7XD2zBxvnmA/XaRWp5V3CBsZXJF7G7k= +github.com/go-jose/go-jose/v3 v3.0.3/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ= github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas= github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU= github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= From dd8c3c9974f9a9e8abde119c1b66132e49eb94ac Mon Sep 17 00:00:00 2001 From: subhamkrai Date: Mon, 11 Mar 2024 11:34:30 +0530 
Subject: [PATCH 36/65] build: remove csv related files remove csv related files/content which are not used/required for upstream uses. Signed-off-by: subhamkrai --- .github/workflows/build.yml | 8 +- Makefile | 19 +- build/crds/validate-csv-crd-list.sh | 12 - build/csv/csv-gen.sh | 61 --- .../library/templates/_cluster-role.tpl | 16 - .../templates/_cluster-rolebinding.tpl | 15 - deploy/examples/common.yaml | 31 -- deploy/olm/assemble/metadata-common.yaml | 440 ------------------ deploy/olm/assemble/metadata-k8s.yaml | 14 - deploy/olm/assemble/metadata-ocp.yaml | 19 - deploy/olm/assemble/metadata-okd.yaml | 27 -- deploy/olm/assemble/rook-ceph.package.yaml | 4 - images/ceph/Dockerfile | 1 - images/ceph/Makefile | 40 -- 14 files changed, 3 insertions(+), 704 deletions(-) delete mode 100755 build/crds/validate-csv-crd-list.sh delete mode 100755 build/csv/csv-gen.sh delete mode 100644 deploy/olm/assemble/metadata-common.yaml delete mode 100644 deploy/olm/assemble/metadata-k8s.yaml delete mode 100644 deploy/olm/assemble/metadata-ocp.yaml delete mode 100644 deploy/olm/assemble/metadata-okd.yaml delete mode 100644 deploy/olm/assemble/rook-ceph.package.yaml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index f10d02689f7b..68333853525f 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -36,12 +36,6 @@ jobs: - name: build rook run: | - # Install kubectl binary as required for generating csv - curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl" - chmod +x ./kubectl - sudo mv ./kubectl /usr/local/bin/kubectl - sudo chown root: /usr/local/bin/kubectl - GOPATH=$(go env GOPATH) make clean && make -j$nproc IMAGES='ceph' BUILD_CONTAINER_IMAGE=false build - name: validate build @@ -60,7 +54,7 @@ jobs: run: tests/scripts/validate_modified_files.sh modcheck - name: run crds-gen - run: make csv-clean && GOPATH=$(go env GOPATH) make crds + run: GOPATH=$(go env GOPATH) make 
crds - name: validate crds-gen run: tests/scripts/validate_modified_files.sh crd diff --git a/Makefile b/Makefile index 140093b5ffd4..2426b3ae41f7 100644 --- a/Makefile +++ b/Makefile @@ -118,11 +118,6 @@ build.version: @mkdir -p $(OUTPUT_DIR) @echo "$(VERSION)" > $(OUTPUT_DIR)/version -# Change how CRDs are generated for CSVs -ifneq ($(REAL_HOST_PLATFORM),darwin_arm64) -build.common: export NO_OB_OBC_VOL_GEN=true -build.common: export MAX_DESC_LEN=0 -endif build.common: export SKIP_GEN_CRD_DOCS=true build.common: build.version helm.build mod.check crds gen-rbac @$(MAKE) go.init @@ -134,7 +129,7 @@ do.build.platform.%: do.build.parallel: $(foreach p,$(PLATFORMS_TO_BUILD_FOR), do.build.platform.$(p)) -build: csv-clean build.common ## Only build for linux platform +build: build.common ## Only build for linux platform @$(MAKE) go.build PLATFORM=linux_$(GOHOSTARCH) @$(MAKE) -C images PLATFORM=linux_$(GOHOSTARCH) @@ -172,7 +167,7 @@ codegen: ${CODE_GENERATOR} ## Run code generators. mod.check: go.mod.check ## Check if any go modules changed. mod.update: go.mod.update ## Update all go modules. -clean: csv-clean ## Remove all files that are created by building. +clean: ## Remove all files that are created by building. @$(MAKE) go.mod.clean @$(MAKE) -C images clean @rm -fr $(OUTPUT_DIR) $(WORK_DIR) @@ -183,15 +178,6 @@ distclean: clean ## Remove all files that are created by building or configuring prune: ## Prune cached artifacts. @$(MAKE) -C images prune -# Change how CRDs are generated for CSVs -csv: export MAX_DESC_LEN=0 # sets the description length to 0 since CSV cannot be bigger than 1MB -csv: export NO_OB_OBC_VOL_GEN=true -csv: csv-clean crds ## Generate a CSV file for OLM. - $(MAKE) -C images/ceph csv - -csv-clean: ## Remove existing OLM files. 
- @$(MAKE) -C images/ceph csv-clean - docs: helm-docs @build/deploy/generate-deploy-examples.sh @@ -199,7 +185,6 @@ crds: $(CONTROLLER_GEN) $(YQ) @echo Updating CRD manifests @build/crds/build-crds.sh $(CONTROLLER_GEN) $(YQ) @GOBIN=$(GOBIN) build/crds/generate-crd-docs.sh - @build/crds/validate-csv-crd-list.sh gen-rbac: $(HELM) $(YQ) ## Generate RBAC from Helm charts @# output only stdout to the file; stderr for debugging should keep going to stderr diff --git a/build/crds/validate-csv-crd-list.sh b/build/crds/validate-csv-crd-list.sh deleted file mode 100755 index 145c3c089531..000000000000 --- a/build/crds/validate-csv-crd-list.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/usr/bin/env bash - -script_root=$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd -P) - -list_of_crd_in_crd_yaml=$(grep -oE '[^ ]*\.ceph\.rook\.io' "${script_root}/deploy/examples/crds.yaml" | sort) -list_of_csv_in_csv_yaml=$(grep -oE '[^ ]*\.ceph\.rook\.io' "${script_root}/deploy/olm/assemble/metadata-common.yaml" | sort) - -if [ "$list_of_crd_in_crd_yaml" != "$list_of_csv_in_csv_yaml" ]; then - echo "CRD list in crds.yaml file and metadata-common.yaml is different. Make sure to add crd in metadata-common.yaml." 
- echo -e "crd file list in crd.yaml:\n$list_of_crd_in_crd_yaml" - echo -e "crd file list in csv.yaml:\n$list_of_csv_in_csv_yaml" -fi diff --git a/build/csv/csv-gen.sh b/build/csv/csv-gen.sh deleted file mode 100755 index cab017e01dc4..000000000000 --- a/build/csv/csv-gen.sh +++ /dev/null @@ -1,61 +0,0 @@ -#!/usr/bin/env bash -set -e - -############# -# VARIABLES # -############# - -operator_sdk="${OPERATOR_SDK:-operator-sdk}" -yq="${YQv3:-yq}" -PLATFORM=$(go env GOARCH) - -YQ_CMD_DELETE=("$yq" delete -i) -YQ_CMD_MERGE_OVERWRITE=("$yq" merge --inplace --overwrite --prettyPrint) -YQ_CMD_MERGE=("$yq" merge --arrays=append --inplace) -YQ_CMD_WRITE=("$yq" write --inplace -P) -CSV_FILE_NAME="../../build/csv/ceph/$PLATFORM/manifests/rook-ceph.clusterserviceversion.yaml" -CEPH_EXTERNAL_SCRIPT_FILE="../../deploy/examples/create-external-cluster-resources.py" -ASSEMBLE_FILE_COMMON="../../deploy/olm/assemble/metadata-common.yaml" -ASSEMBLE_FILE_OCP="../../deploy/olm/assemble/metadata-ocp.yaml" - -############# -# FUNCTIONS # -############# - -function generate_csv() { - kubectl kustomize ../../deploy/examples/ | "$operator_sdk" generate bundle --package="rook-ceph" --output-dir="../../build/csv/ceph/$PLATFORM" --extra-service-accounts=rook-ceph-default,rook-csi-rbd-provisioner-sa,rook-csi-rbd-plugin-sa,rook-csi-cephfs-provisioner-sa,rook-csi-nfs-provisioner-sa,rook-csi-nfs-plugin-sa,rook-csi-cephfs-plugin-sa,rook-ceph-system,rook-ceph-rgw,rook-ceph-purge-osd,rook-ceph-osd,rook-ceph-mgr,rook-ceph-cmd-reporter - - # cleanup to get the expected state before merging the real data from assembles - "${YQ_CMD_DELETE[@]}" "$CSV_FILE_NAME" 'spec.icon[*]' - "${YQ_CMD_DELETE[@]}" "$CSV_FILE_NAME" 'spec.installModes[*]' - "${YQ_CMD_DELETE[@]}" "$CSV_FILE_NAME" 'spec.keywords[0]' - "${YQ_CMD_DELETE[@]}" "$CSV_FILE_NAME" 'spec.maintainers[0]' - - "${YQ_CMD_MERGE_OVERWRITE[@]}" "$CSV_FILE_NAME" "$ASSEMBLE_FILE_COMMON" - "${YQ_CMD_WRITE[@]}" "$CSV_FILE_NAME" 
metadata.annotations.externalClusterScript "$(base64 <$CEPH_EXTERNAL_SCRIPT_FILE)" - "${YQ_CMD_WRITE[@]}" "$CSV_FILE_NAME" metadata.name "rook-ceph.v${VERSION}" - - "${YQ_CMD_MERGE[@]}" "$CSV_FILE_NAME" "$ASSEMBLE_FILE_OCP" - - # We don't need to include these files in csv as ocs-operator creates its own. - rm -rf "../../build/csv/ceph/$PLATFORM/manifests/rook-ceph-operator-config_v1_configmap.yaml" - - # This change are just to make the CSV file as it was earlier and as ocs-operator reads. - # Skipping this change for darwin since `sed -i` doesn't work with darwin properly. - # and the csv is not ever needed in the mac builds. - if [[ "$OSTYPE" == "darwin"* ]]; then - return - fi - - sed -i 's/image: rook\/ceph:.*/image: {{.RookOperatorImage}}/g' "$CSV_FILE_NAME" - sed -i 's/name: rook-ceph.v.*/name: rook-ceph.v{{.RookOperatorCsvVersion}}/g' "$CSV_FILE_NAME" - sed -i 's/version: 0.0.0/version: {{.RookOperatorCsvVersion}}/g' "$CSV_FILE_NAME" - - mv "$CSV_FILE_NAME" "../../build/csv/" - mv "../../build/csv/ceph/$PLATFORM/manifests/"* "../../build/csv/ceph/" - rm -rf "../../build/csv/ceph/$PLATFORM" -} - -if [ "$PLATFORM" == "amd64" ]; then - generate_csv -fi diff --git a/deploy/charts/library/templates/_cluster-role.tpl b/deploy/charts/library/templates/_cluster-role.tpl index fd79b7ce908e..1d0f4ed8f047 100644 --- a/deploy/charts/library/templates/_cluster-role.tpl +++ b/deploy/charts/library/templates/_cluster-role.tpl @@ -20,22 +20,6 @@ rules: resources: ["cephclusters", "cephclusters/finalizers"] verbs: ["get", "list", "create", "update", "delete"] --- -kind: Role -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: rook-ceph-rgw - namespace: {{ .Release.Namespace }} # namespace:cluster -rules: - # Placeholder role so the rgw service account will - # be generated in the csv. Remove this role and role binding - # when fixing https://github.com/rook/rook/issues/10141. 
- - apiGroups: - - "" - resources: - - configmaps - verbs: - - get ---- # Aspects of ceph-mgr that operate within the cluster's namespace kind: Role apiVersion: rbac.authorization.k8s.io/v1 diff --git a/deploy/charts/library/templates/_cluster-rolebinding.tpl b/deploy/charts/library/templates/_cluster-rolebinding.tpl index dc5e05f29daf..b9748d40120c 100644 --- a/deploy/charts/library/templates/_cluster-rolebinding.tpl +++ b/deploy/charts/library/templates/_cluster-rolebinding.tpl @@ -32,21 +32,6 @@ subjects: name: rook-ceph-osd namespace: {{ .Release.Namespace }} # namespace:cluster --- -# Allow the rgw pods in this namespace to work with configmaps -kind: RoleBinding -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: rook-ceph-rgw - namespace: {{ .Release.Namespace }} # namespace:cluster -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: rook-ceph-rgw -subjects: - - kind: ServiceAccount - name: rook-ceph-rgw - namespace: {{ .Release.Namespace }} # namespace:cluster ---- # Allow the ceph mgr to access resources scoped to the CephCluster namespace necessary for mgr modules kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 diff --git a/deploy/examples/common.yaml b/deploy/examples/common.yaml index ed523e8cb051..4ea76243cd4c 100644 --- a/deploy/examples/common.yaml +++ b/deploy/examples/common.yaml @@ -902,22 +902,6 @@ rules: resources: ["persistentvolumeclaims"] verbs: ["get", "update", "delete", "list"] --- -kind: Role -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: rook-ceph-rgw - namespace: rook-ceph # namespace:cluster -rules: - # Placeholder role so the rgw service account will - # be generated in the csv. Remove this role and role binding - # when fixing https://github.com/rook/rook/issues/10141. 
- - apiGroups: - - "" - resources: - - configmaps - verbs: - - get ---- # Allow the operator to manage resources in its own namespace apiVersion: rbac.authorization.k8s.io/v1 kind: Role @@ -1112,21 +1096,6 @@ subjects: name: rook-ceph-purge-osd namespace: rook-ceph # namespace:cluster --- -# Allow the rgw pods in this namespace to work with configmaps -kind: RoleBinding -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: rook-ceph-rgw - namespace: rook-ceph # namespace:cluster -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: rook-ceph-rgw -subjects: - - kind: ServiceAccount - name: rook-ceph-rgw - namespace: rook-ceph # namespace:cluster ---- # Grant the operator, agent, and discovery agents access to resources in the rook-ceph-system namespace kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 diff --git a/deploy/olm/assemble/metadata-common.yaml b/deploy/olm/assemble/metadata-common.yaml deleted file mode 100644 index d590491a3d1d..000000000000 --- a/deploy/olm/assemble/metadata-common.yaml +++ /dev/null @@ -1,440 +0,0 @@ -spec: - replaces: rook-ceph.v1.1.1 - customresourcedefinitions: - owned: - - kind: CephCluster - name: cephclusters.ceph.rook.io - version: v1 - displayName: Ceph Cluster - description: Represents a Ceph cluster. - - kind: CephBlockPool - name: cephblockpools.ceph.rook.io - version: v1 - displayName: Ceph Block Pool - description: Represents a Ceph Block Pool. - - kind: CephObjectStore - name: cephobjectstores.ceph.rook.io - version: v1 - displayName: Ceph Object Store - description: Represents a Ceph Object Store. 
- specDescriptors: - - description: Coding Chunks - displayName: Coding Chunks - path: dataPool.erasureCoded.codingChunks - x-descriptors: - - "urn:alm:descriptor:com.tectonic.ui:fieldGroup:dataPool" - - "urn:alm:descriptor:com.tectonic.ui:number" - - description: Data Chunks - displayName: Data Chunks - path: dataPool.erasureCoded.dataChunks - x-descriptors: - - "urn:alm:descriptor:com.tectonic.ui:fieldGroup:dataPool" - - "urn:alm:descriptor:com.tectonic.ui:number" - - description: failureDomain - displayName: failureDomain - path: dataPool.failureDomain - x-descriptors: - - "urn:alm:descriptor:com.tectonic.ui:fieldGroup:dataPool" - - "urn:alm:descriptor:com.tectonic.ui:text" - - description: Size - displayName: Size - path: dataPool.replicated.size - x-descriptors: - - "urn:alm:descriptor:com.tectonic.ui:fieldGroup:dataPool" - - "urn:alm:descriptor:com.tectonic.ui:number" - - description: Annotations - displayName: Annotations - path: gateway.annotations - x-descriptors: - - "urn:alm:descriptor:com.tectonic.ui:fieldGroup:gateway" - - "urn:alm:descriptor:io.kubernetes:annotations" - - description: Instances - displayName: Instances - path: gateway.instances - x-descriptors: - - "urn:alm:descriptor:com.tectonic.ui:fieldGroup:gateway" - - "urn:alm:descriptor:com.tectonic.ui:number" - - description: Resources - displayName: Resources - path: gateway.resources - x-descriptors: - - "urn:alm:descriptor:com.tectonic.ui:fieldGroup:gateway" - - "urn:alm:descriptor:com.tectonic.ui:resourceRequirements" - - description: placement - displayName: placement - path: gateway.placement - x-descriptors: - - "urn:alm:descriptor:com.tectonic.ui:fieldGroup:gateway" - - "urn:alm:descriptor:io.kubernetes:placement" - - description: securePort - displayName: securePort - path: gateway.securePort - x-descriptors: - - "urn:alm:descriptor:com.tectonic.ui:fieldGroup:gateway" - - "urn:alm:descriptor:io.kubernetes:securePort" - - description: sslCertificateRef - displayName: sslCertificateRef 
- path: gateway.sslCertificateRef - x-descriptors: - - "urn:alm:descriptor:com.tectonic.ui:fieldGroup:gateway" - - "urn:alm:descriptor:io.kubernetes:sslCertificateRef" - - description: Type - displayName: Type - path: gateway.type - x-descriptors: - - "urn:alm:descriptor:com.tectonic.ui:fieldGroup:gateway" - - "urn:alm:descriptor:com.tectonic.ui:text" - - description: Coding Chunks - displayName: Coding Chunks - path: metadataPool.erasureCoded.codingChunks - x-descriptors: - - "urn:alm:descriptor:com.tectonic.ui:fieldGroup:metadataPool" - - "urn:alm:descriptor:com.tectonic.ui:number" - - description: Data Chunks - displayName: Data Chunks - path: metadataPool.erasureCoded.dataChunks - x-descriptors: - - "urn:alm:descriptor:com.tectonic.ui:fieldGroup:metadataPool" - - "urn:alm:descriptor:com.tectonic.ui:number" - - description: failureDomain - displayName: failureDomain - path: metadataPool.failureDomain - x-descriptors: - - "urn:alm:descriptor:com.tectonic.ui:fieldGroup:metadataPool" - - "urn:alm:descriptor:com.tectonic.ui:text" - - description: Size - displayName: Size - path: metadataPool.replicated.size - x-descriptors: - - "urn:alm:descriptor:com.tectonic.ui:fieldGroup:metadataPool" - - "urn:alm:descriptor:com.tectonic.ui:number" - - kind: CephObjectStoreUser - name: cephobjectstoreusers.ceph.rook.io - version: v1 - displayName: Ceph Object Store User - description: Represents a Ceph Object Store User. - - kind: CephNFS - name: cephnfses.ceph.rook.io - version: v1 - displayName: Ceph NFS - description: Represents a cluster of Ceph NFS ganesha gateways. - - kind: CephClient - name: cephclients.ceph.rook.io - version: v1 - displayName: Ceph Client - description: Represents a Ceph User. - - kind: CephFilesystem - name: cephfilesystems.ceph.rook.io - version: v1 - displayName: Ceph Filesystem - description: Represents a Ceph Filesystem. 
- - kind: CephFilesystemMirror - name: cephfilesystemmirrors.ceph.rook.io - version: v1 - displayName: Ceph Filesystem Mirror - description: Represents a Ceph Filesystem Mirror. - - kind: CephRBDMirror - name: cephrbdmirrors.ceph.rook.io - version: v1 - displayName: Ceph RBD Mirror - description: Represents a Ceph RBD Mirror. - - kind: CephObjectRealm - name: cephobjectrealms.ceph.rook.io - version: v1 - displayName: Ceph Object Store Realm - description: Represents a Ceph Object Store Realm. - - kind: CephObjectZoneGroup - name: cephobjectzonegroups.ceph.rook.io - version: v1 - displayName: Ceph Object Store Zone Group - description: Represents a Ceph Object Store Zone Group. - - kind: CephObjectZone - name: cephobjectzones.ceph.rook.io - version: v1 - displayName: Ceph Object Store Zone - description: Represents a Ceph Object Store Zone. - - kind: CephBucketNotification - name: cephbucketnotifications.ceph.rook.io - version: v1 - displayName: Ceph Bucket Notification - description: Represents a Ceph Bucket Notification. - - kind: CephBucketTopic - name: cephbuckettopics.ceph.rook.io - version: v1 - displayName: Ceph Bucket Topic - description: Represents a Ceph Bucket Topic. - - kind: CephFilesystemSubVolumeGroup - name: cephfilesystemsubvolumegroups.ceph.rook.io - version: v1 - displayName: Ceph Filesystem SubVolumeGroup - description: Represents a Ceph Filesystem SubVolumeGroup. - - kind: CephBlockPoolRadosNamespace - name: cephblockpoolradosnamespaces.ceph.rook.io - version: v1 - displayName: Ceph BlockPool Rados Namespace - description: Represents a Ceph BlockPool Rados Namespace. - - kind: CephCOSIDriver - name: cephcosidrivers.ceph.rook.io - version: v1 - displayName: Ceph COSI Driver - description: Represents a Ceph COSI Driver. 
- displayName: Rook-Ceph - description: | - - The Rook-Ceph storage operator packages, deploys, manages, upgrades and scales Ceph storage for providing persistent storage to infrastructure services (Logging, Metrics, Registry) as well as stateful applications in Kubernetes clusters. - - ## Rook-Ceph Storage Operator - - Rook runs as a cloud-native service in Kubernetes clusters for optimal integration with applications in need of storage, and handles the heavy-lifting behind the scenes such as provisioning and management. - Rook orchestrates battle-tested open-source storage technology Ceph, which has years of production deployments and runs some of the worlds largest clusters. - - Ceph is a massively scalable, software-defined, cloud native storage platform that offers block, file and object storage services. - Ceph can be used to back a wide variety of applications including relational databases, NoSQL databases, CI/CD tool-sets, messaging, AI/ML and analytics applications. - Ceph is a proven storage platform that backs some of the world's largest storage deployments and has a large vibrant open source community backing the project. - - ## Supported features - * **High Availability and resiliency** - Ceph has no single point of failures (SPOF) and all its components work natively in a highly available fashion - * **Data Protection** - Ceph periodically scrub for inconsistent objects and repair them if necessary, making sure your replicas are always coherent - * **Consistent storage platform across hybrid cloud** - Ceph can be deployed anywhere (on-premise or bare metal) and thus offers a similar experience regardless - * **Block, File & Object storage service** - Ceph can expose your data through several storage interfaces, solving all the application use cases - * **Scale up/down** - addition and removal of storage is fully covered by the operator. - * **Dashboard** - The Operator deploys a dashboard for monitoring and introspecting your cluster. 
- - ## Before you start - https://rook.io/docs/rook/v1.0/k8s-pre-reqs.html - - keywords: - [ - "rook", - "ceph", - "storage", - "object storage", - "open source", - "block storage", - "shared filesystem", - ] - minKubeVersion: 1.10.0 - labels: - alm-owner-etcd: rookoperator - operated-by: rookoperator - selector: - matchLabels: - alm-owner-etcd: rookoperator - operated-by: rookoperator - links: - - name: Blog - url: https://blog.rook.io - - name: Documentation - url: https://rook.github.io/docs/rook/v1.0/ - icon: - - base64data: PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiPz4KPCEtLSBHZW5lcmF0b3I6IEFkb2JlIElsbHVzdHJhdG9yIDIzLjAuMiwgU1ZHIEV4cG9ydCBQbHVnLUluIC4gU1ZHIFZlcnNpb246IDYuMDAgQnVpbGQgMCkgIC0tPgo8c3ZnIHZlcnNpb249IjEuMSIgaWQ9IkxheWVyXzEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgeG1sbnM6eGxpbms9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkveGxpbmsiIHg9IjBweCIgeT0iMHB4IgoJIHZpZXdCb3g9IjAgMCA3MCA3MCIgc3R5bGU9ImVuYWJsZS1iYWNrZ3JvdW5kOm5ldyAwIDAgNzAgNzA7IiB4bWw6c3BhY2U9InByZXNlcnZlIj4KPHN0eWxlIHR5cGU9InRleHQvY3NzIj4KCS5zdDB7ZmlsbDojMkIyQjJCO30KPC9zdHlsZT4KPGc+Cgk8Zz4KCQk8Zz4KCQkJPHBhdGggY2xhc3M9InN0MCIgZD0iTTUwLjUsNjcuNkgxOS45Yy04LDAtMTQuNS02LjUtMTQuNS0xNC41VjI5LjJjMC0xLjEsMC45LTIuMSwyLjEtMi4xaDU1LjRjMS4xLDAsMi4xLDAuOSwyLjEsMi4xdjIzLjkKCQkJCUM2NSw2MS4xLDU4LjUsNjcuNiw1MC41LDY3LjZ6IE05LjYsMzEuMnYyMS45YzAsNS43LDQuNiwxMC4zLDEwLjMsMTAuM2gzMC42YzUuNywwLDEwLjMtNC42LDEwLjMtMTAuM1YzMS4ySDkuNnoiLz4KCQk8L2c+CgkJPGc+CgkJCTxwYXRoIGNsYXNzPSJzdDAiIGQ9Ik00Mi40LDU2LjdIMjhjLTEuMSwwLTIuMS0wLjktMi4xLTIuMXYtNy4yYzAtNS4xLDQuMi05LjMsOS4zLTkuM3M5LjMsNC4yLDkuMyw5LjN2Ny4yCgkJCQlDNDQuNSw1NS43LDQzLjYsNTYuNyw0Mi40LDU2Ljd6IE0zMCw1Mi41aDEwLjN2LTUuMmMwLTIuOS0yLjMtNS4yLTUuMi01LjJjLTIuOSwwLTUuMiwyLjMtNS4yLDUuMlY1Mi41eiIvPgoJCTwvZz4KCQk8Zz4KCQkJPHBhdGggY2xhc3M9InN0MCIgZD0iTTYyLjksMjMuMkM2Mi45LDIzLjIsNjIuOSwyMy4yLDYyLjksMjMuMmwtMTEuMSwwYy0xLjEsMC0yLjEtMC45LTIuMS0yLjFjMC0xLjEsMC45LTIuMSwyLjEtMi4xCgkJCQljMCwwLDAsMCwwLDBsOS4xLDBWNi43aC02Ljl2My41YzAsMC41LTAuMiwxLjEtMC42LDEuNWMtMC40LDAuNC0wLjksMC42LTEuNSwwLjZsM
CwwbC0xMS4xLDBjLTEuMSwwLTIuMS0wLjktMi4xLTIuMVY2LjdoLTYuOQoJCQkJdjMuNWMwLDEuMS0wLjksMi4xLTIuMSwyLjFsLTExLjEsMGMtMC41LDAtMS4xLTAuMi0xLjUtMC42Yy0wLjQtMC40LTAuNi0wLjktMC42LTEuNVY2LjdIOS42djEyLjRoOWMxLjEsMCwyLjEsMC45LDIuMSwyLjEKCQkJCXMtMC45LDIuMS0yLjEsMi4xaC0xMWMtMS4xLDAtMi4xLTAuOS0yLjEtMi4xVjQuNmMwLTEuMSwwLjktMi4xLDIuMS0yLjFoMTEuMWMxLjEsMCwyLjEsMC45LDIuMSwyLjF2My41bDcsMFY0LjYKCQkJCWMwLTEuMSwwLjktMi4xLDIuMS0yLjFoMTEuMWMxLjEsMCwyLjEsMC45LDIuMSwyLjF2My41bDYuOSwwVjQuNmMwLTEuMSwwLjktMi4xLDIuMS0yLjFoMTEuMUM2NCwyLjYsNjUsMy41LDY1LDQuNnYxNi41CgkJCQljMCwwLjUtMC4yLDEuMS0wLjYsMS41QzY0LDIzLDYzLjQsMjMuMiw2Mi45LDIzLjJ6Ii8+CgkJPC9nPgoJPC9nPgo8L2c+Cjwvc3ZnPg== - mediatype: image/svg+xml - installModes: - - type: OwnNamespace - supported: true - - type: SingleNamespace - supported: true - - type: MultiNamespace - supported: false - - type: AllNamespaces - supported: false - -metadata: - annotations: - tectonic-visibility: ocs - repository: https://github.com/rook/rook - containerImage: "{{.RookOperatorImage}}" - alm-examples: |- - [ - { - "apiVersion": "ceph.rook.io/v1", - "kind": "CephCluster", - "metadata": { - "name": "my-rook-ceph", - "namespace": "my-rook-ceph" - }, - "spec": { - "cephVersion": { - "image": "quay.io/ceph/ceph:v17.2.6" - }, - "dataDirHostPath": "/var/lib/rook", - "mon": { - "count": 3 - }, - "dashboard": { - "enabled": true - }, - "network": { - "hostNetwork": false - }, - "rbdMirroring": { - "workers": 0 - }, - "storage": { - "useAllNodes": true, - "useAllDevices": true - } - } - }, - { - "apiVersion": "ceph.rook.io/v1", - "kind": "CephBlockPool", - "metadata": { - "name": "replicapool", - "namespace": "my-rook-ceph" - }, - "spec": { - "failureDomain": "host", - "replicated": { - "size": 3 - }, - "annotations": null - } - }, - { - "apiVersion": "ceph.rook.io/v1", - "kind": "CephObjectStore", - "metadata": { - "name": "my-store", - "namespace": "my-rook-ceph" - }, - "spec": { - "metadataPool": { - "failureDomain": "host", - "replicated": { - "size": 3 - } - }, - 
"dataPool": { - "failureDomain": "host", - "replicated": { - "size": 3 - } - }, - "gateway": { - "type": "s3", - "sslCertificateRef": null, - "port": 8080, - "securePort": null, - "instances": 1, - "placement": null, - "annotations": null, - "resources": null - } - } - }, - { - "apiVersion": "ceph.rook.io/v1", - "kind": "CephObjectStoreUser", - "metadata": { - "name": "my-user", - "namespace": "my-rook-ceph" - }, - "spec": { - "store": "my-store", - "displayName": "my display name" - } - }, - { - "apiVersion": "ceph.rook.io/v1", - "kind": "CephNFS", - "metadata": { - "name": "my-nfs", - "namespace": "rook-ceph" - }, - "spec": { - "rados": { - "pool": "myfs-data0", - "namespace": "nfs-ns" - }, - "server": { - "active": 3, - "placement": null, - "annotations": null, - "resources": null - } - } - }, - { - "apiVersion": "ceph.rook.io/v1", - "kind": "CephClient", - "metadata": { - "name": "cinder", - "namespace": "rook-ceph" - }, - "spec": { - "caps": { - "mon": "profile rbd", - "osd": "profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images" - } - } - }, - { - "apiVersion": "ceph.rook.io/v1", - "kind": "CephFilesystem", - "metadata": { - "name": "myfs", - "namespace": "rook-ceph" - }, - "spec": { - "dataPools": [ - { - "compressionMode": "", - "crushRoot": "", - "deviceClass": "", - "erasureCoded": { - "algorithm": "", - "codingChunks": 0, - "dataChunks": 0 - }, - "failureDomain": "host", - "replicated": { - "requireSafeReplicaSize": false, - "size": 1, - "targetSizeRatio": 0.5 - } - } - ], - "metadataPool": { - "compressionMode": "", - "crushRoot": "", - "deviceClass": "", - "erasureCoded": { - "algorithm": "", - "codingChunks": 0, - "dataChunks": 0 - }, - "failureDomain": "", - "replicated": { - "requireSafeReplicaSize": false, - "size": 1, - "targetSizeRatio": 0 - } - }, - "metadataServer": { - "activeCount": 1, - "activeStandby": true, - "placement": {}, - "resources": {} - }, - "preservePoolsOnDelete": false, - 
"preserveFilesystemOnDelete": false - } - }, - { - "apiVersion": "ceph.rook.io/v1", - "kind": "CephRBDMirror", - "metadata": { - "name": "my-rbd-mirror", - "namespace": "rook-ceph" - }, - "spec": { - "annotations": null, - "count": 1, - "placement": { - "topologyKey": "kubernetes.io/hostname" - }, - "resources": null - } - } - ] diff --git a/deploy/olm/assemble/metadata-k8s.yaml b/deploy/olm/assemble/metadata-k8s.yaml deleted file mode 100644 index ca0da382b369..000000000000 --- a/deploy/olm/assemble/metadata-k8s.yaml +++ /dev/null @@ -1,14 +0,0 @@ -metadata: - annotations: - categories: Storage - description: Install and maintain Ceph Storage cluster - createdAt: 2019-05-13T18-08-04Z - support: https://slack.rook.io/ - certified: "false" - capabilities: Full Lifecycle -spec: - maintainers: - - name: The Rook Authors - email: cncf-rook-info@lists.cncf.io - provider: - name: The Rook Authors diff --git a/deploy/olm/assemble/metadata-ocp.yaml b/deploy/olm/assemble/metadata-ocp.yaml deleted file mode 100644 index 97472c92396e..000000000000 --- a/deploy/olm/assemble/metadata-ocp.yaml +++ /dev/null @@ -1,19 +0,0 @@ -spec: - install: - spec: - clusterPermissions: - - rules: - - verbs: - - use - apiGroups: - - security.openshift.io - resources: - - securitycontextconstraints - resourceNames: - - privileged - serviceAccountName: rook-ceph-system - maintainers: - - name: Red Hat, Inc. - email: customerservice@redhat.com - provider: - name: Red Hat, Inc. 
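Among the files this patch deletes is `build/crds/validate-csv-crd-list.sh`, which checked that the CRD names listed in `deploy/examples/crds.yaml` and `deploy/olm/assemble/metadata-common.yaml` stayed in sync. As an editorial aside, the comparison logic it implemented can be sketched in a self-contained form — the two input files below are temporary stand-ins for the real repo files, not their actual contents:

```shell
# Sketch of the list-comparison logic from the deleted
# validate-csv-crd-list.sh. The inputs are stand-in temp files,
# not the real crds.yaml / metadata-common.yaml.
crds_file=$(mktemp); csv_file=$(mktemp)
printf 'name: cephclusters.ceph.rook.io\nname: cephblockpools.ceph.rook.io\n' > "$crds_file"
printf 'name: cephblockpools.ceph.rook.io\nname: cephclusters.ceph.rook.io\n' > "$csv_file"

# Extract every *.ceph.rook.io name from each file and sort the lists.
crd_list=$(grep -oE '[^ ]*\.ceph\.rook\.io' "$crds_file" | sort)
csv_list=$(grep -oE '[^ ]*\.ceph\.rook\.io' "$csv_file" | sort)

if [ "$crd_list" = "$csv_list" ]; then
  echo "CRD lists match"
else
  echo "CRD lists differ"
fi
rm -f "$crds_file" "$csv_file"
```

With the CSV metadata file gone, there is nothing left to compare against, which is why the check is removed from the `crds` Makefile target rather than kept around.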
diff --git a/deploy/olm/assemble/metadata-okd.yaml b/deploy/olm/assemble/metadata-okd.yaml deleted file mode 100644 index 2516a1ebf37c..000000000000 --- a/deploy/olm/assemble/metadata-okd.yaml +++ /dev/null @@ -1,27 +0,0 @@ -metadata: - annotations: - categories: Storage - description: Install and maintain Ceph Storage cluster - createdAt: 2019-05-13T18-08-04Z - support: https://slack.rook.io/ - certified: "false" - capabilities: Full Lifecycle -spec: - install: - spec: - clusterPermissions: - - rules: - - verbs: - - use - apiGroups: - - security.openshift.io - resources: - - securitycontextconstraints - resourceNames: - - privileged - serviceAccountName: rook-ceph-system - maintainers: - - name: The Rook Authors - email: cncf-rook-info@lists.cncf.io - provider: - name: The Rook Authors diff --git a/deploy/olm/assemble/rook-ceph.package.yaml b/deploy/olm/assemble/rook-ceph.package.yaml deleted file mode 100644 index d31d9d69573b..000000000000 --- a/deploy/olm/assemble/rook-ceph.package.yaml +++ /dev/null @@ -1,4 +0,0 @@ -packageName: rook-ceph -channels: -- name: beta - currentCSV: rook-ceph.v1.2.2 diff --git a/images/ceph/Dockerfile b/images/ceph/Dockerfile index 268926856e95..e1fdf1230b4f 100644 --- a/images/ceph/Dockerfile +++ b/images/ceph/Dockerfile @@ -32,7 +32,6 @@ RUN curl --fail -sSL -o /s5cmd.tar.gz https://github.com/peak/s5cmd/releases/dow COPY rook toolbox.sh set-ceph-debug-level /usr/local/bin/ COPY ceph-monitoring /etc/ceph-monitoring COPY rook-external /etc/rook-external/ -COPY ceph-csv-templates /etc/ceph-csv-templates RUN useradd rook -u 2016 # 2016 is the UID of the rook user and also the year of the first commit in the project USER 2016 ENTRYPOINT ["/usr/local/bin/rook"] diff --git a/images/ceph/Makefile b/images/ceph/Makefile index e93dab7d9fc8..041ef253b8c4 100755 --- a/images/ceph/Makefile +++ b/images/ceph/Makefile @@ -39,24 +39,6 @@ endif # ${OPERATOR_SDK_DL_URL}/operator-sdk_${OS}_${ARCH} # (see: 
https://sdk.operatorframework.io/docs/installation/) S5CMD_ARCH = Linux-64bit -ifeq ($(REAL_HOST_PLATFORM),linux_amd64) -OPERATOR_SDK_PLATFORM = x86_64-linux-gnu -INCLUDE_CSV_TEMPLATES = true -endif -ifeq ($(REAL_HOST_PLATFORM),linux_arm64) -OPERATOR_SDK_PLATFORM = aarch64-linux-gnu -INCLUDE_CSV_TEMPLATES = true -S5CMD_ARCH = Linux-arm64 -endif -ifeq ($(REAL_HOST_PLATFORM),darwin_amd64) -OPERATOR_SDK_PLATFORM = x86_64-apple-darwin -INCLUDE_CSV_TEMPLATES = true -endif -ifneq ($(INCLUDE_CSV_TEMPLATES),true) -$(info ) -$(info NOT INCLUDING OLM/CSV TEMPLATES!) -$(info ) -endif # s5cmd's version S5CMD_VERSION = 2.2.1 @@ -79,16 +61,6 @@ do.build: @mkdir -p $(BUILD_CONTEXT_DIR)/rook-external/test-data @cp $(MANIFESTS_DIR)/create-external-cluster-resources.* $(BUILD_CONTEXT_DIR)/rook-external/ @cp ../../tests/ceph-status-out $(BUILD_CONTEXT_DIR)/rook-external/test-data/ - -ifeq ($(INCLUDE_CSV_TEMPLATES),true) - @$(MAKE) csv - @cp -r ../../build/csv $(BUILD_CONTEXT_DIR)/ceph-csv-templates - @rm $(BUILD_CONTEXT_DIR)/ceph-csv-templates/csv-gen.sh - @$(MAKE) csv-clean - -else - mkdir $(BUILD_CONTEXT_DIR)/ceph-csv-templates -endif @cd $(BUILD_CONTEXT_DIR) && $(SED_IN_PLACE) 's|BASEIMAGE|$(BASEIMAGE)|g' Dockerfile @if [ -z "$(BUILD_CONTAINER_IMAGE)" ]; then\ $(DOCKERCMD) build $(BUILD_ARGS) \ @@ -119,18 +91,6 @@ $(OPERATOR_SDK): @chmod +x $(OPERATOR_SDK) @$(OPERATOR_SDK) version -csv: $(OPERATOR_SDK) $(YQv3) - @echo generate csv with latest operator-sdk - @mkdir -p ../../build/csv/ceph - @../../build/csv/csv-gen.sh - @# #adding 2>/dev/null since CI doesn't seems to be creating bundle.Dockerfile file - @rm bundle.Dockerfile 2> /dev/null || true - -csv-clean: ## Remove existing OLM files. 
- @rm -fr ../../build/csv/ceph/${go env GOARCH} - @rm -f ../../build/csv/rook-ceph.clusterserviceversion.yaml - @git restore $(MANIFESTS_DIR)/crds.yaml ../../deploy/charts/rook-ceph/templates/resources.yaml - # reading from a file and outputting to the same file can have undefined results, so use this intermediate IMAGE_TMP="/tmp/rook-ceph-image-list" list-image: ## Create a list of images for offline installation From 026d745f8a133f4726a4611199423a84cf0223f5 Mon Sep 17 00:00:00 2001 From: Travis Nielsen Date: Mon, 11 Mar 2024 11:06:59 -0600 Subject: [PATCH 37/65] core: set default ceph version to v18.2.2 With the release of Reef v18.2.2 we update the default recommended version of Ceph to fix the prometheus issues seen with v18.2.1 Signed-off-by: Travis Nielsen --- Documentation/CRDs/Cluster/ceph-cluster-crd.md | 12 ++++++------ Documentation/CRDs/Cluster/host-cluster.md | 6 +++--- Documentation/CRDs/Cluster/pvc-cluster.md | 6 +++--- Documentation/CRDs/Cluster/stretch-cluster.md | 2 +- Documentation/Upgrade/ceph-upgrade.md | 10 +++++----- deploy/charts/rook-ceph-cluster/values.yaml | 6 +++--- deploy/examples/cluster-external-management.yaml | 2 +- deploy/examples/cluster-on-local-pvc.yaml | 2 +- deploy/examples/cluster-on-pvc.yaml | 2 +- deploy/examples/cluster-stretched-aws.yaml | 2 +- deploy/examples/cluster-stretched.yaml | 2 +- deploy/examples/cluster.yaml | 4 ++-- deploy/examples/images.txt | 2 +- deploy/examples/toolbox.yaml | 2 +- design/ceph/ceph-cluster-cleanup.md | 2 +- images/ceph/Makefile | 4 ++-- pkg/operator/ceph/cluster/osd/deviceset_test.go | 4 ++-- tests/manifests/test-cluster-on-pvc-encrypted.yaml | 2 +- 18 files changed, 36 insertions(+), 36 deletions(-) diff --git a/Documentation/CRDs/Cluster/ceph-cluster-crd.md b/Documentation/CRDs/Cluster/ceph-cluster-crd.md index c6efd41fb3e1..7e82947873fb 100755 --- a/Documentation/CRDs/Cluster/ceph-cluster-crd.md +++ b/Documentation/CRDs/Cluster/ceph-cluster-crd.md @@ -26,7 +26,7 @@ Settings can be 
specified at the global level to apply to the cluster as a whole * `external`: * `enable`: if `true`, the cluster will not be managed by Rook but via an external entity. This mode is intended to connect to an existing cluster. In this case, Rook will only consume the external cluster. However, Rook will be able to deploy various daemons in Kubernetes such as object gateways, mds and nfs if an image is provided and will refuse otherwise. If this setting is enabled **all** the other options will be ignored except `cephVersion.image` and `dataDirHostPath`. See [external cluster configuration](external-cluster.md). If `cephVersion.image` is left blank, Rook will refuse the creation of extra CRs like object, file and nfs. * `cephVersion`: The version information for launching the ceph daemons. - * `image`: The image used for running the ceph daemons. For example, `quay.io/ceph/ceph:v18.2.1`. For more details read the [container images section](#ceph-container-images). + * `image`: The image used for running the ceph daemons. For example, `quay.io/ceph/ceph:v18.2.2`. For more details read the [container images section](#ceph-container-images). For the latest ceph images, see the [Ceph DockerHub](https://hub.docker.com/r/ceph/ceph/tags/). To ensure a consistent version of the image is running across all nodes in the cluster, it is recommended to use a very specific image version. Tags also exist that would give the latest version, but they are only recommended for test environments. For example, the tag `v17` will be updated each time a new Quincy build is released. 
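The tag scheme described above — fully pinned `vX.Y.Z` or `vX.Y.Z-YYYYMMDD` builds versus floating tags like `v17` — can be checked mechanically before applying a cluster spec. A minimal sketch; the helper name and tags are illustrative, not part of Rook's tooling:

```shell
# Illustrative helper (not from Rook): succeeds only for a fully
# pinned Ceph image tag, i.e. vX.Y.Z or vX.Y.Z-YYYYMMDD.
is_pinned_tag() {
  printf '%s\n' "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+(-[0-9]{8})?$'
}

is_pinned_tag "v18.2.2-20240311" && echo "v18.2.2-20240311 is pinned"
is_pinned_tag "v17" || echo "v17 is a floating tag"
```

Such a guard is useful in CI for production manifests, where the docs above recommend the fully pinned form so all nodes run an identical image.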
@@ -110,8 +110,8 @@ These are general purpose Ceph container with all necessary daemons and dependen | -------------------- | --------------------------------------------------------- | | vRELNUM | Latest release in this series (e.g., *v17* = Quincy) | | vRELNUM.Y | Latest stable release in this stable series (e.g., v17.2) | -| vRELNUM.Y.Z | A specific release (e.g., v18.2.1) | -| vRELNUM.Y.Z-YYYYMMDD | A specific build (e.g., v18.2.1-20240103) | +| vRELNUM.Y.Z | A specific release (e.g., v18.2.2) | +| vRELNUM.Y.Z-YYYYMMDD | A specific build (e.g., v18.2.2-20240311) | A specific will contain a specific release of Ceph as well as security fixes from the Operating System. @@ -414,7 +414,7 @@ metadata: namespace: rook-ceph spec: cephVersion: - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 dataDirHostPath: /var/lib/rook mon: count: 3 @@ -517,7 +517,7 @@ metadata: namespace: rook-ceph spec: cephVersion: - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 dataDirHostPath: /var/lib/rook mon: count: 3 @@ -645,7 +645,7 @@ kubectl -n rook-ceph get CephCluster -o yaml deviceClasses: - name: hdd version: - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 version: 16.2.6-0 conditions: - lastHeartbeatTime: "2021-03-02T21:22:11Z" diff --git a/Documentation/CRDs/Cluster/host-cluster.md b/Documentation/CRDs/Cluster/host-cluster.md index b765c9ba5dfd..600cf9a3e873 100644 --- a/Documentation/CRDs/Cluster/host-cluster.md +++ b/Documentation/CRDs/Cluster/host-cluster.md @@ -22,7 +22,7 @@ metadata: spec: cephVersion: # see the "Cluster Settings" section below for more details on which image of ceph to run - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 dataDirHostPath: /var/lib/rook mon: count: 3 @@ -49,7 +49,7 @@ metadata: namespace: rook-ceph spec: cephVersion: - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 dataDirHostPath: /var/lib/rook mon: count: 3 @@ -101,7 +101,7 @@ metadata: 
namespace: rook-ceph spec: cephVersion: - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 dataDirHostPath: /var/lib/rook mon: count: 3 diff --git a/Documentation/CRDs/Cluster/pvc-cluster.md b/Documentation/CRDs/Cluster/pvc-cluster.md index 25cb86d363d1..1883e6fb8b75 100644 --- a/Documentation/CRDs/Cluster/pvc-cluster.md +++ b/Documentation/CRDs/Cluster/pvc-cluster.md @@ -18,7 +18,7 @@ metadata: namespace: rook-ceph spec: cephVersion: - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 dataDirHostPath: /var/lib/rook mon: count: 3 @@ -72,7 +72,7 @@ spec: requests: storage: 10Gi cephVersion: - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 allowUnsupported: false dashboard: enabled: true @@ -128,7 +128,7 @@ metadata: namespace: rook-ceph spec: cephVersion: - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 dataDirHostPath: /var/lib/rook mon: count: 3 diff --git a/Documentation/CRDs/Cluster/stretch-cluster.md b/Documentation/CRDs/Cluster/stretch-cluster.md index 0e79c13161a4..f06477ec3325 100644 --- a/Documentation/CRDs/Cluster/stretch-cluster.md +++ b/Documentation/CRDs/Cluster/stretch-cluster.md @@ -34,7 +34,7 @@ spec: - name: b - name: c cephVersion: - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 allowUnsupported: true # Either storageClassDeviceSets or the storage section can be specified for creating OSDs. # This example uses all devices for simplicity. diff --git a/Documentation/Upgrade/ceph-upgrade.md b/Documentation/Upgrade/ceph-upgrade.md index 60f96fbcdc8a..fc8cdb860ced 100644 --- a/Documentation/Upgrade/ceph-upgrade.md +++ b/Documentation/Upgrade/ceph-upgrade.md @@ -50,7 +50,7 @@ Official Ceph container images can be found on [Quay](https://quay.io/repository These images are tagged in a few ways: -* The most explicit form of tags are full-ceph-version-and-build tags (e.g., `v18.2.1-20240103`). 
+* The most explicit form of tags are full-ceph-version-and-build tags (e.g., `v18.2.2-20240311`). These tags are recommended for production clusters, as there is no possibility for the cluster to be heterogeneous with respect to the version of Ceph running in containers. * Ceph major version tags (e.g., `v18`) are useful for development and test clusters so that the @@ -67,7 +67,7 @@ CephCluster CRD (`spec.cephVersion.image`). ```console ROOK_CLUSTER_NAMESPACE=rook-ceph -NEW_CEPH_IMAGE='quay.io/ceph/ceph:v18.2.1-20240103' +NEW_CEPH_IMAGE='quay.io/ceph/ceph:v18.2.2-20240311' kubectl -n $ROOK_CLUSTER_NAMESPACE patch CephCluster $ROOK_CLUSTER_NAMESPACE --type=merge -p "{\"spec\": {\"cephVersion\": {\"image\": \"$NEW_CEPH_IMAGE\"}}}" ``` @@ -79,7 +79,7 @@ employed by the new Rook operator release. Employing an outdated Ceph version wi in unexpected behaviour. ```console -kubectl -n rook-ceph set image deploy/rook-ceph-tools rook-ceph-tools=quay.io/ceph/ceph:v18.2.1-20240103 +kubectl -n rook-ceph set image deploy/rook-ceph-tools rook-ceph-tools=quay.io/ceph/ceph:v18.2.2-20240311 ``` #### **3. Wait for the pod updates** @@ -97,9 +97,9 @@ Confirm the upgrade is completed when the versions are all on the desired Ceph v kubectl -n $ROOK_CLUSTER_NAMESPACE get deployment -l rook_cluster=$ROOK_CLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{"ceph-version="}{.metadata.labels.ceph-version}{"\n"}{end}' | sort | uniq This cluster is not yet finished: ceph-version=v17.2.7-0 - ceph-version=v18.2.1-0 + ceph-version=v18.2.2-0 This cluster is finished: - ceph-version=v18.2.1-0 + ceph-version=v18.2.2-0 ``` #### **4. Verify cluster health** diff --git a/deploy/charts/rook-ceph-cluster/values.yaml b/deploy/charts/rook-ceph-cluster/values.yaml index ea139b662a57..ff52e617e047 100644 --- a/deploy/charts/rook-ceph-cluster/values.yaml +++ b/deploy/charts/rook-ceph-cluster/values.yaml @@ -25,7 +25,7 @@ toolbox: # -- Enable Ceph debugging pod deployment. 
See [toolbox](../Troubleshooting/ceph-toolbox.md) enabled: false # -- Toolbox image, defaults to the image used by the Ceph cluster - image: #quay.io/ceph/ceph:v18.2.1 + image: #quay.io/ceph/ceph:v18.2.2 # -- Toolbox tolerations tolerations: [] # -- Toolbox affinity @@ -92,9 +92,9 @@ cephClusterSpec: # v17 is Quincy, v18 is Reef. # RECOMMENDATION: In production, use a specific version tag instead of the general v18 flag, which pulls the latest release and could result in different # versions running within the cluster. See tags available at https://hub.docker.com/r/ceph/ceph/tags/. - # If you want to be more precise, you can always use a timestamp tag such as quay.io/ceph/ceph:v18.2.1-20240103 + # If you want to be more precise, you can always use a timestamp tag such as quay.io/ceph/ceph:v18.2.2-20240311 # This tag might not contain a new Ceph version, just security fixes from the underlying operating system, which will reduce vulnerabilities - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 # Whether to allow unsupported versions of Ceph. Currently `quincy`, and `reef` are supported. # Future versions such as `squid` (v19) would require this to be set to `true`. # Do not set to true in production. 
diff --git a/deploy/examples/cluster-external-management.yaml b/deploy/examples/cluster-external-management.yaml index 8201e3b9e7ad..e4ad4d2b3513 100644 --- a/deploy/examples/cluster-external-management.yaml +++ b/deploy/examples/cluster-external-management.yaml @@ -19,4 +19,4 @@ spec: dataDirHostPath: /var/lib/rook # providing an image is required, if you want to create other CRs (rgw, mds, nfs) cephVersion: - image: quay.io/ceph/ceph:v18.2.1 # Should match external cluster version + image: quay.io/ceph/ceph:v18.2.2 # Should match external cluster version diff --git a/deploy/examples/cluster-on-local-pvc.yaml b/deploy/examples/cluster-on-local-pvc.yaml index af63f90b413d..25960b985a81 100644 --- a/deploy/examples/cluster-on-local-pvc.yaml +++ b/deploy/examples/cluster-on-local-pvc.yaml @@ -173,7 +173,7 @@ spec: requests: storage: 10Gi cephVersion: - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 allowUnsupported: false skipUpgradeChecks: false continueUpgradeAfterChecksEvenIfNotHealthy: false diff --git a/deploy/examples/cluster-on-pvc.yaml b/deploy/examples/cluster-on-pvc.yaml index f425f3360a89..ef3c178e6af3 100644 --- a/deploy/examples/cluster-on-pvc.yaml +++ b/deploy/examples/cluster-on-pvc.yaml @@ -33,7 +33,7 @@ spec: requests: storage: 10Gi cephVersion: - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 allowUnsupported: false skipUpgradeChecks: false continueUpgradeAfterChecksEvenIfNotHealthy: false diff --git a/deploy/examples/cluster-stretched-aws.yaml b/deploy/examples/cluster-stretched-aws.yaml index 20a3a1f9fb4a..3a6c773a3d5b 100644 --- a/deploy/examples/cluster-stretched-aws.yaml +++ b/deploy/examples/cluster-stretched-aws.yaml @@ -44,7 +44,7 @@ spec: mgr: count: 2 cephVersion: - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 allowUnsupported: true skipUpgradeChecks: false continueUpgradeAfterChecksEvenIfNotHealthy: false diff --git a/deploy/examples/cluster-stretched.yaml 
b/deploy/examples/cluster-stretched.yaml index adb19a347c3f..220656a1fe0b 100644 --- a/deploy/examples/cluster-stretched.yaml +++ b/deploy/examples/cluster-stretched.yaml @@ -38,7 +38,7 @@ spec: mgr: count: 2 cephVersion: - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 allowUnsupported: true skipUpgradeChecks: false continueUpgradeAfterChecksEvenIfNotHealthy: false diff --git a/deploy/examples/cluster.yaml b/deploy/examples/cluster.yaml index 57fc25790260..b753afbdad9a 100644 --- a/deploy/examples/cluster.yaml +++ b/deploy/examples/cluster.yaml @@ -19,9 +19,9 @@ spec: # v17 is Quincy, v18 is Reef. # RECOMMENDATION: In production, use a specific version tag instead of the general v17 flag, which pulls the latest release and could result in different # versions running within the cluster. See tags available at https://hub.docker.com/r/ceph/ceph/tags/. - # If you want to be more precise, you can always use a timestamp tag such as quay.io/ceph/ceph:v18.2.1-20240103 + # If you want to be more precise, you can always use a timestamp tag such as quay.io/ceph/ceph:v18.2.2-20240311 # This tag might not contain a new Ceph version, just security fixes from the underlying operating system, which will reduce vulnerabilities - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 # Whether to allow unsupported versions of Ceph. Currently `quincy` and `reef` are supported. # Future versions such as `squid` (v19) would require this to be set to `true`. # Do not set to true in production. 
diff --git a/deploy/examples/images.txt b/deploy/examples/images.txt index 03353b01dcb8..4d4ae0f4913d 100644 --- a/deploy/examples/images.txt +++ b/deploy/examples/images.txt @@ -1,5 +1,5 @@ gcr.io/k8s-staging-sig-storage/objectstorage-sidecar/objectstorage-sidecar:v20230130-v0.1.0-24-gc0cf995 - quay.io/ceph/ceph:v18.2.1 + quay.io/ceph/ceph:v18.2.2 quay.io/ceph/cosi:v0.1.1 quay.io/cephcsi/cephcsi:v3.10.2 quay.io/csiaddons/k8s-sidecar:v0.8.0 diff --git a/deploy/examples/toolbox.yaml b/deploy/examples/toolbox.yaml index d90bb52c94fd..adcae195cf25 100644 --- a/deploy/examples/toolbox.yaml +++ b/deploy/examples/toolbox.yaml @@ -18,7 +18,7 @@ spec: dnsPolicy: ClusterFirstWithHostNet containers: - name: rook-ceph-tools - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 command: - /bin/bash - -c diff --git a/design/ceph/ceph-cluster-cleanup.md b/design/ceph/ceph-cluster-cleanup.md index 6e6047edc7a8..88901fb4f5a8 100644 --- a/design/ceph/ceph-cluster-cleanup.md +++ b/design/ceph/ceph-cluster-cleanup.md @@ -34,7 +34,7 @@ metadata: namespace: rook-ceph spec: cephVersion: - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 dataDirHostPath: /var/lib/rook mon: count: 3 diff --git a/images/ceph/Makefile b/images/ceph/Makefile index e93dab7d9fc8..88f10dd95981 100755 --- a/images/ceph/Makefile +++ b/images/ceph/Makefile @@ -18,9 +18,9 @@ include ../image.mk # Image Build Options ifeq ($(GOARCH),amd64) -CEPH_VERSION ?= v18.2.1-20231215 +CEPH_VERSION ?= v18.2.2-20240311 else -CEPH_VERSION ?= v18.2.1-20231215 +CEPH_VERSION ?= v18.2.2-20240311 endif REGISTRY_NAME = quay.io BASEIMAGE = $(REGISTRY_NAME)/ceph/ceph-$(GOARCH):$(CEPH_VERSION) diff --git a/pkg/operator/ceph/cluster/osd/deviceset_test.go b/pkg/operator/ceph/cluster/osd/deviceset_test.go index 0dd2f4cf6d9e..cbffcbdc4ece 100644 --- a/pkg/operator/ceph/cluster/osd/deviceset_test.go +++ b/pkg/operator/ceph/cluster/osd/deviceset_test.go @@ -294,8 +294,8 @@ func TestPVCName(t *testing.T) { } 
func TestCreateValidImageVersionLabel(t *testing.T) { - image := "ceph/ceph:v18.2.1" - assert.Equal(t, "ceph_ceph_v18.2.1", createValidImageVersionLabel(image)) + image := "ceph/ceph:v18.2.2" + assert.Equal(t, "ceph_ceph_v18.2.2", createValidImageVersionLabel(image)) image = "rook/ceph:master" assert.Equal(t, "rook_ceph_master", createValidImageVersionLabel(image)) image = ".invalid_label" diff --git a/tests/manifests/test-cluster-on-pvc-encrypted.yaml b/tests/manifests/test-cluster-on-pvc-encrypted.yaml index b6637e4d1836..23da35bcf180 100644 --- a/tests/manifests/test-cluster-on-pvc-encrypted.yaml +++ b/tests/manifests/test-cluster-on-pvc-encrypted.yaml @@ -14,7 +14,7 @@ spec: requests: storage: 5Gi cephVersion: - image: quay.io/ceph/ceph:v18.2.1 + image: quay.io/ceph/ceph:v18.2.2 dashboard: enabled: false network: From fdacfd51c565c28438e3093e2d4ffe15b725cf77 Mon Sep 17 00:00:00 2001 From: Travis Nielsen Date: Tue, 27 Feb 2024 16:55:06 -0700 Subject: [PATCH 38/65] object: create an object store based on shared pools Until now, an object store would create all the necessary metadata pools and the data pool that were exclusively for its own object store. When isolation between object stores is necessary, this would cause many pools and PGs to be created in the cluster, which was not manageable. Now one set of pools can be created to be shared by any number of object stores. The metadata and data between each object store is isolated by RADOS namespaces, which by design will keep the data safe for multi-tenancy. 
Signed-off-by: Travis Nielsen --- .github/workflows/canary-integration-test.yml | 16 +- Documentation/CRDs/specification.md | 108 +++++++ .../Object-Storage-RGW/object-storage.md | 129 +++++++- PendingReleaseNotes.md | 4 +- .../charts/rook-ceph/templates/resources.yaml | 46 +++ deploy/examples/crds.yaml | 46 +++ deploy/examples/object-a.yaml | 78 +++++ deploy/examples/object-b.yaml | 78 +++++ deploy/examples/object-bucket-claim-a.yaml | 7 + deploy/examples/object-shared-pools-test.yaml | 48 +++ deploy/examples/object-shared-pools.yaml | 49 +++ deploy/examples/storageclass-bucket-a.yaml | 9 + design/ceph/object/store.md | 4 +- pkg/apis/ceph.rook.io/v1/types.go | 25 ++ pkg/operator/ceph/object/controller.go | 81 ++--- pkg/operator/ceph/object/objectstore.go | 257 +++++++++++++++- pkg/operator/ceph/object/objectstore_test.go | 282 +++++++++++++++++- pkg/operator/ceph/object/zone/controller.go | 31 +- tests/scripts/github-action-helper.sh | 4 +- tests/scripts/validate_cluster.sh | 11 +- 20 files changed, 1227 insertions(+), 86 deletions(-) create mode 100644 deploy/examples/object-a.yaml create mode 100644 deploy/examples/object-b.yaml create mode 100644 deploy/examples/object-bucket-claim-a.yaml create mode 100644 deploy/examples/object-shared-pools-test.yaml create mode 100644 deploy/examples/object-shared-pools.yaml create mode 100644 deploy/examples/storageclass-bucket-a.yaml diff --git a/.github/workflows/canary-integration-test.yml b/.github/workflows/canary-integration-test.yml index 54e6aeef3d44..d11bcb6a4db9 100644 --- a/.github/workflows/canary-integration-test.yml +++ b/.github/workflows/canary-integration-test.yml @@ -190,7 +190,7 @@ jobs: - name: validate-rgw-endpoint run: | - rgw_endpoint=$(kubectl get service -n rook-ceph | awk '/rgw/ {print $3":80"}') + rgw_endpoint=$(kubectl get service -n rook-ceph -l rgw=store-a | awk '/rgw/ {print $3":80"}') toolbox=$(kubectl get pod -l app=rook-ceph-tools -n rook-ceph -o jsonpath='{.items[*].metadata.name}') # pass 
the valid rgw-endpoint of same ceph cluster timeout 15 sh -c "until kubectl -n rook-ceph exec $toolbox -- python3 /etc/ceph/create-external-cluster-resources.py --rbd-data-pool-name replicapool --rgw-endpoint $rgw_endpoint 2> output.txt; do sleep 1 && echo 'waiting for the rgw endpoint to be validated'; done" @@ -264,7 +264,7 @@ jobs: with: name: ${{ github.job }}-${{ matrix.ceph-image }} - raw-disk: + raw-disk-with-object: runs-on: ubuntu-20.04 if: "!contains(github.event.pull_request.labels.*.name, 'skip-ci')" strategy: @@ -310,7 +310,13 @@ jobs: run: tests/scripts/github-action-helper.sh wait_for_prepare_pod 2 - name: wait for ceph to be ready - run: tests/scripts/github-action-helper.sh wait_for_ceph_to_be_ready osd 2 + run: | + tests/scripts/github-action-helper.sh wait_for_ceph_to_be_ready osd 2 + + - name: wait for object stores to be ready + run: | + tests/scripts/validate_cluster.sh rgw store-a + tests/scripts/validate_cluster.sh rgw store-b - name: test toolbox-operator-image pod run: | @@ -1018,7 +1024,7 @@ jobs: - name: wait for ceph to be ready run: | tests/scripts/github-action-helper.sh wait_for_ceph_to_be_ready osd 2 - tests/scripts/validate_cluster.sh rgw + tests/scripts/validate_cluster.sh rgw my-store kubectl -n rook-ceph get pods kubectl -n rook-ceph get secrets @@ -1039,7 +1045,7 @@ jobs: echo "wait for rgw pod to be deleted" kubectl wait --for=delete pod -l app=rook-ceph-rgw -n rook-ceph --timeout=100s kubectl create -f tests/manifests/test-object.yaml - tests/scripts/validate_cluster.sh rgw + tests/scripts/validate_cluster.sh rgw my-store tests/scripts/deploy-validate-vault.sh validate_rgw - name: collect common logs diff --git a/Documentation/CRDs/specification.md b/Documentation/CRDs/specification.md index f3795b8225c8..62ba4f53e5f0 100644 --- a/Documentation/CRDs/specification.md +++ b/Documentation/CRDs/specification.md @@ -1872,6 +1872,20 @@ PoolSpec +sharedPools
+sharedPools (ObjectSharedPoolsSpec, optional): The pool information when configuring RADOS namespaces in existing pools.
 preservePoolsOnDelete (bool)
@@ -2219,6 +2233,20 @@ PoolSpec
+sharedPools (ObjectSharedPoolsSpec, optional): The pool information when configuring RADOS namespaces in existing pools.
 customEndpoints ([]string)
@@ -8952,6 +8980,58 @@ PullSpec

+ObjectSharedPoolsSpec
+
+(Appears on: ObjectStoreSpec, ObjectZoneSpec)
+
+ObjectSharedPoolsSpec represents object store pool info when configuring RADOS namespaces in existing pools.
+
+| Field | Description |
+| ----- | ----------- |
+| metadataPoolName (string) | The metadata pool used for creating RADOS namespaces in the object store |
+| dataPoolName (string) | The data pool used for creating RADOS namespaces in the object store |
+| preserveRadosNamespaceDataOnDelete (bool, optional) | Whether the RADOS namespaces should be preserved on deletion of the object store |

 ObjectStoreHostingSpec

@@ -9078,6 +9158,20 @@ PoolSpec
+sharedPools (ObjectSharedPoolsSpec, optional): The pool information when configuring RADOS namespaces in existing pools.
 preservePoolsOnDelete (bool)
@@ -9748,6 +9842,20 @@ PoolSpec
+sharedPools (ObjectSharedPoolsSpec, optional): The pool information when configuring RADOS namespaces in existing pools.
 customEndpoints
[]string diff --git a/Documentation/Storage-Configuration/Object-Storage-RGW/object-storage.md b/Documentation/Storage-Configuration/Object-Storage-RGW/object-storage.md index 41b120fcb310..409913909417 100644 --- a/Documentation/Storage-Configuration/Object-Storage-RGW/object-storage.md +++ b/Documentation/Storage-Configuration/Object-Storage-RGW/object-storage.md @@ -10,17 +10,21 @@ This guide assumes a Rook cluster as explained in the [Quickstart](../../Getting ## Configure an Object Store -Rook has the ability to either deploy an object store in Kubernetes or to connect to an external RGW service. -Most commonly, the object store will be configured locally by Rook. -Alternatively, if you have an existing Ceph cluster with Rados Gateways, see the -[external section](#connect-to-an-external-object-store) to consume it from Rook. +Rook can configure the Ceph Object Store for several different scenarios. See each linked section for the configuration details. +1. Create a [local object store](#create-a-local-object-store) with dedicated Ceph pools. This option is recommended if a single object store is required, and is the simplest to get started. +2. Create [one or more object stores with shared Ceph pools](#create-local-object-stores-with-shared-pools). This option is recommended when multiple object stores are required. +3. Connect to an [RGW service in an external Ceph cluster](#connect-to-an-external-object-store), rather than create a local object store. +4. Configure [RGW Multisite](#object-multisite) to synchronize buckets between object stores in different clusters. + +!!! note + Updating the configuration of an object store between these types is not supported. ### Create a Local Object Store The below sample will create a `CephObjectStore` that starts the RGW service in the cluster with an S3 API. !!! note - This sample requires *at least 3 bluestore OSDs*, with each OSD located on a *different node*. 
+ This sample requires *at least 3 OSDs*, with each OSD located on a *different node*. The OSDs must be located on different nodes, because the [`failureDomain`](../../CRDs/Block-Storage/ceph-block-pool-crd.md#spec) is set to `host` and the `erasureCoded` chunk settings require at least 3 different OSDs (2 `dataChunks` + 1 `codingChunks`). @@ -39,6 +43,7 @@ spec: size: 3 dataPool: failureDomain: host + # For production it is recommended to use more chunks, such as 4+2 or 8+4 erasureCoded: dataChunks: 2 codingChunks: 1 @@ -64,6 +69,120 @@ To confirm the object store is configured, wait for the RGW pod(s) to start: kubectl -n rook-ceph get pod -l app=rook-ceph-rgw ``` +To consume the object store, continue below in the section to [Create a bucket](#create-a-bucket). + +### Create Local Object Store(s) with Shared Pools + +The below sample will create one or more object stores. Shared Ceph pools will be created, which reduces the overhead of additional Ceph pools for each additional object store. + +Data isolation is enforced between the object stores with the use of Ceph RADOS namespaces. The separate RADOS namespaces do not allow access of the data across object stores. + +!!! note + This sample requires *at least 3 OSDs*, with each OSD located on a *different node*. + +The OSDs must be located on different nodes, because the [`failureDomain`](../../CRDs/Block-Storage/ceph-block-pool-crd.md#spec) is set to `host` and the `erasureCoded` chunk settings require at least 3 different OSDs (2 `dataChunks` + 1 `codingChunks`). + +#### Shared Pools + +Create the shared pools that will be used by each of the object stores. + +!!! note + If object stores have been previously created, the first pool below (`.rgw.root`) + does not need to be defined again as it would have already been created + with an existing object store. There is only one `.rgw.root` pool existing + to store metadata for all object stores. 
+ +```yaml +apiVersion: ceph.rook.io/v1 +kind: CephBlockPool +metadata: + name: rgw-root + namespace: rook-ceph # namespace:cluster +spec: + name: .rgw.root + failureDomain: host + replicated: + size: 3 + requireSafeReplicaSize: false + parameters: + pg_num: "8" + application: rgw +--- +apiVersion: ceph.rook.io/v1 +kind: CephBlockPool +metadata: + name: rgw-meta-pool + namespace: rook-ceph # namespace:cluster +spec: + failureDomain: host + replicated: + size: 3 + requireSafeReplicaSize: false + parameters: + pg_num: "8" + application: rgw +--- +apiVersion: ceph.rook.io/v1 +kind: CephBlockPool +metadata: + name: rgw-data-pool + namespace: rook-ceph # namespace:cluster +spec: + failureDomain: osd + erasureCoded: + # For production it is recommended to use more chunks, such as 4+2 or 8+4 + dataChunks: 2 + codingChunks: 1 + application: rgw +``` + +Create the shared pools: + +```console +kubectl create -f object-shared-pools.yaml +``` + +#### Create Each Object Store + +After the pools have been created above, create each object store to consume the shared pools. + +```yaml +apiVersion: ceph.rook.io/v1 +kind: CephObjectStore +metadata: + name: store-a + namespace: rook-ceph # namespace:cluster +spec: + sharedPools: + metadataPoolName: rgw-meta-pool + dataPoolName: rgw-data-pool + preserveRadosNamespaceDataOnDelete: true + gateway: + sslCertificateRef: + port: 80 + instances: 1 +``` + +Create the object store: + +```console +kubectl create -f object-a.yaml +``` + +To confirm the object store is configured, wait for the RGW pod(s) to start: + +```console +kubectl -n rook-ceph get pod -l rgw=store-a +``` + +Additional object stores can be created based on the same shared pools by simply changing the +`name` of the CephObjectStore. In the example manifests folder, two object store examples are +provided: `object-a.yaml` and `object-b.yaml`. + +To consume the object store, continue below in the section to [Create a bucket](#create-a-bucket). 
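Buckets in a specific store are provisioned through a StorageClass that names the store (a sketch; the patch ships a complete version as `storageclass-bucket-a.yaml`, and the provisioner prefix assumes the operator runs in the default `rook-ceph` namespace):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-bucket-a
# The provisioner name is prefixed with the operator namespace
provisioner: rook-ceph.ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  # Direct bucket provisioning at store-a; a second StorageClass
  # pointing at store-b keeps bucket traffic isolated per store.
  objectStoreName: store-a
  objectStoreNamespace: rook-ceph
```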
+Modify the default example object store name from `my-store` to the alternate name of the object store +such as `store-a` in this example. + ### Connect to an External Object Store Rook can connect to existing RGW gateways to work in conjunction with the external mode of the `CephCluster` CRD. First, create a `rgw-admin-ops-user` user in the Ceph cluster with the necessary caps: diff --git a/PendingReleaseNotes.md b/PendingReleaseNotes.md index 8379ded52360..838182e0c9f7 100644 --- a/PendingReleaseNotes.md +++ b/PendingReleaseNotes.md @@ -4,13 +4,15 @@ - The removal of `CSI_ENABLE_READ_AFFINITY` option and its replacement with per-cluster read affinity setting in cephCluster CR (CSIDriverOptions section) in [PR](https://github.com/rook/rook/pull/13665) -- Allow setting the Ceph `application` on a pool - updating `netNamespaceFilePath` for all clusterIDs in rook-ceph-csi-config configMap in [PR](https://github.com/rook/rook/pull/13613) - Issue: The netNamespaceFilePath isn't updated in the CSI config map for all the clusterIDs when `CSI_ENABLE_HOST_NETWORK` is set to false in `operator.yaml` - Impact: This results in the unintended network configurations, with pods using the host networking instead of pod networking. + ## Features - Kubernetes versions **v1.25** through **v1.29** are supported. - Ceph daemon pods using the `default` service account now use a new `rook-ceph-default` service account. +- Allow setting the Ceph `application` on a pool +- Create object stores with shared metadata and data pools. Isolation between object stores is enabled via RADOS namespaces. - The feature support for VolumeSnapshotGroup has been added to the RBD and CephFS CSI driver. - Support for virtual style hosting for s3 buckets in the CephObjectStore. 
diff --git a/deploy/charts/rook-ceph/templates/resources.yaml b/deploy/charts/rook-ceph/templates/resources.yaml index 77336f3d1475..a98c6d201a16 100644 --- a/deploy/charts/rook-ceph/templates/resources.yaml +++ b/deploy/charts/rook-ceph/templates/resources.yaml @@ -10105,6 +10105,29 @@ spec: type: string type: object type: object + sharedPools: + description: The pool information when configuring RADOS namespaces in existing pools. + nullable: true + properties: + dataPoolName: + description: The data pool used for creating RADOS namespaces in the object store + type: string + x-kubernetes-validations: + - message: object store shared data pool is immutable + rule: self == oldSelf + metadataPoolName: + description: The metadata pool used for creating RADOS namespaces in the object store + type: string + x-kubernetes-validations: + - message: object store shared metadata pool is immutable + rule: self == oldSelf + preserveRadosNamespaceDataOnDelete: + description: Whether the RADOS namespaces should be preserved on deletion of the object store + type: boolean + required: + - dataPoolName + - metadataPoolName + type: object zone: description: The multisite info nullable: true @@ -10870,6 +10893,29 @@ spec: default: true description: Preserve pools on object zone deletion type: boolean + sharedPools: + description: The pool information when configuring RADOS namespaces in existing pools. 
+ nullable: true + properties: + dataPoolName: + description: The data pool used for creating RADOS namespaces in the object store + type: string + x-kubernetes-validations: + - message: object store shared data pool is immutable + rule: self == oldSelf + metadataPoolName: + description: The metadata pool used for creating RADOS namespaces in the object store + type: string + x-kubernetes-validations: + - message: object store shared metadata pool is immutable + rule: self == oldSelf + preserveRadosNamespaceDataOnDelete: + description: Whether the RADOS namespaces should be preserved on deletion of the object store + type: boolean + required: + - dataPoolName + - metadataPoolName + type: object zoneGroup: description: The display name for the ceph users type: string diff --git a/deploy/examples/crds.yaml b/deploy/examples/crds.yaml index 6889c429ed8c..fadd17220540 100644 --- a/deploy/examples/crds.yaml +++ b/deploy/examples/crds.yaml @@ -10096,6 +10096,29 @@ spec: type: string type: object type: object + sharedPools: + description: The pool information when configuring RADOS namespaces in existing pools. 
+ nullable: true + properties: + dataPoolName: + description: The data pool used for creating RADOS namespaces in the object store + type: string + x-kubernetes-validations: + - message: object store shared data pool is immutable + rule: self == oldSelf + metadataPoolName: + description: The metadata pool used for creating RADOS namespaces in the object store + type: string + x-kubernetes-validations: + - message: object store shared metadata pool is immutable + rule: self == oldSelf + preserveRadosNamespaceDataOnDelete: + description: Whether the RADOS namespaces should be preserved on deletion of the object store + type: boolean + required: + - dataPoolName + - metadataPoolName + type: object zone: description: The multisite info nullable: true @@ -10858,6 +10881,29 @@ spec: default: true description: Preserve pools on object zone deletion type: boolean + sharedPools: + description: The pool information when configuring RADOS namespaces in existing pools. + nullable: true + properties: + dataPoolName: + description: The data pool used for creating RADOS namespaces in the object store + type: string + x-kubernetes-validations: + - message: object store shared data pool is immutable + rule: self == oldSelf + metadataPoolName: + description: The metadata pool used for creating RADOS namespaces in the object store + type: string + x-kubernetes-validations: + - message: object store shared metadata pool is immutable + rule: self == oldSelf + preserveRadosNamespaceDataOnDelete: + description: Whether the RADOS namespaces should be preserved on deletion of the object store + type: boolean + required: + - dataPoolName + - metadataPoolName + type: object zoneGroup: description: The display name for the ceph users type: string diff --git a/deploy/examples/object-a.yaml b/deploy/examples/object-a.yaml new file mode 100644 index 000000000000..aea27bec8192 --- /dev/null +++ b/deploy/examples/object-a.yaml @@ -0,0 +1,78 @@ 
+################################################################################################################# +# Create an object store with settings for shared pools in a production environment. A minimum of 3 hosts with +# OSDs are required in this example. This example shows two object stores being created with the same +# shared metadata and data pools. The pool sharing will utilize RADOS namespaces to keep the object store +# data independent, while avoiding the growth of PGs in the cluster. +# kubectl create -f object-shared-pools.yaml +# kubectl create -f object-a.yaml -f object-b.yaml +################################################################################################################# +apiVersion: ceph.rook.io/v1 +kind: CephObjectStore +metadata: + name: store-a + namespace: rook-ceph # namespace:cluster +spec: + # Shared pools must be defined separately from the object store. + # For this example, the pools are defined in object-shared-pools.yaml. + # Multiple object stores can be created to share these pools. + sharedPools: + metadataPoolName: rgw-meta-pool + dataPoolName: rgw-data-pool + preserveRadosNamespaceDataOnDelete: true + # The gateway service configuration + gateway: + # A reference to the secret in the rook namespace where the ssl certificate is stored + # sslCertificateRef: + # A reference to the secret in the rook namespace where the ca bundle is stored + # caBundleRef: + # The port that RGW pods will listen on (http) + port: 80 + # The port that RGW pods will listen on (https). An ssl certificate is required. + # securePort: 443 + # The number of pods in the rgw deployment + instances: 1 + # The affinity rules to apply to the rgw deployment. 
+ placement: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchExpressions: + - key: app + operator: In + values: + - rook-ceph-rgw + # topologyKey: */zone can be used to spread RGW across different AZ + # Use <topologyKey: failure-domain.beta.kubernetes.io/zone> in k8s cluster if your cluster is v1.16 or lower + # Use <topologyKey: topology.kubernetes.io/zone> in k8s cluster if your cluster is v1.17 or higher + topologyKey: kubernetes.io/hostname + # A key/value list of annotations + # nodeAffinity: + # requiredDuringSchedulingIgnoredDuringExecution: + # nodeSelectorTerms: + # - matchExpressions: + # - key: role + # operator: In + # values: + # - rgw-node + # topologySpreadConstraints: + # tolerations: + # - key: rgw-node + # operator: Exists + # podAffinity: + # podAntiAffinity: + # A key/value list of annotations + annotations: + # key: value + # A key/value list of labels + labels: + # key: value + resources: + # The requests and limits set here allow the object store gateway Pod(s) to use half of one CPU core and 1 gigabyte of memory + # limits: + # memory: "1024Mi" + # requests: + # cpu: "500m" + # memory: "1024Mi" + priorityClassName: system-cluster-critical diff --git a/deploy/examples/object-b.yaml b/deploy/examples/object-b.yaml new file mode 100644 index 000000000000..be0d83a14bca --- /dev/null +++ b/deploy/examples/object-b.yaml @@ -0,0 +1,78 @@ +################################################################################################################# +# Create an object store with settings for shared pools in a production environment. A minimum of 3 hosts with +# OSDs are required in this example. This example shows two object stores being created with the same +# shared metadata and data pools. The pool sharing will utilize RADOS namespaces to keep the object store +# data independent, while avoiding the growth of PGs in the cluster.
+# kubectl create -f object-shared-pools.yaml +# kubectl create -f object-a.yaml -f object-b.yaml +################################################################################################################# +apiVersion: ceph.rook.io/v1 +kind: CephObjectStore +metadata: + name: store-b + namespace: rook-ceph # namespace:cluster +spec: + # Shared pools must be defined separately from the object store. + # For this example, the pools are defined in object-shared-pools.yaml. + # Multiple object stores can be created to share these pools. + sharedPools: + metadataPoolName: rgw-meta-pool + dataPoolName: rgw-data-pool + preserveRadosNamespaceDataOnDelete: true + # The gateway service configuration + gateway: + # A reference to the secret in the rook namespace where the ssl certificate is stored + # sslCertificateRef: + # A reference to the secret in the rook namespace where the ca bundle is stored + # caBundleRef: + # The port that RGW pods will listen on (http) + port: 80 + # The port that RGW pods will listen on (https). An ssl certificate is required. + # securePort: 443 + # The number of pods in the rgw deployment + instances: 1 + # The affinity rules to apply to the rgw deployment. 
+ placement: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchExpressions: + - key: app + operator: In + values: + - rook-ceph-rgw + # topologyKey: */zone can be used to spread RGW across different AZ + # Use in k8s cluster if your cluster is v1.16 or lower + # Use in k8s cluster is v1.17 or upper + topologyKey: kubernetes.io/hostname + # A key/value list of annotations + # nodeAffinity: + # requiredDuringSchedulingIgnoredDuringExecution: + # nodeSelectorTerms: + # - matchExpressions: + # - key: role + # operator: In + # values: + # - rgw-node + # topologySpreadConstraints: + # tolerations: + # - key: rgw-node + # operator: Exists + # podAffinity: + # podAntiAffinity: + # A key/value list of annotations + annotations: + # key: value + # A key/value list of labels + labels: + # key: value + resources: + # The requests and limits set here, allow the object store gateway Pod(s) to use half of one CPU core and 1 gigabyte of memory + # limits: + # memory: "1024Mi" + # requests: + # cpu: "500m" + # memory: "1024Mi" + priorityClassName: system-cluster-critical diff --git a/deploy/examples/object-bucket-claim-a.yaml b/deploy/examples/object-bucket-claim-a.yaml new file mode 100644 index 000000000000..59f0a95d34eb --- /dev/null +++ b/deploy/examples/object-bucket-claim-a.yaml @@ -0,0 +1,7 @@ +apiVersion: objectbucket.io/v1alpha1 +kind: ObjectBucketClaim +metadata: + name: ceph-bucket-a +spec: + generateBucketName: ceph-bkt + storageClassName: rook-ceph-bucket-a diff --git a/deploy/examples/object-shared-pools-test.yaml b/deploy/examples/object-shared-pools-test.yaml new file mode 100644 index 000000000000..4e802d8921a2 --- /dev/null +++ b/deploy/examples/object-shared-pools-test.yaml @@ -0,0 +1,48 @@ +################################################################################################################# +# Create the pools that can be shared by multiple object stores. 
A single OSD is required +# in this example. This example shows two object stores being created with the same +# shared metadata and data pools. The pool sharing will utilize RADOS namespaces to keep the object store +# data independent, while avoiding the growth of PGs in the cluster. +# kubectl create -f object-shared-pools.yaml +# kubectl create -f object-a.yaml -f object-b.yaml +################################################################################################################# +apiVersion: ceph.rook.io/v1 +kind: CephBlockPool +metadata: + name: rgw-root + namespace: rook-ceph # namespace:cluster +spec: + name: .rgw.root + failureDomain: host + replicated: + size: 1 + requireSafeReplicaSize: false + parameters: + pg_num: "8" + application: rgw +--- +apiVersion: ceph.rook.io/v1 +kind: CephBlockPool +metadata: + name: rgw-meta-pool + namespace: rook-ceph # namespace:cluster +spec: + failureDomain: host + replicated: + size: 1 + requireSafeReplicaSize: false + parameters: + pg_num: "8" + application: rgw +--- +apiVersion: ceph.rook.io/v1 +kind: CephBlockPool +metadata: + name: rgw-data-pool + namespace: rook-ceph # namespace:cluster +spec: + failureDomain: osd + replicated: + size: 1 + requireSafeReplicaSize: false + application: rgw diff --git a/deploy/examples/object-shared-pools.yaml b/deploy/examples/object-shared-pools.yaml new file mode 100644 index 000000000000..0ca505035a83 --- /dev/null +++ b/deploy/examples/object-shared-pools.yaml @@ -0,0 +1,49 @@ +################################################################################################################# +# Create the pools that can be shared by multiple object stores. A minimum of 3 hosts with +# OSDs are required in this example. This example shows two object stores being created with the same +# shared metadata and data pools. The pool sharing will utilize RADOS namespaces to keep the object store +# data independent, while avoiding the growth of PGs in the cluster. 
+# kubectl create -f object-shared-pools.yaml +# kubectl create -f object-a.yaml -f object-b.yaml +################################################################################################################# +apiVersion: ceph.rook.io/v1 +kind: CephBlockPool +metadata: + name: rgw-root + namespace: rook-ceph # namespace:cluster +spec: + name: .rgw.root + failureDomain: host + replicated: + size: 3 + requireSafeReplicaSize: false + parameters: + pg_num: "8" + application: rgw +--- +apiVersion: ceph.rook.io/v1 +kind: CephBlockPool +metadata: + name: rgw-meta-pool + namespace: rook-ceph # namespace:cluster +spec: + failureDomain: host + replicated: + size: 3 + requireSafeReplicaSize: false + parameters: + pg_num: "8" + application: rgw +--- +apiVersion: ceph.rook.io/v1 +kind: CephBlockPool +metadata: + name: rgw-data-pool + namespace: rook-ceph # namespace:cluster +spec: + failureDomain: osd + erasureCoded: + # For production it is recommended to use more chunks, such as 4+2 or 8+4 + dataChunks: 2 + codingChunks: 1 + application: rgw diff --git a/deploy/examples/storageclass-bucket-a.yaml b/deploy/examples/storageclass-bucket-a.yaml new file mode 100644 index 000000000000..8654da65030b --- /dev/null +++ b/deploy/examples/storageclass-bucket-a.yaml @@ -0,0 +1,9 @@ +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: rook-ceph-bucket-a +provisioner: rook-ceph.ceph.rook.io/bucket # driver:namespace:cluster +reclaimPolicy: Delete +parameters: + objectStoreName: store-a + objectStoreNamespace: rook-ceph # namespace:cluster diff --git a/design/ceph/object/store.md b/design/ceph/object/store.md index 937f5a236166..fa70f041dbf3 100644 --- a/design/ceph/object/store.md +++ b/design/ceph/object/store.md @@ -82,11 +82,11 @@ spec: #### Pools shared by multiple CephObjectStore -If user want to use existing pools for metadata and data, the pools must be created before the object store is created. This will be useful if multiple objectstore can share same pools. 
The detail of pools need to shared in `radosNamespaces` settings in object-store CRD. Now the object stores can consume same pool isolated with different namespaces. Usually RGW server itself create different [namespaces](https://docs.ceph.com/en/latest/radosgw/layout/#appendix-compendium) on the pools. User can create via [Pool CRD](/Documentation/CRDs/Block-Storage/ceph-block-pool-crd.md), this is need to present before the object store is created. Similar to `preservePoolsOnDelete` setting, `preserveRadosNamespaceDataOnDelete` is used to preserve the data in the rados namespace when the object store is deleted. It is set to 'false' by default.
+If multiple object stores need to share the same pools, existing pools can be used for metadata and data. The pools must be created before the object store is created, for example via the [Pool CRD](/Documentation/CRDs/Block-Storage/ceph-block-pool-crd.md). The pool details are specified in the `sharedPools` settings of the object store CRD. The object stores then consume the same pools while remaining isolated from each other by RADOS namespaces; the RGW server itself already creates different [namespaces](https://docs.ceph.com/en/latest/radosgw/layout/#appendix-compendium) on its pools. Similar to the `preservePoolsOnDelete` setting, `preserveRadosNamespaceDataOnDelete` is used to preserve the data in the RADOS namespaces when the object store is deleted. It is set to 'false' by default.
```yaml spec: - radosNamespaces: + sharedPools: metadataPoolName: rgw-meta-pool dataPoolName: rgw-data-pool preserveRadosNamespaceDataOnDelete: true diff --git a/pkg/apis/ceph.rook.io/v1/types.go b/pkg/apis/ceph.rook.io/v1/types.go index d0c661d5a3f2..6a954e41fc00 100755 --- a/pkg/apis/ceph.rook.io/v1/types.go +++ b/pkg/apis/ceph.rook.io/v1/types.go @@ -1431,6 +1431,11 @@ type ObjectStoreSpec struct { // +nullable DataPool PoolSpec `json:"dataPool,omitempty"` + // The pool information when configuring RADOS namespaces in existing pools. + // +optional + // +nullable + SharedPools ObjectSharedPoolsSpec `json:"sharedPools"` + // Preserve pools on object store deletion // +optional PreservePoolsOnDelete bool `json:"preservePoolsOnDelete,omitempty"` @@ -1469,6 +1474,21 @@ type ObjectStoreSpec struct { Hosting *ObjectStoreHostingSpec `json:"hosting,omitempty"` } +// ObjectSharedPoolsSpec represents object store pool info when configuring RADOS namespaces in existing pools. +type ObjectSharedPoolsSpec struct { + // The metadata pool used for creating RADOS namespaces in the object store + // +kubebuilder:validation:XValidation:message="object store shared metadata pool is immutable",rule="self == oldSelf" + MetadataPoolName string `json:"metadataPoolName"` + + // The data pool used for creating RADOS namespaces in the object store + // +kubebuilder:validation:XValidation:message="object store shared data pool is immutable",rule="self == oldSelf" + DataPoolName string `json:"dataPoolName"` + + // Whether the RADOS namespaces should be preserved on deletion of the object store + // +optional + PreserveRadosNamespaceDataOnDelete bool `json:"preserveRadosNamespaceDataOnDelete"` +} + // ObjectHealthCheckSpec represents the health check of an object store type ObjectHealthCheckSpec struct { // livenessProbe field is no longer used @@ -1882,6 +1902,11 @@ type ObjectZoneSpec struct { // +nullable DataPool PoolSpec `json:"dataPool"` + // The pool information when configuring 
RADOS namespaces in existing pools. + // +optional + // +nullable + SharedPools ObjectSharedPoolsSpec `json:"sharedPools"` + // If this zone cannot be accessed from other peer Ceph clusters via the ClusterIP Service // endpoint created by Rook, you must set this to the externally reachable endpoint(s). You may // include the port in the definition. For example: "https://my-object-store.my-domain.net:443". diff --git a/pkg/operator/ceph/object/controller.go b/pkg/operator/ceph/object/controller.go index a5b8333f062b..2d2f3214a1a7 100644 --- a/pkg/operator/ceph/object/controller.go +++ b/pkg/operator/ceph/object/controller.go @@ -249,7 +249,8 @@ func (r *ReconcileCephObjectStore) reconcile(request reconcile.Request) (reconci } err = r.deleteCOSIUser(opsCtx) if err != nil { - return reconcile.Result{}, *cephObjectStore, errors.Wrapf(err, "failed to delete COSI user") + // Allow the object store removal to proceed even if the user deletion fails + logger.Warningf("failed to delete COSI user. 
%v", err) } deps, err := cephObjectStoreDependents(r.context, r.clusterInfo, cephObjectStore, objCtx, opsCtx) if err != nil { @@ -411,14 +412,14 @@ func (r *ReconcileCephObjectStore) reconcileCreateObjectStore(cephObjectStore *c logger.Info("reconciling object store deployments") // Reconcile realm/zonegroup/zone CRs & update their names - realmName, zoneGroupName, zoneName, zone, reconcileResponse, err := r.reconcileMultisiteCRs(cephObjectStore) + realmName, zoneGroupName, zoneName, zone, reconcileResponse, err := r.getMultisiteResourceNames(cephObjectStore) if err != nil { return reconcileResponse, err } - // Reconcile Ceph Zone if Multisite + // Reconcile Ceph Zone if Multisite to ensure it exists, or else requeue the request if cephObjectStore.Spec.IsMultisite() { - reconcileResponse, err := r.reconcileCephZone(cephObjectStore, zoneGroupName, realmName) + reconcileResponse, err := r.retrieveMultisiteZone(cephObjectStore, zoneGroupName, realmName) if err != nil { return reconcileResponse, err } @@ -427,7 +428,7 @@ func (r *ReconcileCephObjectStore) reconcileCreateObjectStore(cephObjectStore *c objContext.Realm = realmName objContext.ZoneGroup = zoneGroupName objContext.Zone = zoneName - logger.Debugf("realm for object-store is %q, zone group for object-store is %q, zone for object-store is %q", objContext.Realm, objContext.ZoneGroup, objContext.Zone) + logger.Debugf("realm is %q, zone group is %q, zone is %q, for object store %q", objContext.Realm, objContext.ZoneGroup, objContext.Zone, cephObjectStore.Name) // RECONCILE SERVICE logger.Debug("reconciling object store service") @@ -443,15 +444,15 @@ func (r *ReconcileCephObjectStore) reconcileCreateObjectStore(cephObjectStore *c // Reconcile Pool Creation if !cephObjectStore.Spec.IsMultisite() { logger.Info("reconciling object store pools") - err = CreatePools(objContext, r.clusterSpec, cephObjectStore.Spec.MetadataPool, cephObjectStore.Spec.DataPool) + err = ConfigurePools(objContext, r.clusterSpec, 
cephObjectStore.Spec.MetadataPool, cephObjectStore.Spec.DataPool, cephObjectStore.Spec.SharedPools) if err != nil { return r.setFailedStatus(k8sutil.ObservedGenerationNotAvailable, namespacedName, "failed to create object pools", err) } } - // Reconcile Multisite Creation - logger.Infof("setting multisite settings for object store %q", cephObjectStore.Name) - err = setMultisite(objContext, cephObjectStore, zone) + // Reconcile the object store + logger.Infof("configuring object store %q", cephObjectStore.Name) + err = configureObjectStore(objContext, cephObjectStore, zone) if err != nil && kerrors.IsNotFound(err) { return reconcile.Result{}, err } else if err != nil { @@ -470,7 +471,7 @@ func (r *ReconcileCephObjectStore) reconcileCreateObjectStore(cephObjectStore *c return r.reconcileCOSIUser(cephObjectStore) } -func (r *ReconcileCephObjectStore) reconcileCephZone(store *cephv1.CephObjectStore, zoneGroupName string, realmName string) (reconcile.Result, error) { +func (r *ReconcileCephObjectStore) retrieveMultisiteZone(store *cephv1.CephObjectStore, zoneGroupName string, realmName string) (reconcile.Result, error) { realmArg := fmt.Sprintf("--rgw-realm=%s", realmName) zoneGroupArg := fmt.Sprintf("--rgw-zonegroup=%s", zoneGroupName) zoneArg := fmt.Sprintf("--rgw-zone=%s", store.Spec.Zone.Name) @@ -490,43 +491,43 @@ func (r *ReconcileCephObjectStore) reconcileCephZone(store *cephv1.CephObjectSto return reconcile.Result{}, nil } -func (r *ReconcileCephObjectStore) reconcileMultisiteCRs(cephObjectStore *cephv1.CephObjectStore) (string, string, string, *cephv1.CephObjectZone, reconcile.Result, error) { - if cephObjectStore.Spec.IsMultisite() { - zoneName := cephObjectStore.Spec.Zone.Name - zone := &cephv1.CephObjectZone{} - err := r.client.Get(r.opManagerContext, types.NamespacedName{Name: zoneName, Namespace: cephObjectStore.Namespace}, zone) - if err != nil { - if kerrors.IsNotFound(err) { - return "", "", "", nil, waitForRequeueIfObjectStoreNotReady, err - } - return 
"", "", "", nil, waitForRequeueIfObjectStoreNotReady, errors.Wrapf(err, "error getting CephObjectZone %q", cephObjectStore.Spec.Zone.Name) - } - logger.Debugf("CephObjectZone resource %s found", zone.Name) +func (r *ReconcileCephObjectStore) getMultisiteResourceNames(cephObjectStore *cephv1.CephObjectStore) (string, string, string, *cephv1.CephObjectZone, reconcile.Result, error) { + if !cephObjectStore.Spec.IsMultisite() { + return cephObjectStore.Name, cephObjectStore.Name, cephObjectStore.Name, nil, reconcile.Result{}, nil + } - zonegroup := &cephv1.CephObjectZoneGroup{} - err = r.client.Get(r.opManagerContext, types.NamespacedName{Name: zone.Spec.ZoneGroup, Namespace: cephObjectStore.Namespace}, zonegroup) - if err != nil { - if kerrors.IsNotFound(err) { - return "", "", "", nil, waitForRequeueIfObjectStoreNotReady, err - } - return "", "", "", nil, waitForRequeueIfObjectStoreNotReady, errors.Wrapf(err, "error getting CephObjectZoneGroup %q", zone.Spec.ZoneGroup) + zoneName := cephObjectStore.Spec.Zone.Name + zone := &cephv1.CephObjectZone{} + err := r.client.Get(r.opManagerContext, types.NamespacedName{Name: zoneName, Namespace: cephObjectStore.Namespace}, zone) + if err != nil { + if kerrors.IsNotFound(err) { + return "", "", "", nil, waitForRequeueIfObjectStoreNotReady, err } - logger.Debugf("CephObjectZoneGroup resource %s found", zonegroup.Name) + return "", "", "", nil, waitForRequeueIfObjectStoreNotReady, errors.Wrapf(err, "error getting CephObjectZone %q", cephObjectStore.Spec.Zone.Name) + } + logger.Debugf("CephObjectZone resource %s found", zone.Name) - realm := &cephv1.CephObjectRealm{} - err = r.client.Get(r.opManagerContext, types.NamespacedName{Name: zonegroup.Spec.Realm, Namespace: cephObjectStore.Namespace}, realm) - if err != nil { - if kerrors.IsNotFound(err) { - return "", "", "", nil, waitForRequeueIfObjectStoreNotReady, err - } - return "", "", "", nil, waitForRequeueIfObjectStoreNotReady, errors.Wrapf(err, "error getting CephObjectRealm 
%q", zonegroup.Spec.Realm) + zonegroup := &cephv1.CephObjectZoneGroup{} + err = r.client.Get(r.opManagerContext, types.NamespacedName{Name: zone.Spec.ZoneGroup, Namespace: cephObjectStore.Namespace}, zonegroup) + if err != nil { + if kerrors.IsNotFound(err) { + return "", "", "", nil, waitForRequeueIfObjectStoreNotReady, err } - logger.Debugf("CephObjectRealm resource %s found", realm.Name) + return "", "", "", nil, waitForRequeueIfObjectStoreNotReady, errors.Wrapf(err, "error getting CephObjectZoneGroup %q", zone.Spec.ZoneGroup) + } + logger.Debugf("CephObjectZoneGroup resource %s found", zonegroup.Name) - return realm.Name, zonegroup.Name, zone.Name, zone, reconcile.Result{}, nil + realm := &cephv1.CephObjectRealm{} + err = r.client.Get(r.opManagerContext, types.NamespacedName{Name: zonegroup.Spec.Realm, Namespace: cephObjectStore.Namespace}, realm) + if err != nil { + if kerrors.IsNotFound(err) { + return "", "", "", nil, waitForRequeueIfObjectStoreNotReady, err + } + return "", "", "", nil, waitForRequeueIfObjectStoreNotReady, errors.Wrapf(err, "error getting CephObjectRealm %q", zonegroup.Spec.Realm) } + logger.Debugf("CephObjectRealm resource %s found", realm.Name) - return cephObjectStore.Name, cephObjectStore.Name, cephObjectStore.Name, nil, reconcile.Result{}, nil + return realm.Name, zonegroup.Name, zone.Name, zone, reconcile.Result{}, nil } func (r *ReconcileCephObjectStore) reconcileCOSIUser(cephObjectStore *cephv1.CephObjectStore) (reconcile.Result, error) { diff --git a/pkg/operator/ceph/object/objectstore.go b/pkg/operator/ceph/object/objectstore.go index 2781d716330e..722313e12a06 100644 --- a/pkg/operator/ceph/object/objectstore.go +++ b/pkg/operator/ceph/object/objectstore.go @@ -21,6 +21,7 @@ import ( "encoding/json" "fmt" "os" + "path" "sort" "strconv" "strings" @@ -428,7 +429,7 @@ func createMultisiteConfigurations(objContext *Context, configType, configTypeAr return nil } -func createMultisite(objContext *Context, endpointArg string) error { 
+func createNonMultisiteStore(objContext *Context, endpointArg string, store *cephv1.CephObjectStore) error { logger.Debugf("creating realm, zone group, zone for object-store %v", objContext.Name) realmArg := fmt.Sprintf("--rgw-realm=%s", objContext.Realm) @@ -450,13 +451,19 @@ func createMultisite(objContext *Context, endpointArg string) error { return err } + logger.Infof("Object store %q: realm=%s, zonegroup=%s, zone=%s", objContext.Name, objContext.Realm, objContext.ZoneGroup, objContext.Zone) + + // Configure the zone for RADOS namespaces + err = ConfigureSharedPoolsForZone(objContext, store.Spec.SharedPools) + if err != nil { + return errors.Wrapf(err, "failed to configure rados namespaces for zone") + } + if err := commitConfigChanges(objContext); err != nil { nsName := fmt.Sprintf("%s/%s", objContext.clusterInfo.Namespace, objContext.Name) return errors.Wrapf(err, "failed to commit config changes after creating multisite config for CephObjectStore %q", nsName) } - logger.Infof("Multisite for object-store: realm=%s, zonegroup=%s, zone=%s", objContext.Realm, objContext.ZoneGroup, objContext.Zone) - return nil } @@ -544,7 +551,7 @@ func createSystemUser(objContext *Context, namespace string) error { return nil } -func setMultisite(objContext *Context, store *cephv1.CephObjectStore, zone *cephv1.CephObjectZone) error { +func configureObjectStore(objContext *Context, store *cephv1.CephObjectStore, zone *cephv1.CephObjectZone) error { logger.Debugf("setting multisite configuration for object-store %v", store.Name) if store.Spec.IsMultisite() { @@ -578,13 +585,13 @@ func setMultisite(objContext *Context, store *cephv1.CephObjectStore, zone *ceph if !store.Spec.Gateway.DisableMultisiteSyncTraffic { endpointArg = fmt.Sprintf("--endpoints=%s", objContext.Endpoint) } - err := createMultisite(objContext, endpointArg) + err := createNonMultisiteStore(objContext, endpointArg, store) if err != nil { return errorOrIsNotFound(err, "failed create ceph multisite for 
object-store %q", objContext.Name) } } - logger.Infof("multisite configuration for object-store %v is complete", store.Name) + logger.Infof("configuration for object-store %v is complete", store.Name) return nil } @@ -716,6 +723,7 @@ func allObjectPools(storeName string) []string { return poolsForThisStore } +// Detect if there are pools that do not exist for this object store func missingPools(context *Context) ([]string, error) { // list pools instead of querying each pool individually. querying each individually makes it // hard to determine if an error is because the pool does not exist or because of a connection @@ -740,7 +748,15 @@ func missingPools(context *Context) ([]string, error) { return missingPools, nil } -func CreatePools(context *Context, clusterSpec *cephv1.ClusterSpec, metadataPool, dataPool cephv1.PoolSpec) error { +func ConfigurePools(context *Context, cluster *cephv1.ClusterSpec, metadataPool, dataPool cephv1.PoolSpec, sharedPools cephv1.ObjectSharedPoolsSpec) error { + if sharedPoolsSpecified(sharedPools) { + if !EmptyPool(dataPool) || !EmptyPool(metadataPool) { + return fmt.Errorf("object store shared pools can only be specified if the metadata and data pools are not specified") + } + // Shared pools are configured elsewhere + return nil + } + if EmptyPool(dataPool) && EmptyPool(metadataPool) { logger.Info("no pools specified for the CR, checking for their existence...") missingPools, err := missingPools(context) @@ -765,17 +781,230 @@ func CreatePools(context *Context, clusterSpec *cephv1.ClusterSpec, metadataPool metadataPoolPGs = cephclient.DefaultPGCount } - if err := createSimilarPools(context, append(metadataPools, rootPool), clusterSpec, metadataPool, metadataPoolPGs); err != nil { + if err := createSimilarPools(context, append(metadataPools, rootPool), cluster, metadataPool, metadataPoolPGs); err != nil { return errors.Wrap(err, "failed to create metadata pools") } - if err := createSimilarPools(context, []string{dataPoolName}, 
clusterSpec, dataPool, cephclient.DefaultPGCount); err != nil { + if err := createSimilarPools(context, []string{dataPoolName}, cluster, dataPool, cephclient.DefaultPGCount); err != nil { return errors.Wrap(err, "failed to create data pool") } return nil } +func sharedPoolsSpecified(sharedPools cephv1.ObjectSharedPoolsSpec) bool { + return sharedPools.DataPoolName != "" && sharedPools.MetadataPoolName != "" +} + +func ConfigureSharedPoolsForZone(objContext *Context, sharedPools cephv1.ObjectSharedPoolsSpec) error { + if !sharedPoolsSpecified(sharedPools) { + logger.Debugf("no shared pools to configure for store %q", objContext.Name) + return nil + } + + if err := sharedPoolsExist(objContext, sharedPools); err != nil { + return errors.Wrapf(err, "object store cannot be configured until shared pools exist") + } + + // retrieve the zone config + logger.Infof("Retrieving zone %q", objContext.Zone) + realmArg := fmt.Sprintf("--rgw-realm=%s", objContext.Realm) + zoneGroupArg := fmt.Sprintf("--rgw-zonegroup=%s", objContext.ZoneGroup) + zoneArg := "--rgw-zone=" + objContext.Zone + args := []string{"zone", "get", realmArg, zoneGroupArg, zoneArg} + + output, err := RunAdminCommandNoMultisite(objContext, true, args...) 
+	if err != nil {
+		return errors.Wrap(err, "failed to get zone")
+	}
+
+	logger.Debugf("Zone config is currently:\n%s", output)
+
+	var zoneConfig map[string]interface{}
+	err = json.Unmarshal([]byte(output), &zoneConfig)
+	if err != nil {
+		return errors.Wrap(err, "failed to unmarshal zone")
+	}
+
+	metadataPrefix := fmt.Sprintf("%s:%s.", sharedPools.MetadataPoolName, objContext.Name)
+	dataPrefix := fmt.Sprintf("%s:%s.", sharedPools.DataPoolName, objContext.Name)
+	expectedDataPool := dataPrefix + "buckets.data"
+	if dataPoolIsExpected(objContext, zoneConfig, expectedDataPool) {
+		logger.Debugf("Data pool already set as expected to %q", expectedDataPool)
+		return nil
+	}
+
+	logger.Infof("Updating rados namespace configuration for zone %q", objContext.Zone)
+	if err := applyExpectedRadosNamespaceSettings(zoneConfig, metadataPrefix, dataPrefix, expectedDataPool); err != nil {
+		return errors.Wrap(err, "failed to configure rados namespaces")
+	}
+
+	configBytes, err := json.Marshal(zoneConfig)
+	if err != nil {
+		return errors.Wrap(err, "failed to serialize zone config")
+	}
+	logger.Debugf("Raw zone settings to apply: %s", string(configBytes))
+
+	configFilename := path.Join(objContext.Context.ConfigDir, objContext.Name+".zonecfg")
+	if err := os.WriteFile(configFilename, configBytes, 0600); err != nil {
+		return errors.Wrap(err, "failed to write zone config file")
+	}
+	defer os.Remove(configFilename)
+
+	args = []string{"zone", "set", zoneArg, "--infile=" + configFilename, realmArg, zoneGroupArg}
+	output, err = RunAdminCommandNoMultisite(objContext, false, args...)
+	if err != nil {
+		return errors.Wrap(err, "failed to set zone config")
+	}
+	logger.Debugf("Zone set results=%s", output)
+
+	if err = zoneUpdateWorkaround(objContext, output, expectedDataPool); err != nil {
+		return errors.Wrap(err, "failed to apply zone set workaround")
+	}
+
+	logger.Infof("Successfully configured RADOS namespaces for object store %q", objContext.Name)
+	return nil
+}
+
+func sharedPoolsExist(objContext *Context, sharedPools cephv1.ObjectSharedPoolsSpec) error {
+	existingPools, err := cephclient.ListPoolSummaries(objContext.Context, objContext.clusterInfo)
+	if err != nil {
+		return errors.Wrapf(err, "failed to list pools")
+	}
+	foundMetadataPool := false
+	foundDataPool := false
+	for _, pool := range existingPools {
+		if pool.Name == sharedPools.MetadataPoolName {
+			foundMetadataPool = true
+		}
+		if pool.Name == sharedPools.DataPoolName {
+			foundDataPool = true
+		}
+	}
+
+	if !foundMetadataPool && !foundDataPool {
+		return fmt.Errorf("pools do not exist: %q and %q", sharedPools.MetadataPoolName, sharedPools.DataPoolName)
+	}
+	if !foundMetadataPool {
+		return fmt.Errorf("metadata pool does not exist: %q", sharedPools.MetadataPoolName)
+	}
+	if !foundDataPool {
+		return fmt.Errorf("data pool does not exist: %q", sharedPools.DataPoolName)
+	}
+
+	logger.Info("verified shared pools exist")
+	return nil
+}
+
+func applyExpectedRadosNamespaceSettings(zoneConfig map[string]interface{}, metadataPrefix, dataPrefix, dataPool string) error {
+	// Update the necessary fields for RADOS namespaces
+	zoneConfig["domain_root"] = metadataPrefix + "meta.root"
+	zoneConfig["control_pool"] = metadataPrefix + "control"
+	zoneConfig["gc_pool"] = metadataPrefix + "log.gc"
+	zoneConfig["lc_pool"] = metadataPrefix + "log.lc"
+	zoneConfig["log_pool"] = metadataPrefix + "log"
+	zoneConfig["intent_log_pool"] = metadataPrefix + "log.intent"
+	zoneConfig["usage_log_pool"] = metadataPrefix + "log.usage"
+	zoneConfig["roles_pool"] = metadataPrefix + "meta.roles"
+	
zoneConfig["reshard_pool"] = metadataPrefix + "log.reshard"
+	zoneConfig["user_keys_pool"] = metadataPrefix + "meta.users.keys"
+	zoneConfig["user_email_pool"] = metadataPrefix + "meta.users.email"
+	zoneConfig["user_swift_pool"] = metadataPrefix + "meta.users.swift"
+	zoneConfig["user_uid_pool"] = metadataPrefix + "meta.users.uid"
+	zoneConfig["otp_pool"] = metadataPrefix + "otp"
+	zoneConfig["notif_pool"] = metadataPrefix + "log.notif"
+
+	placementPools, ok := zoneConfig["placement_pools"].([]interface{})
+	if !ok {
+		return fmt.Errorf("failed to parse placement_pools")
+	}
+	if len(placementPools) == 0 {
+		return fmt.Errorf("no placement pools")
+	}
+	placementPool, ok := placementPools[0].(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("failed to parse placement_pools[0]")
+	}
+	placementVals, ok := placementPool["val"].(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("failed to parse placement_pools[0].val")
+	}
+	placementVals["index_pool"] = metadataPrefix + "buckets.index"
+	placementVals["data_extra_pool"] = dataPrefix + "buckets.non-ec"
+	storageClasses, ok := placementVals["storage_classes"].(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("failed to parse storage_classes")
+	}
+	stdStorageClass, ok := storageClasses["STANDARD"].(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("failed to parse storage_classes.STANDARD")
+	}
+	stdStorageClass["data_pool"] = dataPool
+	return nil
+}
+
+func dataPoolIsExpected(objContext *Context, zoneConfig map[string]interface{}, expectedDataPool string) bool {
+	// Guard against a missing or empty placement_pools list to avoid an index-out-of-range panic
+	placementPools, ok := zoneConfig["placement_pools"].([]interface{})
+	if !ok || len(placementPools) == 0 {
+		return false
+	}
+	placementPool, ok := placementPools[0].(map[string]interface{})
+	if !ok {
+		return false
+	}
+	placementVals, ok := placementPool["val"].(map[string]interface{})
+	if !ok {
+		return false
+	}
+	storageClasses, ok := placementVals["storage_classes"].(map[string]interface{})
+	if !ok {
+		return false
+	}
+	stdStorageClass, ok :=
storageClasses["STANDARD"].(map[string]interface{})
+	if !ok {
+		return false
+	}
+	logger.Infof("data pool is currently set to %q", stdStorageClass["data_pool"])
+	return stdStorageClass["data_pool"] == expectedDataPool
+}
+
+// There was a radosgw-admin bug that was preventing the RADOS namespace from being applied
+// for the data pool. The fix is included in Reef v18.2.3 or newer, and v19.2.0.
+// The workaround is to run a "radosgw-admin zone placement modify" command to apply
+// the desired data pool config.
+// After Reef (v18) support is removed, this method will be dead code.
+func zoneUpdateWorkaround(objContext *Context, zoneOutput, expectedDataPool string) error {
+	var zoneConfig map[string]interface{}
+	err := json.Unmarshal([]byte(zoneOutput), &zoneConfig)
+	if err != nil {
+		return errors.Wrap(err, "failed to unmarshal zone")
+	}
+	// Update the necessary fields for RADOS namespaces
+	// If the radosgw-admin fix is in the release, the data pool is already applied and we skip the workaround.
+	if dataPoolIsExpected(objContext, zoneConfig, expectedDataPool) {
+		logger.Infof("data pool was already set as expected to %q, workaround not needed", expectedDataPool)
+		return nil
+	}
+
+	logger.Infof("Setting data pool to %q", expectedDataPool)
+	args := []string{"zone", "placement", "modify",
+		"--rgw-realm=" + objContext.Realm,
+		"--rgw-zonegroup=" + objContext.ZoneGroup,
+		"--rgw-zone=" + objContext.Name,
+		"--placement-id", "default-placement",
+		"--storage-class", "STANDARD",
+		"--data-pool=" + expectedDataPool,
+	}
+
+	output, err := RunAdminCommandNoMultisite(objContext, false, args...)
+	if err != nil {
+		return errors.Wrap(err, "failed to set zone config")
+	}
+	logger.Debugf("zone placement modify output=%s", output)
+	logger.Info("zone placement for the data pool was applied successfully")
+	return nil
+}
+
 // Check if this is a recent release of ceph where the legacy rgw_rados_pool_pg_num_min
 // is no longer available.
func rgwRadosPGNumIsNew(cephVer cephver.CephVersion) bool { @@ -792,7 +1021,7 @@ func configurePoolsConcurrently() bool { return true } -func createSimilarPools(ctx *Context, pools []string, clusterSpec *cephv1.ClusterSpec, poolSpec cephv1.PoolSpec, pgCount string) error { +func createSimilarPools(ctx *Context, pools []string, cluster *cephv1.ClusterSpec, poolSpec cephv1.PoolSpec, pgCount string) error { // We have concurrency if configurePoolsConcurrently() { waitGroup, _ := errgroup.WithContext(ctx.clusterInfo.Context) @@ -800,14 +1029,14 @@ func createSimilarPools(ctx *Context, pools []string, clusterSpec *cephv1.Cluste // Avoid the loop re-using the same value with a closure pool := pool - waitGroup.Go(func() error { return createRGWPool(ctx, clusterSpec, poolSpec, pgCount, pool) }) + waitGroup.Go(func() error { return createRGWPool(ctx, cluster, poolSpec, pgCount, pool) }) } return waitGroup.Wait() } // No concurrency! for _, pool := range pools { - err := createRGWPool(ctx, clusterSpec, poolSpec, pgCount, pool) + err := createRGWPool(ctx, cluster, poolSpec, pgCount, pool) if err != nil { return err } @@ -816,14 +1045,14 @@ func createSimilarPools(ctx *Context, pools []string, clusterSpec *cephv1.Cluste return nil } -func createRGWPool(ctx *Context, clusterSpec *cephv1.ClusterSpec, poolSpec cephv1.PoolSpec, pgCount, requestedName string) error { +func createRGWPool(ctx *Context, cluster *cephv1.ClusterSpec, poolSpec cephv1.PoolSpec, pgCount, requestedName string) error { // create the pool if it doesn't exist yet poolSpec.Application = rgwApplication pool := cephv1.NamedPoolSpec{ Name: poolName(ctx.Name, requestedName), PoolSpec: poolSpec, } - if err := cephclient.CreatePoolWithPGs(ctx.Context, ctx.clusterInfo, clusterSpec, pool, pgCount); err != nil { + if err := cephclient.CreatePoolWithPGs(ctx.Context, ctx.clusterInfo, cluster, pool, pgCount); err != nil { return errors.Wrapf(err, "failed to create pool %q", pool.Name) } // Set the pg_num_min if not the 
default so the autoscaler won't immediately increase the pg count diff --git a/pkg/operator/ceph/object/objectstore_test.go b/pkg/operator/ceph/object/objectstore_test.go index 625083ed78bd..96e7d63d58db 100644 --- a/pkg/operator/ceph/object/objectstore_test.go +++ b/pkg/operator/ceph/object/objectstore_test.go @@ -18,7 +18,9 @@ package object import ( "context" + "encoding/json" "fmt" + "strings" "syscall" "testing" "time" @@ -70,6 +72,47 @@ const ( "max_objects": -1 } }` + objectZoneJson = `{ + "id": "c1a20ed9-6370-4abd-b78c-bdf0da2a8dbb", + "name": "store-a", + "domain_root": "rgw-meta-pool:store-a.meta.root", + "control_pool": "rgw-meta-pool:store-a.control", + "gc_pool": "rgw-meta-pool:store-a.log.gc", + "lc_pool": "rgw-meta-pool:store-a.log.lc", + "log_pool": "rgw-meta-pool:store-a.log", + "intent_log_pool": "rgw-meta-pool:store-a.log.intent", + "usage_log_pool": "rgw-meta-pool:store-a.log.usage", + "roles_pool": "rgw-meta-pool:store-a.meta.roles", + "reshard_pool": "rgw-meta-pool:store-a.log.reshard", + "user_keys_pool": "rgw-meta-pool:store-a.meta.users.keys", + "user_email_pool": "rgw-meta-pool:store-a.meta.users.email", + "user_swift_pool": "rgw-meta-pool:store-a.meta.users.swift", + "user_uid_pool": "rgw-meta-pool:store-a.meta.users.uid", + "otp_pool": "rgw-meta-pool:store-a.otp", + "system_key": { + "access_key": "", + "secret_key": "" + }, + "placement_pools": [ + { + "key": "default-placement", + "val": { + "index_pool": "rgw-meta-pool:store-a.buckets.index", + "storage_classes": { + "STANDARD": { + "data_pool": "rgw-data-pool:store-a.buckets.data" + } + }, + "data_extra_pool": "rgw-data-pool:store-a.buckets.non-ec", + "index_type": 0, + "inline_data": true + } + } + ], + "realm_id": "e7f176c6-d207-459c-aa04-c3334300ddc6", + "notif_pool": "rgw-meta-pool:store-a.log.notif" + }` + //#nosec G101 -- The credentials are just for the unit tests access_key = "VFKF8SSU9L3L2UR03Z8C" //#nosec G101 -- The credentials are just for the unit tests @@ -98,14 
+141,246 @@ func TestReconcileRealm(t *testing.T) { objContext := NewContext(context, &client.ClusterInfo{Namespace: "mycluster"}, storeName) // create the first realm, marked as default store := cephv1.CephObjectStore{} - err := setMultisite(objContext, &store, nil) + err := configureObjectStore(objContext, &store, nil) assert.Nil(t, err) // create the second realm, not marked as default - err = setMultisite(objContext, &store, nil) + err = configureObjectStore(objContext, &store, nil) assert.Nil(t, err) } +func TestApplyExpectedRadosNamespaceSettings(t *testing.T) { + dataPoolName := "testdatapool" + metaPrefix := "testmeta" + dataPrefix := "testdata" + var zoneConfig map[string]interface{} + + t.Run("fail when input empty", func(t *testing.T) { + input := map[string]interface{}{} + err := applyExpectedRadosNamespaceSettings(input, metaPrefix, dataPrefix, dataPoolName) + assert.Error(t, err) + assert.True(t, strings.Contains(err.Error(), "placement_pools")) + }) + t.Run("valid input", func(t *testing.T) { + assert.NoError(t, json.Unmarshal([]byte(objectZoneJson), &zoneConfig)) + assert.NoError(t, applyExpectedRadosNamespaceSettings(zoneConfig, metaPrefix, dataPrefix, dataPoolName)) + // validate a sampling of the updated fields + assert.Equal(t, metaPrefix+"log.notif", zoneConfig["notif_pool"]) + placementPools := zoneConfig["placement_pools"].([]interface{}) + placementPool := placementPools[0].(map[string]interface{}) + placementVals := placementPool["val"].(map[string]interface{}) + storageClasses := placementVals["storage_classes"].(map[string]interface{}) + stdStorageClass := storageClasses["STANDARD"].(map[string]interface{}) + assert.Equal(t, dataPoolName, stdStorageClass["data_pool"]) + }) + t.Run("placement pools empty", func(t *testing.T) { + // remove expected sections of the json and confirm that it returns an error without throwing an exception + emptyPlacementPoolsJson := `{ + "otp_pool": "rgw-meta-pool:store-a.otp", + "placement_pools": [] + }` + 
assert.NoError(t, json.Unmarshal([]byte(emptyPlacementPoolsJson), &zoneConfig)) + err := applyExpectedRadosNamespaceSettings(zoneConfig, metaPrefix, dataPrefix, dataPoolName) + assert.Error(t, err) + assert.True(t, strings.Contains(err.Error(), "no placement pools")) + }) + t.Run("placement pool value missing", func(t *testing.T) { + missingPoolValueJson := `{ + "otp_pool": "rgw-meta-pool:store-a.otp", + "placement_pools": [ + { + "key": "default-placement" + } + ] + }` + assert.NoError(t, json.Unmarshal([]byte(missingPoolValueJson), &zoneConfig)) + err := applyExpectedRadosNamespaceSettings(zoneConfig, metaPrefix, dataPrefix, dataPoolName) + assert.Error(t, err) + assert.Contains(t, err.Error(), "placement_pools[0].val") + }) + t.Run("storage classes missing", func(t *testing.T) { + storageClassesMissing := `{ + "otp_pool": "rgw-meta-pool:store-a.otp", + "placement_pools": [ + { + "key": "default-placement", + "val": { + "index_pool": "rgw-meta-pool:store-a.buckets.index" + } + } + ] + }` + assert.NoError(t, json.Unmarshal([]byte(storageClassesMissing), &zoneConfig)) + err := applyExpectedRadosNamespaceSettings(zoneConfig, metaPrefix, dataPrefix, dataPoolName) + assert.Error(t, err) + assert.Contains(t, err.Error(), "storage_classes") + }) + t.Run("standard storage class missing", func(t *testing.T) { + standardSCMissing := `{ + "otp_pool": "rgw-meta-pool:store-a.otp", + "placement_pools": [ + { + "key": "default-placement", + "val": { + "index_pool": "rgw-meta-pool:store-a.buckets.index", + "storage_classes": { + "BAD": { + "data_pool": "rgw-data-pool:store-a.buckets.data" + } + } + } + } + ] + }` + assert.NoError(t, json.Unmarshal([]byte(standardSCMissing), &zoneConfig)) + err := applyExpectedRadosNamespaceSettings(zoneConfig, metaPrefix, dataPrefix, dataPoolName) + assert.Error(t, err) + assert.Contains(t, err.Error(), "storage_classes.STANDARD") + }) + t.Run("no config missing", func(t *testing.T) { + nothingMissing := `{ + "otp_pool": 
"rgw-meta-pool:store-a.otp", + "placement_pools": [ + { + "key": "default-placement", + "val": { + "index_pool": "rgw-meta-pool:store-a.buckets.index", + "storage_classes": { + "STANDARD": { + "data_pool": "rgw-data-pool:store-a.buckets.data" + } + } + } + } + ] + }` + assert.NoError(t, json.Unmarshal([]byte(nothingMissing), &zoneConfig)) + err := applyExpectedRadosNamespaceSettings(zoneConfig, metaPrefix, dataPrefix, dataPoolName) + assert.NoError(t, err) + }) +} + +func TestSharedPoolsExist(t *testing.T) { + executor := &exectest.MockExecutor{} + poolJson := "" + mockExecutorFuncOutput := func(command string, args ...string) (string, error) { + logger.Infof("Command: %s %v", command, args) + if args[0] == "osd" && args[1] == "lspools" { + return poolJson, nil + } + return "", errors.Errorf("unexpected ceph command %q", args) + } + executor.MockExecuteCommandWithOutput = func(command string, args ...string) (string, error) { + return mockExecutorFuncOutput(command, args...) + } + context := &Context{Context: &clusterd.Context{Executor: executor}, Name: "myobj", clusterInfo: client.AdminTestClusterInfo("mycluster")} + sharedPools := cephv1.ObjectSharedPoolsSpec{ + MetadataPoolName: "metapool", + DataPoolName: "datapool", + } + poolJson = `[{"poolnum":1,"poolname":".mgr"},{"poolnum":13,"poolname":".rgw.root"}, + {"poolnum":14,"poolname":"rgw-meta-pool"},{"poolnum":15,"poolname":"rgw-data-pool"}]` + err := sharedPoolsExist(context, sharedPools) + assert.Error(t, err) + assert.Contains(t, err.Error(), "pools do not exist") + + sharedPools.MetadataPoolName = "rgw-meta-pool" + err = sharedPoolsExist(context, sharedPools) + assert.Error(t, err) + assert.Contains(t, err.Error(), "data pool does not exist") + + sharedPools.DataPoolName = "rgw-data-pool" + sharedPools.MetadataPoolName = "bad-pool" + err = sharedPoolsExist(context, sharedPools) + assert.Error(t, err) + assert.Contains(t, err.Error(), "metadata pool does not exist") + + sharedPools.MetadataPoolName = 
"rgw-meta-pool" + err = sharedPoolsExist(context, sharedPools) + assert.NoError(t, err) +} + +func TestConfigureStoreWithSharedPools(t *testing.T) { + dataPoolAlreadySet := "datapool:store-a.buckets.data" + zoneGetCalled := false + zoneSetCalled := false + placementModifyCalled := false + mockExecutorFuncOutput := func(command string, args ...string) (string, error) { + logger.Infof("Command: %s %v", command, args) + if args[0] == "osd" && args[1] == "lspools" { + return `[{"poolnum":14,"poolname":"test-meta"},{"poolnum":15,"poolname":"test-data"}]`, nil + } + return "", errors.Errorf("unexpected ceph command %q", args) + } + executorFuncTimeout := func(timeout time.Duration, command string, args ...string) (string, error) { + logger.Infof("CommandTimeout: %s %v", command, args) + if args[0] == "zone" { + if args[1] == "get" { + zoneGetCalled = true + replaceDataPool := "rgw-data-pool:store-a.buckets.data" + return strings.Replace(objectZoneJson, replaceDataPool, dataPoolAlreadySet, -1), nil + } else if args[1] == "set" { + zoneSetCalled = true + return objectZoneJson, nil + } else if args[1] == "placement" && args[2] == "modify" { + placementModifyCalled = true + return objectZoneJson, nil + } + } + return "", errors.Errorf("unexpected ceph command %q", args) + } + executor := &exectest.MockExecutor{ + MockExecuteCommandWithOutput: mockExecutorFuncOutput, + MockExecuteCommandWithCombinedOutput: mockExecutorFuncOutput, + MockExecuteCommandWithTimeout: executorFuncTimeout, + } + context := &Context{ + Context: &clusterd.Context{Executor: executor}, + Name: "myobj", + Realm: "myobj", + ZoneGroup: "myobj", + Zone: "myobj", + clusterInfo: client.AdminTestClusterInfo("mycluster"), + } + + t.Run("no shared pools", func(t *testing.T) { + // No shared pools specified, so skip the config + sharedPools := cephv1.ObjectSharedPoolsSpec{} + err := ConfigureSharedPoolsForZone(context, sharedPools) + assert.NoError(t, err) + assert.False(t, zoneGetCalled) + assert.False(t, 
zoneSetCalled) + assert.False(t, placementModifyCalled) + }) + t.Run("configure the zone", func(t *testing.T) { + sharedPools := cephv1.ObjectSharedPoolsSpec{ + MetadataPoolName: "test-meta", + DataPoolName: "test-data", + } + err := ConfigureSharedPoolsForZone(context, sharedPools) + assert.NoError(t, err) + assert.True(t, zoneGetCalled) + assert.True(t, zoneSetCalled) + assert.True(t, placementModifyCalled) + }) + t.Run("data pool already set", func(t *testing.T) { + // Simulate that the data pool has already been set and the zone update can be skipped + sharedPools := cephv1.ObjectSharedPoolsSpec{ + MetadataPoolName: "test-meta", + DataPoolName: "test-data", + } + dataPoolAlreadySet = fmt.Sprintf("%s:%s.buckets.data", sharedPools.DataPoolName, context.Zone) + zoneGetCalled = false + zoneSetCalled = false + placementModifyCalled = false + err := ConfigureSharedPoolsForZone(context, sharedPools) + assert.True(t, zoneGetCalled) + assert.False(t, zoneSetCalled) + assert.False(t, placementModifyCalled) + assert.NoError(t, err) + }) +} + func TestDeleteStore(t *testing.T) { deleteStore(t, "myobj", `"mystore","myobj"`, false) deleteStore(t, "myobj", `"myobj"`, true) @@ -652,7 +927,8 @@ func Test_createMultisite(t *testing.T) { objContext := NewContext(ctx, &client.ClusterInfo{Namespace: "my-cluster"}, "my-store") // assumption: endpointArg is sufficiently tested by integration tests - err := createMultisite(objContext, "") + store := &cephv1.CephObjectStore{} + err := createNonMultisiteStore(objContext, "", store) assert.Equal(t, tt.expectCommands.getRealm, calledGetRealm) assert.Equal(t, tt.expectCommands.createRealm, calledCreateRealm) assert.Equal(t, tt.expectCommands.getZoneGroup, calledGetZoneGroup) diff --git a/pkg/operator/ceph/object/zone/controller.go b/pkg/operator/ceph/object/zone/controller.go index 36ff48144074..50cdeeaae10c 100644 --- a/pkg/operator/ceph/object/zone/controller.go +++ b/pkg/operator/ceph/object/zone/controller.go @@ -268,20 +268,19 @@ func 
(r *ReconcileObjectZone) createorUpdateCephZone(zone *cephv1.CephObjectZone return reconcile.Result{}, nil } - if code, ok := exec.ExitStatus(err); ok && code == int(syscall.ENOENT) { - logger.Debugf("ceph zone %q not found, running `radosgw-admin zone create`", zone.Name) + if code, ok := exec.ExitStatus(err); ok && code != int(syscall.ENOENT) { + return reconcile.Result{}, errors.Wrapf(err, "radosgw-admin zone get failed with code %d for reason %q", code, output) + } + logger.Debugf("ceph zone %q not found, running `radosgw-admin zone create`", zone.Name) - zoneIsMaster := false - if zoneGroupJson.MasterZoneID == "" { - zoneIsMaster = true - } + zoneIsMaster := false + if zoneGroupJson.MasterZoneID == "" { + zoneIsMaster = true + } - err = r.createPoolsAndZone(objContext, zone, realmName, zoneIsMaster) - if err != nil { - return reconcile.Result{}, err - } - } else { - return reconcile.Result{}, errors.Wrapf(err, "radosgw-admin zone get failed with code %d for reason %q", code, output) + err = r.createPoolsAndZone(objContext, zone, realmName, zoneIsMaster) + if err != nil { + return reconcile.Result{}, err } return reconcile.Result{}, nil @@ -294,7 +293,7 @@ func (r *ReconcileObjectZone) createPoolsAndZone(objContext *object.Context, zon zoneGroupArg := fmt.Sprintf("--rgw-zonegroup=%s", zone.Spec.ZoneGroup) zoneArg := fmt.Sprintf("--rgw-zone=%s", zone.Name) - err := object.CreatePools(objContext, r.clusterSpec, zone.Spec.MetadataPool, zone.Spec.DataPool) + err := object.ConfigurePools(objContext, r.clusterSpec, zone.Spec.MetadataPool, zone.Spec.DataPool, zone.Spec.SharedPools) if err != nil { return errors.Wrapf(err, "failed to create pools for zone %v", zone.Name) } @@ -321,6 +320,12 @@ func (r *ReconcileObjectZone) createPoolsAndZone(objContext *object.Context, zon } logger.Debugf("created ceph zone %q", zone.Name) + // Configure the zone for RADOS namespaces + err = object.ConfigureSharedPoolsForZone(objContext, zone.Spec.SharedPools) + if err != nil { + 
return errors.Wrapf(err, "failed to configure rados namespaces for zone") + } + return nil } diff --git a/tests/scripts/github-action-helper.sh b/tests/scripts/github-action-helper.sh index 9af42bae4b99..6de8cf89e4f1 100755 --- a/tests/scripts/github-action-helper.sh +++ b/tests/scripts/github-action-helper.sh @@ -301,7 +301,9 @@ function deploy_cluster() { # create the cluster resources kubectl create -f cluster-test.yaml - kubectl create -f object-test.yaml + kubectl create -f object-shared-pools-test.yaml + kubectl create -f object-a.yaml + kubectl create -f object-b.yaml kubectl create -f pool-test.yaml kubectl create -f filesystem-test.yaml sed -i "/resources:/,/ # priorityClassName:/d" rbdmirror.yaml diff --git a/tests/scripts/validate_cluster.sh b/tests/scripts/validate_cluster.sh index 306a1eece17d..091e83caa70d 100755 --- a/tests/scripts/validate_cluster.sh +++ b/tests/scripts/validate_cluster.sh @@ -20,7 +20,14 @@ set -xEe if [ -z "$DAEMON_TO_VALIDATE" ]; then DAEMON_TO_VALIDATE=all fi -OSD_COUNT=$2 +# The second script arg is optional and depends on the daemon +if [ "$DAEMON_TO_VALIDATE" == "rgw" ]; then + export OBJECT_STORE_NAME=$2 +else + export OSD_COUNT=$2 + # default to the name of the object store from object-a.yaml + export OBJECT_STORE_NAME=store-a +fi ############# # FUNCTIONS # @@ -63,7 +70,7 @@ function test_demo_osd { function test_demo_rgw { timeout 360 bash -x <<-'EOF' - until [[ "$(kubectl -n rook-ceph get pods -l app=rook-ceph-rgw -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}')" == "True" ]]; do + until [[ "$(kubectl -n rook-ceph get pods -l rgw=$OBJECT_STORE_NAME -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}')" == "True" ]]; do echo "waiting for rgw pods to be ready" sleep 5 done From 9e55d0e02db0edf906ab47ddef568525f367695a Mon Sep 17 00:00:00 2001 From: Travis Nielsen Date: Tue, 12 Mar 2024 07:59:05 -0600 Subject: [PATCH 39/65] doc: add space for object store formatting The description of the object 
store types was improperly formatted due to a missing newline before the bullet points. Signed-off-by: Travis Nielsen --- .../Storage-Configuration/Object-Storage-RGW/object-storage.md | 1 + 1 file changed, 1 insertion(+) diff --git a/Documentation/Storage-Configuration/Object-Storage-RGW/object-storage.md b/Documentation/Storage-Configuration/Object-Storage-RGW/object-storage.md index 409913909417..87591920aad3 100644 --- a/Documentation/Storage-Configuration/Object-Storage-RGW/object-storage.md +++ b/Documentation/Storage-Configuration/Object-Storage-RGW/object-storage.md @@ -11,6 +11,7 @@ This guide assumes a Rook cluster as explained in the [Quickstart](../../Getting ## Configure an Object Store Rook can configure the Ceph Object Store for several different scenarios. See each linked section for the configuration details. + 1. Create a [local object store](#create-a-local-object-store) with dedicated Ceph pools. This option is recommended if a single object store is required, and is the simplest to get started. 2. Create [one or more object stores with shared Ceph pools](#create-local-object-stores-with-shared-pools). This option is recommended when multiple object stores are required. 3. Connect to an [RGW service in an external Ceph cluster](#connect-to-an-external-object-store), rather than create a local object store. 
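An aside on the zone-configuration changes earlier in this series: `applyExpectedRadosNamespaceSettings` and `dataPoolIsExpected` repeat the same chain of `map[string]interface{}` type assertions. That pattern can be condensed into a small lookup helper. A minimal, standalone sketch (the `lookup` helper and the trimmed-down zone JSON are illustrative, not part of the patch):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// lookup walks a chain of keys through nested map[string]interface{}
// values, returning false as soon as a key is missing or a value is
// not a JSON object. This condenses the repeated type assertions used
// when editing the radosgw zone config.
func lookup(m map[string]interface{}, keys ...string) (interface{}, bool) {
	var cur interface{} = m
	for _, k := range keys {
		obj, ok := cur.(map[string]interface{})
		if !ok {
			return nil, false
		}
		if cur, ok = obj[k]; !ok {
			return nil, false
		}
	}
	return cur, true
}

func main() {
	// Trimmed-down zone JSON in the same shape as objectZoneJson.
	zoneJSON := `{"placement_pools":[{"key":"default-placement",
		"val":{"storage_classes":{"STANDARD":{"data_pool":"rgw-data-pool:store-a.buckets.data"}}}}]}`
	var zone map[string]interface{}
	if err := json.Unmarshal([]byte(zoneJSON), &zone); err != nil {
		panic(err)
	}
	pools := zone["placement_pools"].([]interface{})
	val, ok := lookup(pools[0].(map[string]interface{}),
		"val", "storage_classes", "STANDARD", "data_pool")
	fmt.Println(ok, val) // true rgw-data-pool:store-a.buckets.data
}
```

Slices still need an explicit assertion (as with `placement_pools` above), but every object-to-object hop collapses into one call, which keeps the error-handling paths uniform.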
From ab5b269bc48a3b868b43f125a0d74607e5f4a71c Mon Sep 17 00:00:00 2001 From: Redouane Kachach Date: Tue, 12 Mar 2024 17:20:52 +0100 Subject: [PATCH 40/65] doc: adding a doc example for ceph-exporter labels Signed-off-by: Redouane Kachach --- deploy/examples/cluster.yaml | 2 ++ 1 file changed, 2 insertions(+) diff --git a/deploy/examples/cluster.yaml b/deploy/examples/cluster.yaml index 57fc25790260..69b02a3ffbc9 100644 --- a/deploy/examples/cluster.yaml +++ b/deploy/examples/cluster.yaml @@ -206,6 +206,8 @@ spec: # cleanup: # mgr: # prepareosd: + # These labels are applied to ceph-exporter servicemonitor only + # exporter: # monitoring is a list of key-value pairs. It is injected into all the monitoring resources created by operator. # These labels can be passed as LabelSelector to Prometheus # monitoring: From f2085777ece4bbff76002ca1f9056fe0dc6e45bf Mon Sep 17 00:00:00 2001 From: parth-gr Date: Tue, 12 Mar 2024 16:33:44 +0530 Subject: [PATCH 41/65] build: add rbac for default sa The Rook CSV doesn't contain the default service account. We recently added a default SA for most of the Ceph daemons, but it didn't have the RBAC rules, so add them so that the Rook CSV can generate the default SA. Signed-off-by: parth-gr (cherry picked from commit d27cfbde1fe1355b6f01c096c4f8c56d20c9b701) --- .../library/templates/_cluster-role.tpl | 10 ++++++++ .../templates/_cluster-rolebinding.tpl | 14 +++++++++++ deploy/examples/common.yaml | 24 +++++++++++++++++++ 3 files changed, 48 insertions(+) diff --git a/deploy/charts/library/templates/_cluster-role.tpl b/deploy/charts/library/templates/_cluster-role.tpl index fd79b7ce908e..3d13e12a142b 100644 --- a/deploy/charts/library/templates/_cluster-role.tpl +++ b/deploy/charts/library/templates/_cluster-role.tpl @@ -148,4 +148,14 @@ rules: - apiGroups: [""] resources: ["persistentvolumeclaims"] verbs: ["get", "update", "delete", "list"] +--- +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: rook-ceph-default + namespace: {{ 
.Release.Namespace }} # namespace:cluster +rules: + - apiGroups: [""] + resources: [""] + verbs: [""] {{- end }} diff --git a/deploy/charts/library/templates/_cluster-rolebinding.tpl b/deploy/charts/library/templates/_cluster-rolebinding.tpl index dc5e05f29daf..01281929bd6a 100644 --- a/deploy/charts/library/templates/_cluster-rolebinding.tpl +++ b/deploy/charts/library/templates/_cluster-rolebinding.tpl @@ -105,4 +105,18 @@ subjects: - kind: ServiceAccount name: rook-ceph-purge-osd namespace: {{ .Release.Namespace }} # namespace:cluster +--- +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: rook-ceph-default + namespace: {{ .Release.Namespace }} # namespace:cluster +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: rook-ceph-default +subjects: + - kind: ServiceAccount + name: rook-ceph-default + namespace: {{ .Release.Namespace }} # namespace:cluster {{- end }} diff --git a/deploy/examples/common.yaml b/deploy/examples/common.yaml index ed523e8cb051..a9a1067b00e2 100644 --- a/deploy/examples/common.yaml +++ b/deploy/examples/common.yaml @@ -790,6 +790,16 @@ rules: - update - delete --- +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: rook-ceph-default + namespace: rook-ceph # namespace:cluster +rules: + - apiGroups: [""] + resources: [""] + verbs: [""] +--- # Aspects of ceph-mgr that operate within the cluster's namespace kind: Role apiVersion: rbac.authorization.k8s.io/v1 @@ -1052,6 +1062,20 @@ subjects: name: rook-ceph-cmd-reporter namespace: rook-ceph # namespace:cluster --- +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: rook-ceph-default + namespace: rook-ceph # namespace:cluster +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: rook-ceph-default +subjects: + - kind: ServiceAccount + name: rook-ceph-default + namespace: rook-ceph # namespace:cluster +--- # Allow the ceph mgr to access resources scoped to the CephCluster namespace 
necessary for mgr modules kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 From daec93eaaa65add60296a67c8fb3bdde8cbf9350 Mon Sep 17 00:00:00 2001 From: Redouane Kachach Date: Wed, 13 Mar 2024 13:29:01 +0100 Subject: [PATCH 42/65] monitoring: increasing metrics scraping interval from 5s to 10s Let's increase the monitoring interval to match the default interval used by Prometheus when deployed by Ceph: 10s Closes: https://github.com/rook/rook/issues/13883 Signed-off-by: Redouane Kachach --- pkg/operator/k8sutil/prometheus.go | 2 +- pkg/operator/k8sutil/prometheus_test.go | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/pkg/operator/k8sutil/prometheus.go b/pkg/operator/k8sutil/prometheus.go index b257731c2a72..ea9600237547 100644 --- a/pkg/operator/k8sutil/prometheus.go +++ b/pkg/operator/k8sutil/prometheus.go @@ -63,7 +63,7 @@ func GetServiceMonitor(name string, namespace string, portName string) *monitori { Port: portName, Path: "/metrics", - Interval: "5s", + Interval: "10s", }, }, }, diff --git a/pkg/operator/k8sutil/prometheus_test.go b/pkg/operator/k8sutil/prometheus_test.go index 6802896c597a..6da17ca44a6d 100644 --- a/pkg/operator/k8sutil/prometheus_test.go +++ b/pkg/operator/k8sutil/prometheus_test.go @@ -27,7 +27,7 @@ func TestGetServiceMonitor(t *testing.T) { name := "rook-ceph-mgr" namespace := "rook-ceph" port := "http-metrics" - interval := monitoringv1.Duration("5s") + interval := monitoringv1.Duration("10s") servicemonitor := GetServiceMonitor(name, namespace, port) assert.Equal(t, name, servicemonitor.GetName()) assert.Equal(t, namespace, servicemonitor.GetNamespace()) From d262bf71499c0d8af7e21a906bf90203c38156e1 Mon Sep 17 00:00:00 2001 From: Redouane Kachach Date: Tue, 13 Feb 2024 12:31:33 +0100 Subject: [PATCH 43/65] mgr: enable rook orchestrator mgr module by default Previously, we had kept the Rook orchestrator manager module disabled because it was causing a number of issues and errors on the dashboard.
But with the recent changes made in the Ceph v18.2.1 release, we've fixed those issues and made some overall improvements to the dashboard user experience when Rook is enabled. With all that in mind, it's time to switch on the Rook orchestrator by default. Fixes: https://github.com/rook/rook/issues/13760 Signed-off-by: Redouane Kachach --- deploy/examples/cluster-on-pvc-minikube.yaml | 3 +++ deploy/examples/cluster-on-pvc.yaml | 3 +++ deploy/examples/cluster-test.yaml | 3 +++ deploy/examples/cluster.yaml | 4 ++-- 4 files changed, 11 insertions(+), 2 deletions(-) diff --git a/deploy/examples/cluster-on-pvc-minikube.yaml b/deploy/examples/cluster-on-pvc-minikube.yaml index 11560616f4be..3900fc3eff05 100644 --- a/deploy/examples/cluster-on-pvc-minikube.yaml +++ b/deploy/examples/cluster-on-pvc-minikube.yaml @@ -140,6 +140,9 @@ spec: storage: 10Gi mgr: count: 1 + modules: + - name: rook + enabled: true dashboard: enabled: true ssl: false diff --git a/deploy/examples/cluster-on-pvc.yaml b/deploy/examples/cluster-on-pvc.yaml index ef3c178e6af3..9efaf587aa4d 100644 --- a/deploy/examples/cluster-on-pvc.yaml +++ b/deploy/examples/cluster-on-pvc.yaml @@ -40,6 +40,9 @@ spec: mgr: count: 2 allowMultiplePerNode: false + modules: + - name: rook + enabled: true dashboard: enabled: true ssl: true diff --git a/deploy/examples/cluster-test.yaml b/deploy/examples/cluster-test.yaml index 4d25d02254b8..2f088c5d5705 100644 --- a/deploy/examples/cluster-test.yaml +++ b/deploy/examples/cluster-test.yaml @@ -23,6 +23,9 @@ spec: mgr: count: 1 allowMultiplePerNode: true + modules: + - name: rook + enabled: true dashboard: enabled: true crashCollector: diff --git a/deploy/examples/cluster.yaml b/deploy/examples/cluster.yaml index 3e35039087ba..6292cb3037cf 100644 --- a/deploy/examples/cluster.yaml +++ b/deploy/examples/cluster.yaml @@ -59,8 +59,8 @@ spec: modules: # List of modules to optionally enable or disable.
# Note the "dashboard" and "monitoring" modules are already configured by other settings in the cluster CR. - # - name: rook - # enabled: true + - name: rook + enabled: true # enable the ceph dashboard for viewing cluster status dashboard: enabled: true From 4b9e77af1e4dff3a890f57042b13f9c8de37f3fc Mon Sep 17 00:00:00 2001 From: Redouane Kachach Date: Thu, 14 Mar 2024 15:15:30 +0100 Subject: [PATCH 44/65] monitoring: set metrics scraping interval to 10s in examples files Signed-off-by: Redouane Kachach --- Documentation/Helm-Charts/operator-chart.md | 2 +- deploy/charts/rook-ceph-cluster/values.yaml | 2 +- deploy/charts/rook-ceph/values.yaml | 2 +- deploy/examples/monitoring/exporter-service-monitor.yaml | 2 +- deploy/examples/monitoring/service-monitor.yaml | 2 +- 5 files changed, 5 insertions(+), 5 deletions(-) diff --git a/Documentation/Helm-Charts/operator-chart.md b/Documentation/Helm-Charts/operator-chart.md index 45f930cbf8fa..c2f37d8141a4 100644 --- a/Documentation/Helm-Charts/operator-chart.md +++ b/Documentation/Helm-Charts/operator-chart.md @@ -119,7 +119,7 @@ The following table lists the configurable parameters of the rook-operator chart | `csi.registrar.image` | Kubernetes CSI registrar image | `registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0` | | `csi.resizer.image` | Kubernetes CSI resizer image | `registry.k8s.io/sig-storage/csi-resizer:v1.10.0` | | `csi.serviceMonitor.enabled` | Enable ServiceMonitor for Ceph CSI drivers | `false` | -| `csi.serviceMonitor.interval` | Service monitor scrape interval | `"5s"` | +| `csi.serviceMonitor.interval` | Service monitor scrape interval | `"10s"` | | `csi.serviceMonitor.labels` | ServiceMonitor additional labels | `{}` | | `csi.serviceMonitor.namespace` | Use a different namespace for the ServiceMonitor | `nil` | | `csi.sidecarLogLevel` | Set logging level for Kubernetes-csi sidecar containers. Supported values from 0 to 5. 0 for general useful logs (the default), 5 for trace level verbosity. 
| `0` | diff --git a/deploy/charts/rook-ceph-cluster/values.yaml b/deploy/charts/rook-ceph-cluster/values.yaml index ff52e617e047..ed0ce51daaa5 100644 --- a/deploy/charts/rook-ceph-cluster/values.yaml +++ b/deploy/charts/rook-ceph-cluster/values.yaml @@ -61,7 +61,7 @@ monitoring: # externalMgrEndpoints: # externalMgrPrometheusPort: # Scrape interval for prometheus - # interval: 5s + # interval: 10s # allow adding custom labels and annotations to the prometheus rule prometheusRule: # -- Labels applied to PrometheusRule diff --git a/deploy/charts/rook-ceph/values.yaml b/deploy/charts/rook-ceph/values.yaml index 66c4b1687ec1..1389fc2b94e1 100644 --- a/deploy/charts/rook-ceph/values.yaml +++ b/deploy/charts/rook-ceph/values.yaml @@ -448,7 +448,7 @@ csi: # -- Enable ServiceMonitor for Ceph CSI drivers enabled: false # -- Service monitor scrape interval - interval: 5s + interval: 10s # -- ServiceMonitor additional labels labels: {} # -- Use a different namespace for the ServiceMonitor diff --git a/deploy/examples/monitoring/exporter-service-monitor.yaml b/deploy/examples/monitoring/exporter-service-monitor.yaml index 847d756552ca..b24e0fc9f935 100644 --- a/deploy/examples/monitoring/exporter-service-monitor.yaml +++ b/deploy/examples/monitoring/exporter-service-monitor.yaml @@ -17,4 +17,4 @@ spec: endpoints: - port: ceph-exporter-http-metrics path: /metrics - interval: 5s + interval: 10s diff --git a/deploy/examples/monitoring/service-monitor.yaml b/deploy/examples/monitoring/service-monitor.yaml index f3563b157957..2bd28b694f14 100644 --- a/deploy/examples/monitoring/service-monitor.yaml +++ b/deploy/examples/monitoring/service-monitor.yaml @@ -16,4 +16,4 @@ spec: endpoints: - port: http-metrics path: /metrics - interval: 5s + interval: 10s From 4f555dbbcbdc3242d85d6fa1703054237544ee36 Mon Sep 17 00:00:00 2001 From: Blaine Gardner Date: Mon, 4 Mar 2024 11:08:11 -0700 Subject: [PATCH 45/65] csi: allow force disabling holder pods Add new CSI_DISABLE_HOLDER_PODS option for 
rook-ceph-operator. This option will disable holder pods when set to "true". In the long term, Rook plans to deprecate the holder pods entirely. This new option will allow users to choose to migrate their clusters to non-holder clusters when they are ready and able, giving them time to gracefully migrate before the holders are permanently removed. This option is set to "false" by default so that upgrading users don't have their CSI pods modified unexpectedly. Example manifests are modified to set this value to true so that new clusters will not deploy holder pods. Migrating users are provided with documentation to instruct them about the new requirements they need to satisfy to successfully remove holder pods, a procedure for migrating pods from holder to non-holder mounts, and a way to delete holder pods once they are no longer in use. When users set CSI_DISABLE_HOLDER_PODS="true", the CSI controller will no longer deploy or update the holder pod Daemonsets, but it does not delete any existing Daemonsets. This allows already-attached PVCs to continue operating normally with their network connection continuing to exist in the current holder pod. This is critical to avoid causing a cluster-wide storage outage. 
More info: https://github.com/rook/rook/issues/13055 Signed-off-by: Blaine Gardner --- .github/workflows/canary-integration-test.yml | 6 + .../CRDs/Cluster/network-providers.md | 412 +++++++++++++++++- Documentation/Helm-Charts/operator-chart.md | 1 + Documentation/Upgrade/rook-upgrade.md | 9 + PendingReleaseNotes.md | 4 + .../charts/rook-ceph/templates/configmap.yaml | 1 + deploy/charts/rook-ceph/values.yaml | 4 + deploy/examples/operator-openshift.yaml | 5 + deploy/examples/operator.yaml | 5 + pkg/operator/ceph/csi/cluster_config.go | 1 + pkg/operator/ceph/csi/cluster_config_test.go | 2 + pkg/operator/ceph/csi/controller.go | 62 ++- pkg/operator/ceph/csi/controller_test.go | 32 ++ pkg/operator/ceph/csi/spec.go | 18 +- 14 files changed, 530 insertions(+), 32 deletions(-) diff --git a/.github/workflows/canary-integration-test.yml b/.github/workflows/canary-integration-test.yml index d11bcb6a4db9..5df527162fdb 100644 --- a/.github/workflows/canary-integration-test.yml +++ b/.github/workflows/canary-integration-test.yml @@ -1533,6 +1533,9 @@ jobs: shell: bash --noprofile --norc -eo pipefail -x {0} run: tests/scripts/github-action-helper.sh build_rook + - name: allow holder pod deployment + run: sed -i "s|CSI_DISABLE_HOLDER_PODS|# CSI_DISABLE_HOLDER_PODS|g" "deploy/examples/operator.yaml" + - name: validate-yaml run: tests/scripts/github-action-helper.sh validate_yaml @@ -1598,6 +1601,9 @@ jobs: - name: setup cluster resources uses: ./.github/workflows/canary-test-config + - name: allow holder pod deployment + run: sed -i "s|CSI_DISABLE_HOLDER_PODS|# CSI_DISABLE_HOLDER_PODS|g" "deploy/examples/operator.yaml" + - name: set Ceph version in CephCluster manifest run: tests/scripts/github-action-helper.sh replace_ceph_image "deploy/examples/cluster-test.yaml" "${{ github.event.inputs.ceph-image }}" diff --git a/Documentation/CRDs/Cluster/network-providers.md b/Documentation/CRDs/Cluster/network-providers.md index a74e6c7a02fc..f98f44eea3dd 100644 --- 
a/Documentation/CRDs/Cluster/network-providers.md +++ b/Documentation/CRDs/Cluster/network-providers.md @@ -79,6 +79,27 @@ to or from host networking after you update this setting, you will need to [failover the mons](../../Storage-Configuration/Advanced/ceph-mon-health.md#failing-over-a-monitor) in order to have mons on the desired network configuration. +## CSI Host Networking + +Host networking for CSI pods is controlled independently from CephCluster networking. CSI can be +deployed with host networking or pod networking. CSI uses host networking by default, which is the +recommended configuration. CSI can be forced to use pod networking by setting the operator config +`CSI_ENABLE_HOST_NETWORK: "false"`. + +When CSI uses pod networking (`"false"` value), it is critical that `csi-rbdplugin`, +`csi-cephfsplugin`, and `csi-nfsplugin` pods are not deleted or updated without following a special +process outlined below. If one of these pods is deleted, it will cause all existing PVCs on the +pod's node to hang permanently until all application pods are restarted. + +The process for updating CSI plugin pods is to perform the following steps on each Kubernetes node +sequentially: +1. `cordon` and `drain` the node +2. When the node is drained, delete the plugin pod on the node (optionally, the node can be rebooted) +3. `uncordon` the node +4. Proceed to the next node when pods on the node restart and stabilize + +For modifications, see [Modifying CSI Networking](#modifying-csi-networking). + ## Multus `network.provider: multus` @@ -91,6 +112,53 @@ While any CNI may be used, the intent is to allow use of CNIs which allow Ceph t specific host interfaces. This improves latency and bandwidth while preserving host-level network isolation.
+### Multus Prerequisites + +These prerequisites apply when: +- CephCluster `network.selector['public']` is specified, AND +- Operator config `CSI_ENABLE_HOST_NETWORK` is `"true"` (or unspecified), AND +- Operator config `CSI_DISABLE_HOLDER_PODS` is `"true"` + +If any of the above do not apply, these prerequisites can be skipped. + +In order for host network-enabled Ceph-CSI to communicate with a Multus-enabled CephCluster, some +setup is required for Kubernetes hosts. + +These prerequisites require an understanding of how Multus networks are configured and how Rook uses +them. The following sections will help clarify questions that may arise here. + +Two basic requirements must be met: + +1. Kubernetes hosts must be able to route successfully to the Multus public network. +2. Pods on the Multus public network must be able to route successfully to Kubernetes hosts. + +These two requirements can be broken down further as follows: + +1. For routing Kubernetes hosts to the Multus public network, each host must ensure the following: + 1. the host must have an interface connected to the Multus public network (the "public-network-interface"). + 2. the "public-network-interface" must have an IP address. + 3. a route must exist to direct traffic destined for pods on the Multus public network through + the "public-network-interface". +2. For routing pods on the Multus public network to Kubernetes hosts, the public + NetworkAttachmentDefinition must be configured to ensure the following: + 1. The definition must have its IP Address Management (IPAM) configured to route traffic destined + for nodes through the network. +3. To ensure routing between the two networks works properly, no IP address assigned to a node can + overlap with any IP address assigned to a pod on the Multus public network. + +These requirements demand careful planning, but some methods can meet them
[Examples are provided after the full document](#multus-examples) to help +understand and implement these requirements. + +!!! Tip + Keep in mind that there are often ten or more Rook/Ceph pods per host. The pod address space may + need to be an order of magnitude larger (or more) than the host address space to allow the + storage cluster to grow in the future. + +If these prerequisites are not achievable, the remaining option is to set the Rook operator config +`CSI_ENABLE_HOST_NETWORK: "false"` as documented in the [CSI Host Networking](#csi-host-networking) +section. + ### Multus Configuration Refer to [Multus documentation](https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md) @@ -113,10 +181,9 @@ apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: ceph-multus-net - namespace: rook-ceph spec: config: '{ - "cniVersion": "0.3.0", + "cniVersion": "0.3.1", "type": "macvlan", "master": "eth0", "mode": "bridge", @@ -127,13 +194,16 @@ spec: }' ``` +* Note that the example above does not specify information that would route pods on the network to + Kubernetes hosts. * Ensure that `master` matches the network interface on hosts that you want to use. It must be the same across all hosts. -* CNI type `macvlan` is highly recommended. +* CNI type [macvlan](https://www.cni.dev/plugins/current/main/macvlan/) is highly recommended. It has less CPU and memory overhead compared to traditional Linux `bridge` configurations. -* IPAM type `whereabouts` is recommended because it ensures each pod gets an IP address unique - within the Kubernetes cluster. No DHCP server is required. If a DHCP server is present on the - network, ensure the IP range does not overlap with the DHCP server's range. +* IPAM type [whereabouts](https://github.com/k8snetworkplumbingwg/whereabouts) is recommended + because it ensures each pod gets an IP address unique within the Kubernetes cluster. No DHCP + server is required. 
If a DHCP server is present on the network, ensure the IP range does not + overlap with the DHCP server's range. NetworkAttachmentDefinitions are selected for the desired Ceph network using `selectors`. Selector values should include the namespace in which the NAD is present. `public` and `cluster` may be @@ -189,3 +259,333 @@ NAD is attached to the container, allowing the daemon to communicate with the re There is work in progress to fix this issue in the [multus-service](https://github.com/k8snetworkplumbingwg/multus-service) repository. At the time of writing it's unclear when this will be supported. + +### Multus examples + +#### Macvlan, Whereabouts, Node Dynamic IPs + +The network plan for this cluster will be as follows: +- The underlying network supporting the public network will be attached to hosts at `eth0` +- Macvlan will be used to attach pods to `eth0` +- Pods and nodes will have separate IP ranges +- Nodes will get the IP range 192.168.252.0/22 (this allows up to 1024 hosts) +- Nodes will have IPs assigned dynamically via DHCP + (DHCP configuration is outside the scope of this document) +- Pods will get the IP range 192.168.0.0/18 (this allows up to 16,384 Rook/Ceph pods) +- Whereabouts will be used to assign IPs to the Multus public network + +Node configuration must allow nodes to route to pods on the Multus public network. + +Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to +route between each other, the host must also be connected via Macvlan. + +Because the host IP range is different from the pod IP range, a route must be added to include the +pod range. 
+ +Such a configuration should be equivalent to the following: + +```console +ip link add public-shim link eth0 type macvlan mode bridge +ip link set public-shim up +dhclient public-shim # gets IP in range 192.168.252.0/22 +ip route add 192.168.0.0/18 dev public-shim +``` + +The NetworkAttachmentDefinition for the public network would look like the following. + +The Whereabouts `routes[].dst` option +([is not well documented](https://gist.github.com/dougbtv/b41c759e9b9aee6a3fe210f09da8e835)) +allows routing pods to hosts via the Multus public network. + +```yaml +apiVersion: "k8s.cni.cncf.io/v1" +kind: NetworkAttachmentDefinition +metadata: + name: public-net + # namespace: rook-ceph # (optional) operator namespace +spec: + config: '{ + "cniVersion": "0.3.1", + "type": "macvlan", + "master": "eth0", + "mode": "bridge", + "ipam": { + "type": "whereabouts", + "range": "192.168.0.0/18", + "routes": [ + {"dst": "192.168.252.0/22"} + ] + } + }' +``` + +#### Macvlan, Whereabouts, Node Static IPs + +The network plan for this cluster will be as follows: +- The underlying network supporting the public network will be attached to hosts at `eth0` +- Macvlan will be used to attach pods to `eth0` +- Pods and nodes will share the IP range 192.168.0.0/16 +- Nodes will get the IP range 192.168.252.0/22 (this allows up to 1024 hosts) +- Pods will get the remainder of the range (192.168.0.1 to 192.168.251.255) +- Whereabouts will be used to assign IPs to the Multus public network +- Nodes will have IPs assigned statically via PXE configs + (PXE configuration and static IP management is outside the scope of this document) + +PXE configuration for the nodes must allow nodes to route to pods +on the Multus public network.
+ +Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to +route between each other, the host must also be connected via Macvlan. + +Because the host IP range is a subset of the whole range, a route must be added to include the whole +range. + +Such a configuration should be equivalent to the following: + +```console +ip link add public-shim link eth0 type macvlan mode bridge +ip addr add 192.168.252.${STATIC_IP}/22 dev public-shim +ip link set public-shim up +ip route add 192.168.0.0/16 dev public-shim +``` + +The NetworkAttachmentDefinition for the public network would look like the following, using +Whereabouts' `exclude` option to simplify the `range` request. The Whereabouts `routes[].dst` option +ensures pods route to hosts via the Multus public network. + +```yaml +apiVersion: "k8s.cni.cncf.io/v1" +kind: NetworkAttachmentDefinition +metadata: + name: public-net +spec: + config: '{ + "cniVersion": "0.3.1", + "type": "macvlan", + "master": "eth0", + "mode": "bridge", + "ipam": { + "type": "whereabouts", + "range": "192.168.0.0/16", + "exclude": [ + "192.168.252.0/22" + ], + "routes": [ + {"dst": "192.168.252.0/22"} + ] + } + }' +``` + +#### Macvlan, DHCP + +The network plan for this cluster will be as follows: +- The underlying network supporting the public network will be attached to hosts at `eth0` +- Macvlan will be used to attach pods to `eth0` +- Pods and nodes will share the IP range 192.168.0.0/16 +- DHCP will be used to ensure nodes and pods get unique IP addresses + +Node configuration must allow nodes to route to pods on the Multus public network. + +Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to +route between each other, the host must also be connected via Macvlan. 
Such a configuration should be equivalent to the following: + +```console +ip link add public-shim link eth0 type macvlan mode bridge +ip link set public-shim up +dhclient public-shim # gets IP in range 192.168.0.0/16 +``` + +The NetworkAttachmentDefinition for the public network would look like the following. + +```yaml +apiVersion: "k8s.cni.cncf.io/v1" +kind: NetworkAttachmentDefinition +metadata: + name: public-net +spec: + config: '{ + "cniVersion": "0.3.1", + "type": "macvlan", + "master": "eth0", + "mode": "bridge", + "ipam": { + "type": "dhcp" + } + }' +``` + +## Modifying CSI Networking + +### Disabling Holder Pods with Multus and CSI Host Networking + +This migration section applies in the following scenario: +- CephCluster `network.provider` is `"multus"`, AND +- Operator config `CSI_DISABLE_HOLDER_PODS` is changed to `"true"`, AND +- Operator config `CSI_ENABLE_HOST_NETWORK` is (or is modified to be) `"true"` + +If the scenario does not apply, skip ahead to the +[Disabling Holder Pods](#disabling-holder-pods) section below. + +**Step 1** +Before setting `CSI_ENABLE_HOST_NETWORK: "true"` and `CSI_DISABLE_HOLDER_PODS: "true"`, thoroughly +read through the [Multus Prerequisites section](#multus-prerequisites). Use the prerequisites +section to develop a plan for modifying host configurations as well as the public +NetworkAttachmentDefinition. + +Once the plan is developed, execute the plan by following the steps below. + +**Step 2** +First, modify the public NetworkAttachmentDefinition as needed. For example, it may be necessary to +add the `routes` directive to the Whereabouts IPAM configuration as in +[this example](#macvlan-whereabouts-node-static-ips). + +**Step 3** +Next, modify the host configurations in the host configuration system. The host configuration system +may be something like PXE, ignition config, cloud-init, Ansible, or any other such system.
A node +reboot is likely necessary to apply configuration updates, but wait until the next step to reboot +nodes. + +**Step 4** +After the NetworkAttachmentDefinition is modified, OSD pods must be restarted. It is easiest to +complete this requirement at the same time nodes are being rebooted to apply configuration updates. + +For each node in the Kubernetes cluster: +1. `cordon` and `drain` the node +2. Wait for all pods to drain +3. Reboot the node, ensuring the new host configuration will be applied +4. `uncordon` the node +5. Wait for pods on the node to restart and stabilize +6. Proceed to the next node + +By following this process, host configurations will be updated, and OSDs are also automatically +restarted as part of the drain-and-reboot process on each node. + +OSDs can be restarted manually if node configuration updates do not require a reboot. + +**Step 5** +Once all nodes are running the new configuration and all OSDs have been restarted, check that the +new node and NetworkAttachmentDefinition configurations are compatible. To do so, verify that each +node can `ping` OSD pods via the public network. + +Use the [toolbox](../../Troubleshooting/ceph-toolbox.md) or the +[kubectl plugin](../../Troubleshooting/kubectl-plugin.md) to list OSD IPs. + +The example below uses +the kubectl plugin, and the OSD public network is 192.168.20.0/24. +```console +$ kubectl rook-ceph ceph osd dump | grep 'osd\.'
+osd.0 up in weight 1 up_from 7 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:192.168.20.19:6800/213587265,v1:192.168.20.19:6801/213587265] [v2:192.168.30.1:6800/213587265,v1:192.168.30.1:6801/213587265] exists,up 7ebbc19a-d45a-4b12-8fef-0f9423a59e78 +osd.1 up in weight 1 up_from 24 up_thru 24 down_at 20 last_clean_interval [8,23) [v2:192.168.20.20:6800/3144257456,v1:192.168.20.20:6801/3144257456] [v2:192.168.30.2:6804/3145257456,v1:192.168.30.2:6805/3145257456] exists,up 146b27da-d605-4138-9748-65603ed0dfa5 +osd.2 up in weight 1 up_from 21 up_thru 0 down_at 20 last_clean_interval [18,20) [v2:192.168.20.21:6800/1809748134,v1:192.168.20.21:6801/1809748134] [v2:192.168.30.3:6804/1810748134,v1:192.168.30.3:6805/1810748134] exists,up ff3d6592-634e-46fd-a0e4-4fe9fafc0386 +``` + +Now check that each node (NODE) can reach OSDs over the public network: +```console +$ ssh user@NODE +$ user@NODE $> ping -c3 192.168.20.19 +# [truncated, successful output] +$ user@NODE $> ping -c3 192.168.20.20 +# [truncated, successful output] +$ user@NODE $> ping -c3 192.168.20.21 +# [truncated, successful output] +``` + +If any node does not get a successful `ping` to a running OSD, it is not safe to proceed. A problem +may arise here for many reasons. Some reasons include: the host may not be properly attached to the +Multus public network, the public NetworkAttachmentDefinition may not be properly configured to +route back to the host, the host may have a firewall rule blocking the connection in either +direction, or the network switch may have a firewall rule blocking the connection. Diagnose and fix +the issue, then return to **Step 1**. + +**Step 6** +If the above check succeeds for all nodes, proceed with the +[Disabling Holder Pods](#disabling-holder-pods) steps below. + +### Disabling Holder Pods + +This migration section applies when `CSI_DISABLE_HOLDER_PODS` is changed to `"true"`. 
+ +**Step 1** +If any CephClusters have Multus enabled (`network.provider: "multus"`), follow the +[Disabling Holder Pods with Multus and CSI Host Networking](#disabling-holder-pods-with-multus-and-csi-host-networking) +steps above before continuing. + +**Step 2** +Begin by setting `CSI_DISABLE_HOLDER_PODS: "true"` (and `CSI_ENABLE_HOST_NETWORK: "true"` if desired). + +After this, `csi-*plugin-*` pods will restart, and `csi-*plugin-holder-*` pods will remain running. + +**Step 3** +Check that CSI pods are using the correct host networking configuration using the example below as +guidance (in the example, `CSI_ENABLE_HOST_NETWORK` is `"true"`): +```console +$ kubectl -n rook-ceph get -o yaml daemonsets.apps csi-rbdplugin | grep -i hostnetwork + hostNetwork: true +$ kubectl -n rook-ceph get -o yaml daemonsets.apps csi-cephfsplugin | grep -i hostnetwork + hostNetwork: true +$ kubectl -n rook-ceph get -o yaml daemonsets.apps csi-nfsplugin | grep -i hostnetwork + hostNetwork: true +``` + +**Step 4** +At this stage, PVCs for running applications are still using the holder pods. These PVCs must be +migrated from the holder to the new network. Follow the process below to do so. + +For each node in the Kubernetes cluster: +1. `cordon` and `drain` the node +2. Wait for all pods to drain +3. Delete all `csi-*plugin-holder*` pods on the node (a new holder will take its place) +4. `uncordon` the node +5. Wait for pods on the node to restart and stabilize +6. Proceed to the next node + +**Step 5** +After this process is done for all Kubernetes nodes, it is safe to delete the `csi-*plugin-holder*` +daemonsets.
+ +Delete the holder daemonsets using the example below as guidance: +```console +$ kubectl -n rook-ceph get daemonset -o name | grep plugin-holder +daemonset.apps/csi-cephfsplugin-holder-my-cluster +daemonset.apps/csi-rbdplugin-holder-my-cluster + +$ kubectl -n rook-ceph delete daemonset.apps/csi-cephfsplugin-holder-my-cluster +daemonset.apps "csi-cephfsplugin-holder-my-cluster" deleted + +$ kubectl -n rook-ceph delete daemonset.apps/csi-rbdplugin-holder-my-cluster +daemonset.apps "csi-rbdplugin-holder-my-cluster" deleted +``` + +**Step 6** +The migration is now complete! Congratulations! + +### Applying CSI Networking + +This migration section applies in the following scenario: +- `CSI_ENABLE_HOST_NETWORK` is modified, AND +- `CSI_DISABLE_HOLDER_PODS` is `"true"` + +**Step 1** +If `CSI_DISABLE_HOLDER_PODS` is unspecified or is `"false"`, follow the +[Disabling Holder Pods](#disabling-holder-pods) section first. + +**Step 2** +Begin by setting the desired `CSI_ENABLE_HOST_NETWORK` value. + +**Step 3** +At this stage, PVCs for running applications are still using the old network. These PVCs must be +migrated to the new network. Follow the process below to do so. + +For each node in the Kubernetes cluster: +1. `cordon` and `drain` the node +2. Wait for all pods to drain +3. `uncordon` the node +4. Wait for pods on the node to restart and stabilize +5. Proceed to the next node + +**Step 4** +The migration is now complete! Congratulations!
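Across the Multus examples above, the key invariants are that node and pod CIDRs must not overlap and that each range is large enough for the planned cluster. The sketch below (illustrative only, not part of this patch) uses Python's standard `ipaddress` module to verify a plan before any NetworkAttachmentDefinition is written; the CIDRs are the ones from the examples:

```python
import ipaddress

# Plan from the "Node Dynamic IPs" example: disjoint node and pod ranges
node_range = ipaddress.ip_network("192.168.252.0/22")
pod_range = ipaddress.ip_network("192.168.0.0/18")

# Prerequisite 3: no node IP may overlap any pod IP on the Multus public network
assert not node_range.overlaps(pod_range), "node and pod ranges overlap"

# Capacities stated in the plan: /22 -> 1024 addresses, /18 -> 16384 addresses
print(node_range.num_addresses)  # 1024
print(pod_range.num_addresses)   # 16384

# Plan from the "Node Static IPs" example: pods take 192.168.0.0/16 minus the
# node /22, which is what the Whereabouts "exclude" option expresses
whole_range = ipaddress.ip_network("192.168.0.0/16")
assert node_range.subnet_of(whole_range)
pod_subnets = list(whole_range.address_exclude(node_range))
print(sum(n.num_addresses for n in pod_subnets))  # 64512 addresses left for pods
```

Running a check like this against a custom plan catches overlap mistakes before they surface as the failed node-to-OSD `ping` test described above.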
diff --git a/Documentation/Helm-Charts/operator-chart.md b/Documentation/Helm-Charts/operator-chart.md index 352c5498e785..ad2a706ceb7e 100644 --- a/Documentation/Helm-Charts/operator-chart.md +++ b/Documentation/Helm-Charts/operator-chart.md @@ -80,6 +80,7 @@ The following table lists the configurable parameters of the rook-operator chart | `csi.csiRBDPluginVolume` | The volume of the CephCSI RBD plugin DaemonSet | `nil` | | `csi.csiRBDPluginVolumeMount` | The volume mounts of the CephCSI RBD plugin DaemonSet | `nil` | | `csi.csiRBDProvisionerResource` | CEPH CSI RBD provisioner resource requirement list csi-omap-generator resources will be applied only if `enableOMAPGenerator` is set to `true` | see values.yaml | +| `csi.disableHolderPods` | Deprecation note: Rook uses "holder" pods to allow CSI to connect to the multus public network without needing hosts to be connected to the network. Holder pods are being deprecated. See issue for details: https://github.com/rook/rook/issues/13055. New Rook deployments should set this to "true". | `true` | | `csi.enableCSIEncryption` | Enable Ceph CSI PVC encryption support | `false` | | `csi.enableCSIHostNetwork` | Enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary in some network configurations where the SDN does not provide access to an external cluster or there is significant drop in read/write performance | `true` | | `csi.enableCephfsDriver` | Enable Ceph CSI CephFS driver | `true` | diff --git a/Documentation/Upgrade/rook-upgrade.md b/Documentation/Upgrade/rook-upgrade.md index 5ef74e162f7c..ad1b4aba6c8e 100644 --- a/Documentation/Upgrade/rook-upgrade.md +++ b/Documentation/Upgrade/rook-upgrade.md @@ -256,3 +256,12 @@ This cluster is finished: At this point, the Rook operator should be running version `rook/ceph:v1.13.0`. Verify the CephCluster health using the [health verification doc](health-verification.md). + +### **6.
Disable CSI holder pods** +CSI "holder" pods are a frequent source of confusion and difficulty in Rook. Because of +this, they are being deprecated and will be removed in Rook v1.16. + +If there are any CephClusters that use the non-default network setting `network.provider: "multus"`, +or if the operator config `CSI_ENABLE_HOST_NETWORK` is set to `"false"`, perform migration steps to +remove holder pods by setting `CSI_DISABLE_HOLDER_PODS: "true"` after following this migration guide: +[Modifying CSI Networking](./../CRDs/Cluster/network-providers.md#modifying-csi-networking) diff --git a/PendingReleaseNotes.md b/PendingReleaseNotes.md index 838182e0c9f7..d7ce4118ff46 100644 --- a/PendingReleaseNotes.md +++ b/PendingReleaseNotes.md @@ -7,6 +7,10 @@ read affinity setting in cephCluster CR (CSIDriverOptions section) in [PR](https - updating `netNamespaceFilePath` for all clusterIDs in rook-ceph-csi-config configMap in [PR](https://github.com/rook/rook/pull/13613) - Issue: The netNamespaceFilePath isn't updated in the CSI config map for all the clusterIDs when `CSI_ENABLE_HOST_NETWORK` is set to false in `operator.yaml` - Impact: This results in the unintended network configurations, with pods using the host networking instead of pod networking. +- Rook is beginning the process of deprecating holder pods. This is especially important for Multus + users. Helm chart users should take care to set the new config `disableHolderPods: false` if they + are using Multus and still using the `rook-ceph` chart's default `values.yaml`. Special upgrade + docs for multus can be found [here](Documentation/CRDs/Cluster/network-providers.md#modifying-csi-networking).
## Features diff --git a/deploy/charts/rook-ceph/templates/configmap.yaml b/deploy/charts/rook-ceph/templates/configmap.yaml index 4ce7b75dc278..154e5f340f01 100644 --- a/deploy/charts/rook-ceph/templates/configmap.yaml +++ b/deploy/charts/rook-ceph/templates/configmap.yaml @@ -24,6 +24,7 @@ data: CSI_ENABLE_ENCRYPTION: {{ .Values.csi.enableCSIEncryption | quote }} CSI_ENABLE_OMAP_GENERATOR: {{ .Values.csi.enableOMAPGenerator | quote }} CSI_ENABLE_HOST_NETWORK: {{ .Values.csi.enableCSIHostNetwork | quote }} + CSI_DISABLE_HOLDER_PODS: {{ .Values.csi.disableHolderPods | quote }} CSI_ENABLE_METADATA: {{ .Values.csi.enableMetadata | quote }} CSI_ENABLE_VOLUME_GROUP_SNAPSHOT: {{ .Values.csi.enableVolumeGroupSnapshot | quote }} {{- if .Values.csi.csiDriverNamePrefix }} diff --git a/deploy/charts/rook-ceph/values.yaml b/deploy/charts/rook-ceph/values.yaml index c01b4016dc3d..6af5017a8cbf 100644 --- a/deploy/charts/rook-ceph/values.yaml +++ b/deploy/charts/rook-ceph/values.yaml @@ -85,6 +85,10 @@ csi: # in some network configurations where the SDN does not provide access to an external cluster or # there is significant drop in read/write performance enableCSIHostNetwork: true + # -- Deprecation note: Rook uses "holder" pods to allow CSI to connect to the multus public network + # without needing hosts to be connected to the network. Holder pods are being deprecated. See issue for details: + # https://github.com/rook/rook/issues/13055. New Rook deployments should set this to "true". + disableHolderPods: true # -- Enable Snapshotter in CephFS provisioner pod enableCephfsSnapshotter: true # -- Enable Snapshotter in NFS provisioner pod diff --git a/deploy/examples/operator-openshift.yaml b/deploy/examples/operator-openshift.yaml index d269206da14f..2c1506fa2783 100644 --- a/deploy/examples/operator-openshift.yaml +++ b/deploy/examples/operator-openshift.yaml @@ -137,6 +137,11 @@ data: # there is significant drop in read/write performance.
# CSI_ENABLE_HOST_NETWORK: "true" + # Deprecation note: Rook uses "holder" pods to allow CSI to connect to the multus public network + # without needing hosts to be connected to the network. Holder pods are being deprecated. See issue for details: + # https://github.com/rook/rook/issues/13055. New Rook deployments should set this to "true". + CSI_DISABLE_HOLDER_PODS: "true" + # Set logging level for cephCSI containers maintained by the cephCSI. # Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity. # CSI_LOG_LEVEL: "0" diff --git a/deploy/examples/operator.yaml b/deploy/examples/operator.yaml index 7bb91ac1f065..481de81b41b8 100644 --- a/deploy/examples/operator.yaml +++ b/deploy/examples/operator.yaml @@ -44,6 +44,11 @@ data: # there is significant drop in read/write performance. # CSI_ENABLE_HOST_NETWORK: "true" + # Deprecation note: Rook uses "holder" pods to allow CSI to connect to the multus public network + # without needing hosts to be connected to the network. Holder pods are being deprecated. See issue for details: + # https://github.com/rook/rook/issues/13055. New Rook deployments should set this to "true". + CSI_DISABLE_HOLDER_PODS: "true" + # Set to true to enable adding volume metadata on the CephFS subvolume and RBD images. # Not all users might be interested in getting volume/snapshot details as metadata on CephFS subvolume and RBD images. # Hence enable metadata is false by default.
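The interaction of `CSI_ENABLE_HOST_NETWORK`, multus, and the new `CSI_DISABLE_HOLDER_PODS` setting reduces to a single condition in the controller change later in this patch: `thisHolderEnabled := (!csiHostNetworkEnabled || cluster.Spec.Network.IsMultus()) && !csiDisableHolders`. An illustrative Python restatement (the function name is invented for this sketch, not part of Rook):

```python
def holder_pods_enabled(csi_host_network: bool, is_multus: bool,
                        disable_holders: bool) -> bool:
    """Mirror the controller condition: a holder pod DaemonSet is wanted when
    CSI uses pod networking or the cluster uses multus, unless holders are
    force-disabled via CSI_DISABLE_HOLDER_PODS."""
    return (not csi_host_network or is_multus) and not disable_holders

# New deployments (CSI_DISABLE_HOLDER_PODS: "true"): no holders, even with multus
assert holder_pods_enabled(csi_host_network=True, is_multus=True, disable_holders=True) is False

# Legacy operator default ("false"): multus clusters still get holder pods
assert holder_pods_enabled(csi_host_network=True, is_multus=True, disable_holders=False) is True

# Host networking on, no multus: holders were never needed
assert holder_pods_enabled(csi_host_network=True, is_multus=False, disable_holders=False) is False
```

Note that even when the result is false for every cluster, the controller only stops deploying or updating holder DaemonSets; existing DaemonSets are left in place until deleted manually per the migration guide.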
diff --git a/pkg/operator/ceph/csi/cluster_config.go b/pkg/operator/ceph/csi/cluster_config.go index 6b9a9ee59d2d..132ca5350760 100644 --- a/pkg/operator/ceph/csi/cluster_config.go +++ b/pkg/operator/ceph/csi/cluster_config.go @@ -341,6 +341,7 @@ func updateCSIDriverOptions(curr, clusterKey string, } } + updateNetNamespaceFilePath(clusterKey, cc) return formatCsiClusterConfig(cc) } diff --git a/pkg/operator/ceph/csi/cluster_config_test.go b/pkg/operator/ceph/csi/cluster_config_test.go index 3698e0fb1b0d..fefc036baacd 100644 --- a/pkg/operator/ceph/csi/cluster_config_test.go +++ b/pkg/operator/ceph/csi/cluster_config_test.go @@ -503,6 +503,8 @@ func TestUpdateCSIDriverOptions(t *testing.T) { clusterKey string csiDriverOptions *cephv1.CSIDriverSpec } + holderEnabled = true + tests := []struct { name string args args diff --git a/pkg/operator/ceph/csi/controller.go b/pkg/operator/ceph/csi/controller.go index e380d9ea4a6e..1c56183c2981 100644 --- a/pkg/operator/ceph/csi/controller.go +++ b/pkg/operator/ceph/csi/controller.go @@ -130,6 +130,9 @@ func (r *ReconcileCSI) Reconcile(context context.Context, request reconcile.Requ return reconcileResponse, err } +// allow overriding for unit tests +var reconcileSaveCSIDriverOptions = SaveCSIDriverOptions + func (r *ReconcileCSI) reconcile(request reconcile.Request) (reconcile.Result, error) { // reconcileResult is used to communicate the result of the reconciliation back to the caller var reconcileResult reconcile.Result @@ -192,6 +195,17 @@ func (r *ReconcileCSI) reconcile(request reconcile.Request) (reconcile.Result, e return reconcile.Result{}, errors.Wrap(err, "failed to parse value for 'CSI_ENABLE_HOST_NETWORK'") } + csiDisableHolders, err := strconv.ParseBool(k8sutil.GetValue(r.opConfig.Parameters, "CSI_DISABLE_HOLDER_PODS", "false")) + if err != nil { + return reconcile.Result{}, errors.Wrap(err, "failed to parse value for 'CSI_DISABLE_HOLDER_PODS'") + } + + // begin each reconcile with the assumption that holder pods 
won't be deployed + // the loop below will determine with each reconcile if they need to be deployed + // without this, holder pods won't be removed unless the operator is restarted + r.clustersWithHolder = []ClusterDetail{} + holderEnabled = false + for i, cluster := range cephClusters.Items { if !cluster.DeletionTimestamp.IsZero() { logger.Debugf("ceph cluster %q is being deleted, no need to reconcile the csi driver", request.NamespacedName) @@ -203,28 +217,40 @@ func (r *ReconcileCSI) reconcile(request reconcile.Request) (reconcile.Result, e return reconcile.Result{}, nil } - holderEnabled := !csiHostNetworkEnabled || cluster.Spec.Network.IsMultus() - // Do we have a multus cluster or csi host network disabled? - // If so deploy the plugin holder with the fsid attached - if holderEnabled { - // Load cluster info for later use in updating the ceph-csi configmap - clusterInfo, _, _, err := opcontroller.LoadClusterInfo(r.context, r.opManagerContext, cluster.Namespace, &cephClusters.Items[i].Spec) - if err != nil { - // This avoids a requeue with exponential backoff and allows the controller to reconcile - // more quickly when the cluster is ready. - if errors.Is(err, opcontroller.ClusterInfoNoClusterNoSecret) { - logger.Infof("cluster info for cluster %q is not ready yet, will retry in %s, proceeding with ready clusters", cluster.Name, opcontroller.WaitForRequeueIfCephClusterNotReady.RequeueAfter.String()) - reconcileResult = opcontroller.WaitForRequeueIfCephClusterNotReady - continue - } - return opcontroller.ImmediateRetryResult, errors.Wrapf(err, "failed to load cluster info for cluster %q", cluster.Name) + // Load cluster info for later use in updating the ceph-csi configmap + clusterInfo, _, _, err := opcontroller.LoadClusterInfo(r.context, r.opManagerContext, cluster.Namespace, &cephClusters.Items[i].Spec) + if err != nil { + // This avoids a requeue with exponential backoff and allows the controller to reconcile + // more quickly when the cluster is ready.
+ if errors.Is(err, opcontroller.ClusterInfoNoClusterNoSecret) { + logger.Infof("cluster info for cluster %q is not ready yet, will retry in %s, proceeding with ready clusters", cluster.Name, opcontroller.WaitForRequeueIfCephClusterNotReady.RequeueAfter.String()) + reconcileResult = opcontroller.WaitForRequeueIfCephClusterNotReady + continue } - clusterInfo.OwnerInfo = k8sutil.NewOwnerInfo(&cephClusters.Items[i], r.scheme) - logger.Debugf("cluster %q is running on multus or CSI_ENABLE_HOST_NETWORK is false, deploying the ceph-csi plugin holder", cluster.Name) + return opcontroller.ImmediateRetryResult, errors.Wrapf(err, "failed to load cluster info for cluster %q", cluster.Name) + } + clusterInfo.OwnerInfo = k8sutil.NewOwnerInfo(&cephClusters.Items[i], r.scheme) + + // is holder enabled for this cluster? + thisHolderEnabled := (!csiHostNetworkEnabled || cluster.Spec.Network.IsMultus()) && !csiDisableHolders + // Do we have a multus cluster or csi host network disabled? + // If so deploy the plugin holder with the fsid attached + if thisHolderEnabled { + logger.Debugf("cluster %q: deploying the ceph-csi plugin holder", cluster.Name) r.clustersWithHolder = append(r.clustersWithHolder, ClusterDetail{cluster: &cephClusters.Items[i], clusterInfo: clusterInfo}) + + // holder pods are enabled globally if any cluster needs a holder pod + holderEnabled = true } else { - logger.Debugf("not a multus cluster %q or CSI_ENABLE_HOST_NETWORK is true, not deploying the ceph-csi plugin holder", request.NamespacedName) + logger.Debugf("cluster %q: not deploying the ceph-csi plugin holder", request.NamespacedName) + } + + // if holder pods were disabled, the controller needs to update the configmap for each + // cephcluster to remove the net namespace file path + err = reconcileSaveCSIDriverOptions(r.context.Clientset, cluster.Namespace, clusterInfo) + if err != nil { + return opcontroller.ImmediateRetryResult, errors.Wrapf(err, "failed to update CSI driver options for cluster %q", 
cluster.Name) } } diff --git a/pkg/operator/ceph/csi/controller_test.go b/pkg/operator/ceph/csi/controller_test.go index 8f72eb265d6f..2860c6d673f4 100644 --- a/pkg/operator/ceph/csi/controller_test.go +++ b/pkg/operator/ceph/csi/controller_test.go @@ -26,6 +26,7 @@ import ( rookclient "github.com/rook/rook/pkg/client/clientset/versioned/fake" "github.com/rook/rook/pkg/client/clientset/versioned/scheme" "github.com/rook/rook/pkg/clusterd" + "github.com/rook/rook/pkg/daemon/ceph/client" "github.com/rook/rook/pkg/operator/ceph/controller" "github.com/rook/rook/pkg/operator/k8sutil" "github.com/rook/rook/pkg/operator/test" @@ -35,11 +36,20 @@ import ( metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/types" + "k8s.io/client-go/kubernetes" "sigs.k8s.io/controller-runtime/pkg/client/fake" "sigs.k8s.io/controller-runtime/pkg/reconcile" ) func TestCephCSIController(t *testing.T) { + oldReconcileSaveCSIDriverOptions := reconcileSaveCSIDriverOptions + defer func() { reconcileSaveCSIDriverOptions = oldReconcileSaveCSIDriverOptions }() + saveCSIDriverOptionsCalledForClusterNS := []string{} + reconcileSaveCSIDriverOptions = func(clientset kubernetes.Interface, clusterNamespace string, clusterInfo *client.ClusterInfo) error { + saveCSIDriverOptionsCalledForClusterNS = append(saveCSIDriverOptionsCalledForClusterNS, clusterNamespace) + return nil + } + ctx := context.TODO() var ( name = "rook-ceph" @@ -129,8 +139,25 @@ func TestCephCSIController(t *testing.T) { }, }, } + // Mock clusterInfo + secrets := map[string][]byte{ + "fsid": []byte(name), + "mon-secret": []byte("monsecret"), + "admin-secret": []byte("adminsecret"), + } + secret := &v1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rook-ceph-mon", + Namespace: namespace, + }, + Data: secrets, + Type: k8sutil.RookType, + } + _, err = c.Clientset.CoreV1().Secrets(namespace).Create(ctx, secret, metav1.CreateOptions{}) + assert.NoError(t, err) s := scheme.Scheme 
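The `reconcileSaveCSIDriverOptions` indirection used by this patch is a common Go test seam: production code calls through a package-level function variable, and the test swaps in a recorder and restores the original with `defer`. A standalone sketch of the pattern, with illustrative names rather than rook's:

```go
package main

import "fmt"

// saveOptions stands in for a call with external side effects, like
// SaveCSIDriverOptions writing to a configmap.
func saveOptions(namespace string) error {
	fmt.Println("saving CSI driver options for", namespace)
	return nil
}

// Production code calls through this variable, so a test can substitute a
// stub without reaching a real Kubernetes API.
var saveOptionsFn = saveOptions

func reconcile(namespaces []string) error {
	for _, ns := range namespaces {
		if err := saveOptionsFn(ns); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// What a test does: replace the seam, and restore it when done.
	old := saveOptionsFn
	defer func() { saveOptionsFn = old }()

	var called []string
	saveOptionsFn = func(ns string) error {
		called = append(called, ns)
		return nil
	}

	if err := reconcile([]string{"rook-ceph"}); err != nil {
		panic(err)
	}
	fmt.Println(called) // prints "[rook-ceph]"
}
```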
s.AddKnownTypes(cephv1.SchemeGroupVersion, &cephv1.CephCluster{}, &cephv1.CephClusterList{}, &v1.ConfigMap{}) + saveCSIDriverOptionsCalledForClusterNS = []string{} object := []runtime.Object{ cephCluster, @@ -156,6 +183,8 @@ func TestCephCSIController(t *testing.T) { ds, err := c.Clientset.AppsV1().DaemonSets(namespace).List(ctx, metav1.ListOptions{}) assert.NoError(t, err) assert.Equal(t, 2, len(ds.Items), ds) + + assert.Equal(t, []string{namespace}, saveCSIDriverOptionsCalledForClusterNS) }) t.Run("success ceph csi deployment with multus", func(t *testing.T) { @@ -209,6 +238,7 @@ func TestCephCSIController(t *testing.T) { assert.NoError(t, err) s := scheme.Scheme s.AddKnownTypes(cephv1.SchemeGroupVersion, &cephv1.CephCluster{}, &cephv1.CephClusterList{}, &v1.ConfigMap{}) + saveCSIDriverOptionsCalledForClusterNS = []string{} object := []runtime.Object{ cephCluster, @@ -240,5 +270,7 @@ func TestCephCSIController(t *testing.T) { assert.Equal(t, "csi-rbdplugin", ds.Items[2].Name) assert.Equal(t, "csi-rbdplugin-holder-rook-ceph", ds.Items[3].Name) assert.Equal(t, `[{"name":"public-net","namespace":"rook-ceph"}]`, ds.Items[1].Spec.Template.Annotations["k8s.v1.cni.cncf.io/networks"], ds.Items[1].Spec.Template.Annotations) + + assert.Equal(t, []string{namespace}, saveCSIDriverOptionsCalledForClusterNS) }) } diff --git a/pkg/operator/ceph/csi/spec.go b/pkg/operator/ceph/csi/spec.go index 188a9ca85ba0..1a8c5bba8c19 100644 --- a/pkg/operator/ceph/csi/spec.go +++ b/pkg/operator/ceph/csi/spec.go @@ -20,6 +20,7 @@ import ( "context" _ "embed" "fmt" + "strconv" "strings" "time" @@ -419,14 +420,6 @@ func (r *ReconcileCSI) startDrivers(ver *version.Info, ownerInfo *k8sutil.OwnerI }) } - holderEnabled = !CSIParam.EnableCSIHostNetwork - - for i := range r.clustersWithHolder { - if r.clustersWithHolder[i].cluster.Spec.Network.IsMultus() { - holderEnabled = true - break - } - } // get common provisioner tolerations and node affinity provisionerTolerations := 
getToleration(r.opConfig.Parameters, provisionerTolerationsEnv, []corev1.Toleration{}) provisionerNodeAffinity := getNodeAffinity(r.opConfig.Parameters, provisionerNodeAffinityEnv, &corev1.NodeAffinity{}) @@ -946,6 +939,15 @@ func GenerateNetNamespaceFilePath(ctx context.Context, client client.Client, clu return "", errors.Wrap(err, "failed to get operator's configmap") } + // net namespace file path is empty string if holder pods are disabled + csiDisableHolders, err := strconv.ParseBool(k8sutil.GetValue(opConfig.Data, "CSI_DISABLE_HOLDER_PODS", "false")) + if err != nil { + return "", errors.Wrap(err, "failed to parse value for 'CSI_DISABLE_HOLDER_PODS'") + } + if csiDisableHolders { + return "", nil + } + switch driverName { case RBDDriverShortName: driverSuffix = rbdDriverSuffix From 2e0c015d42be9d618c24bfaacc6685aca1245fdd Mon Sep 17 00:00:00 2001 From: Hyeonki Hong Date: Fri, 15 Mar 2024 12:04:55 +0900 Subject: [PATCH 46/65] helm: use toYaml for discovery nodeAffinity this commit allows users to use a requiredDuringSchedulingIgnoredDuringExecution map Signed-off-by: Hyeonki Hong --- deploy/charts/rook-ceph/templates/deployment.yaml | 2 +- deploy/charts/rook-ceph/values.yaml | 11 ++++++++++- 2 files changed, 11 insertions(+), 2 deletions(-) diff --git a/deploy/charts/rook-ceph/templates/deployment.yaml index a0d2be74fb91..b9f38d23300c 100644 --- a/deploy/charts/rook-ceph/templates/deployment.yaml +++ b/deploy/charts/rook-ceph/templates/deployment.yaml @@ -68,7 +68,7 @@ spec: {{- end }} {{- if .Values.discover.nodeAffinity }} - name: DISCOVER_AGENT_NODE_AFFINITY - value: {{ .Values.discover.nodeAffinity }} + value: {{ toYaml .Values.discover.nodeAffinity | quote }} {{- end }} {{- if .Values.discover.podLabels }} - name: DISCOVER_AGENT_POD_LABELS diff --git a/deploy/charts/rook-ceph/values.yaml index 66c4b1687ec1..b6ab66804a77 100644 --- a/deploy/charts/rook-ceph/values.yaml 
+++ b/deploy/charts/rook-ceph/values.yaml @@ -589,7 +589,16 @@ discover: # operator: Exists # effect: NoSchedule # -- The node labels for affinity of `discover-agent` [^1] - nodeAffinity: # key1=value1,value2; key2=value3 + nodeAffinity: + # key1=value1,value2; key2=value3 + # + # or + # + # requiredDuringSchedulingIgnoredDuringExecution: + # nodeSelectorTerms: + # - matchExpressions: + # - key: storage-node + # operator: Exists # -- Labels to add to the discover pods podLabels: # "key1=value1,key2=value2" # -- Add resources to discover daemon pods From 4efe3982b0533096a33429d0e825ea48862499b3 Mon Sep 17 00:00:00 2001 From: sp98 Date: Thu, 22 Feb 2024 13:59:43 +0530 Subject: [PATCH 47/65] core: azure kms support Add support for storing OSD encryption keys in Azure KMS Signed-off-by: sp98 --- .github/workflows/unit-test.yml | 2 + PendingReleaseNotes.md | 1 + go.mod | 26 +- go.sum | 215 ++++++++++++- pkg/apis/ceph.rook.io/v1/security.go | 4 + pkg/apis/go.mod | 13 +- pkg/apis/go.sum | 430 +++++++++++++++++++++++++- pkg/daemon/ceph/osd/kms/azure.go | 102 ++++++ pkg/daemon/ceph/osd/kms/azure_test.go | 85 +++++ pkg/daemon/ceph/osd/kms/envs.go | 2 +- pkg/daemon/ceph/osd/kms/kms.go | 103 +++++- pkg/daemon/ceph/osd/kms/kms_test.go | 40 +++ pkg/daemon/ceph/osd/kms/vault.go | 44 --- 13 files changed, 980 insertions(+), 87 deletions(-) create mode 100644 pkg/daemon/ceph/osd/kms/azure.go create mode 100644 pkg/daemon/ceph/osd/kms/azure_test.go diff --git a/.github/workflows/unit-test.yml index baba3230559e..a5b9aad4ae4a 100644 --- a/.github/workflows/unit-test.yml +++ b/.github/workflows/unit-test.yml @@ -53,6 +53,8 @@ jobs: - name: run unit tests run: | export ROOK_UNIT_JQ_PATH="$(which jq)" + # AZURE_EXTENSION_DIR is set in GH action runners and affects Azure KMS unit tests; unset it + unset AZURE_EXTENSION_DIR GOPATH=$(go env GOPATH) make -j $(nproc) test | tee output.txt - name: check mds liveness probe script ran successfully diff --git 
a/PendingReleaseNotes.md b/PendingReleaseNotes.md index 0b46e4ecb2dc..ccea9e907bfd 100644 --- a/PendingReleaseNotes.md +++ b/PendingReleaseNotes.md @@ -17,3 +17,4 @@ read affinity setting in cephCluster CR (CSIDriverOptions section) in [PR](https - The feature support for VolumeSnapshotGroup has been added to the RBD and CephFS CSI driver. - Support for virtual style hosting for s3 buckets in the CephObjectStore. - Add option to specify prefix for the OBC provisioner. +- Support Azure Key Vault for storing OSD encryption keys. diff --git a/go.mod b/go.mod index 45880beaf16d..cc458f3b1d6b 100644 --- a/go.mod +++ b/go.mod @@ -2,7 +2,15 @@ module github.com/rook/rook go 1.21 -replace github.com/rook/rook/pkg/apis => ./pkg/apis +replace ( + github.com/googleapis/gnostic => github.com/googleapis/gnostic v0.4.1 + github.com/kubernetes-incubator/external-storage => github.com/libopenstorage/external-storage v0.20.4-openstorage-rc3 + + // TODO: remove this replace once https://github.com/libopenstorage/secrets/pull/83 is merged + github.com/libopenstorage/secrets => github.com/rook/secrets v0.0.0-20240315053144-3195f6906937 + github.com/portworx/sched-ops => github.com/portworx/sched-ops v0.20.4-openstorage-rc3 + github.com/rook/rook/pkg/apis => ./pkg/apis +) require ( github.com/IBM/keyprotect-go-client v0.12.2 @@ -45,6 +53,19 @@ require ( sigs.k8s.io/yaml v1.4.0 ) +require ( + github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.1 // indirect + github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.5.1 // indirect + github.com/Azure/azure-sdk-for-go/sdk/internal v1.5.1 // indirect + github.com/Azure/azure-sdk-for-go/sdk/keyvault/azsecrets v0.12.0 // indirect + github.com/Azure/azure-sdk-for-go/sdk/keyvault/internal v0.7.1 // indirect + github.com/AzureAD/microsoft-authentication-library-for-go v1.2.1 // indirect + github.com/golang-jwt/jwt/v5 v5.2.0 // indirect + github.com/kylelemons/godebug v1.1.0 // indirect + github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // 
indirect + github.com/portworx/sched-ops v1.20.4-rc1 // indirect +) + require ( emperror.dev/errors v0.8.1 // indirect github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect @@ -149,8 +170,7 @@ exclude ( github.com/elazarl/goproxy v0.0.0-20170405201442-c4fc26588b6e github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153 github.com/elazarl/goproxy v0.0.0-20181111060418-2ce16c963a8a - // portworx dependencies are a mess, and we don't use portworx code, so skip it - github.com/portworx/sched-ops v1.20.4-rc1 + github.com/kubernetes-incubator/external-storage v0.20.4-openstorage-rc2 // Exclude pre-go-mod kubernetes tags, because they are older // than v0.x releases but are picked when updating dependencies. k8s.io/client-go v1.4.0 diff --git a/go.sum b/go.sum index 24adc1da3fb2..3a4c709bf274 100644 --- a/go.sum +++ b/go.sum @@ -6,6 +6,7 @@ cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxK cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc= cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0= cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To= +cloud.google.com/go v0.51.0/go.mod h1:hWtGJ6gnXH+KgDv+V0zFGDvpi07n3z8ZNj3T1RW0Gcw= cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4= cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M= cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc= @@ -40,6 +41,7 @@ cloud.google.com/go/compute v1.6.0/go.mod h1:T29tfhtVbq1wvAPo0E3+7vhgmkOYeXjhFvz cloud.google.com/go/compute v1.6.1/go.mod h1:g85FgpzFvNULZ+S8AYq87axRKuf2Kh7deLqV/jJ3thU= cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE= cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk= +cloud.google.com/go/firestore v1.1.0/go.mod 
h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk= cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I= cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw= cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA= @@ -53,30 +55,51 @@ dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7 emperror.dev/errors v0.8.0/go.mod h1:YcRvLPh626Ubn2xqtoprejnA5nFha+TJ+2vew48kWuE= emperror.dev/errors v0.8.1 h1:UavXZ5cSX/4u9iyvH6aDcuGkVjeexUGJ7Ij7G4VfQT0= emperror.dev/errors v0.8.1/go.mod h1:YcRvLPh626Ubn2xqtoprejnA5nFha+TJ+2vew48kWuE= -github.com/Azure/azure-sdk-for-go v62.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= +github.com/Azure/azure-sdk-for-go/sdk/azcore v1.0.0/go.mod h1:uGG2W01BaETf0Ozp+QxxKJdMBNRWPdstHG0Fmdwn1/U= +github.com/Azure/azure-sdk-for-go/sdk/azcore v1.4.0/go.mod h1:ON4tFdPTwRcgWEaVDrN3584Ef+b7GgSJaXxe5fW9t4M= +github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.1 h1:lGlwhPtrX6EVml1hO0ivjkUxsSyl4dsiw9qcA1k/3IQ= +github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.1/go.mod h1:RKUqNu35KJYcVG/fqTRqmuXJZYNhYkBrnC/hX7yGbTA= +github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.1.0/go.mod h1:bhXu1AjYL+wutSL/kpSq6s7733q2Rb0yuot9Zgfqa/0= +github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.5.1 h1:sO0/P7g68FrryJzljemN+6GTssUXdANk6aJ7T1ZxnsQ= +github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.5.1/go.mod h1:h8hyGFDsU5HMivxiS2iYFZsgDbU9OnnJ163x5UGVKYo= +github.com/Azure/azure-sdk-for-go/sdk/internal v1.0.0/go.mod h1:eWRD7oawr1Mu1sLCawqVc0CUiF43ia3qQMxLscsKQ9w= +github.com/Azure/azure-sdk-for-go/sdk/internal v1.1.2/go.mod h1:eWRD7oawr1Mu1sLCawqVc0CUiF43ia3qQMxLscsKQ9w= +github.com/Azure/azure-sdk-for-go/sdk/internal v1.2.0/go.mod h1:eWRD7oawr1Mu1sLCawqVc0CUiF43ia3qQMxLscsKQ9w= +github.com/Azure/azure-sdk-for-go/sdk/internal v1.5.1 h1:6oNBlSdi1QqM1PNW7FPA6xOGA5UNsXnkaYZz9vdPGhA= 
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.5.1/go.mod h1:s4kgfzA0covAXNicZHDMN58jExvcng2mC/DepXiF1EI= +github.com/Azure/azure-sdk-for-go/sdk/keyvault/azsecrets v0.12.0 h1:xnO4sFyG8UH2fElBkcqLTOZsAajvKfnSlgBBW8dXYjw= +github.com/Azure/azure-sdk-for-go/sdk/keyvault/azsecrets v0.12.0/go.mod h1:XD3DIOOVgBCO03OleB1fHjgktVRFxlT++KwKgIOewdM= +github.com/Azure/azure-sdk-for-go/sdk/keyvault/internal v0.7.1 h1:FbH3BbSb4bvGluTesZZ+ttN/MDsnMmQP36OSnDuSXqw= +github.com/Azure/azure-sdk-for-go/sdk/keyvault/internal v0.7.1/go.mod h1:9V2j0jn9jDEkCkv8w/bKTNppX/d0FVA1ud77xCIP4KA= github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8= github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8= github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E= github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24= github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI= +github.com/Azure/go-autorest/autorest v0.9.6/go.mod h1:/FALq9T/kS7b5J5qsQ+RSTUdAmGFqi0vUdVNNx8q630= +github.com/Azure/go-autorest/autorest v0.11.1/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw= github.com/Azure/go-autorest/autorest v0.11.18/go.mod h1:dSiJPy22c3u0OtOKDNttNgqpNFY/GeWa7GH/Pz56QRA= -github.com/Azure/go-autorest/autorest v0.11.27/go.mod h1:7l8ybrIdUmGqZMTD0sRtAr8NvbHjfofbf8RSP2q7w7U= github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0= +github.com/Azure/go-autorest/autorest/adal v0.8.2/go.mod h1:ZjhuQClTqx435SRJ2iMlOxPYt3d2C/T/7TiQCVZSn3Q= +github.com/Azure/go-autorest/autorest/adal v0.9.0/go.mod h1:/c022QCutn2P7uY+/oQWWNcK9YU+MH96NgK+jErpbcg= +github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A= 
github.com/Azure/go-autorest/autorest/adal v0.9.13/go.mod h1:W/MM4U6nLxnIskrw4UwWzlHfGjwUS50aOsc/I3yuU8M= -github.com/Azure/go-autorest/autorest/adal v0.9.18/go.mod h1:XVVeme+LZwABT8K5Lc3hA4nAe8LDBVle26gTrguhhPQ= -github.com/Azure/go-autorest/autorest/adal v0.9.20/go.mod h1:XVVeme+LZwABT8K5Lc3hA4nAe8LDBVle26gTrguhhPQ= github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA= +github.com/Azure/go-autorest/autorest/date v0.2.0/go.mod h1:vcORJHLJEh643/Ioh9+vPmf1Ij9AEBM5FuBIXLmIy0g= github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74= github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0= +github.com/Azure/go-autorest/autorest/mocks v0.3.0/go.mod h1:a8FDP3DYzQ4RYfVAxAN3SVSiiO77gL2j2ronKKP0syM= +github.com/Azure/go-autorest/autorest/mocks v0.4.0/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k= github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k= -github.com/Azure/go-autorest/autorest/mocks v0.4.2/go.mod h1:Vy7OitM9Kei0i1Oj+LvyAWMXJHeKH1MVlzFugfVrmyU= -github.com/Azure/go-autorest/autorest/to v0.4.0/go.mod h1:fE8iZBn7LQR7zH/9XU2NcPR4o9jEImooCeWJcYV/zLE= -github.com/Azure/go-autorest/autorest/validation v0.3.1/go.mod h1:yhLgjC0Wda5DYXl6JAsWyUe4KVNffhoDhG0zVzUMo3E= github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc= +github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8= github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8= github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk= github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU= 
+github.com/AzureAD/microsoft-authentication-library-for-go v0.5.1/go.mod h1:Vt9sXTKwMyGcOxSmLDMnGPgqsUg7m8pe215qMLrDXw4= +github.com/AzureAD/microsoft-authentication-library-for-go v1.2.1 h1:DzHpqpoJVaCgOUdVHxE8QB52S6NiVdDQvGlny1qvPqA= +github.com/AzureAD/microsoft-authentication-library-for-go v1.2.1/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo= github.com/IBM/keyprotect-go-client v0.5.1/go.mod h1:5TwDM/4FRJq1ZOlwQL1xFahLWQ3TveR88VmL1u3njyI= @@ -91,7 +114,9 @@ github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdko github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= github.com/agnivade/levenshtein v1.0.1/go.mod h1:CURSv5d9Uaml+FovSIICkLbAUZ9S4RqaHDIsdSBg7lM= github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= +github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= +github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/alessio/shellescape v1.2.2/go.mod h1:PZAiSCk0LJaZkiCSkPv8qIobYglO3FPpyFjDCtHLS30= github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo29Kk6CurOXKm700vrz8f0KW0JNfpkRJY/8= github.com/ansel1/merry v1.5.0/go.mod h1:wUy/yW0JX0ix9GYvUbciq+bi3jW/vlKPlbpI7qdZpOw= @@ -104,7 +129,9 @@ github.com/ansel1/merry/v2 v2.2.0 h1:UozCy11F6igadv9XQgOBFr9xguUWJGvdWwQydh0s7pc github.com/ansel1/merry/v2 v2.2.0/go.mod h1:nrJgBqVO1A8RnUXma1T8slt3wznjZcfbD8HzXaCoLwM= github.com/ansel1/vespucci/v4 v4.1.1/go.mod 
h1:zzdrO4IgBfgcGMbGTk/qNGL8JPslmW3nPpcBHKReFYY= github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY= +github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o= github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= +github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY= github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8= github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio= github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs= @@ -121,7 +148,9 @@ github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+Ce github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs= +github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84= github.com/blang/semver v3.5.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk= +github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk= github.com/cenkalti/backoff v2.2.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM= github.com/cenkalti/backoff/v3 v3.0.0/go.mod h1:cIeZDE3IrqwwJl6VUwCN6trj1oXrTS4rc0ij+ULvLYs= github.com/cenkalti/backoff/v3 v3.2.2 h1:cfUAAO3yvKMYKPrvhDuHSwQnhZNk/RMHKdZqKTxfm6M= @@ -153,6 +182,7 @@ github.com/containernetworking/cni v1.1.2 h1:wtRGZVv7olUHMOqouPpn3cXJWpJgM6+EUl3 github.com/containernetworking/cni v1.1.2/go.mod h1:sDpYKmGVENF3s6uvMvGgldDWeG8dMxakj/u+i9ht9vw= 
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk= github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= +github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= github.com/coreos/go-oidc v2.1.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc= github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= @@ -181,13 +211,19 @@ github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no= +github.com/dnaeon/go-vcr v1.1.0/go.mod h1:M7tiix8f0r6mKKJ3Yq/kqU1OYf3MnfmBWVbPx/yU9ko= +github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI= +github.com/dnaeon/go-vcr v1.2.0/go.mod h1:R4UdLID7HZT3taECzJs4YgbbH6PIGXB6W/sc5OLb6RQ= github.com/docker/docker v0.7.3-0.20190327010347-be7ac8be2ae0/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM= +github.com/docker/spdystream v0.0.0-20181023171402-6480d4af844c/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM= github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE= github.com/dustin/go-humanize 
v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= +github.com/elazarl/goproxy v0.0.0-20191011121108-aa519ddbe484/go.mod h1:Ro8st/ElPeALwNFlcTpWmkr6IoMFfkjXAvTHpevnDsM= +github.com/elazarl/goproxy/ext v0.0.0-20190711103511-473e67f1d7d2/go.mod h1:gNh8nYJoAm43RfaxurUnxr+N1PwuFV3ZMl/efxlIlY8= github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs= github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs= github.com/emicklei/go-restful/v3 v3.8.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= @@ -244,6 +280,7 @@ github.com/go-jose/go-jose/v3 v3.0.1/go.mod h1:RNkWWRld676jZEYoV3+XK8L2ZnNSvIsxF github.com/go-jose/go-jose/v3 v3.0.3 h1:fFKWeig/irsp7XD2zBxvnmA/XaRWp5V3CBsZXJF7G7k= github.com/go-jose/go-jose/v3 v3.0.3/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= +github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas= @@ -319,11 +356,14 @@ github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zV github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= -github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg= +github.com/golang-jwt/jwt v3.2.1+incompatible/go.mod 
h1:8pz2t5EyA70fFQQSrl6XZXzqecmYZeUEB8OUGHkxJ+I= github.com/golang-jwt/jwt/v4 v4.2.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg= -github.com/golang-jwt/jwt/v4 v4.3.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg= +github.com/golang-jwt/jwt/v5 v5.0.0/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk= +github.com/golang-jwt/jwt/v5 v5.2.0 h1:d/ix8ftRUorsN+5eMIlF4T6J8CAt9rch3My2winC1Jw= +github.com/golang-jwt/jwt/v5 v5.2.0/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= +github.com/golang/groupcache v0.0.0-20180513044358-24b0969c4cb7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= @@ -385,6 +425,7 @@ github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeN github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/gofuzz v0.0.0-20170612174753-24818f796faf/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0= @@ -416,6 +457,7 @@ github.com/google/uuid v1.0.0/go.mod 
h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+ github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/google/uuid v1.5.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= @@ -425,13 +467,9 @@ github.com/googleapis/gax-go/v2 v2.1.1/go.mod h1:hddJymUZASv3XPyGkUpKj8pPO47Rmb0 github.com/googleapis/gax-go/v2 v2.2.0/go.mod h1:as02EH8zWkzwUoLbBaFeQ+arQaj/OthfcblKl4IGNaM= github.com/googleapis/gax-go/v2 v2.3.0/go.mod h1:b8LNqSzNabLiUpXKkY7HAR5jr6bIT99EXz9pXxye9YM= github.com/googleapis/gax-go/v2 v2.4.0/go.mod h1:XOTVJ59hdnfJLIP/dh8n5CGryZR2LxK9wbMD5+iXC6c= -github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY= -github.com/googleapis/gnostic v0.1.0/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY= -github.com/googleapis/gnostic v0.3.1/go.mod h1:on+2t9HRStVgn95RSsFWFz+6Q0Snyqv1awfrALZdbtU= github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg= -github.com/googleapis/gnostic v0.5.1/go.mod h1:6U4PtQXGIEt/Z3h5MAT7FNofLnw9vXk2cUuW7uA/OeU= -github.com/googleapis/gnostic v0.5.5/go.mod h1:7+EbHbldMins07ALC74bsA81Ovc97DwqyJO1AENw9kA= github.com/gophercloud/gophercloud v0.1.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEoIEcSTewFxm1c5g8= +github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY= github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So= github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod 
h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= @@ -449,6 +487,8 @@ github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw= github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542 h1:2VTzZjLZBgl62/EtslCrtky5vbi9dd7HrQPQIx6wqiw= github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542/go.mod h1:Ow0tF8D4Kplbc8s8sSb3V2oUCygFHVp8gC3Dn6U4MNI= +github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q= +github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8= github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I= github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= @@ -458,6 +498,8 @@ github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/S github.com/hashicorp/go-hclog v0.9.2/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ= github.com/hashicorp/go-hclog v0.16.2 h1:K4ev2ib4LdQETX5cSZBG0DVLk1jwGqSPXBjdah3veNs= github.com/hashicorp/go-hclog v0.16.2/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ= +github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60= +github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM= github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk= github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo= github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM= @@ -467,6 +509,7 @@ github.com/hashicorp/go-retryablehttp v0.7.0/go.mod h1:vAew36LZh98gCBJNLH42IQ1ER 
github.com/hashicorp/go-retryablehttp v0.7.1/go.mod h1:vAew36LZh98gCBJNLH42IQ1ER/9wtLZZ8meHqQvEYWY= github.com/hashicorp/go-retryablehttp v0.7.5 h1:bJj+Pj19UZMIweq/iie+1u5YCdGrnxCT9yvm0e+Nd5M= github.com/hashicorp/go-retryablehttp v0.7.5/go.mod h1:Jy/gPYAdjqffZ/yFGCFV2doI5wjtH1ewM9u8iYVjtX8= +github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU= github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc= github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8= github.com/hashicorp/go-secure-stdlib/parseutil v0.1.6/go.mod h1:QmrqtbKuxxSWTN3ETMPuB+VtEiBJ/A9XhoYGv8E1uD8= @@ -475,15 +518,25 @@ github.com/hashicorp/go-secure-stdlib/parseutil v0.1.8/go.mod h1:aiJI+PIApBRQG7F github.com/hashicorp/go-secure-stdlib/strutil v0.1.1/go.mod h1:gKOamz3EwoIoJq7mlMIRBpVTAUn8qPCrEclOKKWhD3U= github.com/hashicorp/go-secure-stdlib/strutil v0.1.2 h1:kes8mmyCpxJsI7FTwtzRqEy9CdjCtrXrXGuOpxEA7Ts= github.com/hashicorp/go-secure-stdlib/strutil v0.1.2/go.mod h1:Gou2R9+il93BqX25LAKCLuM+y9U2T4hlwvT1yprcna4= +github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU= github.com/hashicorp/go-sockaddr v1.0.2/go.mod h1:rB4wwRAUzs07qva3c5SdrY/NEtAUjGlgmH/UkBUC97A= github.com/hashicorp/go-sockaddr v1.0.6 h1:RSG8rKU28VTUTvEKghe5gIhIQpv8evvNpnDEyqO4u9I= github.com/hashicorp/go-sockaddr v1.0.6/go.mod h1:uoUUmtwU7n9Dv3O4SNLeFvg0SxQ3lyjsj6+CCykpaxI= +github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4= +github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90= +github.com/hashicorp/golang-lru v0.0.0-20180201235237-0fb14efe8c47/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= 
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4= github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= github.com/hashicorp/hcl v1.0.1-vault-5 h1:kI3hhbbyzr4dldA8UdTb7ZlVVlI2DACdCfz31RPDgJM= github.com/hashicorp/hcl v1.0.1-vault-5/go.mod h1:XYhtn6ijBSAj6n4YqAaf7RBPS4I06AItNorpy+MoQNM= +github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64= +github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ= +github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I= +github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc= github.com/hashicorp/vault/api v1.10.0/go.mod h1:jo5Y/ET+hNyz+JnKDt8XLAdKs+AM0G5W0Vp1IrFI8N8= github.com/hashicorp/vault/api v1.12.0 h1:meCpJSesvzQyao8FCOgk2fGdoADAnbDu2WPJN1lDLJ4= github.com/hashicorp/vault/api v1.12.0/go.mod h1:si+lJCYO7oGkIoNPAN8j3azBLTn9SjMGS+jFaHd1Cck= @@ -497,6 +550,8 @@ github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpO github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA= +github.com/imdario/mergo v0.3.6/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA= +github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA= github.com/imdario/mergo v0.3.9/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA= github.com/imdario/mergo v0.3.13/go.mod h1:4lJ1jqUDcsbIECGy0RUJAXNIhg+6ocWgb1ALK2O4oXg= 
github.com/imdario/mergo v0.3.16 h1:wwQJbIsHYGMUyLSPrEq1CT16AhnhNJQ51+4fdHUnCl4= @@ -512,6 +567,7 @@ github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfC github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo= github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= +github.com/json-iterator/go v1.1.5/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= @@ -520,6 +576,7 @@ github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnr github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU= github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk= +github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU= github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= github.com/k0kubun/colorstring v0.0.0-20150214042306-9440f1994b88/go.mod h1:3w7q1U84EfirKl04SVQ/s7nPm1ZPhiXd34z40TNz36k= github.com/k0kubun/pp v2.3.0+incompatible/go.mod h1:GWse8YhT0p8pT4ir3ZgBbfZild3tgzSScAn6HmfYukg= @@ -530,6 +587,7 @@ github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQL github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod 
h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= @@ -544,11 +602,17 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= github.com/kube-object-storage/lib-bucket-provisioner v0.0.0-20221122204822-d1a8c34382f1 h1:dQEHhTfi+bSIOSViQrKY9PqJvZenD6tFz+3lPzux58o= github.com/kube-object-storage/lib-bucket-provisioner v0.0.0-20221122204822-d1a8c34382f1/go.mod h1:my+EVjOJLeQ9lUR9uVkxRvNNkhO2saSGIgzV8GZT9HY= -github.com/libopenstorage/secrets v0.0.0-20231011182615-5f4b25ceede1 h1:bPR1KJK9pbSYuDPQzx5zXOT7Exj+y/K/8lpGU2KfzRc= -github.com/libopenstorage/secrets v0.0.0-20231011182615-5f4b25ceede1/go.mod h1:TB8PxROcwcNeaawFm45+XAj0lnZL2wRI3wTr/tZ3/bM= +github.com/kubernetes-csi/external-snapshotter/client/v4 v4.0.0/go.mod h1:YBCo4DoEeDndqvAn6eeu0vWM7QdXmHEeI9cFWplmBys= +github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc= +github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= +github.com/libopenstorage/autopilot-api v0.6.1-0.20210128210103-5fbb67948648/go.mod h1:6JLrPbR3ZJQFbUY/+QJMl/aF00YdIrLf8/GWAplgvJs= +github.com/libopenstorage/openstorage v8.0.0+incompatible/go.mod h1:Sp1sIObHjat1BeXhfMqLZ14wnOzEhNx2YQedreMcUyc= +github.com/libopenstorage/operator v0.0.0-20200725001727-48d03e197117/go.mod h1:Qh+VXOB6hj60VmlgsmY+R1w+dFuHK246UueM4SAqZG0= +github.com/libopenstorage/stork v1.3.0-beta1.0.20200630005842-9255e7a98775/go.mod h1:qBSzYTJVHlOMg5RINNiHD1kBzlasnrc2uKLPZLgu1Qs= github.com/liggitt/tabwriter 
v0.0.0-20181228230101-89fcab3d43de h1:9TO3cAIGXtEhnIaL+V+BEER86oLrvS+kWobKpbJuye0= github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de/go.mod h1:zAbeS9B/r2mtpb6U+EI2rYA5OAXxsYw6wTamcNW+zcE= github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= +github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= github.com/mailru/easyjson v0.0.0-20190312143242-1de009706dbe/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= @@ -575,20 +639,28 @@ github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWE github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU= github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= +github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4= github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 h1:jWpvCLoY8Z/e3VKvlsiIGKtc+UG6U5vzxaoagmhXfyg= github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0/go.mod h1:QUyp042oQthUoa9bqDv0ER0wrtXnBruoNd7aNjkbP+k= github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d h1:5PJl274Y63IEHC+7izoQE9x6ikvDFZS2mDVS3drnohI= github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE= +github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg= github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc= +github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= 
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI= github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo= +github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg= +github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY= +github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY= github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= github.com/moby/spdystream v0.2.0 h1:cjW1zVyyoiM0T7b6UoySUFqzXMoqRckQtXwGPiBhOM8= github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c= +github.com/moby/term v0.0.0-20200312100748-672ec06f55cd/go.mod h1:DdlQx2hp0Ss5/fLikoLlEeIYiATotOjgB//nb973jeo= github.com/moby/term v0.0.0-20221205130635-1aeaba878587 h1:HfkjXDfhgVaN5rmueG8cL8KKeFNecRCXFhaJ2qZ5SKA= github.com/moby/term v0.0.0-20221205130635-1aeaba878587/go.mod h1:8FzsFHVUBGZdbDsJw/ot+X+d5HLUbvklYLJ9uGfcI3Y= github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= @@ -598,8 +670,12 @@ github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lN github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= github.com/modern-go/reflect2 
v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/modocache/gover v0.0.0-20171022184752-b58185e213c5/go.mod h1:caMODM3PzxT8aQXRPkAt8xlV/e7d7w8GM5g0fa5F0D8= +github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8= github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 h1:n6/2gBQ3RWajuToeY6ZtZTIKv2v7ThUy5KKusIT0yc0= github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00/go.mod h1:Pm3mSP3c5uWn86xMLZ5Sa7JB9GsEZySvHYXCTK4E9q4= +github.com/montanaflynn/stats v0.6.6/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow= +github.com/montanaflynn/stats v0.7.0/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow= github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= @@ -631,6 +707,7 @@ github.com/onsi/ginkgo/v2 v2.6.0/go.mod h1:63DOGlLAH8+REH8jUGdL3YpCpu7JODesutUjd github.com/onsi/ginkgo/v2 v2.14.0 h1:vSmGj2Z5YPb9JwCWT6z6ihcUvDhuXLc3sJiqd3jMKAY= github.com/onsi/ginkgo/v2 v2.14.0/go.mod h1:JkUdW7JkN0V6rFvsHcJ478egV3XH9NxpD27Hal/PhZw= github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA= +github.com/onsi/gomega v1.4.2/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY= github.com/onsi/gomega v1.8.1/go.mod h1:Ho0h+IUsWyvy1OpqCwxlQ/21gkhVunqlU8fDGcoTdcA= @@ -645,14 +722,23 @@ github.com/onsi/gomega v1.24.0/go.mod h1:Z/NWtiqwBrwUt4/2loMmHL63EDLnYHmVbuBpDr2 github.com/onsi/gomega v1.24.1/go.mod h1:3AOiACssS3/MajrniINInwbfOOtfZvplPzuRSmvt1jM= 
github.com/onsi/gomega v1.30.0 h1:hvMK7xYz4D3HapigLTeGdId/NcfQx1VHMJc60ew99+8= github.com/onsi/gomega v1.30.0/go.mod h1:9sxs+SwGrKI0+PWe4Fxa9tFQQBG5xSsSbMXOI8PPpoQ= +github.com/openshift/api v0.0.0-20210105115604-44119421ec6b/go.mod h1:aqU5Cq+kqKKPbDMqxo9FojgDeSpNJI7iuskjXjtojDg= github.com/openshift/api v0.0.0-20240301093301-ce10821dc999 h1:+S998xHiJApsJZjRAO8wyedU9GfqFd8mtwWly6LqHDo= github.com/openshift/api v0.0.0-20240301093301-ce10821dc999/go.mod h1:CxgbWAlvu2iQB0UmKTtRu1YfepRg1/vJ64n2DlIEVz4= +github.com/openshift/build-machinery-go v0.0.0-20200917070002-f171684f77ab/go.mod h1:b1BuldmJlbA/xYtdZvKi+7j5YGB44qJUJDZ9zwiNCfE= +github.com/openshift/client-go v0.0.0-20210112165513-ebc401615f47/go.mod h1:u7NRAjtYVAKokiI9LouzTv4mhds8P4S1TwdVAfbjKSk= +github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc= +github.com/pborman/uuid v0.0.0-20170612153648-e790cca94e6c/go.mod h1:VyrYX9gd7irzKovcSS6BIIEwPRkP2Wm2m9ufcdFSJ34= github.com/pborman/uuid v1.2.0 h1:J7Q5mO4ysT1dv8hyrUGHb9+ooztCXu1D8MY8DZYsu3g= github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k= github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= github.com/pelletier/go-toml v1.7.0/go.mod h1:vwGMzjaWMwyfHwgIBhI2YUM4fB6nL6lVAvS1LBMMhTE= github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI= github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU= +github.com/pkg/browser v0.0.0-20210115035449-ce105d075bb4/go.mod h1:N6UoU20jOqggOuDwUaBQpluzLNDqif3kq9z2wpdYEfQ= +github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8/go.mod h1:HKlIX3XHQyzLZPlr7++PzdhaXEj94dEiJgZDTsxEqUI= +github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ= +github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU= 
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA= github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= @@ -663,17 +749,26 @@ github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRI github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/portworx/dcos-secrets v0.0.0-20180616013705-8e8ec3f66611/go.mod h1:4hklRW/4DQpLqkcXcjtNprbH2tz/sJaNtqinfPWl/LA= github.com/portworx/kvdb v0.0.0-20200929023115-b312c7519467/go.mod h1:Q8YyrNDvPp3DVF96BDcQuaC7fAYUCuUX+l58S7OnD2M= +github.com/portworx/sched-ops v0.20.4-openstorage-rc3 h1:tXnHsjZT2wZ2BCXf8avDoya7zGyCgLNUC8Upt+WEQrY= +github.com/portworx/sched-ops v0.20.4-openstorage-rc3/go.mod h1:DpRDDqXWQrReFJ5SHWWrURuZdzVKjrh2OxbAfwnrAyk= +github.com/portworx/talisman v0.0.0-20191007232806-837747f38224/go.mod h1:OjpMH9Uh5o9ntVGktm4FbjLNwubJ3ITih2OfYrAeWtA= github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI= github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA= +github.com/pquerna/cachecontrol v0.0.0-20180517163645-1555304b9b35/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA= github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g= github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U= +github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.44.1/go.mod h1:3WYi4xqXxGGXWDdQIITnLNmuDzO5n6wYva9spVhR4fg= +github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.46.0/go.mod h1:3WYi4xqXxGGXWDdQIITnLNmuDzO5n6wYva9spVhR4fg= github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.72.0 h1:9h7PxMhT1S8lOdadEKJnBh3ELMdO60XkoDV98grYjuM= 
github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.72.0/go.mod h1:4FiLCL664L4dNGeqZewiiD0NS7hhqi/CxyM4UOq5dfM= +github.com/prometheus-operator/prometheus-operator/pkg/client v0.46.0/go.mod h1:k4BrWlVQQsvBiTcDnKEMgyh/euRxyxgrHdur/ZX/sdA= github.com/prometheus-operator/prometheus-operator/pkg/client v0.72.0 h1:UQT8vi8NK8Nt/wYZXY0Asx5XcGAhiQ1SQG190Ei4Pto= github.com/prometheus-operator/prometheus-operator/pkg/client v0.72.0/go.mod h1:AYjK2t/SjtOmdEAi2CxQ/t/TOQ0j3zzuMhJ5WgM+Ok0= +github.com/prometheus/client_golang v0.9.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso= github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo= +github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M= github.com/prometheus/client_golang v1.18.0 h1:HzFfmkOzH5Q8L8G+kSJKUx5dtG87sewO+FoDDqP5Tbk= github.com/prometheus/client_golang v1.18.0/go.mod h1:T+GXkCk5wSJyOqMIzVgvvjFDlkOQntgjkJWKrN5txjA= github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= @@ -682,40 +777,53 @@ github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1: github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.5.0 h1:VQw1hfvPvk3Uv6Qf29VrPF32JB6rtbgI6cYPYQjL0Qw= github.com/prometheus/client_model v0.5.0/go.mod h1:dTiFglRmd66nLR9Pv9f0mZi7B7fk5Pm3gvsjB5tr+kI= +github.com/prometheus/common v0.0.0-20180801064454-c7de2306084e/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= github.com/prometheus/common v0.4.0/go.mod 
h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= +github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo= github.com/prometheus/common v0.45.0 h1:2BGz0eBc2hdMDLnO/8n0jeB3oPrt2D08CekT0lneoxM= github.com/prometheus/common v0.45.0/go.mod h1:YJmSTw9BoKxJplESWWxlbyttQR4uaEcGyv9MZjVOJsY= +github.com/prometheus/procfs v0.0.0-20180725123919-05ee40e3a273/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.11/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU= +github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU= +github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU= github.com/prometheus/procfs v0.12.0 h1:jluTpSng7V9hY0O2R9DzzJHYb2xULk9VTR1V1R/k6Bo= github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3cnaOZAZEfOo= github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU= github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg= github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ= +github.com/rogpeppe/go-charset v0.0.0-20180617210344-2471d30d28b4/go.mod h1:qgYeAmZ5ZIpBWTGllZSQnw97Dj+woV0toclVaRGI8pc= github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc= github.com/rogpeppe/go-internal v1.8.1/go.mod 
h1:JeRgkft04UBgHMgCIwADu4Pn6Mtm5d4nPKWu0nJ5d+o= github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M= github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA= +github.com/rook/secrets v0.0.0-20240315053144-3195f6906937 h1:1TpdIqF9mtQfhNfwOpXdpJTMhx66PonVCCvYcGWvu/I= +github.com/rook/secrets v0.0.0-20240315053144-3195f6906937/go.mod h1:jOxzr6jXuSz9UztMhEpcBi1/vPygUA4z9kFuFj+6zd8= github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts= github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts= github.com/ryanuber/go-glob v1.0.0 h1:iQh3xXAumdQ+4Ufa5b25cRpC5TYKlno6hsv6Cb3pkBk= github.com/ryanuber/go-glob v1.0.0/go.mod h1:807d1WSdnB0XRJzKNil9Om6lcp/3a0v4qIHxIXzX/Yc= +github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc= github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo= github.com/sergi/go-diff v1.1.0 h1:we8PVUC3FE2uYfodKH/nBHMSetSfHDR6scGdBi+erh0= github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM= github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc= github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo= github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= +github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88= github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= github.com/sirupsen/logrus 
v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= +github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc= +github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA= github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM= github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA= github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= @@ -724,16 +832,19 @@ github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkU github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ= github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE= +github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI= github.com/spf13/cobra v1.8.0 h1:7aJaZx1B85qltLMc546zn58BxxfZdR/W22ej9CFoEf0= github.com/spf13/cobra v1.8.0/go.mod h1:WXLWApfZ71AjXPya3WOlMsY9yMs7YeiHhFVlvLyhcho= github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= +github.com/spf13/pflag v1.0.2/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= 
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE= +github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg= github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= @@ -753,6 +864,7 @@ github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg= github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= +github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw= github.com/sykesm/zap-logfmt v0.0.4 h1:U2WzRvmIWG1wDLCFY3sz8UeEmsdHQjHFNlIdmroVFaI= github.com/sykesm/zap-logfmt v0.0.4/go.mod h1:AuBd9xQjAe3URrWT1BBDk2v2onAZHkZkWRMiYZXiZWA= github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk= @@ -775,7 +887,9 @@ github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1 github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU= go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU= +go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ= go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738/go.mod h1:dnLIgRNXwCJa5e+c6mIZCrds/GIG4ncV9HhK5PX7jPg= +go.etcd.io/etcd v0.5.0-alpha.5.0.20200910180754-dd1b699fc489/go.mod h1:yVHk9ub3CSBatqGNg7GRmsnfLWtoW60w4eDYfh7vHDg= go.mongodb.org/mongo-driver v1.0.3/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM= go.mongodb.org/mongo-driver v1.1.1/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM= 
go.mongodb.org/mongo-driver v1.1.2/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM= @@ -806,12 +920,15 @@ go.uber.org/multierr v1.7.0/go.mod h1:7EAYxJLBy9rStEaz58O2t4Uvip6FSURkq8/ppBp95a go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA= +go.uber.org/zap v1.9.1/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q= go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q= go.uber.org/zap v1.12.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM= go.uber.org/zap v1.18.1/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI= go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8= go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= +golang.org/x/crypto v0.0.0-20180820150726-614d502a4dac/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= +golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= @@ -822,15 +939,18 @@ golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8U golang.org/x/crypto v0.0.0-20190617133340-57b3e21c3d56/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod 
h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20201208171446-5f87f3452ae9/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I= golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= -golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= +golang.org/x/crypto v0.0.0-20220511200225-c6db032c6c88/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw= golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58= golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4= +golang.org/x/crypto v0.16.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4= golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4= golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU= golang.org/x/crypto v0.21.0 h1:X31++rzVUdKhX5sWmSOFZxx8UW/ldWx55cbf08iNAMA= @@ -881,7 +1001,9 @@ golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73r golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod 
h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= @@ -914,10 +1036,12 @@ golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/ golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= +golang.org/x/net v0.0.0-20201010224723-4f7140c49acb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= +golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net 
v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc= @@ -939,8 +1063,10 @@ golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= golang.org/x/net v0.3.1-0.20221206200815-1e63c2f08a10/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE= golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= +golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= +golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U= golang.org/x/net v0.22.0 h1:9sGLhx7iRIHEiX0oAJ3MRZMUCElJgy7Br1nO+AMN3Tc= golang.org/x/net v0.22.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= @@ -986,6 +1112,7 @@ golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5h golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -1008,6 +1135,7 @@ golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7w 
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -1026,15 +1154,18 @@ golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20201112073958-5cba982894dd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -1045,6 +1176,7 @@ golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210616045830-e2b7044e8c71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= @@ -1078,6 +1210,7 @@ golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4= golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= 
+golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= @@ -1085,6 +1218,7 @@ golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= golang.org/x/term v0.3.0/go.mod h1:q750SLmJuPmVoN1blW3UFBPREJfb1KmY3vwxfr+nFDA= golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= +golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0= @@ -1105,6 +1239,7 @@ golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ= golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ= @@ -1114,6 +1249,8 @@ golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxb golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time 
v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20200416051211-89c76fbcd5d1/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= +golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= +golang.org/x/time v0.0.0-20201208040808-7e3f01d25324/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk= @@ -1128,6 +1265,7 @@ golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3 golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= +golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= @@ -1135,6 +1273,7 @@ golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgw golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= golang.org/x/tools v0.0.0-20190617190820-da514acc4774/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +golang.org/x/tools 
v0.0.0-20190624222133-a101b041ded4/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= @@ -1143,6 +1282,7 @@ golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtn golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= @@ -1280,6 +1420,7 @@ google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6D google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20201019141844-1ed22bb0c154/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= +google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto 
v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= @@ -1391,11 +1532,13 @@ gopkg.in/h2non/gock.v1 v1.1.2 h1:jBbHXgGBK/AoPVfJh5x4r/WxIrElvbLel8TCZkkZJoY= gopkg.in/h2non/gock.v1 v1.1.2/go.mod h1:n7UGz/ckNChHiK05rDoiC4MYSunEC/lyaUm2WWaDva0= gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= +gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA= gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k= gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo= gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI= +gopkg.in/square/go-jose.v2 v2.3.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI= gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ= gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74= @@ -1403,6 +1546,7 @@ gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.3.0/go.mod 
h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= @@ -1416,6 +1560,7 @@ gopkg.in/yaml.v3 v3.0.0/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw= +gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk= honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= @@ -1423,44 +1568,73 @@ honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWh honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg= honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= +k8s.io/api v0.0.0-20190409021203-6e4e0e4f393b/go.mod h1:iuAfoD4hCxJ8Onx9kaTIt30j7jUFS00AXQi6QMi99vA= k8s.io/api v0.18.2/go.mod h1:SJCWI7OLzhZSvbY7U8zwNl9UA4o1fizoug34OV/2r78= +k8s.io/api v0.18.3/go.mod h1:UOaMwERbqJMfeeeHc8XJKawj4P9TgDRnViIqqBeH2QA= k8s.io/api v0.18.4/go.mod h1:lOIQAKYgai1+vz9J7YcDZwC26Z0zQewYOGWdyIPUUQ4= +k8s.io/api v0.19.0/go.mod h1:I1K45XlvTrDjmj5LoM5LuP/KYrhWbjUKT/SoPG0qTjw= +k8s.io/api v0.19.2/go.mod h1:IQpK0zFQ1xc5iNIQPqzgoOwuFugaYHK4iCknlAQP9nI= +k8s.io/api v0.20.0/go.mod h1:HyLC5l5eoS/ygQYl1BXBgFzWNlkHiAuyNAbevIn+FKg= +k8s.io/api v0.20.1/go.mod h1:KqwcCVogGxQY3nBlRpwt+wpAMF/KjaCc7RpywacvqUo= +k8s.io/api v0.20.4/go.mod h1:++lNL1AJMkDymriNniQsWRkMDzRaX2Y/POTUi8yvqYQ= k8s.io/api v0.23.5/go.mod 
h1:Na4XuKng8PXJ2JsploYYrivXrINeTaycCGcYgF91Xm8= k8s.io/api v0.26.0/go.mod h1:k6HDTaIFC8yn1i6pSClSqIwLABIcLV9l5Q4EcngKnQg= k8s.io/api v0.29.2 h1:hBC7B9+MU+ptchxEqTNW2DkUosJpp1P+Wn6YncZ474A= k8s.io/api v0.29.2/go.mod h1:sdIaaKuU7P44aoyyLlikSLayT6Vb7bvJNCX105xZXY0= +k8s.io/apiextensions-apiserver v0.0.0-20190409022649-727a075fdec8/go.mod h1:IxkesAMoaCRoLrPJdZNZUQp9NfZnzqaVzLhb2VEQzXE= k8s.io/apiextensions-apiserver v0.18.2/go.mod h1:q3faSnRGmYimiocj6cHQ1I3WpLqmDgJFlKL37fC4ZvY= +k8s.io/apiextensions-apiserver v0.18.3/go.mod h1:TMsNGs7DYpMXd+8MOCX8KzPOCx8fnZMoIGB24m03+JE= k8s.io/apiextensions-apiserver v0.18.4/go.mod h1:NYeyeYq4SIpFlPxSAB6jHPIdvu3hL0pc36wuRChybio= +k8s.io/apiextensions-apiserver v0.20.1/go.mod h1:ntnrZV+6a3dB504qwC5PN/Yg9PBiDNt1EVqbW2kORVk= k8s.io/apiextensions-apiserver v0.29.2 h1:UK3xB5lOWSnhaCk0RFZ0LUacPZz9RY4wi/yt2Iu+btg= k8s.io/apiextensions-apiserver v0.29.2/go.mod h1:aLfYjpA5p3OwtqNXQFkhJ56TB+spV8Gc4wfMhUA3/b8= +k8s.io/apimachinery v0.0.0-20190404173353-6a84e37a896d/go.mod h1:ccL7Eh7zubPUSh9A3USN90/OzHNSVN6zxzde07TDCL0= k8s.io/apimachinery v0.18.2/go.mod h1:9SnR/e11v5IbyPCGbvJViimtJ0SwHG4nfZFjU77ftcA= +k8s.io/apimachinery v0.18.3/go.mod h1:OaXp26zu/5J7p0f92ASynJa1pZo06YlV9fG7BoWbCko= k8s.io/apimachinery v0.18.4/go.mod h1:OaXp26zu/5J7p0f92ASynJa1pZo06YlV9fG7BoWbCko= +k8s.io/apimachinery v0.19.0/go.mod h1:DnPGDnARWFvYa3pMHgSxtbZb7gpzzAZ1pTfaUNDVlmA= k8s.io/apimachinery v0.19.2/go.mod h1:DnPGDnARWFvYa3pMHgSxtbZb7gpzzAZ1pTfaUNDVlmA= +k8s.io/apimachinery v0.20.0/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU= +k8s.io/apimachinery v0.20.1/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU= +k8s.io/apimachinery v0.20.4/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU= k8s.io/apimachinery v0.23.5/go.mod h1:BEuFMMBaIbcOqVIJqNZJXGFTP4W6AycEpb5+m/97hrM= k8s.io/apimachinery v0.26.0/go.mod h1:tnPmbONNJ7ByJNz9+n9kMjNP8ON+1qoAIIC70lztu74= k8s.io/apimachinery v0.29.2 h1:EWGpfJ856oj11C52NRCHuU7rFDwxev48z+6DSlGNsV8= k8s.io/apimachinery 
v0.29.2/go.mod h1:6HVkd1FwxIagpYrHSwJlQqZI3G9LfYWRPAkUvLnXTKU= k8s.io/apiserver v0.18.2/go.mod h1:Xbh066NqrZO8cbsoenCwyDJ1OSi8Ag8I2lezeHxzwzw= +k8s.io/apiserver v0.18.3/go.mod h1:tHQRmthRPLUtwqsOnJJMoI8SW3lnoReZeE861lH8vUw= k8s.io/apiserver v0.18.4/go.mod h1:q+zoFct5ABNnYkGIaGQ3bcbUNdmPyOCoEBcg51LChY8= +k8s.io/apiserver v0.20.1/go.mod h1:ro5QHeQkgMS7ZGpvf4tSMx6bBOgPfE+f52KwvXfScaU= k8s.io/cli-runtime v0.29.2 h1:smfsOcT4QujeghsNjECKN3lwyX9AwcFU0nvJ7sFN3ro= k8s.io/cli-runtime v0.29.2/go.mod h1:KLisYYfoqeNfO+MkTWvpqIyb1wpJmmFJhioA0xd4MW8= k8s.io/client-go v0.18.2/go.mod h1:Xcm5wVGXX9HAA2JJ2sSBUn3tCJ+4SVlCbl2MNNv+CIU= +k8s.io/client-go v0.18.3/go.mod h1:4a/dpQEvzAhT1BbuWW09qvIaGw6Gbu1gZYiQZIi1DMw= k8s.io/client-go v0.18.4/go.mod h1:f5sXwL4yAZRkAtzOxRWUhA/N8XzGCb+nPZI8PfobZ9g= +k8s.io/client-go v0.19.0/go.mod h1:H9E/VT95blcFQnlyShFgnFT9ZnJOAceiUHM3MlRC+mU= +k8s.io/client-go v0.19.2/go.mod h1:S5wPhCqyDNAlzM9CnEdgTGV4OqhsW3jGO1UM1epwfJA= +k8s.io/client-go v0.20.0/go.mod h1:4KWh/g+Ocd8KkCwKF8vUNnmqgv+EVnQDK4MBF4oB5tY= +k8s.io/client-go v0.20.1/go.mod h1:/zcHdt1TeWSd5HoUe6elJmHSQ6uLLgp4bIJHVEuy+/Y= k8s.io/client-go v0.23.5/go.mod h1:flkeinTO1CirYgzMPRWxUCnV0G4Fbu2vLhYCObnt/r4= k8s.io/client-go v0.29.2 h1:FEg85el1TeZp+/vYJM7hkDlSTFZ+c5nnK44DJ4FyoRg= k8s.io/client-go v0.29.2/go.mod h1:knlvFZE58VpqbQpJNbCbctTVXcd35mMyAAwBdpt4jrA= k8s.io/cloud-provider v0.29.2 h1:ghKNXoQmeP8Fj/YTJNR6xQOzNrKXt6YZyy6mOEEa3yg= k8s.io/cloud-provider v0.29.2/go.mod h1:KAp+07AUGmxcLnoLY5FndU4hj6158KMbiviNgctNRUk= k8s.io/code-generator v0.18.2/go.mod h1:+UHX5rSbxmR8kzS+FAv7um6dtYrZokQvjHpDSYRVkTc= +k8s.io/code-generator v0.18.3/go.mod h1:TgNEVx9hCyPGpdtCWA34olQYLkh3ok9ar7XfSsr8b6c= k8s.io/code-generator v0.18.4/go.mod h1:TgNEVx9hCyPGpdtCWA34olQYLkh3ok9ar7XfSsr8b6c= +k8s.io/code-generator v0.19.0/go.mod h1:moqLn7w0t9cMs4+5CQyxnfA/HV8MF6aAVENF+WZZhgk= +k8s.io/code-generator v0.20.0/go.mod h1:UsqdF+VX4PU2g46NC2JRs4gc+IfrctnwHb76RNbWHJg= k8s.io/code-generator v0.20.1/go.mod 
h1:UsqdF+VX4PU2g46NC2JRs4gc+IfrctnwHb76RNbWHJg= k8s.io/component-base v0.18.2/go.mod h1:kqLlMuhJNHQ9lz8Z7V5bxUUtjFZnrypArGl58gmDfUM= +k8s.io/component-base v0.18.3/go.mod h1:bp5GzGR0aGkYEfTj+eTY0AN/vXTgkJdQXjNTTVUaa3k= k8s.io/component-base v0.18.4/go.mod h1:7jr/Ef5PGmKwQhyAz/pjByxJbC58mhKAhiaDu0vXfPk= +k8s.io/component-base v0.20.1/go.mod h1:guxkoJnNoh8LNrbtiQOlyp2Y2XFCZQmrcg2n/DeYNLk= k8s.io/component-base v0.29.2 h1:lpiLyuvPA9yV1aQwGLENYyK7n/8t6l3nn3zAtFTJYe8= k8s.io/component-base v0.29.2/go.mod h1:BfB3SLrefbZXiBfbM+2H1dlat21Uewg/5qtKOl8degM= k8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0= k8s.io/gengo v0.0.0-20200114144118-36b2048a9120/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0= k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0= +k8s.io/gengo v0.0.0-20200428234225-8167cfdcfc14/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0= k8s.io/gengo v0.0.0-20201113003025-83324d819ded/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E= k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E= k8s.io/klog v0.0.0-20181102134211-b9b56d5dfc92/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk= @@ -1474,6 +1648,7 @@ k8s.io/klog/v2 v2.60.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= k8s.io/klog/v2 v2.80.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= k8s.io/klog/v2 v2.120.1 h1:QXU6cPEOIslTGvZaXvFWiP9VKyeet3sawzTOvdXb4Vw= k8s.io/klog/v2 v2.120.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= +k8s.io/kube-openapi v0.0.0-20180731170545-e3762e86a74c/go.mod h1:BXM9ceUBTj2QnfH2MK1odQs778ajze1RxcmP6S8RVVc= k8s.io/kube-openapi v0.0.0-20200121204235-bf4fb3bd569c/go.mod h1:GRQhZsXIAJ1xR0C9bd8UpWHZ5plfAS9fzPjJuQ6JL3E= k8s.io/kube-openapi v0.0.0-20200410145947-61e04a5be9a6/go.mod h1:GRQhZsXIAJ1xR0C9bd8UpWHZ5plfAS9fzPjJuQ6JL3E= k8s.io/kube-openapi v0.0.0-20200805222855-6aeccd4b50c6/go.mod 
h1:UuqjUnNftUyPE5H64/qeyjQoUZhGpeFDVdxjTeEVN2o= @@ -1482,8 +1657,11 @@ k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65/go.mod h1:sX9MT8g7NVZM5lV k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280/go.mod h1:+Axhij7bCpeqhklhUTe3xmOn6bWxolyZEeyaFpjGtl4= k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag= k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98= +k8s.io/utils v0.0.0-20190506122338-8fab8cb257d5/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew= k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew= k8s.io/utils v0.0.0-20200603063816-c1c6865ac451/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= +k8s.io/utils v0.0.0-20200729134348-d5654de09c73/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= +k8s.io/utils v0.0.0-20201110183641-67b214c5f920/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils v0.0.0-20211116205334-6203023598ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils v0.0.0-20221107191617-1a15be271d1d/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= @@ -1494,6 +1672,8 @@ rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8 rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0= rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA= sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.7/go.mod h1:PHgbrJT7lCHcxMU+mDHEm+nx46H4zuuHZkDP6icnhu0= +sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.14/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg= +sigs.k8s.io/controller-runtime v0.2.2/go.mod h1:9dyohw3ZtoXQuV1e766PHUn+cmrRCIcBh6XIMFNMZ+I= sigs.k8s.io/controller-runtime v0.6.1/go.mod h1:XRYBPdbf5XJu9kpS84VJiZ7h/u1hF3gEORz0efEja7A= 
sigs.k8s.io/controller-runtime v0.17.2 h1:FwHwD1CTUemg0pW2otk7/U5/i5m2ymzvOXdbeGOUvw0= sigs.k8s.io/controller-runtime v0.17.2/go.mod h1:+MngTvIQQQhfXtwfdGw/UOQ/aIaqsYywfCINOtwMO/s= @@ -1517,6 +1697,7 @@ sigs.k8s.io/structured-merge-diff/v4 v4.2.1/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZa sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ihdVs8cGKBraizNC69E= sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4= sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08= +sigs.k8s.io/testing_frameworks v0.1.1/go.mod h1:VVBKrHmJ6Ekkfz284YKhQePcdycOzNH9qL6ht1zEr/U= sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o= sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc= sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8= diff --git a/pkg/apis/ceph.rook.io/v1/security.go b/pkg/apis/ceph.rook.io/v1/security.go index 45d0186c90af..d0dcd1073469 100644 --- a/pkg/apis/ceph.rook.io/v1/security.go +++ b/pkg/apis/ceph.rook.io/v1/security.go @@ -48,6 +48,10 @@ func (kms *KeyManagementServiceSpec) IsVaultKMS() bool { return getParam(kms.ConnectionDetails, "KMS_PROVIDER") == secrets.TypeVault } +func (kms *KeyManagementServiceSpec) IsAzureMS() bool { + return getParam(kms.ConnectionDetails, "KMS_PROVIDER") == secrets.TypeAzure +} + // IsIBMKeyProtectKMS return whether IBM Key Protect KMS is configured func (kms *KeyManagementServiceSpec) IsIBMKeyProtectKMS() bool { return getParam(kms.ConnectionDetails, "KMS_PROVIDER") == "ibmkeyprotect" diff --git a/pkg/apis/go.mod b/pkg/apis/go.mod index 9d0d013c23f4..7aa685a94700 100644 --- a/pkg/apis/go.mod +++ b/pkg/apis/go.mod @@ -2,6 +2,16 @@ module github.com/rook/rook/pkg/apis go 1.21 +replace ( + github.com/googleapis/gnostic => github.com/googleapis/gnostic v0.4.1 + github.com/kubernetes-incubator/external-storage => github.com/libopenstorage/external-storage 
v0.20.4-openstorage-rc3 + + // TODO: remove this replace once https://github.com/libopenstorage/secrets/pull/83 is merged + github.com/libopenstorage/secrets => github.com/rook/secrets v0.0.0-20240315053144-3195f6906937 + github.com/portworx/sched-ops => github.com/portworx/sched-ops v0.20.4-openstorage-rc3 + github.com/rook/rook/pkg/apis => ./pkg/apis +) + require ( github.com/hashicorp/vault/api v1.12.0 github.com/k8snetworkplumbingwg/network-attachment-definition-client v1.6.0 @@ -86,8 +96,7 @@ exclude ( github.com/elazarl/goproxy v0.0.0-20170405201442-c4fc26588b6e github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153 github.com/elazarl/goproxy v0.0.0-20181111060418-2ce16c963a8a - // portworx dependencies are a mess, and we don't use portworx code, so skip it - github.com/portworx/sched-ops v1.20.4-rc1 + github.com/kubernetes-incubator/external-storage v0.20.4-openstorage-rc2 // Exclude pre-go-mod kubernetes tags, because they are older // than v0.x releases but are picked when updating dependencies. 
k8s.io/client-go v1.4.0 diff --git a/pkg/apis/go.sum b/pkg/apis/go.sum index 9248d9cd3332..f9a9b7356a10 100644 --- a/pkg/apis/go.sum +++ b/pkg/apis/go.sum @@ -6,6 +6,7 @@ cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxK cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc= cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0= cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To= +cloud.google.com/go v0.51.0/go.mod h1:hWtGJ6gnXH+KgDv+V0zFGDvpi07n3z8ZNj3T1RW0Gcw= cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4= cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M= cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc= @@ -40,6 +41,7 @@ cloud.google.com/go/compute v1.6.0/go.mod h1:T29tfhtVbq1wvAPo0E3+7vhgmkOYeXjhFvz cloud.google.com/go/compute v1.6.1/go.mod h1:g85FgpzFvNULZ+S8AYq87axRKuf2Kh7deLqV/jJ3thU= cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE= cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk= +cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk= cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I= cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw= cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA= @@ -50,33 +52,75 @@ cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohl cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs= cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0= dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= 
-github.com/Azure/azure-sdk-for-go v62.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.0.0/go.mod h1:uGG2W01BaETf0Ozp+QxxKJdMBNRWPdstHG0Fmdwn1/U=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.4.0/go.mod h1:ON4tFdPTwRcgWEaVDrN3584Ef+b7GgSJaXxe5fW9t4M=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.1/go.mod h1:RKUqNu35KJYcVG/fqTRqmuXJZYNhYkBrnC/hX7yGbTA=
+github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.1.0/go.mod h1:bhXu1AjYL+wutSL/kpSq6s7733q2Rb0yuot9Zgfqa/0=
+github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.5.1/go.mod h1:h8hyGFDsU5HMivxiS2iYFZsgDbU9OnnJ163x5UGVKYo=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.0.0/go.mod h1:eWRD7oawr1Mu1sLCawqVc0CUiF43ia3qQMxLscsKQ9w=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.1.2/go.mod h1:eWRD7oawr1Mu1sLCawqVc0CUiF43ia3qQMxLscsKQ9w=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.2.0/go.mod h1:eWRD7oawr1Mu1sLCawqVc0CUiF43ia3qQMxLscsKQ9w=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.5.1/go.mod h1:s4kgfzA0covAXNicZHDMN58jExvcng2mC/DepXiF1EI=
+github.com/Azure/azure-sdk-for-go/sdk/keyvault/azsecrets v0.12.0/go.mod h1:XD3DIOOVgBCO03OleB1fHjgktVRFxlT++KwKgIOewdM=
+github.com/Azure/azure-sdk-for-go/sdk/keyvault/internal v0.7.1/go.mod h1:9V2j0jn9jDEkCkv8w/bKTNppX/d0FVA1ud77xCIP4KA=
+github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
 github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
+github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI=
+github.com/Azure/go-autorest/autorest v0.9.6/go.mod h1:/FALq9T/kS7b5J5qsQ+RSTUdAmGFqi0vUdVNNx8q630=
+github.com/Azure/go-autorest/autorest v0.11.1/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw=
 github.com/Azure/go-autorest/autorest v0.11.18/go.mod h1:dSiJPy22c3u0OtOKDNttNgqpNFY/GeWa7GH/Pz56QRA=
-github.com/Azure/go-autorest/autorest v0.11.27/go.mod h1:7l8ybrIdUmGqZMTD0sRtAr8NvbHjfofbf8RSP2q7w7U=
+github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0=
+github.com/Azure/go-autorest/autorest/adal v0.8.2/go.mod h1:ZjhuQClTqx435SRJ2iMlOxPYt3d2C/T/7TiQCVZSn3Q=
+github.com/Azure/go-autorest/autorest/adal v0.9.0/go.mod h1:/c022QCutn2P7uY+/oQWWNcK9YU+MH96NgK+jErpbcg=
+github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A=
 github.com/Azure/go-autorest/autorest/adal v0.9.13/go.mod h1:W/MM4U6nLxnIskrw4UwWzlHfGjwUS50aOsc/I3yuU8M=
-github.com/Azure/go-autorest/autorest/adal v0.9.18/go.mod h1:XVVeme+LZwABT8K5Lc3hA4nAe8LDBVle26gTrguhhPQ=
-github.com/Azure/go-autorest/autorest/adal v0.9.20/go.mod h1:XVVeme+LZwABT8K5Lc3hA4nAe8LDBVle26gTrguhhPQ=
+github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA=
+github.com/Azure/go-autorest/autorest/date v0.2.0/go.mod h1:vcORJHLJEh643/Ioh9+vPmf1Ij9AEBM5FuBIXLmIy0g=
 github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
+github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
+github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
+github.com/Azure/go-autorest/autorest/mocks v0.3.0/go.mod h1:a8FDP3DYzQ4RYfVAxAN3SVSiiO77gL2j2ronKKP0syM=
+github.com/Azure/go-autorest/autorest/mocks v0.4.0/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
 github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
-github.com/Azure/go-autorest/autorest/mocks v0.4.2/go.mod h1:Vy7OitM9Kei0i1Oj+LvyAWMXJHeKH1MVlzFugfVrmyU=
-github.com/Azure/go-autorest/autorest/to v0.4.0/go.mod h1:fE8iZBn7LQR7zH/9XU2NcPR4o9jEImooCeWJcYV/zLE=
-github.com/Azure/go-autorest/autorest/validation v0.3.1/go.mod h1:yhLgjC0Wda5DYXl6JAsWyUe4KVNffhoDhG0zVzUMo3E=
+github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc=
+github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
 github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
+github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk=
 github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
+github.com/AzureAD/microsoft-authentication-library-for-go v0.5.1/go.mod h1:Vt9sXTKwMyGcOxSmLDMnGPgqsUg7m8pe215qMLrDXw4=
+github.com/AzureAD/microsoft-authentication-library-for-go v1.2.1/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
 github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
 github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
 github.com/IBM/keyprotect-go-client v0.5.1/go.mod h1:5TwDM/4FRJq1ZOlwQL1xFahLWQ3TveR88VmL1u3njyI=
 github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
 github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
+github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
+github.com/PuerkitoBio/purell v1.1.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
 github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
+github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
 github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
+github.com/agnivade/levenshtein v1.0.1/go.mod h1:CURSv5d9Uaml+FovSIICkLbAUZ9S4RqaHDIsdSBg7lM=
+github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
+github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
+github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
+github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
+github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo29Kk6CurOXKm700vrz8f0KW0JNfpkRJY/8=
 github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
+github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
+github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
+github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
 github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
 github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
+github.com/asaskevich/govalidator v0.0.0-20180720115003-f9ffefc3facf/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
 github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
 github.com/aws/aws-sdk-go v1.44.164/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
+github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
+github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
+github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
 github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
+github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
+github.com/blang/semver v3.5.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
+github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
 github.com/cenkalti/backoff v2.2.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
 github.com/cenkalti/backoff/v3 v3.0.0/go.mod h1:cIeZDE3IrqwwJl6VUwCN6trj1oXrTS4rc0ij+ULvLYs=
 github.com/cenkalti/backoff/v3 v3.2.2 h1:cfUAAO3yvKMYKPrvhDuHSwQnhZNk/RMHKdZqKTxfm6M=
@@ -97,14 +141,43 @@ github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWH
 github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
 github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
 github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
 github.com/containernetworking/cni v1.1.2 h1:wtRGZVv7olUHMOqouPpn3cXJWpJgM6+EUl31EQbXALQ=
 github.com/containernetworking/cni v1.1.2/go.mod h1:sDpYKmGVENF3s6uvMvGgldDWeG8dMxakj/u+i9ht9vw=
+github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
+github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
+github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
+github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
+github.com/coreos/go-oidc v2.1.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
+github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
+github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
+github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
+github.com/coreos/pkg v0.0.0-20180108230652-97fdf19511ea/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
+github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
+github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
+github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
+github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
 github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
+github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
+github.com/dnaeon/go-vcr v1.1.0/go.mod h1:M7tiix8f0r6mKKJ3Yq/kqU1OYf3MnfmBWVbPx/yU9ko=
+github.com/dnaeon/go-vcr v1.2.0/go.mod h1:R4UdLID7HZT3taECzJs4YgbbH6PIGXB6W/sc5OLb6RQ=
+github.com/docker/docker v0.7.3-0.20190327010347-be7ac8be2ae0/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
+github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
+github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
+github.com/docker/spdystream v0.0.0-20181023171402-6480d4af844c/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
 github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
+github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
+github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
+github.com/elazarl/goproxy v0.0.0-20191011121108-aa519ddbe484/go.mod h1:Ro8st/ElPeALwNFlcTpWmkr6IoMFfkjXAvTHpevnDsM=
+github.com/elazarl/goproxy/ext v0.0.0-20190711103511-473e67f1d7d2/go.mod h1:gNh8nYJoAm43RfaxurUnxr+N1PwuFV3ZMl/efxlIlY8=
 github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
 github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
 github.com/emicklei/go-restful/v3 v3.8.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
@@ -121,6 +194,9 @@ github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.m
 github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0=
 github.com/envoyproxy/go-control-plane v0.10.2-0.20220325020618-49ff273808a1/go.mod h1:KJwIaB5Mv44NWtYuAOFCVOjcI94vtpEz2JU/D2v6IjE=
 github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
+github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
+github.com/evanphx/json-patch v4.5.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
+github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
 github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
 github.com/evanphx/json-patch v5.9.0+incompatible h1:fBXyNpNMuTTDdquAq/uisOr2lShz4oaXpDTX2bLe7ls=
 github.com/evanphx/json-patch v5.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
@@ -135,6 +211,8 @@ github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyT
 github.com/getkin/kin-openapi v0.76.0/go.mod h1:660oXbgy5JFMKreazJaQTw7o+X00qeSyhcnluiMv+Xg=
 github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
 github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
+github.com/globalsign/mgo v0.0.0-20180905125535-1ca0a4f7cbcb/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
+github.com/globalsign/mgo v0.0.0-20181015135952-eeefdecb41b8/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
 github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
 github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
 github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
@@ -142,40 +220,88 @@ github.com/go-jose/go-jose/v3 v3.0.0/go.mod h1:RNkWWRld676jZEYoV3+XK8L2ZnNSvIsxF
 github.com/go-jose/go-jose/v3 v3.0.1/go.mod h1:RNkWWRld676jZEYoV3+XK8L2ZnNSvIsxFMht0mSX+u8=
 github.com/go-jose/go-jose/v3 v3.0.3 h1:fFKWeig/irsp7XD2zBxvnmA/XaRWp5V3CBsZXJF7G7k=
 github.com/go-jose/go-jose/v3 v3.0.3/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ=
+github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
+github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
+github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
+github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
 github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
 github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
 github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
 github.com/go-logr/logr v1.2.3/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
 github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ=
 github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
+github.com/go-logr/zapr v0.1.0/go.mod h1:tabnROwaDl0UNxkVeFRbY8bwB37GwRv0P8lg6aAiEnk=
+github.com/go-openapi/analysis v0.0.0-20180825180245-b006789cd277/go.mod h1:k70tL6pCuVxPJOHXQ+wIac1FUrvNkHolPie/cLEU6hI=
+github.com/go-openapi/analysis v0.17.0/go.mod h1:IowGgpVeD0vNm45So8nr+IcQ3pxVtpRoBWb8PVZO0ik=
+github.com/go-openapi/analysis v0.18.0/go.mod h1:IowGgpVeD0vNm45So8nr+IcQ3pxVtpRoBWb8PVZO0ik=
+github.com/go-openapi/analysis v0.19.2/go.mod h1:3P1osvZa9jKjb8ed2TPng3f0i/UY9snX6gxi44djMjk=
+github.com/go-openapi/analysis v0.19.5/go.mod h1:hkEAkxagaIvIP7VTn8ygJNkd4kAYON2rCu0v0ObL0AU=
+github.com/go-openapi/errors v0.17.0/go.mod h1:LcZQpmvG4wyF5j4IhA73wkLFQg+QJXOQHVjmcZxhka0=
+github.com/go-openapi/errors v0.18.0/go.mod h1:LcZQpmvG4wyF5j4IhA73wkLFQg+QJXOQHVjmcZxhka0=
+github.com/go-openapi/errors v0.19.2/go.mod h1:qX0BLWsyaKfvhluLejVpVNwNRdXZhEbTA4kxxpKBC94=
+github.com/go-openapi/jsonpointer v0.0.0-20160704185906-46af16f9f7b1/go.mod h1:+35s3my2LFTysnkMfxsJBAMHj/DoqoB9knIWoYG/Vk0=
+github.com/go-openapi/jsonpointer v0.17.0/go.mod h1:cOnomiV+CVVwFLk0A/MExoFMjwdsUdVpsRhURCKh+3M=
+github.com/go-openapi/jsonpointer v0.18.0/go.mod h1:cOnomiV+CVVwFLk0A/MExoFMjwdsUdVpsRhURCKh+3M=
 github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg=
 github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
 github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
 github.com/go-openapi/jsonpointer v0.20.3 h1:jykzYWS/kyGtsHfRt6aV8JTB9pcQAXPIA7qlZ5aRlyk=
 github.com/go-openapi/jsonpointer v0.20.3/go.mod h1:c7l0rjoouAuIxCm8v/JWKRgMjDG/+/7UBWsXMrv6PsM=
+github.com/go-openapi/jsonreference v0.0.0-20160704190145-13c6e3589ad9/go.mod h1:W3Z9FmVs9qj+KR4zFKmDPGiLdk1D9Rlm7cyMvf57TTg=
+github.com/go-openapi/jsonreference v0.17.0/go.mod h1:g4xxGn04lDIRh0GJb5QlpE3HfopLOL6uZrK/VgnsK9I=
+github.com/go-openapi/jsonreference v0.18.0/go.mod h1:g4xxGn04lDIRh0GJb5QlpE3HfopLOL6uZrK/VgnsK9I=
 github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc=
 github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
 github.com/go-openapi/jsonreference v0.20.0/go.mod h1:Ag74Ico3lPc+zR+qjn4XBUmXymS4zJbYVCZmcgkasdo=
 github.com/go-openapi/jsonreference v0.20.5 h1:hutI+cQI+HbSQaIGSfsBsYI0pHk+CATf8Fk5gCSj0yI=
 github.com/go-openapi/jsonreference v0.20.5/go.mod h1:thAqAp31UABtI+FQGKAQfmv7DbFpKNUlva2UPCxKu2Y=
+github.com/go-openapi/loads v0.17.0/go.mod h1:72tmFy5wsWx89uEVddd0RjRWPZm92WRLhf7AC+0+OOU=
+github.com/go-openapi/loads v0.18.0/go.mod h1:72tmFy5wsWx89uEVddd0RjRWPZm92WRLhf7AC+0+OOU=
+github.com/go-openapi/loads v0.19.0/go.mod h1:72tmFy5wsWx89uEVddd0RjRWPZm92WRLhf7AC+0+OOU=
+github.com/go-openapi/loads v0.19.2/go.mod h1:QAskZPMX5V0C2gvfkGZzJlINuP7Hx/4+ix5jWFxsNPs=
+github.com/go-openapi/loads v0.19.4/go.mod h1:zZVHonKd8DXyxyw4yfnVjPzBjIQcLt0CCsn0N0ZrQsk=
+github.com/go-openapi/runtime v0.0.0-20180920151709-4f900dc2ade9/go.mod h1:6v9a6LTXWQCdL8k1AO3cvqx5OtZY/Y9wKTgaoP6YRfA=
+github.com/go-openapi/runtime v0.19.0/go.mod h1:OwNfisksmmaZse4+gpV3Ne9AyMOlP1lt4sK4FXt0O64=
+github.com/go-openapi/runtime v0.19.4/go.mod h1:X277bwSUBxVlCYR3r7xgZZGKVvBd/29gLDlFGtJ8NL4=
+github.com/go-openapi/spec v0.0.0-20160808142527-6aced65f8501/go.mod h1:J8+jY1nAiCcj+friV/PDoE1/3eeccG9LYBs0tYvLOWc=
+github.com/go-openapi/spec v0.17.0/go.mod h1:XkF/MOi14NmjsfZ8VtAKf8pIlbZzyoTvZsdfssdxcBI=
+github.com/go-openapi/spec v0.18.0/go.mod h1:XkF/MOi14NmjsfZ8VtAKf8pIlbZzyoTvZsdfssdxcBI=
+github.com/go-openapi/spec v0.19.2/go.mod h1:sCxk3jxKgioEJikev4fgkNmwS+3kuYdJtcsZsD5zxMY=
 github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo=
+github.com/go-openapi/strfmt v0.17.0/go.mod h1:P82hnJI0CXkErkXi8IKjPbNBM6lV6+5pLP5l494TcyU=
+github.com/go-openapi/strfmt v0.18.0/go.mod h1:P82hnJI0CXkErkXi8IKjPbNBM6lV6+5pLP5l494TcyU=
+github.com/go-openapi/strfmt v0.19.0/go.mod h1:+uW+93UVvGGq2qGaZxdDeJqSAqBqBdl+ZPMF/cC8nDY=
+github.com/go-openapi/strfmt v0.19.3/go.mod h1:0yX7dbo8mKIvc3XSKp7MNfxw4JytCfCD6+bY1AVL9LU=
+github.com/go-openapi/swag v0.0.0-20160704191624-1d0bd113de87/go.mod h1:DXUve3Dpr1UfpPtxFw+EFuQ41HhCWZfha5jSVRG7C7I=
+github.com/go-openapi/swag v0.17.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg=
+github.com/go-openapi/swag v0.18.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg=
 github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
 github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
 github.com/go-openapi/swag v0.22.10 h1:4y86NVn7Z2yYd6pfS4Z+Nyh3aAUL3Nul+LMbhFKy0gA=
 github.com/go-openapi/swag v0.22.10/go.mod h1:Cnn8BYtRlx6BNE3DPN86f/xkapGIcLWzh3CLEb4C1jI=
+github.com/go-openapi/validate v0.18.0/go.mod h1:Uh4HdOzKt19xGIGm1qHf/ofbX1YQ4Y+MYsct2VUrAJ4=
+github.com/go-openapi/validate v0.19.2/go.mod h1:1tRCw7m3jtI8eNWEEliiAqUIcBztB2KDnRCRMUi7GTA=
+github.com/go-openapi/validate v0.19.5/go.mod h1:8DJv2CVJQ6kGNpFW6eV9N3JviE1C85nY1c2z52x1Gk4=
+github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
 github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
 github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
 github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls=
 github.com/go-test/deep v1.0.2/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=
 github.com/go-test/deep v1.0.8 h1:TDsG77qcSprGbC6vTN8OuXp5g+J+b5Pcguhf7Zt61VM=
 github.com/go-test/deep v1.0.8/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE=
+github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
+github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
 github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
 github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
 github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
-github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
+github.com/golang-jwt/jwt v3.2.1+incompatible/go.mod h1:8pz2t5EyA70fFQQSrl6XZXzqecmYZeUEB8OUGHkxJ+I=
 github.com/golang-jwt/jwt/v4 v4.2.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
-github.com/golang-jwt/jwt/v4 v4.3.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
+github.com/golang-jwt/jwt/v5 v5.0.0/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
+github.com/golang-jwt/jwt/v5 v5.2.0/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
 github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
+github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20180513044358-24b0969c4cb7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
 github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
 github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
 github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@@ -189,6 +315,7 @@ github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt
 github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
 github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8=
 github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
+github.com/golang/protobuf v0.0.0-20161109072736-4bd1920723d7/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
 github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
 github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
 github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@@ -232,6 +359,7 @@ github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeN
 github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
 github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
 github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
+github.com/google/gofuzz v0.0.0-20170612174753-24818f796faf/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=
 github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
 github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
 github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
@@ -261,6 +389,7 @@ github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+
 github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
 github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
 github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.5.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
 github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
 github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
 github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
@@ -271,13 +400,22 @@ github.com/googleapis/gax-go/v2 v2.2.0/go.mod h1:as02EH8zWkzwUoLbBaFeQ+arQaj/Oth
 github.com/googleapis/gax-go/v2 v2.3.0/go.mod h1:b8LNqSzNabLiUpXKkY7HAR5jr6bIT99EXz9pXxye9YM=
 github.com/googleapis/gax-go/v2 v2.4.0/go.mod h1:XOTVJ59hdnfJLIP/dh8n5CGryZR2LxK9wbMD5+iXC6c=
 github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg=
-github.com/googleapis/gnostic v0.5.1/go.mod h1:6U4PtQXGIEt/Z3h5MAT7FNofLnw9vXk2cUuW7uA/OeU=
-github.com/googleapis/gnostic v0.5.5/go.mod h1:7+EbHbldMins07ALC74bsA81Ovc97DwqyJO1AENw9kA=
+github.com/gophercloud/gophercloud v0.1.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEoIEcSTewFxm1c5g8=
+github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
 github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
+github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
+github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
 github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
 github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
+github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
+github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
+github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
+github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
+github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
 github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
 github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542/go.mod h1:Ow0tF8D4Kplbc8s8sSb3V2oUCygFHVp8gC3Dn6U4MNI=
+github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
+github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
 github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
 github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
 github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
@@ -287,6 +425,8 @@ github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/S
 github.com/hashicorp/go-hclog v0.9.2/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ=
 github.com/hashicorp/go-hclog v0.16.2 h1:K4ev2ib4LdQETX5cSZBG0DVLk1jwGqSPXBjdah3veNs=
 github.com/hashicorp/go-hclog v0.16.2/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
+github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
+github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
 github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
 github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
 github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
@@ -295,6 +435,7 @@ github.com/hashicorp/go-retryablehttp v0.6.6/go.mod h1:vAew36LZh98gCBJNLH42IQ1ER
 github.com/hashicorp/go-retryablehttp v0.7.1/go.mod h1:vAew36LZh98gCBJNLH42IQ1ER/9wtLZZ8meHqQvEYWY=
 github.com/hashicorp/go-retryablehttp v0.7.5 h1:bJj+Pj19UZMIweq/iie+1u5YCdGrnxCT9yvm0e+Nd5M=
 github.com/hashicorp/go-retryablehttp v0.7.5/go.mod h1:Jy/gPYAdjqffZ/yFGCFV2doI5wjtH1ewM9u8iYVjtX8=
+github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
 github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc=
 github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
 github.com/hashicorp/go-secure-stdlib/parseutil v0.1.6/go.mod h1:QmrqtbKuxxSWTN3ETMPuB+VtEiBJ/A9XhoYGv8E1uD8=
@@ -303,14 +444,24 @@ github.com/hashicorp/go-secure-stdlib/parseutil v0.1.8/go.mod h1:aiJI+PIApBRQG7F
 github.com/hashicorp/go-secure-stdlib/strutil v0.1.1/go.mod h1:gKOamz3EwoIoJq7mlMIRBpVTAUn8qPCrEclOKKWhD3U=
 github.com/hashicorp/go-secure-stdlib/strutil v0.1.2 h1:kes8mmyCpxJsI7FTwtzRqEy9CdjCtrXrXGuOpxEA7Ts=
 github.com/hashicorp/go-secure-stdlib/strutil v0.1.2/go.mod h1:Gou2R9+il93BqX25LAKCLuM+y9U2T4hlwvT1yprcna4=
+github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
 github.com/hashicorp/go-sockaddr v1.0.2/go.mod h1:rB4wwRAUzs07qva3c5SdrY/NEtAUjGlgmH/UkBUC97A=
 github.com/hashicorp/go-sockaddr v1.0.6 h1:RSG8rKU28VTUTvEKghe5gIhIQpv8evvNpnDEyqO4u9I=
 github.com/hashicorp/go-sockaddr v1.0.6/go.mod h1:uoUUmtwU7n9Dv3O4SNLeFvg0SxQ3lyjsj6+CCykpaxI=
+github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
+github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
+github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
+github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
+github.com/hashicorp/golang-lru v0.0.0-20180201235237-0fb14efe8c47/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
 github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
 github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
 github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
 github.com/hashicorp/hcl v1.0.1-vault-5 h1:kI3hhbbyzr4dldA8UdTb7ZlVVlI2DACdCfz31RPDgJM=
 github.com/hashicorp/hcl v1.0.1-vault-5/go.mod h1:XYhtn6ijBSAj6n4YqAaf7RBPS4I06AItNorpy+MoQNM=
+github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
+github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
+github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
+github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
 github.com/hashicorp/vault/api v1.10.0/go.mod h1:jo5Y/ET+hNyz+JnKDt8XLAdKs+AM0G5W0Vp1IrFI8N8=
 github.com/hashicorp/vault/api v1.12.0 h1:meCpJSesvzQyao8FCOgk2fGdoADAnbDu2WPJN1lDLJ4=
 github.com/hashicorp/vault/api v1.12.0/go.mod h1:si+lJCYO7oGkIoNPAN8j3azBLTn9SjMGS+jFaHd1Cck=
@@ -324,22 +475,35 @@ github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpO
 github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
 github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
 github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
+github.com/imdario/mergo v0.3.6/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
+github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
 github.com/imdario/mergo v0.3.13/go.mod h1:4lJ1jqUDcsbIECGy0RUJAXNIhg+6ocWgb1ALK2O4oXg=
+github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
 github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
 github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
+github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
 github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
 github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
+github.com/json-iterator/go v1.1.5/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
 github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
+github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
+github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
 github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
 github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
 github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
 github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
 github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
+github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
+github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
 github.com/k8snetworkplumbingwg/network-attachment-definition-client v1.6.0 h1:BT3ghAY0q7lWib9rz+tVXDFkm27dJV6SLCn7TunZwo4=
 github.com/k8snetworkplumbingwg/network-attachment-definition-client v1.6.0/go.mod h1:wxt2YWRVItDtaQmVSmaN5ubE2L1c9CiNoHQwSJnM8Ko=
+github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
 github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
 github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
 github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
+github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
 github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
 github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
 github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
@@ -353,8 +517,17 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
 github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
 github.com/kube-object-storage/lib-bucket-provisioner v0.0.0-20221122204822-d1a8c34382f1 h1:dQEHhTfi+bSIOSViQrKY9PqJvZenD6tFz+3lPzux58o=
 github.com/kube-object-storage/lib-bucket-provisioner v0.0.0-20221122204822-d1a8c34382f1/go.mod h1:my+EVjOJLeQ9lUR9uVkxRvNNkhO2saSGIgzV8GZT9HY=
-github.com/libopenstorage/secrets v0.0.0-20231011182615-5f4b25ceede1 h1:bPR1KJK9pbSYuDPQzx5zXOT7Exj+y/K/8lpGU2KfzRc=
-github.com/libopenstorage/secrets v0.0.0-20231011182615-5f4b25ceede1/go.mod h1:TB8PxROcwcNeaawFm45+XAj0lnZL2wRI3wTr/tZ3/bM=
+github.com/kubernetes-csi/external-snapshotter/client/v4 v4.0.0/go.mod h1:YBCo4DoEeDndqvAn6eeu0vWM7QdXmHEeI9cFWplmBys=
+github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
+github.com/libopenstorage/autopilot-api v0.6.1-0.20210128210103-5fbb67948648/go.mod h1:6JLrPbR3ZJQFbUY/+QJMl/aF00YdIrLf8/GWAplgvJs=
+github.com/libopenstorage/openstorage v8.0.0+incompatible/go.mod h1:Sp1sIObHjat1BeXhfMqLZ14wnOzEhNx2YQedreMcUyc=
+github.com/libopenstorage/operator v0.0.0-20200725001727-48d03e197117/go.mod h1:Qh+VXOB6hj60VmlgsmY+R1w+dFuHK246UueM4SAqZG0=
+github.com/libopenstorage/stork v1.3.0-beta1.0.20200630005842-9255e7a98775/go.mod h1:qBSzYTJVHlOMg5RINNiHD1kBzlasnrc2uKLPZLgu1Qs=
+github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
+github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
+github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
+github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
+github.com/mailru/easyjson v0.0.0-20190312143242-1de009706dbe/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
 github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
 github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
 github.com/mailru/easyjson v0.7.0/go.mod h1:KAzv3t3aY1NaHWoQz1+4F1ccyAH66Jk7yos7ldAVICs=
@@ -366,21 +539,32 @@ github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope
 github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
 github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
 github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
+github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
 github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
 github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
 github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
 github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
 github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
 github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
+github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
+github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
+github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
+github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
 github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
+github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
 github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
 github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
+github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
 github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
+github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
+github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
+github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
 github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
 github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
 github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
 github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
 github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
+github.com/moby/term v0.0.0-20200312100748-672ec06f55cd/go.mod h1:DdlQx2hp0Ss5/fLikoLlEeIYiATotOjgB//nb973jeo=
 github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
 github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
 github.com/modern-go/concurrent
v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= @@ -388,17 +572,25 @@ github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lN github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/modocache/gover v0.0.0-20171022184752-b58185e213c5/go.mod h1:caMODM3PzxT8aQXRPkAt8xlV/e7d7w8GM5g0fa5F0D8= +github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8= +github.com/montanaflynn/stats v0.6.6/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow= +github.com/montanaflynn/stats v0.7.0/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow= github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= +github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw= github.com/nbio/st v0.0.0-20140626010706-e9e8d9816f32/go.mod h1:9wM+0iRr9ahx58uYLpLIr5fm8diHn0JbqRycJi6w0Ms= github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno= github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A= github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE= github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU= +github.com/oklog/ulid v1.3.1/go.mod 
h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U= +github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo= github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= +github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk= github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY= github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0= @@ -414,6 +606,8 @@ github.com/onsi/ginkgo/v2 v2.6.0/go.mod h1:63DOGlLAH8+REH8jUGdL3YpCpu7JODesutUjd github.com/onsi/ginkgo/v2 v2.14.0 h1:vSmGj2Z5YPb9JwCWT6z6ihcUvDhuXLc3sJiqd3jMKAY= github.com/onsi/ginkgo/v2 v2.14.0/go.mod h1:JkUdW7JkN0V6rFvsHcJ478egV3XH9NxpD27Hal/PhZw= github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA= +github.com/onsi/gomega v1.4.2/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= +github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY= github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo= github.com/onsi/gomega v1.17.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY= @@ -426,12 +620,23 @@ github.com/onsi/gomega v1.24.0/go.mod h1:Z/NWtiqwBrwUt4/2loMmHL63EDLnYHmVbuBpDr2 github.com/onsi/gomega v1.24.1/go.mod h1:3AOiACssS3/MajrniINInwbfOOtfZvplPzuRSmvt1jM= github.com/onsi/gomega v1.30.0 h1:hvMK7xYz4D3HapigLTeGdId/NcfQx1VHMJc60ew99+8= github.com/onsi/gomega v1.30.0/go.mod h1:9sxs+SwGrKI0+PWe4Fxa9tFQQBG5xSsSbMXOI8PPpoQ= +github.com/openshift/api v0.0.0-20210105115604-44119421ec6b/go.mod h1:aqU5Cq+kqKKPbDMqxo9FojgDeSpNJI7iuskjXjtojDg= 
 github.com/openshift/api v0.0.0-20240301093301-ce10821dc999 h1:+S998xHiJApsJZjRAO8wyedU9GfqFd8mtwWly6LqHDo=
 github.com/openshift/api v0.0.0-20240301093301-ce10821dc999/go.mod h1:CxgbWAlvu2iQB0UmKTtRu1YfepRg1/vJ64n2DlIEVz4=
+github.com/openshift/build-machinery-go v0.0.0-20200917070002-f171684f77ab/go.mod h1:b1BuldmJlbA/xYtdZvKi+7j5YGB44qJUJDZ9zwiNCfE=
+github.com/openshift/client-go v0.0.0-20210112165513-ebc401615f47/go.mod h1:u7NRAjtYVAKokiI9LouzTv4mhds8P4S1TwdVAfbjKSk=
+github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
+github.com/pborman/uuid v0.0.0-20170612153648-e790cca94e6c/go.mod h1:VyrYX9gd7irzKovcSS6BIIEwPRkP2Wm2m9ufcdFSJ34=
 github.com/pborman/uuid v1.2.0 h1:J7Q5mO4ysT1dv8hyrUGHb9+ooztCXu1D8MY8DZYsu3g=
 github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k=
+github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
 github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
+github.com/pkg/browser v0.0.0-20210115035449-ce105d075bb4/go.mod h1:N6UoU20jOqggOuDwUaBQpluzLNDqif3kq9z2wpdYEfQ=
+github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8/go.mod h1:HKlIX3XHQyzLZPlr7++PzdhaXEj94dEiJgZDTsxEqUI=
+github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
 github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
+github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
 github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
@@ -439,27 +644,82 @@ github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRI
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/portworx/dcos-secrets v0.0.0-20180616013705-8e8ec3f66611/go.mod h1:4hklRW/4DQpLqkcXcjtNprbH2tz/sJaNtqinfPWl/LA=
 github.com/portworx/kvdb v0.0.0-20200929023115-b312c7519467/go.mod h1:Q8YyrNDvPp3DVF96BDcQuaC7fAYUCuUX+l58S7OnD2M=
+github.com/portworx/sched-ops v0.20.4-openstorage-rc3/go.mod h1:DpRDDqXWQrReFJ5SHWWrURuZdzVKjrh2OxbAfwnrAyk=
+github.com/portworx/talisman v0.0.0-20191007232806-837747f38224/go.mod h1:OjpMH9Uh5o9ntVGktm4FbjLNwubJ3ITih2OfYrAeWtA=
 github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
+github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
+github.com/pquerna/cachecontrol v0.0.0-20180517163645-1555304b9b35/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
+github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.44.1/go.mod h1:3WYi4xqXxGGXWDdQIITnLNmuDzO5n6wYva9spVhR4fg=
+github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.46.0/go.mod h1:3WYi4xqXxGGXWDdQIITnLNmuDzO5n6wYva9spVhR4fg=
+github.com/prometheus-operator/prometheus-operator/pkg/client v0.46.0/go.mod h1:k4BrWlVQQsvBiTcDnKEMgyh/euRxyxgrHdur/ZX/sdA=
+github.com/prometheus/client_golang v0.9.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
+github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
+github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
+github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
+github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
+github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
+github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
 github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/prometheus/common v0.0.0-20180801064454-c7de2306084e/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
+github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
+github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
+github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
+github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
+github.com/prometheus/procfs v0.0.0-20180725123919-05ee40e3a273/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
+github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
+github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
+github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
+github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
+github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
+github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
+github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
 github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
+github.com/rogpeppe/go-charset v0.0.0-20180617210344-2471d30d28b4/go.mod h1:qgYeAmZ5ZIpBWTGllZSQnw97Dj+woV0toclVaRGI8pc=
 github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
 github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
 github.com/rogpeppe/go-internal v1.8.1/go.mod h1:JeRgkft04UBgHMgCIwADu4Pn6Mtm5d4nPKWu0nJ5d+o=
 github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M=
 github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA=
+github.com/rook/secrets v0.0.0-20240315053144-3195f6906937 h1:1TpdIqF9mtQfhNfwOpXdpJTMhx66PonVCCvYcGWvu/I=
+github.com/rook/secrets v0.0.0-20240315053144-3195f6906937/go.mod h1:jOxzr6jXuSz9UztMhEpcBi1/vPygUA4z9kFuFj+6zd8=
+github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
+github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
+github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
 github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
 github.com/ryanuber/go-glob v1.0.0 h1:iQh3xXAumdQ+4Ufa5b25cRpC5TYKlno6hsv6Cb3pkBk=
 github.com/ryanuber/go-glob v1.0.0/go.mod h1:807d1WSdnB0XRJzKNil9Om6lcp/3a0v4qIHxIXzX/Yc=
+github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
+github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
+github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
+github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
+github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
+github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
 github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
 github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
 github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
+github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
+github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
+github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
 github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
+github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
 github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
+github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
+github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
+github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
+github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI=
+github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
 github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/spf13/pflag v1.0.2/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
 github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
 github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
+github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
 github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
 github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
 github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
@@ -474,6 +734,15 @@ github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO
 github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
 github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
 github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
+github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
+github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
+github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
+github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
+github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
+github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
+github.com/vektah/gqlparser v1.1.2/go.mod h1:1ycwN7Ij5njmMkPPAOaRFY4rET2Enx7IkVv3vaXspKw=
+github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
+github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
 github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
 github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
 github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
@@ -481,6 +750,14 @@ github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9dec
 github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
 github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
 github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
+go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
+go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
+go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
+go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738/go.mod h1:dnLIgRNXwCJa5e+c6mIZCrds/GIG4ncV9HhK5PX7jPg=
+go.etcd.io/etcd v0.5.0-alpha.5.0.20200910180754-dd1b699fc489/go.mod h1:yVHk9ub3CSBatqGNg7GRmsnfLWtoW60w4eDYfh7vHDg=
+go.mongodb.org/mongo-driver v1.0.3/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
+go.mongodb.org/mongo-driver v1.1.1/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
+go.mongodb.org/mongo-driver v1.1.2/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
 go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
 go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
 go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
@@ -489,20 +766,36 @@ go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
 go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
 go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
 go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
+go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
+go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
+go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
+go.uber.org/zap v1.9.1/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
+go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
+golang.org/x/crypto v0.0.0-20180820150726-614d502a4dac/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
+golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
+golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
+golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
+golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
 golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20190320223903-b7391e95e576/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
 golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
 golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
 golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190617133340-57b3e21c3d56/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
 golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
 golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
 golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
 golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20201208171446-5f87f3452ae9/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
 golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
 golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
-golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
+golang.org/x/crypto v0.0.0-20220511200225-c6db032c6c88/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
 golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
 golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
 golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
+golang.org/x/crypto v0.16.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
 golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
 golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
 golang.org/x/crypto v0.21.0 h1:X31++rzVUdKhX5sWmSOFZxx8UW/ldWx55cbf08iNAMA=
@@ -546,12 +839,19 @@ golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2
 golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
 golang.org/x/mod v0.6.0/go.mod h1:4mET923SAdbXp2ki8ey+zGs1SLqsuM2Y0uvdZR/fUNI=
 golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
+golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190320064053-1272bf9dcd53/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
 golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
 golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
 golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
@@ -560,7 +860,9 @@ golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLL
 golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20191004110552-13f9640d40b9/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
@@ -576,10 +878,12 @@ golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/
 golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
 golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
 golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
+golang.org/x/net v0.0.0-20201010224723-4f7140c49acb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
 golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
 golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
 golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
 golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
 golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
 golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
 golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
@@ -601,8 +905,10 @@ golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY=
 golang.org/x/net v0.3.1-0.20221206200815-1e63c2f08a10/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE=
 golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
 golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
+golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
 golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
 golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
+golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U=
 golang.org/x/net v0.22.0 h1:9sGLhx7iRIHEiX0oAJ3MRZMUCElJgy7Br1nO+AMN3Tc=
 golang.org/x/net v0.22.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
 golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
@@ -641,26 +947,39 @@ golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJ
 golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sys v0.0.0-20170830134202-bb24a47a89ea/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190209173611-3b5209105503/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190321052220-f7bb7a8bee54/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -675,14 +994,18 @@ golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7w
 golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201112073958-5cba982894dd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -693,6 +1016,7 @@ golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBc
 golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210616045830-e2b7044e8c71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -726,6 +1050,7 @@ golang.org/x/sys v0.15.0/go.mod
h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4= golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= @@ -733,12 +1058,14 @@ golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= golang.org/x/term v0.3.0/go.mod h1:q750SLmJuPmVoN1blW3UFBPREJfb1KmY3vwxfr+nFDA= golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= +golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0= golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk= golang.org/x/term v0.18.0 h1:FcHjZXDMxI8mM3nwhX9HlKop4C0YQvCVCdwYl2wOtE8= golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58= +golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod 
h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -752,35 +1079,47 @@ golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ= golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ= golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= +golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20200416051211-89c76fbcd5d1/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= +golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= +golang.org/x/time v0.0.0-20201208040808-7e3f01d25324/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk= golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM= +golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= 
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20181011042414-1f849cf54d09/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20190125232054-d66bd3c5d5a6/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= +golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +golang.org/x/tools v0.0.0-20190617190820-da514acc4774/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +golang.org/x/tools v0.0.0-20190624222133-a101b041ded4/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= golang.org/x/tools 
v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20190920225731-5eefd052ad72/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= @@ -834,6 +1173,7 @@ golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8T golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8= +gomodules.xyz/jsonpatch/v2 v2.0.1/go.mod h1:IhYNNY4jnS53ZnfE4PAmpKtDpTCj1JFXc+3mwe7XcUU= google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE= google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M= google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg= @@ -914,6 +1254,7 @@ google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6D google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto 
v0.0.0-20201019141844-1ed22bb0c154/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= +google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= @@ -960,6 +1301,7 @@ google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZi google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38= google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM= google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= +google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= @@ -1005,23 +1347,32 @@ google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqw google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= google.golang.org/protobuf v1.32.0 h1:pPC6BG5ex8PDFnkbrGU3EixyhKcQ2aDuBS36lqK/C7I= google.golang.org/protobuf v1.32.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= +gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 
v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw= gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= gopkg.in/h2non/gock.v1 v1.0.15/go.mod h1:sX4zAkdYX1TRGJ2JY156cFspQn4yRWn6p9EMdODlynE= gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= +gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= +gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k= +gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo= +gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI= +gopkg.in/square/go-jose.v2 v2.3.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI= gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ= gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= +gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74= gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= 
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= @@ -1032,6 +1383,8 @@ gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C gopkg.in/yaml.v3 v3.0.0/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw= +gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk= honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= @@ -1039,21 +1392,56 @@ honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWh honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg= honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= +k8s.io/api v0.0.0-20190409021203-6e4e0e4f393b/go.mod h1:iuAfoD4hCxJ8Onx9kaTIt30j7jUFS00AXQi6QMi99vA= +k8s.io/api v0.18.3/go.mod h1:UOaMwERbqJMfeeeHc8XJKawj4P9TgDRnViIqqBeH2QA= +k8s.io/api v0.19.0/go.mod h1:I1K45XlvTrDjmj5LoM5LuP/KYrhWbjUKT/SoPG0qTjw= +k8s.io/api v0.19.2/go.mod h1:IQpK0zFQ1xc5iNIQPqzgoOwuFugaYHK4iCknlAQP9nI= +k8s.io/api v0.20.0/go.mod h1:HyLC5l5eoS/ygQYl1BXBgFzWNlkHiAuyNAbevIn+FKg= 
+k8s.io/api v0.20.1/go.mod h1:KqwcCVogGxQY3nBlRpwt+wpAMF/KjaCc7RpywacvqUo= +k8s.io/api v0.20.4/go.mod h1:++lNL1AJMkDymriNniQsWRkMDzRaX2Y/POTUi8yvqYQ= k8s.io/api v0.23.5/go.mod h1:Na4XuKng8PXJ2JsploYYrivXrINeTaycCGcYgF91Xm8= k8s.io/api v0.26.0/go.mod h1:k6HDTaIFC8yn1i6pSClSqIwLABIcLV9l5Q4EcngKnQg= k8s.io/api v0.29.2 h1:hBC7B9+MU+ptchxEqTNW2DkUosJpp1P+Wn6YncZ474A= k8s.io/api v0.29.2/go.mod h1:sdIaaKuU7P44aoyyLlikSLayT6Vb7bvJNCX105xZXY0= +k8s.io/apiextensions-apiserver v0.0.0-20190409022649-727a075fdec8/go.mod h1:IxkesAMoaCRoLrPJdZNZUQp9NfZnzqaVzLhb2VEQzXE= +k8s.io/apiextensions-apiserver v0.18.3/go.mod h1:TMsNGs7DYpMXd+8MOCX8KzPOCx8fnZMoIGB24m03+JE= +k8s.io/apiextensions-apiserver v0.20.1/go.mod h1:ntnrZV+6a3dB504qwC5PN/Yg9PBiDNt1EVqbW2kORVk= +k8s.io/apimachinery v0.0.0-20190404173353-6a84e37a896d/go.mod h1:ccL7Eh7zubPUSh9A3USN90/OzHNSVN6zxzde07TDCL0= +k8s.io/apimachinery v0.18.3/go.mod h1:OaXp26zu/5J7p0f92ASynJa1pZo06YlV9fG7BoWbCko= +k8s.io/apimachinery v0.19.0/go.mod h1:DnPGDnARWFvYa3pMHgSxtbZb7gpzzAZ1pTfaUNDVlmA= +k8s.io/apimachinery v0.19.2/go.mod h1:DnPGDnARWFvYa3pMHgSxtbZb7gpzzAZ1pTfaUNDVlmA= +k8s.io/apimachinery v0.20.0/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU= +k8s.io/apimachinery v0.20.1/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU= +k8s.io/apimachinery v0.20.4/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU= k8s.io/apimachinery v0.23.5/go.mod h1:BEuFMMBaIbcOqVIJqNZJXGFTP4W6AycEpb5+m/97hrM= k8s.io/apimachinery v0.26.0/go.mod h1:tnPmbONNJ7ByJNz9+n9kMjNP8ON+1qoAIIC70lztu74= k8s.io/apimachinery v0.29.2 h1:EWGpfJ856oj11C52NRCHuU7rFDwxev48z+6DSlGNsV8= k8s.io/apimachinery v0.29.2/go.mod h1:6HVkd1FwxIagpYrHSwJlQqZI3G9LfYWRPAkUvLnXTKU= +k8s.io/apiserver v0.18.3/go.mod h1:tHQRmthRPLUtwqsOnJJMoI8SW3lnoReZeE861lH8vUw= +k8s.io/apiserver v0.20.1/go.mod h1:ro5QHeQkgMS7ZGpvf4tSMx6bBOgPfE+f52KwvXfScaU= +k8s.io/client-go v0.18.3/go.mod h1:4a/dpQEvzAhT1BbuWW09qvIaGw6Gbu1gZYiQZIi1DMw= +k8s.io/client-go v0.19.0/go.mod 
h1:H9E/VT95blcFQnlyShFgnFT9ZnJOAceiUHM3MlRC+mU= +k8s.io/client-go v0.19.2/go.mod h1:S5wPhCqyDNAlzM9CnEdgTGV4OqhsW3jGO1UM1epwfJA= +k8s.io/client-go v0.20.0/go.mod h1:4KWh/g+Ocd8KkCwKF8vUNnmqgv+EVnQDK4MBF4oB5tY= +k8s.io/client-go v0.20.1/go.mod h1:/zcHdt1TeWSd5HoUe6elJmHSQ6uLLgp4bIJHVEuy+/Y= k8s.io/client-go v0.23.5/go.mod h1:flkeinTO1CirYgzMPRWxUCnV0G4Fbu2vLhYCObnt/r4= k8s.io/client-go v0.29.2 h1:FEg85el1TeZp+/vYJM7hkDlSTFZ+c5nnK44DJ4FyoRg= k8s.io/client-go v0.29.2/go.mod h1:knlvFZE58VpqbQpJNbCbctTVXcd35mMyAAwBdpt4jrA= +k8s.io/code-generator v0.18.3/go.mod h1:TgNEVx9hCyPGpdtCWA34olQYLkh3ok9ar7XfSsr8b6c= +k8s.io/code-generator v0.19.0/go.mod h1:moqLn7w0t9cMs4+5CQyxnfA/HV8MF6aAVENF+WZZhgk= +k8s.io/code-generator v0.20.0/go.mod h1:UsqdF+VX4PU2g46NC2JRs4gc+IfrctnwHb76RNbWHJg= k8s.io/code-generator v0.20.1/go.mod h1:UsqdF+VX4PU2g46NC2JRs4gc+IfrctnwHb76RNbWHJg= +k8s.io/component-base v0.18.3/go.mod h1:bp5GzGR0aGkYEfTj+eTY0AN/vXTgkJdQXjNTTVUaa3k= +k8s.io/component-base v0.20.1/go.mod h1:guxkoJnNoh8LNrbtiQOlyp2Y2XFCZQmrcg2n/DeYNLk= +k8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0= +k8s.io/gengo v0.0.0-20200114144118-36b2048a9120/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0= k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0= +k8s.io/gengo v0.0.0-20200428234225-8167cfdcfc14/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0= k8s.io/gengo v0.0.0-20201113003025-83324d819ded/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E= k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E= +k8s.io/klog v0.0.0-20181102134211-b9b56d5dfc92/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk= +k8s.io/klog v0.3.0/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk= +k8s.io/klog v1.0.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I= k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE= 
k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y= k8s.io/klog/v2 v2.4.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y= @@ -1062,11 +1450,18 @@ k8s.io/klog/v2 v2.60.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= k8s.io/klog/v2 v2.80.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= k8s.io/klog/v2 v2.120.1 h1:QXU6cPEOIslTGvZaXvFWiP9VKyeet3sawzTOvdXb4Vw= k8s.io/klog/v2 v2.120.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= +k8s.io/kube-openapi v0.0.0-20180731170545-e3762e86a74c/go.mod h1:BXM9ceUBTj2QnfH2MK1odQs778ajze1RxcmP6S8RVVc= +k8s.io/kube-openapi v0.0.0-20200410145947-61e04a5be9a6/go.mod h1:GRQhZsXIAJ1xR0C9bd8UpWHZ5plfAS9fzPjJuQ6JL3E= +k8s.io/kube-openapi v0.0.0-20200805222855-6aeccd4b50c6/go.mod h1:UuqjUnNftUyPE5H64/qeyjQoUZhGpeFDVdxjTeEVN2o= k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd/go.mod h1:WOJ3KddDSol4tAGcJo0Tvi+dK12EcqSLqcWsryKMpfM= k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65/go.mod h1:sX9MT8g7NVZM5lVL/j8QyCCJe8YSMW30QvGZWaCIDIk= k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280/go.mod h1:+Axhij7bCpeqhklhUTe3xmOn6bWxolyZEeyaFpjGtl4= k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag= k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98= +k8s.io/utils v0.0.0-20190506122338-8fab8cb257d5/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew= +k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew= +k8s.io/utils v0.0.0-20200729134348-d5654de09c73/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= +k8s.io/utils v0.0.0-20201110183641-67b214c5f920/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils v0.0.0-20211116205334-6203023598ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils 
v0.0.0-20221107191617-1a15be271d1d/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= @@ -1076,15 +1471,22 @@ k8s.io/utils v0.0.0-20240102154912-e7106e64919e/go.mod h1:OLgZIPagt7ERELqWJFomSt rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0= rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA= +sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.7/go.mod h1:PHgbrJT7lCHcxMU+mDHEm+nx46H4zuuHZkDP6icnhu0= +sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.14/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg= +sigs.k8s.io/controller-runtime v0.2.2/go.mod h1:9dyohw3ZtoXQuV1e766PHUn+cmrRCIcBh6XIMFNMZ+I= sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6/go.mod h1:p4QtZmO4uMYipTQNzagwnNoseA6OxSUutVw05NhYDRs= sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0= sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo= sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0= +sigs.k8s.io/structured-merge-diff/v3 v3.0.0-20200116222232-67a7b8c61874/go.mod h1:PlARxl6Hbt/+BC80dRLi1qAmnMqwqDg62YvvVkZjemw= +sigs.k8s.io/structured-merge-diff/v3 v3.0.0/go.mod h1:PlARxl6Hbt/+BC80dRLi1qAmnMqwqDg62YvvVkZjemw= +sigs.k8s.io/structured-merge-diff/v4 v4.0.1/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw= sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw= sigs.k8s.io/structured-merge-diff/v4 v4.2.1/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4= sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ihdVs8cGKBraizNC69E= sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4= sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod 
h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08= +sigs.k8s.io/testing_frameworks v0.1.1/go.mod h1:VVBKrHmJ6Ekkfz284YKhQePcdycOzNH9qL6ht1zEr/U= sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o= sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc= sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8= diff --git a/pkg/daemon/ceph/osd/kms/azure.go b/pkg/daemon/ceph/osd/kms/azure.go new file mode 100644 index 000000000000..4fffc8a84b6b --- /dev/null +++ b/pkg/daemon/ceph/osd/kms/azure.go @@ -0,0 +1,102 @@ +/* +Copyright 2024 The Rook Authors. All rights reserved. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/
+
+package kms
+
+import (
+	"context"
+	"fmt"
+	"os"
+
+	"github.com/libopenstorage/secrets"
+	"github.com/libopenstorage/secrets/azure"
+	"github.com/pkg/errors"
+	"github.com/rook/rook/pkg/clusterd"
+	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+const (
+	//#nosec G101 -- This is only the k8s secret name
+	azureClientCertSecretName = "AZURE_CERT_SECRET_NAME"
+)
+
+var kmsAzureManadatoryConnectionDetails = []string{azure.AzureVaultURL, azure.AzureTenantID, azure.AzureClientID, azureClientCertSecretName}
+
+// IsAzure determines whether the configured KMS is Azure Key Vault
+func (c *Config) IsAzure() bool {
+	return c.Provider == secrets.TypeAzure
+}
+
+// InitAzure initializes the Azure Key Vault client
+func InitAzure(ctx context.Context, context *clusterd.Context, namespace string, config map[string]string) (secrets.Secrets, error) {
+	azureKVConfig, removeCertFiles, err := azureKVCert(ctx, context, namespace, config)
+	if err != nil {
+		return nil, errors.Wrapf(err, "failed to set up azure client cert authentication")
+	}
+	defer removeCertFiles()
+
+	// Convert the config from map[string]string to the map[string]interface{} expected by the secrets library
+	secretConfig := make(map[string]interface{})
+	for key, value := range azureKVConfig {
+		secretConfig[key] = value
+	}
+
+	azureSecrets, err := azure.New(secretConfig)
+	if err != nil {
+		return nil, errors.Wrapf(err, "failed to initialize azure key vault client")
+	}
+
+	return azureSecrets, nil
+}
+
+// azureKVCert retrieves the Azure client cert from the Kubernetes secret and stores it in a temporary file
+func azureKVCert(ctx context.Context, context *clusterd.Context, namespace string, config map[string]string) (newConfig map[string]string, removeCertFiles removeCertFilesFunction, retErr error) {
+	var filesToRemove []*os.File
+	defer func() {
+		removeCertFiles = getRemoveCertFiles(filesToRemove)
+		if retErr != nil {
+			removeCertFiles()
+			removeCertFiles = nil
+		}
+	}()
+
+	clientCertSecretName := config[azureClientCertSecretName]
+	if clientCertSecretName == "" {
+		return nil, removeCertFiles,
fmt.Errorf("azure cert secret name is not provided in the connection details")
+	}
+
+	secret, err := context.Clientset.CoreV1().Secrets(namespace).Get(ctx, clientCertSecretName, v1.GetOptions{})
+	if err != nil {
+		return nil, removeCertFiles, errors.Wrapf(err, "failed to fetch client cert k8s secret %q", clientCertSecretName)
+	}
+	// Generate a temp file to hold the client cert
+	file, err := createTmpFile("", "cert.pem")
+	if err != nil {
+		return nil, removeCertFiles, errors.Wrapf(err, "failed to generate temp file for k8s secret %q content", clientCertSecretName)
+	}
+
+	err = os.WriteFile(file.Name(), secret.Data["CLIENT_CERT"], 0400)
+	if err != nil {
+		return nil, removeCertFiles, errors.Wrapf(err, "failed to write k8s secret %q content to a file", clientCertSecretName)
+	}
+
+	// Point the client cert path connection detail at the temp file
+	config[azure.AzureClientCertPath] = file.Name()
+
+	filesToRemove = append(filesToRemove, file)
+
+	return config, removeCertFiles, nil
+}
diff --git a/pkg/daemon/ceph/osd/kms/azure_test.go b/pkg/daemon/ceph/osd/kms/azure_test.go
new file mode 100644
index 000000000000..4045f3f07848
--- /dev/null
+++ b/pkg/daemon/ceph/osd/kms/azure_test.go
@@ -0,0 +1,85 @@
+/*
+Copyright 2024 The Rook Authors. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+	http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/ + +package kms + +import ( + "context" + "testing" + + "github.com/libopenstorage/secrets/azure" + "github.com/rook/rook/pkg/clusterd" + "github.com/rook/rook/pkg/operator/test" + "github.com/stretchr/testify/assert" + v1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +func Test_AzureKVCert(t *testing.T) { + ctx := context.TODO() + ns := "rook-ceph" + context := &clusterd.Context{Clientset: test.New(t, 3)} + t.Run("azure secret name not provided in config", func(t *testing.T) { + config := map[string]string{ + "KMS_PROVIDER": "azure-kv", + } + _, _, err := azureKVCert(ctx, context, ns, config) + assert.Error(t, err) + }) + + t.Run("invalid azure secret name in config", func(t *testing.T) { + config := map[string]string{ + "KMS_PROVIDER": "azure-kv", + azureClientCertSecretName: "invalid-name", + } + + cert := &v1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: "azure-cert", + Namespace: ns, + }, + Data: map[string][]byte{"CLIENT_CERT": []byte("bar")}, + } + + _, err := context.Clientset.CoreV1().Secrets(ns).Create(ctx, cert, metav1.CreateOptions{}) + assert.NoError(t, err) + _, _, err = azureKVCert(ctx, context, ns, config) + assert.Error(t, err) + }) + + t.Run("valid azure cert secret is available", func(t *testing.T) { + config := map[string]string{ + "KMS_PROVIDER": "azure-kv", + azureClientCertSecretName: "azure-cert-2", + } + + cert := &v1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: "azure-cert-2", + Namespace: ns, + }, + Data: map[string][]byte{"CLIENT_CERT": []byte("bar")}, + } + + _, err := context.Clientset.CoreV1().Secrets(ns).Create(ctx, cert, metav1.CreateOptions{}) + assert.NoError(t, err) + newConfig, removeCertFile, err := azureKVCert(ctx, context, ns, config) + assert.NoError(t, err) + assert.FileExists(t, newConfig[azure.AzureClientCertPath]) + removeCertFile() + assert.NoFileExists(t, newConfig[azure.AzureClientCertPath]) + }) +} diff --git a/pkg/daemon/ceph/osd/kms/envs.go b/pkg/daemon/ceph/osd/kms/envs.go 
index ed83ed473d7b..89d8c9924758 100644 --- a/pkg/daemon/ceph/osd/kms/envs.go +++ b/pkg/daemon/ceph/osd/kms/envs.go @@ -31,7 +31,7 @@ import ( var ( kmipKMSPrefix = "KMIP_" - knownKMSPrefix = []string{"VAULT_", "IBM_", kmipKMSPrefix} + knownKMSPrefix = []string{"VAULT_", "IBM_", kmipKMSPrefix, "AZURE_"} ) // VaultTokenEnvVarFromSecret returns the kms token secret value as an env var diff --git a/pkg/daemon/ceph/osd/kms/kms.go b/pkg/daemon/ceph/osd/kms/kms.go index 7a545b8dd438..de555e425320 100644 --- a/pkg/daemon/ceph/osd/kms/kms.go +++ b/pkg/daemon/ceph/osd/kms/kms.go @@ -70,6 +70,8 @@ func NewConfig(context *clusterd.Context, clusterSpec *cephv1.ClusterSpec, clust config.Provider = TypeIBM case TypeKMIP: config.Provider = TypeKMIP + case secrets.TypeAzure: + config.Provider = secrets.TypeAzure default: logger.Errorf("unsupported kms type %q", Provider) } @@ -94,7 +96,7 @@ func (c *Config) PutSecret(secretName, secretValue string) error { return errors.Wrap(err, "failed to init vault kms") } k := buildVaultKeyContext(c.clusterSpec.Security.KeyManagementService.ConnectionDetails) - err = put(v, GenerateOSDEncryptionSecretName(secretName), secretValue, k) + err = putSecret(v, GenerateOSDEncryptionSecretName(secretName), secretValue, k) if err != nil { return errors.Wrap(err, "failed to put secret in vault") } @@ -145,6 +147,17 @@ func (c *Config) PutSecret(secretName, secretValue string) error { } } + if c.IsAzure() { + v, err := InitAzure(c.ClusterInfo.Context, c.context, c.ClusterInfo.Namespace, c.clusterSpec.Security.KeyManagementService.ConnectionDetails) + if err != nil { + return errors.Wrap(err, "failed to init azure key vault") + } + err = putSecret(v, GenerateOSDEncryptionSecretName(secretName), secretValue, map[string]string{}) + if err != nil { + return errors.Wrap(err, "failed to put secret in azure key vault") + } + } + return nil } @@ -167,7 +180,7 @@ func (c *Config) GetSecret(secretName string) (string, error) { } k := 
buildVaultKeyContext(c.clusterSpec.Security.KeyManagementService.ConnectionDetails) - value, err = get(v, GenerateOSDEncryptionSecretName(secretName), k) + value, err = getSecret(v, GenerateOSDEncryptionSecretName(secretName), k) if err != nil { return "", errors.Wrap(err, "failed to get secret from vault") } @@ -199,6 +212,16 @@ func (c *Config) GetSecret(secretName string) (string, error) { return "", errors.Wrap(err, "failed to get key from kmip") } } + if c.IsAzure() { + v, err := InitAzure(c.ClusterInfo.Context, c.context, c.ClusterInfo.Namespace, c.clusterSpec.Security.KeyManagementService.ConnectionDetails) + if err != nil { + return "", errors.Wrap(err, "failed to init azure key vault") + } + value, err = getSecret(v, GenerateOSDEncryptionSecretName(secretName), map[string]string{}) + if err != nil { + return "", errors.Wrap(err, "failed to get secret from azure key vault") + } + } return value, nil } @@ -282,6 +305,18 @@ func (c *Config) DeleteSecret(secretName string) error { } } + if c.IsAzure() { + v, err := InitAzure(c.ClusterInfo.Context, c.context, c.ClusterInfo.Namespace, c.clusterSpec.Security.KeyManagementService.ConnectionDetails) + if err != nil { + return errors.Wrap(err, "failed to init azure key vault") + } + err = deleteSecret(v, GenerateOSDEncryptionSecretName(secretName), map[string]string{}) + if err != nil { + return errors.Wrap(err, "failed to delete secret from azure key vault") + } + + } + return nil } @@ -302,10 +337,12 @@ func ValidateConnectionDetails(ctx context.Context, clusterdContext *clusterd.Co } } - // A token must be specified if token-auth is used - if !kms.IsK8sAuthEnabled() && kms.TokenSecretName == "" { - if !kms.IsTokenAuthEnabled() { - return errors.New("failed to validate kms configuration (missing token in spec)") + // A token must be specified if token-auth is used for KMS other than Azure + if !kms.IsAzureMS() { + if !kms.IsK8sAuthEnabled() && kms.TokenSecretName == "" { + if !kms.IsTokenAuthEnabled() { + return 
errors.New("failed to validate kms configuration (missing token in spec)") + } } } @@ -390,6 +427,13 @@ func ValidateConnectionDetails(ctx context.Context, clusterdContext *clusterd.Co } } + case secrets.TypeAzure: + for _, config := range kmsAzureManadatoryConnectionDetails { + if GetParam(kms.ConnectionDetails, config) == "" { + return errors.Errorf("failed to validate kms config %q. cannot be empty", config) + } + } + default: return errors.Errorf("failed to validate kms provider connection details (provider %q not supported)", provider) } @@ -424,3 +468,50 @@ func SetTokenToEnvVar(ctx context.Context, clusterdContext *clusterd.Context, to return nil } + +func putSecret(v secrets.Secrets, secretName, secretValue string, keyContext map[string]string) error { + // First we must see if the key entry already exists; if it does, we do nothing + key, err := getSecret(v, secretName, keyContext) + if err != nil && err != secrets.ErrInvalidSecretId && err != secrets.ErrSecretNotFound { + return errors.Wrapf(err, "failed to get secret %q in kms", secretName) + } + if key != "" { + logger.Debugf("key %q already exists in kms!", secretName) + if key != secretValue { + logger.Errorf("value for secret %q is not expected to be changed", secretName) + } + return nil + } + + // Build Secret + data := make(map[string]interface{}) + data[secretName] = secretValue + + //nolint:gosec // Write the encryption key in Vault + _, err = v.PutSecret(secretName, data, keyContext) + if err != nil { + return errors.Wrapf(err, "failed to put secret %q in kms", secretName) + } + + return nil +} + +func getSecret(v secrets.Secrets, secretName string, keyContext map[string]string) (string, error) { + //nolint:gosec // Write the encryption key in Vault + s, _, err := v.GetSecret(secretName, keyContext) + if err != nil { + return "", err + } + + return s[secretName].(string), nil +} + +func deleteSecret(v secrets.Secrets, secretName string, keyContext map[string]string) error { + //nolint:gosec // 
Write the encryption key in Vault + err := v.DeleteSecret(secretName, keyContext) + if err != nil { + return errors.Wrapf(err, "failed to delete secret %q in vault", secretName) + } + + return nil +} diff --git a/pkg/daemon/ceph/osd/kms/kms_test.go b/pkg/daemon/ceph/osd/kms/kms_test.go index 0d3bb7421eaf..bfa204156317 100644 --- a/pkg/daemon/ceph/osd/kms/kms_test.go +++ b/pkg/daemon/ceph/osd/kms/kms_test.go @@ -21,6 +21,8 @@ import ( "os" "testing" + "github.com/libopenstorage/secrets" + "github.com/libopenstorage/secrets/azure" cephv1 "github.com/rook/rook/pkg/apis/ceph.rook.io/v1" "github.com/rook/rook/pkg/clusterd" "github.com/rook/rook/pkg/operator/test" @@ -74,6 +76,11 @@ func TestValidateConnectionDetails(t *testing.T) { }, TokenSecretName: "kmip-token", } + azureKMSSpec := &cephv1.KeyManagementServiceSpec{ + ConnectionDetails: map[string]string{ + "KMS_PROVIDER": secrets.TypeAzure, + }, + } t.Run("no kms provider given", func(t *testing.T) { err := ValidateConnectionDetails(ctx, clusterdContext, kms, ns) @@ -197,6 +204,39 @@ func TestValidateConnectionDetails(t *testing.T) { // all the details assert.Equal(t, ibmKMSSpec.ConnectionDetails["IBM_KP_SERVICE_API_KEY"], "foo") }) + + t.Run("azure kms - vault URL is missing ", func(t *testing.T) { + err := ValidateConnectionDetails(ctx, clusterdContext, azureKMSSpec, ns) + assert.Error(t, err, "") + assert.EqualError(t, err, "failed to validate kms config \"AZURE_VAULT_URL\". cannot be empty") + }) + + t.Run("azure kms - tenant ID is missing ", func(t *testing.T) { + azureKMSSpec.ConnectionDetails[azure.AzureVaultURL] = "test" + err := ValidateConnectionDetails(ctx, clusterdContext, azureKMSSpec, ns) + assert.Error(t, err, "") + assert.EqualError(t, err, "failed to validate kms config \"AZURE_TENANT_ID\". 
cannot be empty") + }) + + t.Run("azure kms - client ID is missing ", func(t *testing.T) { + azureKMSSpec.ConnectionDetails[azure.AzureTenantID] = "test" + err := ValidateConnectionDetails(ctx, clusterdContext, azureKMSSpec, ns) + assert.Error(t, err, "") + assert.EqualError(t, err, "failed to validate kms config \"AZURE_CLIENT_ID\". cannot be empty") + }) + + t.Run("azure kms - cert secret is missing ", func(t *testing.T) { + azureKMSSpec.ConnectionDetails[azure.AzureClientID] = "test" + err := ValidateConnectionDetails(ctx, clusterdContext, azureKMSSpec, ns) + assert.Error(t, err, "") + assert.EqualError(t, err, "failed to validate kms config \"AZURE_CERT_SECRET_NAME\". cannot be empty") + }) + + t.Run("azure kms - success", func(t *testing.T) { + azureKMSSpec.ConnectionDetails[azureClientCertSecretName] = "test" + err := ValidateConnectionDetails(ctx, clusterdContext, azureKMSSpec, ns) + assert.NoError(t, err) + }) } func TestSetTokenToEnvVar(t *testing.T) { diff --git a/pkg/daemon/ceph/osd/kms/vault.go b/pkg/daemon/ceph/osd/kms/vault.go index 1c7b98d9aec0..bbfa98b67484 100644 --- a/pkg/daemon/ceph/osd/kms/vault.go +++ b/pkg/daemon/ceph/osd/kms/vault.go @@ -186,50 +186,6 @@ func getRemoveCertFilesFunc(filesToRemove []*os.File) removeCertFilesFunction { }) } -func put(v secrets.Secrets, secretName, secretValue string, keyContext map[string]string) error { - // First we must see if the key entry already exists, if it does we do nothing - key, err := get(v, secretName, keyContext) - if err != nil && err != secrets.ErrInvalidSecretId { - return errors.Wrapf(err, "failed to get secret %q in vault", secretName) - } - if key != "" { - logger.Debugf("key %q already exists in vault!", secretName) - return nil - } - - // Build Secret - data := make(map[string]interface{}) - data[secretName] = secretValue - - //nolint:gosec // Write the encryption key in Vault - _, err = v.PutSecret(secretName, data, keyContext) - if err != nil { - return errors.Wrapf(err, "failed to put 
secret %q in vault", secretName) - } - - return nil -} - -func get(v secrets.Secrets, secretName string, keyContext map[string]string) (string, error) { - //nolint:gosec // Write the encryption key in Vault - s, _, err := v.GetSecret(secretName, keyContext) - if err != nil { - return "", err - } - - return s[secretName].(string), nil -} - -func deleteSecret(v secrets.Secrets, secretName string, keyContext map[string]string) error { - //nolint:gosec // Write the encryption key in Vault - err := v.DeleteSecret(secretName, keyContext) - if err != nil { - return errors.Wrapf(err, "failed to delete secret %q in vault", secretName) - } - - return nil -} - func buildVaultKeyContext(config map[string]string) map[string]string { // Key context is just the Vault namespace, available in the enterprise version only keyContext := map[string]string{secrets.KeyVaultNamespace: config[api.EnvVaultNamespace]} From 308236d84135eabf1043e45443c15ffb068a0c35 Mon Sep 17 00:00:00 2001 From: Redouane Kachach Date: Wed, 13 Mar 2024 20:09:33 +0100 Subject: [PATCH 48/65] manifest: adding some missing RBAC roles to secondary cluster Signed-off-by: Redouane Kachach --- deploy/examples/common-second-cluster.yaml | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/deploy/examples/common-second-cluster.yaml b/deploy/examples/common-second-cluster.yaml index c19a618c2d55..13b43ca436c0 100644 --- a/deploy/examples/common-second-cluster.yaml +++ b/deploy/examples/common-second-cluster.yaml @@ -288,3 +288,17 @@ kind: ServiceAccount metadata: name: rook-ceph-rgw namespace: rook-ceph-secondary # namespace:cluster +--- +# Allow the ceph mgr to access cluster-wide resources necessary for the mgr modules +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: rook-ceph-mgr-secondary-cluster +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: rook-ceph-mgr-cluster +subjects: + - kind: ServiceAccount + name: rook-ceph-mgr + namespace: 
rook-ceph-secondary # namespace:cluster From 0aa431c7e1aba4631b67fed58bee271855aa1d4b Mon Sep 17 00:00:00 2001 From: parth-gr Date: Thu, 7 Mar 2024 18:03:03 +0530 Subject: [PATCH 49/65] external: add ci and design document Signed-off-by: parth-gr --- .github/workflows/canary-integration-test.yml | 35 +++++- Documentation/CRDs/Cluster/.pages | 2 +- .../CRDs/Cluster/ceph-cluster-crd.md | 4 +- .../CRDs/Cluster/external-cluster/.pages | 3 + .../external-cluster.md | 18 ++- .../topology-for-external-mode.md | 118 ++++++++++++++++++ Documentation/Getting-Started/glossary.md | 2 +- ROADMAP.md | 1 + .../create-external-cluster-resources.py | 24 ++-- 9 files changed, 179 insertions(+), 28 deletions(-) create mode 100644 Documentation/CRDs/Cluster/external-cluster/.pages rename Documentation/CRDs/Cluster/{ => external-cluster}/external-cluster.md (93%) create mode 100644 Documentation/CRDs/Cluster/external-cluster/topology-for-external-mode.md diff --git a/.github/workflows/canary-integration-test.yml b/.github/workflows/canary-integration-test.yml index 54e6aeef3d44..6874c444079d 100644 --- a/.github/workflows/canary-integration-test.yml +++ b/.github/workflows/canary-integration-test.yml @@ -4,7 +4,7 @@ on: workflow_call: inputs: ceph_images: - description: 'JSON list of Ceph images for creating Ceph cluster' + description: "JSON list of Ceph images for creating Ceph cluster" default: '["quay.io/ceph/ceph:v18"]' type: string @@ -229,6 +229,39 @@ jobs: echo "script failed because wrong realm was passed" fi + - name: test topology flags + run: | + toolbox=$(kubectl get pod -l app=rook-ceph-tools -n rook-ceph -o jsonpath='{.items[*].metadata.name}') + # create 3 replica-1 pools + sed -i 's/replicapool/replica1a/' deploy/examples/pool-test.yaml + kubectl create -f deploy/examples/pool-test.yaml + sed -i 's/replica1a/replica1b/' deploy/examples/pool-test.yaml + kubectl create -f deploy/examples/pool-test.yaml + sed -i 's/replica1b/replica1c/' deploy/examples/pool-test.yaml + 
kubectl create -f deploy/examples/pool-test.yaml + # bring back the original file + sed -i 's/replica1c/replicapool/' deploy/examples/pool-test.yaml + + # check and wait for the pools to get ready + kubectl wait --for='jsonpath={.status.phase}=Ready' Cephblockpool/replica1a -nrook-ceph + kubectl wait --for='jsonpath={.status.phase}=Ready' Cephblockpool/replica1b -nrook-ceph + kubectl wait --for='jsonpath={.status.phase}=Ready' Cephblockpool/replica1c -nrook-ceph + + # pass correct flags + kubectl -n rook-ceph exec $toolbox -- python3 /etc/ceph/create-external-cluster-resources.py --rbd-data-pool-name replicapool --topology-pools replica1a,replica1b,replica1c --topology-failure-domain-label hostname --topology-failure-domain-values minikube,minikube-m02,minikube-m03 + # pass a pool that does not exist + if output=$(kubectl -n rook-ceph exec $toolbox -- python3 /etc/ceph/create-external-cluster-resources.py --rbd-data-pool-name replicapool --topology-pools ab,cd,ef --topology-failure-domain-label hostname --topology-failure-domain-values minikube,minikube-m02,minikube-m03); then + echo "script run completed with stderr error after passing the wrong pools: $output" + else + echo "script failed because the wrong pools do not exist" + fi + # don't pass all the topology flags + if output=$(kubectl -n rook-ceph exec $toolbox -- python3 /etc/ceph/create-external-cluster-resources.py --rbd-data-pool-name replicapool --topology-pools replica1a,replica1b,replica1c --topology-failure-domain-values minikube,minikube-m02,minikube-m03); then + echo "script run completed with stderr error after passing the wrong flags: $output" + else + echo "script failed because topology-failure-domain-label is missing" + fi + - name: test enable v2 mon port run: | toolbox=$(kubectl get pod -l app=rook-ceph-tools -n rook-ceph -o jsonpath='{.items[*].metadata.name}') diff --git a/Documentation/CRDs/Cluster/.pages b/Documentation/CRDs/Cluster/.pages index 001ac3924b7b..524ac4b14ea3 100644 --- 
a/Documentation/CRDs/Cluster/.pages +++ b/Documentation/CRDs/Cluster/.pages @@ -4,5 +4,5 @@ nav: - host-cluster.md - pvc-cluster.md - stretch-cluster.md - - external-cluster.md + - external-cluster - ... diff --git a/Documentation/CRDs/Cluster/ceph-cluster-crd.md b/Documentation/CRDs/Cluster/ceph-cluster-crd.md index c6efd41fb3e1..15c8a72d7d49 100755 --- a/Documentation/CRDs/Cluster/ceph-cluster-crd.md +++ b/Documentation/CRDs/Cluster/ceph-cluster-crd.md @@ -8,7 +8,7 @@ There are primarily four different modes in which to create your cluster. 1. [Host Storage Cluster](host-cluster.md): Consume storage from host paths and raw devices 2. [PVC Storage Cluster](pvc-cluster.md): Dynamically provision storage underneath Rook by specifying the storage class Rook should use to consume storage (via PVCs) 3. [Stretched Storage Cluster](stretch-cluster.md): Distribute Ceph mons across three zones, while storage (OSDs) is only configured in two zones -4. [External Ceph Cluster](external-cluster.md): Connect your K8s applications to an external Ceph cluster +4. [External Ceph Cluster](external-cluster/external-cluster.md): Connect your K8s applications to an external Ceph cluster See the separate topics for a description and examples of each of these scenarios. @@ -24,7 +24,7 @@ Settings can be specified at the global level to apply to the cluster as a whole ### Cluster Settings * `external`: - * `enable`: if `true`, the cluster will not be managed by Rook but via an external entity. This mode is intended to connect to an existing cluster. In this case, Rook will only consume the external cluster. However, Rook will be able to deploy various daemons in Kubernetes such as object gateways, mds and nfs if an image is provided and will refuse otherwise. If this setting is enabled **all** the other options will be ignored except `cephVersion.image` and `dataDirHostPath`. See [external cluster configuration](external-cluster.md). 
If `cephVersion.image` is left blank, Rook will refuse the creation of extra CRs like object, file and nfs. + * `enable`: if `true`, the cluster will not be managed by Rook but via an external entity. This mode is intended to connect to an existing cluster. In this case, Rook will only consume the external cluster. However, Rook will be able to deploy various daemons in Kubernetes such as object gateways, mds and nfs if an image is provided and will refuse otherwise. If this setting is enabled **all** the other options will be ignored except `cephVersion.image` and `dataDirHostPath`. See [external cluster configuration](external-cluster/external-cluster.md). If `cephVersion.image` is left blank, Rook will refuse the creation of extra CRs like object, file and nfs. * `cephVersion`: The version information for launching the ceph daemons. * `image`: The image used for running the ceph daemons. For example, `quay.io/ceph/ceph:v18.2.1`. For more details read the [container images section](#ceph-container-images). For the latest ceph images, see the [Ceph DockerHub](https://hub.docker.com/r/ceph/ceph/tags/). 
diff --git a/Documentation/CRDs/Cluster/external-cluster/.pages b/Documentation/CRDs/Cluster/external-cluster/.pages new file mode 100644 index 000000000000..5a3a6ca9e41c --- /dev/null +++ b/Documentation/CRDs/Cluster/external-cluster/.pages @@ -0,0 +1,3 @@ +nav: + - external-cluster.md + - topology-for-external-mode.md diff --git a/Documentation/CRDs/Cluster/external-cluster.md b/Documentation/CRDs/Cluster/external-cluster/external-cluster.md similarity index 93% rename from Documentation/CRDs/Cluster/external-cluster.md rename to Documentation/CRDs/Cluster/external-cluster/external-cluster.md index 417ff4400c31..bc0db2bb05e1 100644 --- a/Documentation/CRDs/Cluster/external-cluster.md +++ b/Documentation/CRDs/Cluster/external-cluster/external-cluster.md @@ -60,9 +60,9 @@ python3 create-external-cluster-resources.py --rbd-data-pool-name -- * `--upgrade`: (optional) Upgrades the cephCSIKeyrings(For example: client.csi-cephfs-provisioner) and client.healthchecker ceph users with new permissions needed for the new cluster version and older permission will still be applied. * `--restricted-auth-permission`: (optional) Restrict cephCSIKeyrings auth permissions to specific pools, and cluster. Mandatory flags that need to be set are `--rbd-data-pool-name`, and `--k8s-cluster-name`. `--cephfs-filesystem-name` flag can also be passed in case of CephFS user restriction, so it can restrict users to particular CephFS filesystem. * `--v2-port-enable`: (optional) Enables the v2 mon port (3300) for mons. 
-* `--topology-pools`: (optional) comma-separated list of topology-constrained rbd pools -* `--topology-failure-domain-label`: (optional) k8s cluster failure domain label (example: zone,rack,host,etc) for the topology-pools that are matching the ceph domain -* `--topology-failure-domain-values`: (optional) comma-separated list of the k8s cluster failure domain values corresponding to each of the pools in the topology-pools list +* `--topology-pools`: (optional) Comma-separated list of topology-constrained rbd pools +* `--topology-failure-domain-label`: (optional) K8s cluster failure domain label (example: zone, rack, or host) for the topology-pools that match the ceph domain +* `--topology-failure-domain-values`: (optional) Comma-separated list of the k8s cluster failure domain values corresponding to each of the pools in the `topology-pools` list ### Multi-tenancy @@ -90,13 +90,11 @@ python3 create-external-cluster-resources.py --rbd-data-pool-name -- ### Topology Based Provisioning Enable Topology Based Provisioning for RBD pools by passing `--topology-pools`, `--topology-failure-domain-label` and `--topology-failure-domain-values` flags. -A new storageclass will be created by the import script named `ceph-rbd-topology` with `volumeBindingMode: WaitForFirstConsumer` -and will configure topologyConstrainedPools according the input provided. -Later use the storageclass to create a volume in the pool matching the topology of the pod scheduling. +A new storageclass named `ceph-rbd-topology` will be created by the import script with `volumeBindingMode: WaitForFirstConsumer`. +The storageclass is used to create a volume in the pool matching the topology where a pod is scheduled. 
+ +For more details, see the [Topology-Based Provisioning](topology-for-external-mode.md) topic. -```console -python3 create-external-cluster-resources.py --rbd-data-pool-name pool_name --topology-pools p,q,r --topology-failure-domain-label labelName --topology-failure-domain-values x,y,z --format bash -``` ### Upgrade Example @@ -248,7 +246,7 @@ Consume the S3 Storage, in two different ways: ``` !!! hint - For more details see the [Object Store topic](../../Storage-Configuration/Object-Storage-RGW/object-storage.md#connect-to-an-external-object-store) + For more details see the [Object Store topic](../../../Storage-Configuration/Object-Storage-RGW/object-storage.md#connect-to-an-external-object-store) ### Connect to v2 mon port diff --git a/Documentation/CRDs/Cluster/external-cluster/topology-for-external-mode.md b/Documentation/CRDs/Cluster/external-cluster/topology-for-external-mode.md new file mode 100644 index 000000000000..67fda8817a1a --- /dev/null +++ b/Documentation/CRDs/Cluster/external-cluster/topology-for-external-mode.md @@ -0,0 +1,118 @@ +# Topology-Based Provisioning + +## Scenario +Applications like Kafka will have a deployment with multiple running instances. Each service instance will create a new claim and is expected to be located in a different zone. Since the application has its own redundant instances, there is no requirement for redundancy at the data layer. A storage class is created that will provision storage from replica 1 Ceph pools that are located in each of the separate zones. 
+ +## Configuration Flags + +Add the required flags to the script: `create-external-cluster-resources.py`: + +- `--topology-pools`: (optional) Comma-separated list of topology-constrained rbd pools + +- `--topology-failure-domain-label`: (optional) K8s cluster failure domain label (example: zone, rack, or host) for the topology-pools that match the ceph domain + +- `--topology-failure-domain-values`: (optional) Comma-separated list of the k8s cluster failure domain values corresponding to each of the pools in the `topology-pools` list + +The import script will then create a new storage class named `ceph-rbd-topology`. + +## Example Configuration + +### Ceph cluster + +Determine the names of the zones (or other failure domains) in the Ceph CRUSH map where each of the pools will have corresponding CRUSH rules. + +Create a zone-specific CRUSH rule for each of the pools. For example, this is a CRUSH rule for `zone-a`: + +``` +$ ceph osd crush rule create-replicated + { + "rule_id": 5, + "rule_name": "rule_host-zone-a-hdd", + "type": 1, + "steps": [ + { + "op": "take", + "item": -10, + "item_name": "zone-a~hdd" + }, + { + "op": "choose_firstn", + "num": 0, + "type": "osd" + }, + { + "op": "emit" + } + ] +} +``` + +Create replica-1 pools based on each of the CRUSH rules from the previous step. Each pool must be created with a CRUSH rule to limit the pool to OSDs in a specific zone. + +!!! note + Disable the ceph warning for replica-1 pools: `ceph config set global mon_allow_pool_size_one true` + +Determine the zones in the K8s cluster that correspond to each of the pools in the Ceph pool. The K8s nodes require labels as defined with the [OSD Topology labels](../ceph-cluster-crd.md#osd-topology). Some environments already have nodes labeled in zones. Set the topology labels on the nodes if not already present. + +Set the flags of the external cluster configuration script based on the pools and failure domains. 
+ +--topology-pools=pool-a,pool-b,pool-c + +--topology-failure-domain-label=zone + +--topology-failure-domain-values=zone-a,zone-b,zone-c + +Then run the python script to generate the settings which will be imported to the Rook cluster: +``` + python3 create-external-cluster-resources.py --rbd-data-pool-name replicapool --topology-pools pool-a,pool-b,pool-c --topology-failure-domain-label zone --topology-failure-domain-values zone-a,zone-b,zone-c +``` + +Output: +``` +export ROOK_EXTERNAL_FSID=8f01d842-d4b2-11ee-b43c-0050568fb522 +.... +.... +.... +export TOPOLOGY_POOLS=pool-a,pool-b,pool-c +export TOPOLOGY_FAILURE_DOMAIN_LABEL=zone +export TOPOLOGY_FAILURE_DOMAIN_VALUES=zone-a,zone-b,zone-c +``` + +### Kubernetes Cluster + +Check the external cluster is created and connected as per the installation steps. +Review the new storage class: +``` +$ kubectl get sc ceph-rbd-topology -o yaml +allowVolumeExpansion: true +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + creationTimestamp: "2024-03-07T12:10:19Z" + name: ceph-rbd-topology + resourceVersion: "82502" + uid: 68448a14-3a78-42c5-ac29-261b6c3404af +parameters: + ... + ... + pool: replicapool + topologyConstrainedPools: | + [ + {"poolName":"pool-a", + "domainSegments":[ + {"domainLabel":"zone","value":"zone-a"}]}, + {"poolName":"pool-b", + "domainSegments":[ + {"domainLabel":"zone","value":"zone-b"}]}, + {"poolName":"pool-c", + "domainSegments":[ + {"domainLabel":"zone","value":"zone-c"}]}, + ] +provisioner: rook-ceph.rbd.csi.ceph.com +reclaimPolicy: Delete +volumeBindingMode: WaitForFirstConsumer +``` + +#### Create a Topology-Based PVC + +The topology-based storage class is ready to be consumed! Create a PVC from the `ceph-rbd-topology` storage class above, and watch the OSD usage to see how the data is spread only among the topology-based CRUSH buckets. 
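The PVC step described above can be sketched with a claim like the following. This is an illustrative example, not part of the original document: the claim name and requested size are assumptions, and only the `ceph-rbd-topology` storage class name comes from the import script output shown earlier.

```yaml
# Illustrative claim against the imported topology storage class.
# The name and size are example values; only ceph-rbd-topology is from the doc.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kafka-data-0
spec:
  storageClassName: ceph-rbd-topology
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Because the storage class uses `volumeBindingMode: WaitForFirstConsumer`, the claim stays `Pending` until a pod that mounts it is scheduled; the zone label of that pod's node then selects the matching replica-1 pool from `topologyConstrainedPools`.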
diff --git a/Documentation/Getting-Started/glossary.md b/Documentation/Getting-Started/glossary.md index 5dfd032313d4..492491d3ced7 100644 --- a/Documentation/Getting-Started/glossary.md +++ b/Documentation/Getting-Started/glossary.md @@ -64,7 +64,7 @@ CephRBDMirror CRD is used by Rook to allow creation and updating rbd-mirror daem ### External Storage Cluster -An [external cluster](../CRDs/Cluster/external-cluster.md) is a Ceph configuration that is managed outside of the local K8s cluster. +An [external cluster](../CRDs/Cluster/external-cluster/external-cluster.md) is a Ceph configuration that is managed outside of the local K8s cluster. ### Host Storage Cluster diff --git a/ROADMAP.md b/ROADMAP.md index 377ddcf51a88..749bd0fa8867 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -24,6 +24,7 @@ The following high level features are targeted for Rook v1.14 (April 2024). For * Separate CSI image repository and tag for all images in the helm chart [#13585](https://github.com/rook/rook/issues/13585) * Ceph-CSI [v3.11](https://github.com/ceph/ceph-csi/issues?q=is%3Aopen+is%3Aissue+milestone%3Arelease-v3.11.0) * Add build support for Go 1.22 [#13738](https://github.com/rook/rook/pull/13738) +* Add topology based provisioning for external clusters [#13821](https://github.com/rook/rook/pull/13821) ## Kubectl Plugin diff --git a/deploy/examples/create-external-cluster-resources.py b/deploy/examples/create-external-cluster-resources.py index acd265dcd1f0..b4404f745370 100644 --- a/deploy/examples/create-external-cluster-resources.py +++ b/deploy/examples/create-external-cluster-resources.py @@ -484,13 +484,13 @@ def gen_arg_parser(cls, args_to_parse=None): "--topology-failure-domain-label", default="", required=False, - help="k8s cluster failure domain label (example: zone,rack,host,etc) for the topology-pools that are matching the ceph domain", + help="k8s cluster failure domain label (example: zone, rack, or host) for the topology-pools that match the ceph domain", ) 
output_group.add_argument( "--topology-failure-domain-values", default="", required=False, - help="comma-separated list of the k8s cluster failure domain values corresponding to each of the pools in the topology-pools list", + help="comma-separated list of the k8s cluster failure domain values corresponding to each of the pools in the `topology-pools` list", ) upgrade_group = argP.add_argument_group("upgrade") @@ -1518,7 +1518,7 @@ def validate_rgw_multisite(self, rgw_multisite_config_name, rgw_multisite_config return "-1" return "" - def convert_comma_seprated_to_array(self, value): + def convert_comma_separated_to_array(self, value): return value.split(",") def raise_exception_if_any_topology_flag_is_missing(self): @@ -1663,16 +1663,16 @@ def _gen_output_map(self): and self._arg_parser.topology_failure_domain_values != "" ): self.validate_topology_values( - self.convert_comma_seprated_to_array(self.out_map["TOPOLOGY_POOLS"]), - self.convert_comma_seprated_to_array( + self.convert_comma_separated_to_array(self.out_map["TOPOLOGY_POOLS"]), + self.convert_comma_separated_to_array( self.out_map["TOPOLOGY_FAILURE_DOMAIN_VALUES"] ), ) self.validate_topology_rbd_pools( - self.convert_comma_seprated_to_array(self.out_map["TOPOLOGY_POOLS"]) + self.convert_comma_separated_to_array(self.out_map["TOPOLOGY_POOLS"]) ) self.init_topology_rbd_pools( - self.convert_comma_seprated_to_array(self.out_map["TOPOLOGY_POOLS"]) + self.convert_comma_separated_to_array(self.out_map["TOPOLOGY_POOLS"]) ) else: self.raise_exception_if_any_topology_flag_is_missing() @@ -1928,12 +1928,10 @@ def gen_json_out(self): "topologyFailureDomainLabel": self.out_map[ "TOPOLOGY_FAILURE_DOMAIN_LABEL" ], - "topologyFailureDomainValues": self.convert_comma_seprated_to_array( - self.out_map["TOPOLOGY_FAILURE_DOMAIN_VALUES"] - ), - "topologyPools": self.convert_comma_seprated_to_array( - self.out_map["TOPOLOGY_POOLS"] - ), + "topologyFailureDomainValues": self.out_map[ + "TOPOLOGY_FAILURE_DOMAIN_VALUES" + ], + 
"topologyPools": self.out_map["TOPOLOGY_POOLS"], "pool": self.out_map["RBD_POOL_NAME"], "csi.storage.k8s.io/provisioner-secret-name": f"rook-{self.out_map['CSI_RBD_PROVISIONER_SECRET_NAME']}", "csi.storage.k8s.io/controller-expand-secret-name": f"rook-{self.out_map['CSI_RBD_PROVISIONER_SECRET_NAME']}", From 3e54055ae9342c61dd27c953f2c2462c69c51b29 Mon Sep 17 00:00:00 2001 From: Travis Nielsen Date: Thu, 14 Mar 2024 12:33:49 -0600 Subject: [PATCH 50/65] security: operator and toolbox scc for default service account The default service account access is needed for the operator and the toolbox to run on openshift. This is a follow-up from PR 13362 that created a new default service account to use with all ceph or rook components that were relying on the default service account. Signed-off-by: Travis Nielsen --- .../templates/securityContextConstraints.yaml | 3 +-- deploy/examples/common-external.yaml | 6 ++++++ deploy/examples/direct-mount.yaml | 1 + deploy/examples/operator-openshift.yaml | 2 +- deploy/examples/toolbox-operator-image.yaml | 1 + deploy/examples/toolbox.yaml | 1 + 6 files changed, 11 insertions(+), 3 deletions(-) diff --git a/deploy/charts/rook-ceph-cluster/templates/securityContextConstraints.yaml b/deploy/charts/rook-ceph-cluster/templates/securityContextConstraints.yaml index f79bcef07f79..82a0bc363b6c 100644 --- a/deploy/charts/rook-ceph-cluster/templates/securityContextConstraints.yaml +++ b/deploy/charts/rook-ceph-cluster/templates/securityContextConstraints.yaml @@ -37,9 +37,8 @@ volumes: - secret users: # A user needs to be added for each rook service account. 
- - system:serviceaccount:{{ .Release.Namespace }}:default + - system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-default - system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-mgr - system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-osd - system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-rgw - - system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-default {{- end }} diff --git a/deploy/examples/common-external.yaml b/deploy/examples/common-external.yaml index 51f1c5fbeb6c..03e7192d9257 100644 --- a/deploy/examples/common-external.yaml +++ b/deploy/examples/common-external.yaml @@ -57,6 +57,12 @@ metadata: name: rook-ceph-cmd-reporter namespace: rook-ceph-external # namespace:cluster --- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: rook-ceph-default + namespace: rook-ceph-external # namespace:cluster +--- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: diff --git a/deploy/examples/direct-mount.yaml b/deploy/examples/direct-mount.yaml index db4487eb51ac..2788c7fc6d81 100644 --- a/deploy/examples/direct-mount.yaml +++ b/deploy/examples/direct-mount.yaml @@ -16,6 +16,7 @@ spec: app: rook-direct-mount spec: dnsPolicy: ClusterFirstWithHostNet + serviceAccountName: rook-ceph-default containers: - name: rook-direct-mount image: rook/ceph:master diff --git a/deploy/examples/operator-openshift.yaml b/deploy/examples/operator-openshift.yaml index 2c409e38591c..9243ae9f47c2 100644 --- a/deploy/examples/operator-openshift.yaml +++ b/deploy/examples/operator-openshift.yaml @@ -45,7 +45,7 @@ users: # This assumes running in the default sample "rook-ceph" namespace. # If other namespaces or service accounts are configured, they need to be updated here. 
- system:serviceaccount:rook-ceph:rook-ceph-system # serviceaccount:namespace:operator
-  - system:serviceaccount:rook-ceph:default # serviceaccount:namespace:cluster
+  - system:serviceaccount:rook-ceph:rook-ceph-default # serviceaccount:namespace:cluster
   - system:serviceaccount:rook-ceph:rook-ceph-mgr # serviceaccount:namespace:cluster
   - system:serviceaccount:rook-ceph:rook-ceph-osd # serviceaccount:namespace:cluster
   - system:serviceaccount:rook-ceph:rook-ceph-rgw # serviceaccount:namespace:cluster
diff --git a/deploy/examples/toolbox-operator-image.yaml b/deploy/examples/toolbox-operator-image.yaml
index 2bdc4adbe478..4e733c17664f 100644
--- a/deploy/examples/toolbox-operator-image.yaml
+++ b/deploy/examples/toolbox-operator-image.yaml
@@ -22,6 +22,7 @@ spec:
         app: rook-ceph-tools-operator-image
     spec:
       dnsPolicy: ClusterFirstWithHostNet
+      serviceAccountName: rook-ceph-default
       containers:
         - name: rook-ceph-tools-operator-image
           image: rook/ceph:master
diff --git a/deploy/examples/toolbox.yaml b/deploy/examples/toolbox.yaml
index adcae195cf25..ea62db8f208a 100644
--- a/deploy/examples/toolbox.yaml
+++ b/deploy/examples/toolbox.yaml
@@ -16,6 +16,7 @@ spec:
         app: rook-ceph-tools
     spec:
       dnsPolicy: ClusterFirstWithHostNet
+      serviceAccountName: rook-ceph-default
       containers:
         - name: rook-ceph-tools
           image: quay.io/ceph/ceph:v18.2.2
From a37c47641fa9a3ec0416c82d360ca209c0a6a698 Mon Sep 17 00:00:00 2001
From: sp98
Date: Mon, 11 Mar 2024 08:38:24 +0530
Subject: [PATCH 51/65] core: disable mirroring in image mode

For image mode mirroring, if cephBlockPool.Pool.Spec.Mirroring.Enable is set
to false, remove the peer cluster and disable mirroring on the pool, but only
if the user has already disabled mirroring on all of the pool images. If
mirroring is still enabled on any of the pool images, the reconcile will fail
and ask the user to manually disable mirroring on those images.
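The disable path described above can be sketched as a small decision helper. The Python model below is illustrative only (the real logic is Go in pkg/operator/ceph/pool/controller.go); the dict and JSON shapes mirror the output of `rbd mirror pool info --format json` and `rbd mirror pool status --verbose --format json`, and the function name is hypothetical:

```python
import json

def can_disable_mirroring(mirror_info: dict, verbose_status_json: str) -> bool:
    """Decide whether pool mirroring can be disabled safely.

    mirror_info corresponds to `rbd mirror pool info --format json`;
    verbose_status_json is the raw output of
    `rbd mirror pool status --verbose --format json`.
    """
    if mirror_info.get("mode") == "image":
        status = json.loads(verbose_status_json)
        # In image mode, every image must already have mirroring disabled;
        # otherwise the reconcile fails and asks the user to intervene.
        if status.get("images"):
            return False
    return True
```

Only when this check passes does the controller go on to remove each configured peer UUID and run `rbd mirror pool disable`.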
Signed-off-by: sp98 --- pkg/daemon/ceph/client/mirror.go | 37 ++++++++++++++++- pkg/daemon/ceph/client/mirror_test.go | 25 +++++++++++- pkg/daemon/ceph/client/pool.go | 25 ------------ pkg/operator/ceph/object/controller_test.go | 7 ++++ pkg/operator/ceph/pool/controller.go | 45 +++++++++++++++++++++ pkg/operator/ceph/pool/controller_test.go | 17 +++++++- 6 files changed, 125 insertions(+), 31 deletions(-) diff --git a/pkg/daemon/ceph/client/mirror.go b/pkg/daemon/ceph/client/mirror.go index fc8a1aa71e1d..438f03d3db68 100644 --- a/pkg/daemon/ceph/client/mirror.go +++ b/pkg/daemon/ceph/client/mirror.go @@ -39,6 +39,16 @@ type PeerToken struct { Namespace string `json:"namespace"` } +type MirroredImages struct { + // Images is the list of mirrored images on a pool + Images *[]Images +} + +type Images struct { + // Name of the pool image + Name string +} + var ( rbdMirrorPeerCaps = []string{"mon", "profile rbd-mirror-peer", "osd", "profile rbd"} rbdMirrorPeerKeyringID = "rbd-mirror-peer" @@ -120,7 +130,7 @@ func enablePoolMirroring(context *clusterd.Context, clusterInfo *ClusterInfo, po } // disablePoolMirroring turns off mirroring on a pool -func disablePoolMirroring(context *clusterd.Context, clusterInfo *ClusterInfo, poolName string) error { +func DisablePoolMirroring(context *clusterd.Context, clusterInfo *ClusterInfo, poolName string) error { logger.Infof("disabling mirroring for pool %q", poolName) // Build command @@ -136,7 +146,7 @@ func disablePoolMirroring(context *clusterd.Context, clusterInfo *ClusterInfo, p return nil } -func removeClusterPeer(context *clusterd.Context, clusterInfo *ClusterInfo, poolName, peerUUID string) error { +func RemoveClusterPeer(context *clusterd.Context, clusterInfo *ClusterInfo, poolName, peerUUID string) error { logger.Infof("removing cluster peer with UUID %q for the pool %q", peerUUID, poolName) // Build command @@ -175,6 +185,29 @@ func GetPoolMirroringStatus(context *clusterd.Context, clusterInfo *ClusterInfo, return 
&poolMirroringStatus, nil } +// GetMirroredPoolImages returns a list of mirrored images for a given pool +func GetMirroredPoolImages(context *clusterd.Context, clusterInfo *ClusterInfo, poolName string) (*MirroredImages, error) { + logger.Debugf("retrieving mirrored images for pool %q", poolName) + + // Build command + args := []string{"mirror", "pool", "status", "--verbose", poolName} + cmd := NewRBDCommand(context, clusterInfo, args) + cmd.JsonOutput = true + + // Run command + buf, err := cmd.Run() + if err != nil { + return nil, errors.Wrapf(err, "failed to retrieve mirroring pool %q status", poolName) + } + + var mirroredImages MirroredImages + if err := json.Unmarshal([]byte(buf), &mirroredImages); err != nil { + return nil, errors.Wrap(err, "failed to unmarshal mirror pool status response") + } + + return &mirroredImages, nil +} + // GetPoolMirroringInfo prints the pool mirroring information func GetPoolMirroringInfo(context *clusterd.Context, clusterInfo *ClusterInfo, poolName string) (*cephv1.PoolMirroringInfo, error) { logger.Debugf("retrieving mirroring pool %q info", poolName) diff --git a/pkg/daemon/ceph/client/mirror_test.go b/pkg/daemon/ceph/client/mirror_test.go index e4b6a529d63d..22580cf7dfeb 100644 --- a/pkg/daemon/ceph/client/mirror_test.go +++ b/pkg/daemon/ceph/client/mirror_test.go @@ -29,6 +29,7 @@ import ( var ( bootstrapPeerToken = `eyJmc2lkIjoiYzZiMDg3ZjItNzgyOS00ZGJiLWJjZmMtNTNkYzM0ZTBiMzVkIiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFBV1lsWmZVQ1Q2RGhBQVBtVnAwbGtubDA5YVZWS3lyRVV1NEE9PSIsIm1vbl9ob3N0IjoiW3YyOjE5Mi4xNjguMTExLjEwOjMzMDAsdjE6MTkyLjE2OC4xMTEuMTA6Njc4OV0sW3YyOjE5Mi4xNjguMTExLjEyOjMzMDAsdjE6MTkyLjE2OC4xMTEuMTI6Njc4OV0sW3YyOjE5Mi4xNjguMTExLjExOjMzMDAsdjE6MTkyLjE2OC4xMTEuMTE6Njc4OV0ifQ==` //nolint:gosec // This is just a var name, not a real token mirrorStatus = `{"summary":{"health":"WARNING","daemon_health":"OK","image_health":"WARNING","states":{"starting_replay":1,"replaying":1}}}` + mirrorStatusVerbose = 
`{"summary":{"health":"WARNING","daemon_health":"OK","image_health":"WARNING","states":{"starting_replay":1,"replaying":1}}, "images":[{"name":"test","global_id":"ebad309d-4d8f-4c6f-afe0-46e02ace26ac","state":"up+stopped","description":"local image is primary","daemon_service":{"service_id":"4339","instance_id":"4644","daemon_id":"a","hostname":"fv-az1195-222"},"last_update":"2024-03-18 04:16:54","peer_sites":[{"site_name":"4c3e1cb8-8129-43ab-8d75-abc20154fd4a","mirror_uuids":"a71727e9-63a4-4386-a44b-4cf48dc77fa8","state":"up+replaying","description":"replaying, {\"bytes_per_second\":0.0,\"bytes_per_snapshot\":0.0,\"last_snapshot_bytes\":0,\"last_snapshot_sync_seconds\":0,\"remote_snapshot_timestamp\":1710734899,\"replay_state\":\"idle\"}","last_update":"2024-03-18 04:16:36"}]}]}` mirrorInfo = `{"mode":"image","site_name":"39074576-5884-4ef3-8a4d-8a0c5ed33031","peers":[{"uuid":"4a6983c0-3c9d-40f5-b2a9-2334a4659827","direction":"rx-tx","site_name":"ocs","mirror_uuid":"","client_name":"client.rbd-mirror-peer"}]}` snapshotScheduleStatus = `[{"schedule_time": "14:00:00-05:00", "image": "foo"}, {"schedule_time": "08:00:00-05:00", "image": "bar"}]` snapshotScheduleList = `[{"interval":"3d","start_time":""},{"interval":"1d","start_time":"14:00:00-05:00"}]` @@ -101,6 +102,26 @@ func TestGetPoolMirroringStatus(t *testing.T) { assert.Equal(t, "OK", poolMirrorStatus.Summary.DaemonHealth) } +func TestGetMirroredPoolImages(t *testing.T) { + pool := "pool-test" + executor := &exectest.MockExecutor{} + executor.MockExecuteCommandWithOutput = func(command string, args ...string) (string, error) { + if args[0] == "mirror" { + assert.Equal(t, "pool", args[1]) + assert.Equal(t, "status", args[2]) + assert.Equal(t, "--verbose", args[3]) + assert.Equal(t, pool, args[4]) + return mirrorStatusVerbose, nil + } + return "", errors.New("unknown command") + } + context := &clusterd.Context{Executor: executor} + + mirroredImages, err := GetMirroredPoolImages(context, 
AdminTestClusterInfo("mycluster"), pool) + assert.NoError(t, err) + assert.Equal(t, 1, len(*mirroredImages.Images)) +} + func TestImportRBDMirrorBootstrapPeer(t *testing.T) { pool := "pool-test" executor := &exectest.MockExecutor{} @@ -328,7 +349,7 @@ func TestDisableMirroring(t *testing.T) { } context := &clusterd.Context{Executor: executor} - err := disablePoolMirroring(context, AdminTestClusterInfo("mycluster"), pool) + err := DisablePoolMirroring(context, AdminTestClusterInfo("mycluster"), pool) assert.NoError(t, err) } @@ -349,6 +370,6 @@ func TestRemoveClusterPeer(t *testing.T) { } context := &clusterd.Context{Executor: executor} - err := removeClusterPeer(context, AdminTestClusterInfo("mycluster"), pool, peerUUID) + err := RemoveClusterPeer(context, AdminTestClusterInfo("mycluster"), pool, peerUUID) assert.NoError(t, err) } diff --git a/pkg/daemon/ceph/client/pool.go b/pkg/daemon/ceph/client/pool.go index 317d316fb50b..b7089f94baf7 100644 --- a/pkg/daemon/ceph/client/pool.go +++ b/pkg/daemon/ceph/client/pool.go @@ -332,32 +332,7 @@ func setCommonPoolProperties(context *clusterd.Context, clusterInfo *ClusterInfo return errors.Wrapf(err, "failed to enable snapshot scheduling for pool %q", pool.Name) } } - } else { - if pool.Mirroring.Mode == "pool" { - // Remove storage cluster peers - mirrorInfo, err := GetPoolMirroringInfo(context, clusterInfo, pool.Name) - if err != nil { - return errors.Wrapf(err, "failed to get mirroring info for the pool %q", pool.Name) - } - for _, peer := range mirrorInfo.Peers { - if peer.UUID != "" { - err := removeClusterPeer(context, clusterInfo, pool.Name, peer.UUID) - if err != nil { - return errors.Wrapf(err, "failed to remove cluster peer with UUID %q for the pool %q", peer.UUID, pool.Name) - } - } - } - - // Disable mirroring - err = disablePoolMirroring(context, clusterInfo, pool.Name) - if err != nil { - return errors.Wrapf(err, "failed to disable mirroring for pool %q", pool.Name) - } - } else if pool.Mirroring.Mode == 
"image" { - logger.Warningf("manually disable mirroring on images in the pool %q", pool.Name) - } } - // set maxSize quota if pool.Quotas.MaxSize != nil { // check for format errors diff --git a/pkg/operator/ceph/object/controller_test.go b/pkg/operator/ceph/object/controller_test.go index cb73d0920df5..1920a834b015 100644 --- a/pkg/operator/ceph/object/controller_test.go +++ b/pkg/operator/ceph/object/controller_test.go @@ -541,6 +541,13 @@ func TestCephObjectStoreController(t *testing.T) { {"poolnum":9,"poolname":"my-store.rgw.buckets.data"} ]`, nil } + if args[0] == "mirror" && args[2] == "info" { + return "{}", nil + } + if args[0] == "mirror" && args[2] == "disable" { + return "", nil + } + return "", nil }, MockExecuteCommandWithTimeout: func(timeout time.Duration, command string, args ...string) (string, error) { diff --git a/pkg/operator/ceph/pool/controller.go b/pkg/operator/ceph/pool/controller.go index d2ffefecd96a..394d2b4e7180 100644 --- a/pkg/operator/ceph/pool/controller.go +++ b/pkg/operator/ceph/pool/controller.go @@ -340,6 +340,11 @@ func (r *ReconcileCephBlockPool) reconcile(request reconcile.Request) (reconcile // If not mirrored there is no Status Info field to fulfil } else { + // disable mirroring + err := r.disableMirroring(cephBlockPool) + if err != nil { + return reconcile.Result{}, *cephBlockPool, errors.Wrapf(err, "failed to disable mirroring on pool %q", cephBlockPool.Name) + } // update ObservedGeneration in status at the end of reconcile // Set Ready status, we are done reconciling updateStatus(r.opManagerContext, r.client, request.NamespacedName, cephv1.ConditionReady, nil, observedGeneration) @@ -497,3 +502,43 @@ func (r *ReconcileCephBlockPool) cancelMirrorMonitoring(cephBlockPool *cephv1.Ce delete(r.blockPoolContexts, channelKey) } } + +func (r *ReconcileCephBlockPool) disableMirroring(pool *cephv1.CephBlockPool) error { + mirrorInfo, err := cephclient.GetPoolMirroringInfo(r.context, r.clusterInfo, pool.Name) + if err != nil { + 
return errors.Wrapf(err, "failed to get mirroring info for the pool %q", pool.Name) + } + + if mirrorInfo.Mode == "image" { + mirroredPools, err := cephclient.GetMirroredPoolImages(r.context, r.clusterInfo, pool.Name) + if err != nil { + return errors.Wrapf(err, "failed to list mirrored images for pool %q", pool.Name) + } + + if len(*mirroredPools.Images) > 0 { + msg := fmt.Sprintf("there are images in the pool %q. Please manually disable mirroring for each image", pool.Name) + logger.Errorf(msg) + return errors.New(msg) + } + } + + // Remove storage cluster peers + for _, peer := range mirrorInfo.Peers { + if peer.UUID != "" { + err := cephclient.RemoveClusterPeer(r.context, r.clusterInfo, pool.Name, peer.UUID) + if err != nil { + return errors.Wrapf(err, "failed to remove cluster peer with UUID %q for the pool %q", peer.UUID, pool.Name) + } + logger.Infof("successfully removed peer site %q for the pool %q", peer.UUID, pool.Name) + } + } + + // Disable mirroring on pool + err = cephclient.DisablePoolMirroring(r.context, r.clusterInfo, pool.Name) + if err != nil { + return errors.Wrapf(err, "failed to disable mirroring for pool %q", pool.Name) + } + logger.Infof("successfully disabled mirroring on the pool %q", pool.Name) + + return nil +} diff --git a/pkg/operator/ceph/pool/controller_test.go b/pkg/operator/ceph/pool/controller_test.go index 495537413f6c..96da48a8a690 100644 --- a/pkg/operator/ceph/pool/controller_test.go +++ b/pkg/operator/ceph/pool/controller_test.go @@ -75,7 +75,14 @@ func TestCreatePool(t *testing.T) { } } if command == "rbd" { - assert.Equal(t, []string{"pool", "init", p.Name}, args[0:3]) + if args[0] == "mirror" && args[2] == "info" { + return "{}", nil + } else if args[0] == "mirror" && args[2] == "disable" { + return "", nil + } else { + assert.Equal(t, []string{"pool", "init", p.Name}, args[0:3]) + } + } return "", nil }, @@ -335,8 +342,11 @@ func TestCephBlockPoolController(t *testing.T) { if args[0] == "config" && args[2] == "mgr" && 
args[3] == "mgr/prometheus/rbd_stats_pools" { return "", nil } - + if args[0] == "mirror" && args[1] == "pool" && args[2] == "info" { + return "{}", nil + } return "", nil + }, } c.Executor = executor @@ -426,6 +436,9 @@ func TestCephBlockPoolController(t *testing.T) { if args[0] == "mirror" && args[1] == "pool" && args[2] == "peer" && args[3] == "bootstrap" && args[4] == "create" { return `eyJmc2lkIjoiYzZiMDg3ZjItNzgyOS00ZGJiLWJjZmMtNTNkYzM0ZTBiMzVkIiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFBV1lsWmZVQ1Q2RGhBQVBtVnAwbGtubDA5YVZWS3lyRVV1NEE9PSIsIm1vbl9ob3N0IjoiW3YyOjE5Mi4xNjguMTExLjEwOjMzMDAsdjE6MTkyLjE2OC4xMTEuMTA6Njc4OV0sW3YyOjE5Mi4xNjguMTExLjEyOjMzMDAsdjE6MTkyLjE2OC4xMTEuMTI6Njc4OV0sW3YyOjE5Mi4xNjguMTExLjExOjMzMDAsdjE6MTkyLjE2OC4xMTEuMTE6Njc4OV0ifQ==`, nil } + if args[0] == "mirror" && args[1] == "pool" && args[2] == "info" { + return "{}", nil + } return "", nil }, } From 6572364aa0e107775dbb6b254d150c41e8ff04e4 Mon Sep 17 00:00:00 2001 From: subhamkrai Date: Fri, 8 Mar 2024 15:20:53 +0530 Subject: [PATCH 52/65] build: remove csv generation from build since we don't need the csv generation to be part of our build image let's remove that. And add a check in CI to validate modified csv files. 
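The validation step added to CI amounts to: regenerate the CSV files, then fail if `git status --porcelain` reports any changed paths. A minimal Python sketch of that detection (the repository implements this in tests/scripts/validate_modified_files.sh; the helper names here are illustrative):

```python
def modified_files(porcelain_output: str) -> list:
    """Parse `git status --porcelain` output into a list of changed paths."""
    files = []
    for line in porcelain_output.splitlines():
        if line.strip():
            # Each line is "<XY> <path>", e.g. " M build/csv/foo.yaml".
            files.append(line[3:].strip())
    return files

def validate(porcelain_output: str, error_msg: str) -> None:
    """Fail the check when the working tree is dirty after regeneration."""
    changed = modified_files(porcelain_output)
    if changed:
        raise RuntimeError(f"{error_msg}: {changed}")
```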
Signed-off-by: subhamkrai --- .github/workflows/build.yml | 8 +++++- Makefile | 15 ++++------- deploy/examples/operator-openshift.yaml | 8 +++--- deploy/olm/assemble/metadata-common.yaml | 25 ++++++++---------- deploy/olm/assemble/metadata-ocp.yaml | 6 ++--- images/ceph/Dockerfile | 1 - images/ceph/Makefile | 11 +------- tests/scripts/validate_modified_files.sh | 33 ++++++++++++++---------- 8 files changed, 50 insertions(+), 57 deletions(-) diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index f10d02689f7b..7554ada59b76 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -60,7 +60,7 @@ jobs: run: tests/scripts/validate_modified_files.sh modcheck - name: run crds-gen - run: make csv-clean && GOPATH=$(go env GOPATH) make crds + run: GOPATH=$(go env GOPATH) make crds - name: validate crds-gen run: tests/scripts/validate_modified_files.sh crd @@ -97,3 +97,9 @@ jobs: - name: build.all rook with go ${{ matrix.go-version }} run: | tests/scripts/github-action-helper.sh build_rook_all + + - name: run gen-csv + run: make csv-clean && GOPATH=$(go env GOPATH) make gen-csv + + - name: validate gen-csv + run: tests/scripts/validate_modified_files.sh gen-csv diff --git a/Makefile b/Makefile index 140093b5ffd4..574a585aed8f 100644 --- a/Makefile +++ b/Makefile @@ -118,11 +118,6 @@ build.version: @mkdir -p $(OUTPUT_DIR) @echo "$(VERSION)" > $(OUTPUT_DIR)/version -# Change how CRDs are generated for CSVs -ifneq ($(REAL_HOST_PLATFORM),darwin_arm64) -build.common: export NO_OB_OBC_VOL_GEN=true -build.common: export MAX_DESC_LEN=0 -endif build.common: export SKIP_GEN_CRD_DOCS=true build.common: build.version helm.build mod.check crds gen-rbac @$(MAKE) go.init @@ -134,7 +129,7 @@ do.build.platform.%: do.build.parallel: $(foreach p,$(PLATFORMS_TO_BUILD_FOR), do.build.platform.$(p)) -build: csv-clean build.common ## Only build for linux platform +build: build.common ## Only build for linux platform @$(MAKE) go.build 
PLATFORM=linux_$(GOHOSTARCH) @$(MAKE) -C images PLATFORM=linux_$(GOHOSTARCH) @@ -172,7 +167,7 @@ codegen: ${CODE_GENERATOR} ## Run code generators. mod.check: go.mod.check ## Check if any go modules changed. mod.update: go.mod.update ## Update all go modules. -clean: csv-clean ## Remove all files that are created by building. +clean: ## Remove all files that are created by building. @$(MAKE) go.mod.clean @$(MAKE) -C images clean @rm -fr $(OUTPUT_DIR) $(WORK_DIR) @@ -184,9 +179,9 @@ prune: ## Prune cached artifacts. @$(MAKE) -C images prune # Change how CRDs are generated for CSVs -csv: export MAX_DESC_LEN=0 # sets the description length to 0 since CSV cannot be bigger than 1MB -csv: export NO_OB_OBC_VOL_GEN=true -csv: csv-clean crds ## Generate a CSV file for OLM. +gen-csv: export MAX_DESC_LEN=0 # sets the description length to 0 since CSV cannot be bigger than 1MB +gen-csv: export NO_OB_OBC_VOL_GEN=true +gen-csv: csv-clean crds ## Generate a CSV file for OLM. $(MAKE) -C images/ceph csv csv-clean: ## Remove existing OLM files. 
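The `MAX_DESC_LEN=0` export on the `gen-csv` target above exists because a ClusterServiceVersion cannot exceed roughly 1MB, so long CRD field descriptions are dropped during generation. The following Python sketch only illustrates the effect of stripping descriptions from a schema dict; it is not the actual controller-gen implementation, and the schema shape is a hypothetical example:

```python
def truncate_descriptions(schema, max_len=0):
    """Recursively trim `description` fields in a CRD openAPIV3Schema dict.

    With max_len=0 the descriptions are removed entirely, which is how the
    generated CRDs stay small enough to embed in a CSV.
    """
    if isinstance(schema, dict):
        desc = schema.get("description")
        if isinstance(desc, str):
            if max_len == 0:
                del schema["description"]
            else:
                schema["description"] = desc[:max_len]
        for value in schema.values():
            truncate_descriptions(value, max_len)
    elif isinstance(schema, list):
        for item in schema:
            truncate_descriptions(item, max_len)
    return schema
```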
diff --git a/deploy/examples/operator-openshift.yaml b/deploy/examples/operator-openshift.yaml index 9243ae9f47c2..322e49c82cc9 100644 --- a/deploy/examples/operator-openshift.yaml +++ b/deploy/examples/operator-openshift.yaml @@ -658,10 +658,10 @@ spec: app: rook-ceph-operator spec: tolerations: - - effect: NoExecute - key: node.kubernetes.io/unreachable - operator: Exists - tolerationSeconds: 5 + - key: node.ocs.openshift.io/storage + operator: Equal + value: "true" + effect: NoSchedule serviceAccountName: rook-ceph-system containers: - name: rook-ceph-operator diff --git a/deploy/olm/assemble/metadata-common.yaml b/deploy/olm/assemble/metadata-common.yaml index d590491a3d1d..af4a42c22a93 100644 --- a/deploy/olm/assemble/metadata-common.yaml +++ b/deploy/olm/assemble/metadata-common.yaml @@ -1,5 +1,4 @@ spec: - replaces: rook-ceph.v1.1.1 customresourcedefinitions: owned: - kind: CephCluster @@ -213,19 +212,10 @@ spec: "block storage", "shared filesystem", ] - minKubeVersion: 1.10.0 - labels: - alm-owner-etcd: rookoperator - operated-by: rookoperator - selector: - matchLabels: - alm-owner-etcd: rookoperator - operated-by: rookoperator + minKubeVersion: 1.16.0 links: - - name: Blog - url: https://blog.rook.io - - name: Documentation - url: https://rook.github.io/docs/rook/v1.0/ + - name: Source Code + url: https://github.com/red-hat-storage/rook icon: - base64data: 
PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiPz4KPCEtLSBHZW5lcmF0b3I6IEFkb2JlIElsbHVzdHJhdG9yIDIzLjAuMiwgU1ZHIEV4cG9ydCBQbHVnLUluIC4gU1ZHIFZlcnNpb246IDYuMDAgQnVpbGQgMCkgIC0tPgo8c3ZnIHZlcnNpb249IjEuMSIgaWQ9IkxheWVyXzEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgeG1sbnM6eGxpbms9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkveGxpbmsiIHg9IjBweCIgeT0iMHB4IgoJIHZpZXdCb3g9IjAgMCA3MCA3MCIgc3R5bGU9ImVuYWJsZS1iYWNrZ3JvdW5kOm5ldyAwIDAgNzAgNzA7IiB4bWw6c3BhY2U9InByZXNlcnZlIj4KPHN0eWxlIHR5cGU9InRleHQvY3NzIj4KCS5zdDB7ZmlsbDojMkIyQjJCO30KPC9zdHlsZT4KPGc+Cgk8Zz4KCQk8Zz4KCQkJPHBhdGggY2xhc3M9InN0MCIgZD0iTTUwLjUsNjcuNkgxOS45Yy04LDAtMTQuNS02LjUtMTQuNS0xNC41VjI5LjJjMC0xLjEsMC45LTIuMSwyLjEtMi4xaDU1LjRjMS4xLDAsMi4xLDAuOSwyLjEsMi4xdjIzLjkKCQkJCUM2NSw2MS4xLDU4LjUsNjcuNiw1MC41LDY3LjZ6IE05LjYsMzEuMnYyMS45YzAsNS43LDQuNiwxMC4zLDEwLjMsMTAuM2gzMC42YzUuNywwLDEwLjMtNC42LDEwLjMtMTAuM1YzMS4ySDkuNnoiLz4KCQk8L2c+CgkJPGc+CgkJCTxwYXRoIGNsYXNzPSJzdDAiIGQ9Ik00Mi40LDU2LjdIMjhjLTEuMSwwLTIuMS0wLjktMi4xLTIuMXYtNy4yYzAtNS4xLDQuMi05LjMsOS4zLTkuM3M5LjMsNC4yLDkuMyw5LjN2Ny4yCgkJCQlDNDQuNSw1NS43LDQzLjYsNTYuNyw0Mi40LDU2Ljd6IE0zMCw1Mi41aDEwLjN2LTUuMmMwLTIuOS0yLjMtNS4yLTUuMi01LjJjLTIuOSwwLTUuMiwyLjMtNS4yLDUuMlY1Mi41eiIvPgoJCTwvZz4KCQk8Zz4KCQkJPHBhdGggY2xhc3M9InN0MCIgZD0iTTYyLjksMjMuMkM2Mi45LDIzLjIsNjIuOSwyMy4yLDYyLjksMjMuMmwtMTEuMSwwYy0xLjEsMC0yLjEtMC45LTIuMS0yLjFjMC0xLjEsMC45LTIuMSwyLjEtMi4xCgkJCQljMCwwLDAsMCwwLDBsOS4xLDBWNi43aC02Ljl2My41YzAsMC41LTAuMiwxLjEtMC42LDEuNWMtMC40LDAuNC0wLjksMC42LTEuNSwwLjZsMCwwbC0xMS4xLDBjLTEuMSwwLTIuMS0wLjktMi4xLTIuMVY2LjdoLTYuOQoJCQkJdjMuNWMwLDEuMS0wLjksMi4xLTIuMSwyLjFsLTExLjEsMGMtMC41LDAtMS4xLTAuMi0xLjUtMC42Yy0wLjQtMC40LTAuNi0wLjktMC42LTEuNVY2LjdIOS42djEyLjRoOWMxLjEsMCwyLjEsMC45LDIuMSwyLjEKCQkJCXMtMC45LDIuMS0yLjEsMi4xaC0xMWMtMS4xLDAtMi4xLTAuOS0yLjEtMi4xVjQuNmMwLTEuMSwwLjktMi4xLDIuMS0yLjFoMTEuMWMxLjEsMCwyLjEsMC45LDIuMSwyLjF2My41bDcsMFY0LjYKCQkJCWMwLTEuMSwwLjktMi4xLDIuMS0yLjFoMTEuMWMxLjEsMCwyLjEsMC45LDIuMSwyLjF2My41bDYuOSwwVjQuNmMwLTEuMSwwLjktMi4xLDIuMS0yLjFoMTEuMUM2NCwyLjYsNjUsMy41LDY1LDQuNnYxNi41
CgkJCQljMCwwLjUtMC4yLDEuMS0wLjYsMS41QzY0LDIzLDYzLjQsMjMuMiw2Mi45LDIzLjJ6Ii8+CgkJPC9nPgoJPC9nPgo8L2c+Cjwvc3ZnPg== mediatype: image/svg+xml @@ -242,7 +232,7 @@ spec: metadata: annotations: tectonic-visibility: ocs - repository: https://github.com/rook/rook + repository: https://github.com/red-hat-storage/rook containerImage: "{{.RookOperatorImage}}" alm-examples: |- [ @@ -438,3 +428,10 @@ metadata: } } ] + relatedImages: + - image: docker.io/rook/ceph:master + name: rook-container + - image: quay.io/ceph/ceph:v18.2.0 + name: ceph-container + - image: quay.io/csiaddons/k8s-sidecar:v0.8.0 + name: csiaddons-sidecar diff --git a/deploy/olm/assemble/metadata-ocp.yaml b/deploy/olm/assemble/metadata-ocp.yaml index 97472c92396e..e6f1971144de 100644 --- a/deploy/olm/assemble/metadata-ocp.yaml +++ b/deploy/olm/assemble/metadata-ocp.yaml @@ -13,7 +13,7 @@ spec: - privileged serviceAccountName: rook-ceph-system maintainers: - - name: Red Hat, Inc. - email: customerservice@redhat.com + - name: Red Hat Support + email: ocs-support@redhat.com provider: - name: Red Hat, Inc. 
+ name: Red Hat diff --git a/images/ceph/Dockerfile b/images/ceph/Dockerfile index 268926856e95..e1fdf1230b4f 100644 --- a/images/ceph/Dockerfile +++ b/images/ceph/Dockerfile @@ -32,7 +32,6 @@ RUN curl --fail -sSL -o /s5cmd.tar.gz https://github.com/peak/s5cmd/releases/dow COPY rook toolbox.sh set-ceph-debug-level /usr/local/bin/ COPY ceph-monitoring /etc/ceph-monitoring COPY rook-external /etc/rook-external/ -COPY ceph-csv-templates /etc/ceph-csv-templates RUN useradd rook -u 2016 # 2016 is the UID of the rook user and also the year of the first commit in the project USER 2016 ENTRYPOINT ["/usr/local/bin/rook"] diff --git a/images/ceph/Makefile b/images/ceph/Makefile index 88f10dd95981..36d9f0aae25e 100755 --- a/images/ceph/Makefile +++ b/images/ceph/Makefile @@ -80,15 +80,6 @@ do.build: @cp $(MANIFESTS_DIR)/create-external-cluster-resources.* $(BUILD_CONTEXT_DIR)/rook-external/ @cp ../../tests/ceph-status-out $(BUILD_CONTEXT_DIR)/rook-external/test-data/ -ifeq ($(INCLUDE_CSV_TEMPLATES),true) - @$(MAKE) csv - @cp -r ../../build/csv $(BUILD_CONTEXT_DIR)/ceph-csv-templates - @rm $(BUILD_CONTEXT_DIR)/ceph-csv-templates/csv-gen.sh - @$(MAKE) csv-clean - -else - mkdir $(BUILD_CONTEXT_DIR)/ceph-csv-templates -endif @cd $(BUILD_CONTEXT_DIR) && $(SED_IN_PLACE) 's|BASEIMAGE|$(BASEIMAGE)|g' Dockerfile @if [ -z "$(BUILD_CONTAINER_IMAGE)" ]; then\ $(DOCKERCMD) build $(BUILD_ARGS) \ @@ -125,11 +116,11 @@ csv: $(OPERATOR_SDK) $(YQv3) @../../build/csv/csv-gen.sh @# #adding 2>/dev/null since CI doesn't seems to be creating bundle.Dockerfile file @rm bundle.Dockerfile 2> /dev/null || true + @git restore $(MANIFESTS_DIR)/crds.yaml ../../deploy/charts/rook-ceph/templates/resources.yaml csv-clean: ## Remove existing OLM files. 
@rm -fr ../../build/csv/ceph/${go env GOARCH}
 	@rm -f ../../build/csv/rook-ceph.clusterserviceversion.yaml
-	@git restore $(MANIFESTS_DIR)/crds.yaml ../../deploy/charts/rook-ceph/templates/resources.yaml

 # reading from a file and outputting to the same file can have undefined results, so use this intermediate
 IMAGE_TMP="/tmp/rook-ceph-image-list"
diff --git a/tests/scripts/validate_modified_files.sh b/tests/scripts/validate_modified_files.sh
index 1b51d81ca9d7..c500c709c620 100755
--- a/tests/scripts/validate_modified_files.sh
+++ b/tests/scripts/validate_modified_files.sh
@@ -9,11 +9,12 @@ MOD_ERR="changes found by mod.check. You may need to run make clean"
 CRD_ERR="changes found by 'make crds'. please run 'make crds' locally and update your PR"
 BUILD_ERR="changes found by make build', please commit your go.sum or other changed files"
 HELM_ERR="changes found by 'make gen-rbac'. please run 'make gen-rbac' locally and update your PR"
+CSV_ERR="changes found by 'make gen-csv'. please run 'make gen-csv' locally and update your PR"

 #############
 # FUNCTIONS #
 #############
-function validate(){
+function validate() {
     git=$(git status --porcelain)
     for file in $git; do
         if [ -n "$file" ]; then
@@ -29,22 +30,26 @@ function validate(){
 # MAIN #
 ########
 case "$1" in
-    codegen)
-        validate "$CODEGEN_ERR"
+codegen)
+    validate "$CODEGEN_ERR"
     ;;
-    modcheck)
-        validate "$MOD_ERR"
+modcheck)
+    validate "$MOD_ERR"
     ;;
-    crd)
-        validate "$CRD_ERR"
+crd)
+    validate "$CRD_ERR"
     ;;
-    build)
-        validate "$BUILD_ERR"
+build)
+    validate "$BUILD_ERR"
     ;;
-    gen-rbac)
-        validate "$HELM_ERR"
+gen-rbac)
+    validate "$HELM_ERR"
+    ;;
+gen-csv)
+    validate "$CSV_ERR"
+    ;;
+*)
+    echo $"Usage: $0 {codegen|modcheck|crd|build|gen-rbac|gen-csv}"
+    exit 1
     ;;
-    *)
-        echo $"Usage: $0 {codegen|modcheck|crd|build|gen-rbac}"
-        exit 1
 esac
From d7332ad57fe340479660dc8fd684def0ee770da6 Mon Sep 17 00:00:00 2001
From: subhamkrai
Date: Fri, 15 Mar 2024 15:03:42 +0530
Subject: [PATCH 53/65] build: add csv envs to rook deploy envs
moving all the default envs which were part of ocs-op rook csv to operator deployment. Signed-off-by: subhamkrai --- deploy/examples/operator-openshift.yaml | 139 +++++++++++++++--------- 1 file changed, 87 insertions(+), 52 deletions(-) diff --git a/deploy/examples/operator-openshift.yaml b/deploy/examples/operator-openshift.yaml index 322e49c82cc9..21a7a0340786 100644 --- a/deploy/examples/operator-openshift.yaml +++ b/deploy/examples/operator-openshift.yaml @@ -678,76 +678,111 @@ spec: name: default-config-dir env: - name: ROOK_CURRENT_NAMESPACE_ONLY + valueFrom: + configMapKeyRef: + key: ROOK_CURRENT_NAMESPACE_ONLY + name: ocs-operator-config + - name: CSI_REMOVE_HOLDER_PODS + valueFrom: + configMapKeyRef: + key: CSI_REMOVE_HOLDER_PODS + name: ocs-operator-config + - name: ROOK_ALLOW_MULTIPLE_FILESYSTEMS value: "false" - - # Whether to start pods as privileged that mount a host path, which includes the Ceph mon and osd pods. - # Set this to true if SELinux is enabled (e.g. OpenShift) to workaround the anyuid issues. - # For more details see https://github.com/rook/rook/issues/1314#issuecomment-355799641 + - name: ROOK_LOG_LEVEL + value: INFO + - name: ROOK_CEPH_STATUS_CHECK_INTERVAL + value: 60s + - name: ROOK_MON_HEALTHCHECK_INTERVAL + value: 45s + - name: ROOK_MON_OUT_TIMEOUT + value: 600s + - name: ROOK_DISCOVER_DEVICES_INTERVAL + value: 60m - name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED value: "true" - # Provide customised regex as the values using comma. For eg. regex for rbd based volume, value will be like "(?i)rbd[0-9]+". - # In case of more than one regex, use comma to separate between them. - # Default regex will be "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+" - # add regex expression after putting a comma to blacklist a disk - # If value is empty, the default regex will be used. 
- - name: DISCOVER_DAEMON_UDEV_BLACKLIST - value: "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+" - - # Whether to start machineDisruptionBudget and machineLabel controller to watch for the osd pods and MDBs. + - name: ROOK_ENABLE_SELINUX_RELABELING + value: "true" + - name: ROOK_ENABLE_FSGROUP + value: "true" + - name: ROOK_ENABLE_FLEX_DRIVER + value: "false" + - name: ROOK_ENABLE_DISCOVERY_DAEMON + value: "false" - name: ROOK_ENABLE_MACHINE_DISRUPTION_BUDGET value: "false" - - # - name: DISCOVER_DAEMON_RESOURCES - # value: | - # resources: - # limits: - # memory: 512Mi - # requests: - # cpu: 100m - # memory: 128Mi - - # Time to wait until the node controller will move Rook pods to other - # nodes after detecting an unreachable node. - # Pods affected by this setting are: - # mgr, rbd, mds, rgw, nfs, PVC based mons and osds, and ceph toolbox - # The value used in this variable replaces the default value of 300 secs - # added automatically by k8s as Toleration for - # - # The total amount of time to reschedule Rook pods in healthy nodes - # before detecting a condition will be the sum of: - # --> node-monitor-grace-period: 40 seconds (k8s kube-controller-manager flag) - # --> ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS: 5 seconds - - name: ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS + - name: ROOK_DISABLE_DEVICE_HOTPLUG + value: "true" + - name: ROOK_CSI_ALLOW_UNSUPPORTED_VERSION + value: "true" + - name: ROOK_DISABLE_ADMISSION_CONTROLLER + value: "true" + - name: ROOK_CSIADDONS_IMAGE + value: quay.io/csiaddons/k8s-sidecar:v0.6.0 + - name: ROOK_OBC_PROVISIONER_NAME_PREFIX + value: openshift-storage + - name: CSI_ENABLE_METADATA + value: "false" + - name: CSI_PLUGIN_PRIORITY_CLASSNAME + value: system-node-critical + - name: CSI_PROVISIONER_PRIORITY_CLASSNAME + value: system-cluster-critical + - name: CSI_CLUSTER_NAME + valueFrom: + configMapKeyRef: + key: CSI_CLUSTER_NAME + name: ocs-operator-config + - name: CSI_DRIVER_NAME_PREFIX + value: openshift-storage + - name: 
CSI_ENABLE_TOPOLOGY + valueFrom: + configMapKeyRef: + key: CSI_ENABLE_TOPOLOGY + name: ocs-operator-config + - name: CSI_TOPOLOGY_DOMAIN_LABELS + valueFrom: + configMapKeyRef: + key: CSI_TOPOLOGY_DOMAIN_LABELS + name: ocs-operator-config + - name: ROOK_CSI_ENABLE_NFS + valueFrom: + configMapKeyRef: + key: ROOK_CSI_ENABLE_NFS + name: ocs-operator-config + - name: CSI_PROVISIONER_TOLERATIONS + value: |2- + + - key: node.ocs.openshift.io/storage + operator: Equal + value: "true" + effect: NoSchedule + - name: CSI_PLUGIN_TOLERATIONS + value: |2- + + - key: node.ocs.openshift.io/storage + operator: Equal + value: "true" + effect: NoSchedule + - name: CSI_LOG_LEVEL value: "5" - - # The name of the node to pass with the downward API + - name: CSI_SIDECAR_LOG_LEVEL + value: "1" + - name: CSI_ENABLE_CSIADDONS + value: "true" - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - # The pod name to pass with the downward API - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - # The pod namespace to pass with the downward API - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - - # Recommended resource requests and limits, if desired - #resources: - # limits: - # memory: 512Mi - # requests: - # cpu: 100m - # memory: 128Mi - - # Uncomment it to run lib bucket provisioner in multithreaded mode - #- name: LIB_BUCKET_PROVISIONER_THREADS - # value: "5" - + - name: ROOK_OBC_WATCH_OPERATOR_NAMESPACE + value: "true" volumes: - name: rook-config emptyDir: {} From cf15b59222710cfcd13a72df22285e779a6bf505 Mon Sep 17 00:00:00 2001 From: subhamkrai Date: Fri, 8 Mar 2024 15:26:42 +0530 Subject: [PATCH 54/65] build: these are auto-generated csv changes these are auto-generated csv changes Signed-off-by: subhamkrai --- ....rook.io_cephblockpoolradosnamespaces.yaml | 66 + .../csv/ceph/ceph.rook.io_cephblockpools.yaml | 304 ++ .../ceph.rook.io_cephbucketnotifications.yaml | 134 + .../ceph/ceph.rook.io_cephbuckettopics.yaml | 127 + 
build/csv/ceph/ceph.rook.io_cephclients.yaml | 70 + build/csv/ceph/ceph.rook.io_cephclusters.yaml | 3102 +++++++++++++++ .../ceph/ceph.rook.io_cephcosidrivers.yaml | 557 +++ .../ceph.rook.io_cephfilesystemmirrors.yaml | 592 +++ .../ceph/ceph.rook.io_cephfilesystems.yaml | 1198 ++++++ ...rook.io_cephfilesystemsubvolumegroups.yaml | 97 + build/csv/ceph/ceph.rook.io_cephnfses.yaml | 1701 +++++++++ .../ceph/ceph.rook.io_cephobjectrealms.yaml | 77 + .../ceph/ceph.rook.io_cephobjectstores.yaml | 1173 ++++++ .../ceph.rook.io_cephobjectstoreusers.yaml | 204 + .../ceph.rook.io_cephobjectzonegroups.yaml | 79 + .../ceph/ceph.rook.io_cephobjectzones.yaml | 364 ++ .../csv/ceph/ceph.rook.io_cephrbdmirrors.yaml | 610 +++ ...rization.k8s.io_v1_clusterrolebinding.yaml | 17 + ...c.authorization.k8s.io_v1_clusterrole.yaml | 49 + ...storage-provisioner_v1_serviceaccount.yaml | 9 + .../csv/rook-ceph.clusterserviceversion.yaml | 3378 +++++++++++++++++ 21 files changed, 13908 insertions(+) create mode 100644 build/csv/ceph/ceph.rook.io_cephblockpoolradosnamespaces.yaml create mode 100644 build/csv/ceph/ceph.rook.io_cephblockpools.yaml create mode 100644 build/csv/ceph/ceph.rook.io_cephbucketnotifications.yaml create mode 100644 build/csv/ceph/ceph.rook.io_cephbuckettopics.yaml create mode 100644 build/csv/ceph/ceph.rook.io_cephclients.yaml create mode 100644 build/csv/ceph/ceph.rook.io_cephclusters.yaml create mode 100644 build/csv/ceph/ceph.rook.io_cephcosidrivers.yaml create mode 100644 build/csv/ceph/ceph.rook.io_cephfilesystemmirrors.yaml create mode 100644 build/csv/ceph/ceph.rook.io_cephfilesystems.yaml create mode 100644 build/csv/ceph/ceph.rook.io_cephfilesystemsubvolumegroups.yaml create mode 100644 build/csv/ceph/ceph.rook.io_cephnfses.yaml create mode 100644 build/csv/ceph/ceph.rook.io_cephobjectrealms.yaml create mode 100644 build/csv/ceph/ceph.rook.io_cephobjectstores.yaml create mode 100644 build/csv/ceph/ceph.rook.io_cephobjectstoreusers.yaml create mode 100644 
build/csv/ceph/ceph.rook.io_cephobjectzonegroups.yaml create mode 100644 build/csv/ceph/ceph.rook.io_cephobjectzones.yaml create mode 100644 build/csv/ceph/ceph.rook.io_cephrbdmirrors.yaml create mode 100644 build/csv/ceph/objectstorage-provisioner-role-binding_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml create mode 100644 build/csv/ceph/objectstorage-provisioner-role_rbac.authorization.k8s.io_v1_clusterrole.yaml create mode 100644 build/csv/ceph/objectstorage-provisioner_v1_serviceaccount.yaml create mode 100644 build/csv/rook-ceph.clusterserviceversion.yaml diff --git a/build/csv/ceph/ceph.rook.io_cephblockpoolradosnamespaces.yaml b/build/csv/ceph/ceph.rook.io_cephblockpoolradosnamespaces.yaml new file mode 100644 index 000000000000..673b7f106a23 --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephblockpoolradosnamespaces.yaml @@ -0,0 +1,66 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephblockpoolradosnamespaces.ceph.rook.io +spec: + group: ceph.rook.io + names: + kind: CephBlockPoolRadosNamespace + listKind: CephBlockPoolRadosNamespaceList + plural: cephblockpoolradosnamespaces + singular: cephblockpoolradosnamespace + scope: Namespaced + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + blockPoolName: + type: string + x-kubernetes-validations: + - message: blockPoolName is immutable + rule: self == oldSelf + name: + type: string + x-kubernetes-validations: + - message: name is immutable + rule: self == oldSelf + required: + - blockPoolName + type: object + status: + properties: + info: + additionalProperties: + type: string + nullable: true + type: object + phase: + type: string + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - metadata + - spec + type: object + served: true 
+ storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephblockpools.yaml b/build/csv/ceph/ceph.rook.io_cephblockpools.yaml new file mode 100644 index 000000000000..1c9e339e2093 --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephblockpools.yaml @@ -0,0 +1,304 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephblockpools.ceph.rook.io +spec: + group: ceph.rook.io + names: + kind: CephBlockPool + listKind: CephBlockPoolList + plural: cephblockpools + singular: cephblockpool + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .status.phase + name: Phase + type: string + name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + application: + type: string + compressionMode: + enum: + - none + - passive + - aggressive + - force + - "" + nullable: true + type: string + crushRoot: + nullable: true + type: string + deviceClass: + nullable: true + type: string + enableRBDStats: + type: boolean + erasureCoded: + properties: + algorithm: + type: string + codingChunks: + minimum: 0 + type: integer + dataChunks: + minimum: 0 + type: integer + required: + - codingChunks + - dataChunks + type: object + failureDomain: + type: string + mirroring: + properties: + enabled: + type: boolean + mode: + type: string + peers: + nullable: true + properties: + secretNames: + items: + type: string + type: array + type: object + snapshotSchedules: + items: + properties: + interval: + type: string + path: + type: string + startTime: + type: string + type: object + type: array + type: object + name: + enum: + - .rgw.root + - .nfs + - .mgr + type: string + parameters: + additionalProperties: + type: string + nullable: true + 
type: object + x-kubernetes-preserve-unknown-fields: true + quotas: + nullable: true + properties: + maxBytes: + format: int64 + type: integer + maxObjects: + format: int64 + type: integer + maxSize: + pattern: ^[0-9]+[\.]?[0-9]*([KMGTPE]i|[kMGTPE])?$ + type: string + type: object + replicated: + properties: + hybridStorage: + nullable: true + properties: + primaryDeviceClass: + minLength: 1 + type: string + secondaryDeviceClass: + minLength: 1 + type: string + required: + - primaryDeviceClass + - secondaryDeviceClass + type: object + replicasPerFailureDomain: + minimum: 1 + type: integer + requireSafeReplicaSize: + type: boolean + size: + minimum: 0 + type: integer + subFailureDomain: + type: string + targetSizeRatio: + type: number + required: + - size + type: object + statusCheck: + properties: + mirror: + nullable: true + properties: + disabled: + type: boolean + interval: + type: string + timeout: + type: string + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + type: object + status: + properties: + conditions: + items: + properties: + lastHeartbeatTime: + format: date-time + type: string + lastTransitionTime: + format: date-time + type: string + message: + type: string + reason: + type: string + status: + type: string + type: + type: string + type: object + type: array + info: + additionalProperties: + type: string + nullable: true + type: object + mirroringInfo: + properties: + details: + type: string + lastChanged: + type: string + lastChecked: + type: string + mode: + type: string + peers: + items: + properties: + client_name: + type: string + direction: + type: string + mirror_uuid: + type: string + site_name: + type: string + uuid: + type: string + type: object + type: array + site_name: + type: string + type: object + mirroringStatus: + properties: + details: + type: string + lastChanged: + type: string + lastChecked: + type: string + summary: + properties: + daemon_health: + type: string + health: + type: string + 
image_health: + type: string + states: + nullable: true + properties: + error: + type: integer + replaying: + type: integer + starting_replay: + type: integer + stopped: + type: integer + stopping_replay: + type: integer + syncing: + type: integer + unknown: + type: integer + type: object + type: object + type: object + observedGeneration: + format: int64 + type: integer + phase: + type: string + snapshotScheduleStatus: + properties: + details: + type: string + lastChanged: + type: string + lastChecked: + type: string + snapshotSchedules: + items: + properties: + image: + type: string + items: + items: + properties: + interval: + type: string + start_time: + type: string + type: object + type: array + namespace: + type: string + pool: + type: string + type: object + nullable: true + type: array + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - metadata + - spec + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephbucketnotifications.yaml b/build/csv/ceph/ceph.rook.io_cephbucketnotifications.yaml new file mode 100644 index 000000000000..0e79c802e021 --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephbucketnotifications.yaml @@ -0,0 +1,134 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephbucketnotifications.ceph.rook.io +spec: + group: ceph.rook.io + names: + kind: CephBucketNotification + listKind: CephBucketNotificationList + plural: cephbucketnotifications + singular: cephbucketnotification + scope: Namespaced + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + events: + items: + enum: + - s3:ObjectCreated:* + - 
s3:ObjectCreated:Put + - s3:ObjectCreated:Post + - s3:ObjectCreated:Copy + - s3:ObjectCreated:CompleteMultipartUpload + - s3:ObjectRemoved:* + - s3:ObjectRemoved:Delete + - s3:ObjectRemoved:DeleteMarkerCreated + type: string + type: array + filter: + properties: + keyFilters: + items: + properties: + name: + enum: + - prefix + - suffix + - regex + type: string + value: + type: string + required: + - name + - value + type: object + type: array + metadataFilters: + items: + properties: + name: + minLength: 1 + type: string + value: + type: string + required: + - name + - value + type: object + type: array + tagFilters: + items: + properties: + name: + minLength: 1 + type: string + value: + type: string + required: + - name + - value + type: object + type: array + type: object + topic: + minLength: 1 + type: string + required: + - topic + type: object + status: + properties: + conditions: + items: + properties: + lastHeartbeatTime: + format: date-time + type: string + lastTransitionTime: + format: date-time + type: string + message: + type: string + reason: + type: string + status: + type: string + type: + type: string + type: object + type: array + observedGeneration: + format: int64 + type: integer + phase: + type: string + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - metadata + - spec + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephbuckettopics.yaml b/build/csv/ceph/ceph.rook.io_cephbuckettopics.yaml new file mode 100644 index 000000000000..7162ac7a524a --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephbuckettopics.yaml @@ -0,0 +1,127 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephbuckettopics.ceph.rook.io +spec: + group: ceph.rook.io + 
names: + kind: CephBucketTopic + listKind: CephBucketTopicList + plural: cephbuckettopics + singular: cephbuckettopic + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .status.phase + name: Phase + type: string + name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + endpoint: + properties: + amqp: + properties: + ackLevel: + default: broker + enum: + - none + - broker + - routeable + type: string + disableVerifySSL: + type: boolean + exchange: + minLength: 1 + type: string + uri: + minLength: 1 + type: string + required: + - exchange + - uri + type: object + http: + properties: + disableVerifySSL: + type: boolean + sendCloudEvents: + type: boolean + uri: + minLength: 1 + type: string + required: + - uri + type: object + kafka: + properties: + ackLevel: + default: broker + enum: + - none + - broker + type: string + disableVerifySSL: + type: boolean + uri: + minLength: 1 + type: string + useSSL: + type: boolean + required: + - uri + type: object + type: object + objectStoreName: + minLength: 1 + type: string + objectStoreNamespace: + minLength: 1 + type: string + opaqueData: + type: string + persistent: + type: boolean + required: + - endpoint + - objectStoreName + - objectStoreNamespace + type: object + status: + properties: + ARN: + nullable: true + type: string + observedGeneration: + format: int64 + type: integer + phase: + type: string + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - metadata + - spec + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephclients.yaml b/build/csv/ceph/ceph.rook.io_cephclients.yaml new file mode 100644 index 000000000000..6ef82c90e768 --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephclients.yaml @@ -0,0 +1,70 @@ 
+apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephclients.ceph.rook.io +spec: + group: ceph.rook.io + names: + kind: CephClient + listKind: CephClientList + plural: cephclients + singular: cephclient + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .status.phase + name: Phase + type: string + name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + caps: + additionalProperties: + type: string + type: object + x-kubernetes-preserve-unknown-fields: true + name: + type: string + required: + - caps + type: object + status: + properties: + info: + additionalProperties: + type: string + nullable: true + type: object + observedGeneration: + format: int64 + type: integer + phase: + type: string + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - metadata + - spec + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephclusters.yaml b/build/csv/ceph/ceph.rook.io_cephclusters.yaml new file mode 100644 index 000000000000..f08a3978107e --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephclusters.yaml @@ -0,0 +1,3102 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephclusters.ceph.rook.io +spec: + group: ceph.rook.io + names: + kind: CephCluster + listKind: CephClusterList + plural: cephclusters + singular: cephcluster + scope: Namespaced + versions: + - additionalPrinterColumns: + - description: Directory used on the K8s nodes + jsonPath: .spec.dataDirHostPath + name: DataDirHostPath + type: string + - description: 
Number of MONs + jsonPath: .spec.mon.count + name: MonCount + type: string + - jsonPath: .metadata.creationTimestamp + name: Age + type: date + - jsonPath: .status.phase + name: Phase + type: string + - description: Message + jsonPath: .status.message + name: Message + type: string + - description: Ceph Health + jsonPath: .status.ceph.health + name: Health + type: string + - jsonPath: .spec.external.enable + name: External + type: boolean + - description: Ceph FSID + jsonPath: .status.ceph.fsid + name: FSID + type: string + name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + annotations: + additionalProperties: + additionalProperties: + type: string + type: object + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + cephConfig: + additionalProperties: + additionalProperties: + type: string + type: object + nullable: true + type: object + cephVersion: + nullable: true + properties: + allowUnsupported: + type: boolean + image: + type: string + imagePullPolicy: + enum: + - IfNotPresent + - Always + - Never + - "" + type: string + type: object + cleanupPolicy: + nullable: true + properties: + allowUninstallWithVolumes: + type: boolean + confirmation: + nullable: true + pattern: ^$|^yes-really-destroy-data$ + type: string + sanitizeDisks: + nullable: true + properties: + dataSource: + enum: + - zero + - random + type: string + iteration: + format: int32 + type: integer + method: + enum: + - complete + - quick + type: string + type: object + type: object + continueUpgradeAfterChecksEvenIfNotHealthy: + type: boolean + crashCollector: + nullable: true + properties: + daysToRetain: + type: integer + disable: + type: boolean + type: object + csi: + properties: + cephfs: + properties: + fuseMountOptions: + type: string + kernelMountOptions: + type: string + type: object + readAffinity: + properties: + crushLocationLabels: + items: + type: string + 
type: array + enabled: + type: boolean + type: object + type: object + dashboard: + nullable: true + properties: + enabled: + type: boolean + port: + maximum: 65535 + minimum: 0 + type: integer + prometheusEndpoint: + type: string + prometheusEndpointSSLVerify: + type: boolean + ssl: + type: boolean + urlPrefix: + type: string + type: object + dataDirHostPath: + pattern: ^/(\S+) + type: string + x-kubernetes-validations: + - message: DataDirHostPath is immutable + rule: self == oldSelf + disruptionManagement: + nullable: true + properties: + machineDisruptionBudgetNamespace: + type: string + manageMachineDisruptionBudgets: + type: boolean + managePodBudgets: + type: boolean + osdMaintenanceTimeout: + format: int64 + type: integer + pgHealthCheckTimeout: + format: int64 + type: integer + pgHealthyRegex: + type: string + type: object + external: + nullable: true + properties: + enable: + type: boolean + type: object + x-kubernetes-preserve-unknown-fields: true + healthCheck: + nullable: true + properties: + daemonHealth: + nullable: true + properties: + mon: + nullable: true + properties: + disabled: + type: boolean + interval: + type: string + timeout: + type: string + type: object + osd: + nullable: true + properties: + disabled: + type: boolean + interval: + type: string + timeout: + type: string + type: object + status: + nullable: true + properties: + disabled: + type: boolean + interval: + type: string + timeout: + type: string + type: object + type: object + livenessProbe: + additionalProperties: + properties: + disabled: + type: boolean + probe: + properties: + exec: + properties: + command: + items: + type: string + type: array + type: object + failureThreshold: + format: int32 + type: integer + grpc: + properties: + port: + format: int32 + type: integer + service: + type: string + required: + - port + type: object + httpGet: + properties: + host: + type: string + httpHeaders: + items: + properties: + name: + type: string + value: + type: string + required: 
+ - name + - value + type: object + type: array + path: + type: string + port: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + type: string + required: + - port + type: object + initialDelaySeconds: + format: int32 + type: integer + periodSeconds: + format: int32 + type: integer + successThreshold: + format: int32 + type: integer + tcpSocket: + properties: + host: + type: string + port: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + required: + - port + type: object + terminationGracePeriodSeconds: + format: int64 + type: integer + timeoutSeconds: + format: int32 + type: integer + type: object + type: object + type: object + startupProbe: + additionalProperties: + properties: + disabled: + type: boolean + probe: + properties: + exec: + properties: + command: + items: + type: string + type: array + type: object + failureThreshold: + format: int32 + type: integer + grpc: + properties: + port: + format: int32 + type: integer + service: + type: string + required: + - port + type: object + httpGet: + properties: + host: + type: string + httpHeaders: + items: + properties: + name: + type: string + value: + type: string + required: + - name + - value + type: object + type: array + path: + type: string + port: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + type: string + required: + - port + type: object + initialDelaySeconds: + format: int32 + type: integer + periodSeconds: + format: int32 + type: integer + successThreshold: + format: int32 + type: integer + tcpSocket: + properties: + host: + type: string + port: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + required: + - port + type: object + terminationGracePeriodSeconds: + format: int64 + type: integer + timeoutSeconds: + format: int32 + type: integer + type: object + type: object + type: object + type: object + labels: + additionalProperties: + 
additionalProperties: + type: string + type: object + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + logCollector: + nullable: true + properties: + enabled: + type: boolean + maxLogSize: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + periodicity: + pattern: ^$|^(hourly|daily|weekly|monthly|1h|24h|1d)$ + type: string + type: object + mgr: + nullable: true + properties: + allowMultiplePerNode: + type: boolean + count: + maximum: 5 + minimum: 0 + type: integer + modules: + items: + properties: + enabled: + type: boolean + name: + type: string + type: object + nullable: true + type: array + type: object + mon: + nullable: true + properties: + allowMultiplePerNode: + type: boolean + count: + maximum: 9 + minimum: 0 + type: integer + failureDomainLabel: + type: string + stretchCluster: + properties: + failureDomainLabel: + type: string + subFailureDomain: + type: string + zones: + items: + properties: + arbiter: + type: boolean + name: + type: string + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + 
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + type: object + nullable: true + type: array + type: object + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: 
^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + zones: + items: + properties: + arbiter: + type: boolean + name: + type: string + volumeClaimTemplate: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: 
^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + type: object + type: array + type: object + x-kubernetes-validations: + - message: zones must be less than or equal to count + rule: '!has(self.zones) || (has(self.zones) && (size(self.zones) + <= self.count))' + - message: stretchCluster zones must be equal to 3 + rule: '!has(self.stretchCluster) || (has(self.stretchCluster) && + (size(self.stretchCluster.zones) > 0) && (size(self.stretchCluster.zones) + == 3))' + monitoring: + nullable: true + properties: + enabled: + type: boolean + externalMgrEndpoints: + items: + properties: + hostname: + type: string + ip: + type: string + nodeName: + type: string + targetRef: + properties: + apiVersion: + type: string + fieldPath: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + resourceVersion: + type: string + uid: + type: string + type: object + x-kubernetes-map-type: atomic + required: + - ip + type: object + x-kubernetes-map-type: atomic + nullable: true + type: array + externalMgrPrometheusPort: + 
maximum: 65535 + minimum: 0 + type: integer + interval: + type: string + metricsDisabled: + type: boolean + port: + maximum: 65535 + minimum: 0 + type: integer + type: object + network: + nullable: true + properties: + addressRanges: + nullable: true + properties: + cluster: + items: + pattern: ^[0-9a-fA-F:.]{2,}\/[0-9]{1,3}$ + type: string + type: array + public: + items: + pattern: ^[0-9a-fA-F:.]{2,}\/[0-9]{1,3}$ + type: string + type: array + type: object + connections: + nullable: true + properties: + compression: + nullable: true + properties: + enabled: + type: boolean + type: object + encryption: + nullable: true + properties: + enabled: + type: boolean + type: object + requireMsgr2: + type: boolean + type: object + dualStack: + type: boolean + hostNetwork: + type: boolean + ipFamily: + enum: + - IPv4 + - IPv6 + nullable: true + type: string + multiClusterService: + properties: + clusterID: + type: string + enabled: + type: boolean + type: object + provider: + enum: + - "" + - host + - multus + nullable: true + type: string + x-kubernetes-validations: + - message: network provider must be disabled (reverted to empty + string) before a new provider is enabled + rule: self == '' || self == oldSelf + selectors: + additionalProperties: + type: string + nullable: true + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + x-kubernetes-validations: + - message: at least one network selector must be specified when using + multus + rule: '!has(self.provider) || (self.provider != ''multus'' || (self.provider + == ''multus'' && size(self.selectors) > 0))' + - message: the legacy hostNetwork setting can only be set if the network.provider + is set to the empty string + rule: '!has(self.hostNetwork) || self.hostNetwork == false || !has(self.provider) + || self.provider == ""' + placement: + additionalProperties: + properties: + nodeAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + preference: + 
properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + weight: + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + properties: + nodeSelectorTerms: + items: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + type: array + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + 
operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + podAntiAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + 
additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + tolerations: + 
items: + properties: + effect: + type: string + key: + type: string + operator: + type: string + tolerationSeconds: + format: int64 + type: integer + value: + type: string + type: object + type: array + topologySpreadConstraints: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + maxSkew: + format: int32 + type: integer + minDomains: + format: int32 + type: integer + nodeAffinityPolicy: + type: string + nodeTaintsPolicy: + type: string + topologyKey: + type: string + whenUnsatisfiable: + type: string + required: + - maxSkew + - topologyKey + - whenUnsatisfiable + type: object + type: array + type: object + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + priorityClassNames: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + removeOSDsIfOutAndSafeToRemove: + type: boolean + resources: + additionalProperties: + properties: + claims: + items: + properties: + name: + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + 
x-kubernetes-int-or-string: true + type: object + type: object + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + security: + nullable: true + properties: + keyRotation: + nullable: true + properties: + enabled: + default: false + type: boolean + schedule: + type: string + type: object + kms: + nullable: true + properties: + connectionDetails: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + tokenSecretName: + type: string + type: object + type: object + skipUpgradeChecks: + type: boolean + storage: + nullable: true + properties: + config: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + deviceFilter: + type: string + devicePathFilter: + type: string + devices: + items: + properties: + config: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + fullpath: + type: string + name: + type: string + type: object + nullable: true + type: array + x-kubernetes-preserve-unknown-fields: true + flappingRestartIntervalHours: + type: integer + nodes: + items: + properties: + config: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + deviceFilter: + type: string + devicePathFilter: + type: string + devices: + items: + properties: + config: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + fullpath: + type: string + name: + type: string + type: object + nullable: true + type: array + x-kubernetes-preserve-unknown-fields: true + name: + type: string + resources: + nullable: true + properties: + claims: + items: + properties: + name: + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: 
integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + useAllDevices: + type: boolean + volumeClaimTemplates: + items: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: 
string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + type: object + type: array + type: object + nullable: true + type: array + onlyApplyOSDPlacement: + type: boolean + storageClassDeviceSets: + items: + properties: + config: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + count: + minimum: 1 + type: integer + encrypted: + type: boolean + name: + type: string + placement: + nullable: true + properties: + nodeAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + preference: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + weight: + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + properties: + nodeSelectorTerms: + items: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + 
x-kubernetes-map-type: atomic + type: array + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: 
atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + podAntiAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: 
string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + tolerations: + items: + properties: + effect: + type: string + key: + type: string + operator: + type: string + tolerationSeconds: + format: int64 + type: integer + value: + type: string + type: object + type: array + topologySpreadConstraints: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + maxSkew: + format: int32 + type: integer + minDomains: + format: int32 + type: integer + nodeAffinityPolicy: + type: string + nodeTaintsPolicy: + type: string + topologyKey: + type: string + whenUnsatisfiable: + type: string + required: + - maxSkew + - topologyKey + - whenUnsatisfiable + type: object + type: array + type: object + x-kubernetes-preserve-unknown-fields: true + portable: + type: 
boolean + preparePlacement: + nullable: true + properties: + nodeAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + preference: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + weight: + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + properties: + nodeSelectorTerms: + items: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + type: array + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + 
mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + podAntiAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + 
properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + 
x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + tolerations: + items: + properties: + effect: + type: string + key: + type: string + operator: + type: string + tolerationSeconds: + format: int64 + type: integer + value: + type: string + type: object + type: array + topologySpreadConstraints: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + maxSkew: + format: int32 + type: integer + minDomains: + format: int32 + type: integer + nodeAffinityPolicy: + type: string + nodeTaintsPolicy: + type: string + topologyKey: + type: string + whenUnsatisfiable: + type: string + required: + - maxSkew + - topologyKey + - whenUnsatisfiable + type: object + type: array + type: object + x-kubernetes-preserve-unknown-fields: true + resources: + nullable: true + properties: + claims: + items: + properties: + name: + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + 
type: object + x-kubernetes-preserve-unknown-fields: true + schedulerName: + type: string + tuneDeviceClass: + type: boolean + tuneFastDeviceClass: + type: boolean + volumeClaimTemplates: + items: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + x-kubernetes-preserve-unknown-fields: true + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + 
type: object + type: object + type: array + required: + - count + - name + - volumeClaimTemplates + type: object + nullable: true + type: array + store: + properties: + type: + enum: + - bluestore + - bluestore-rdr + type: string + updateStore: + pattern: ^$|^yes-really-update-store$ + type: string + type: object + useAllDevices: + type: boolean + useAllNodes: + type: boolean + volumeClaimTemplates: + items: + properties: + metadata: + properties: + annotations: + additionalProperties: + type: string + type: object + finalizers: + items: + type: string + type: array + labels: + additionalProperties: + type: string + type: object + name: + type: string + namespace: + type: string + type: object + spec: + properties: + accessModes: + items: + type: string + type: array + dataSource: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + properties: + apiGroup: + type: string + kind: + type: string + name: + type: string + namespace: + type: string + required: + - kind + - name + type: object + resources: + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + selector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + 
x-kubernetes-map-type: atomic + storageClassName: + type: string + volumeAttributesClassName: + type: string + volumeMode: + type: string + volumeName: + type: string + type: object + type: object + type: array + type: object + waitTimeoutForHealthyOSDInMinutes: + format: int64 + type: integer + type: object + status: + nullable: true + properties: + ceph: + properties: + capacity: + properties: + bytesAvailable: + format: int64 + type: integer + bytesTotal: + format: int64 + type: integer + bytesUsed: + format: int64 + type: integer + lastUpdated: + type: string + type: object + details: + additionalProperties: + properties: + message: + type: string + severity: + type: string + required: + - message + - severity + type: object + type: object + fsid: + type: string + health: + type: string + lastChanged: + type: string + lastChecked: + type: string + previousHealth: + type: string + versions: + properties: + cephfs-mirror: + additionalProperties: + type: integer + type: object + mds: + additionalProperties: + type: integer + type: object + mgr: + additionalProperties: + type: integer + type: object + mon: + additionalProperties: + type: integer + type: object + osd: + additionalProperties: + type: integer + type: object + overall: + additionalProperties: + type: integer + type: object + rbd-mirror: + additionalProperties: + type: integer + type: object + rgw: + additionalProperties: + type: integer + type: object + type: object + type: object + conditions: + items: + properties: + lastHeartbeatTime: + format: date-time + type: string + lastTransitionTime: + format: date-time + type: string + message: + type: string + reason: + type: string + status: + type: string + type: + type: string + type: object + type: array + message: + type: string + observedGeneration: + format: int64 + type: integer + phase: + type: string + state: + type: string + storage: + properties: + deviceClasses: + items: + properties: + name: + type: string + type: object + type: array + osd: + 
properties: + storeType: + additionalProperties: + type: integer + type: object + type: object + type: object + version: + properties: + image: + type: string + version: + type: string + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - metadata + - spec + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephcosidrivers.yaml b/build/csv/ceph/ceph.rook.io_cephcosidrivers.yaml new file mode 100644 index 000000000000..c1dfe68cddc5 --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephcosidrivers.yaml @@ -0,0 +1,557 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephcosidrivers.ceph.rook.io +spec: + group: ceph.rook.io + names: + kind: CephCOSIDriver + listKind: CephCOSIDriverList + plural: cephcosidrivers + shortNames: + - cephcosi + singular: cephcosidriver + scope: Namespaced + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + deploymentStrategy: + enum: + - Never + - Auto + - Always + type: string + image: + type: string + objectProvisionerImage: + type: string + placement: + properties: + nodeAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + preference: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + 
x-kubernetes-map-type: atomic + weight: + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + properties: + nodeSelectorTerms: + items: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + type: array + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + 
required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + podAntiAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + 
type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + tolerations: + items: + properties: + effect: + type: string + key: + type: string + operator: + type: string + tolerationSeconds: + format: int64 + type: integer + value: + type: string + type: object + type: array + topologySpreadConstraints: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + 
required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + maxSkew: + format: int32 + type: integer + minDomains: + format: int32 + type: integer + nodeAffinityPolicy: + type: string + nodeTaintsPolicy: + type: string + topologyKey: + type: string + whenUnsatisfiable: + type: string + required: + - maxSkew + - topologyKey + - whenUnsatisfiable + type: object + type: array + type: object + resources: + properties: + claims: + items: + properties: + name: + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + type: object + required: + - metadata + - spec + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephfilesystemmirrors.yaml b/build/csv/ceph/ceph.rook.io_cephfilesystemmirrors.yaml new file mode 100644 index 000000000000..ce4c28aee072 --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephfilesystemmirrors.yaml @@ -0,0 +1,592 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephfilesystemmirrors.ceph.rook.io 
+spec: + group: ceph.rook.io + names: + kind: CephFilesystemMirror + listKind: CephFilesystemMirrorList + plural: cephfilesystemmirrors + singular: cephfilesystemmirror + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .status.phase + name: Phase + type: string + name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + annotations: + additionalProperties: + type: string + nullable: true + type: object + labels: + additionalProperties: + type: string + nullable: true + type: object + placement: + nullable: true + properties: + nodeAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + preference: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + weight: + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + properties: + nodeSelectorTerms: + items: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + type: array + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: 
object + podAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + 
values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + podAntiAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string 
+ type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + tolerations: + items: + properties: + effect: + type: string + key: + type: string + operator: + type: string + tolerationSeconds: + format: int64 + type: integer + value: + type: string + type: object + type: array + topologySpreadConstraints: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + maxSkew: + format: int32 + type: integer + minDomains: + format: int32 + type: integer + nodeAffinityPolicy: + type: string + nodeTaintsPolicy: + type: string + topologyKey: + type: string + whenUnsatisfiable: + type: string + required: + - maxSkew + - topologyKey + - whenUnsatisfiable + type: object + type: array + type: object + priorityClassName: + type: string + resources: + nullable: true + properties: + claims: + items: + properties: + name: + type: string + required: + - name + type: object + type: array + 
x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + type: object + status: + properties: + conditions: + items: + properties: + lastHeartbeatTime: + format: date-time + type: string + lastTransitionTime: + format: date-time + type: string + message: + type: string + reason: + type: string + status: + type: string + type: + type: string + type: object + type: array + observedGeneration: + format: int64 + type: integer + phase: + type: string + type: object + required: + - metadata + - spec + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephfilesystems.yaml b/build/csv/ceph/ceph.rook.io_cephfilesystems.yaml new file mode 100644 index 000000000000..ec9ed34dcc70 --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephfilesystems.yaml @@ -0,0 +1,1198 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephfilesystems.ceph.rook.io +spec: + group: ceph.rook.io + names: + kind: CephFilesystem + listKind: CephFilesystemList + plural: cephfilesystems + singular: cephfilesystem + scope: Namespaced + versions: + - additionalPrinterColumns: + - description: Number of desired active MDS daemons + jsonPath: .spec.metadataServer.activeCount + name: ActiveMDS + type: 
string + - jsonPath: .metadata.creationTimestamp + name: Age + type: date + - jsonPath: .status.phase + name: Phase + type: string + name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + dataPools: + items: + properties: + application: + type: string + compressionMode: + enum: + - none + - passive + - aggressive + - force + - "" + nullable: true + type: string + crushRoot: + nullable: true + type: string + deviceClass: + nullable: true + type: string + enableRBDStats: + type: boolean + erasureCoded: + properties: + algorithm: + type: string + codingChunks: + minimum: 0 + type: integer + dataChunks: + minimum: 0 + type: integer + required: + - codingChunks + - dataChunks + type: object + failureDomain: + type: string + mirroring: + properties: + enabled: + type: boolean + mode: + type: string + peers: + nullable: true + properties: + secretNames: + items: + type: string + type: array + type: object + snapshotSchedules: + items: + properties: + interval: + type: string + path: + type: string + startTime: + type: string + type: object + type: array + type: object + name: + type: string + parameters: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + quotas: + nullable: true + properties: + maxBytes: + format: int64 + type: integer + maxObjects: + format: int64 + type: integer + maxSize: + pattern: ^[0-9]+[\.]?[0-9]*([KMGTPE]i|[kMGTPE])?$ + type: string + type: object + replicated: + properties: + hybridStorage: + nullable: true + properties: + primaryDeviceClass: + minLength: 1 + type: string + secondaryDeviceClass: + minLength: 1 + type: string + required: + - primaryDeviceClass + - secondaryDeviceClass + type: object + replicasPerFailureDomain: + minimum: 1 + type: integer + requireSafeReplicaSize: + type: boolean + size: + minimum: 0 + type: integer + subFailureDomain: + type: string + 
targetSizeRatio: + type: number + required: + - size + type: object + statusCheck: + properties: + mirror: + nullable: true + properties: + disabled: + type: boolean + interval: + type: string + timeout: + type: string + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + type: object + nullable: true + type: array + metadataPool: + nullable: true + properties: + application: + type: string + compressionMode: + enum: + - none + - passive + - aggressive + - force + - "" + nullable: true + type: string + crushRoot: + nullable: true + type: string + deviceClass: + nullable: true + type: string + enableRBDStats: + type: boolean + erasureCoded: + properties: + algorithm: + type: string + codingChunks: + minimum: 0 + type: integer + dataChunks: + minimum: 0 + type: integer + required: + - codingChunks + - dataChunks + type: object + failureDomain: + type: string + mirroring: + properties: + enabled: + type: boolean + mode: + type: string + peers: + nullable: true + properties: + secretNames: + items: + type: string + type: array + type: object + snapshotSchedules: + items: + properties: + interval: + type: string + path: + type: string + startTime: + type: string + type: object + type: array + type: object + parameters: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + quotas: + nullable: true + properties: + maxBytes: + format: int64 + type: integer + maxObjects: + format: int64 + type: integer + maxSize: + pattern: ^[0-9]+[\.]?[0-9]*([KMGTPE]i|[kMGTPE])?$ + type: string + type: object + replicated: + properties: + hybridStorage: + nullable: true + properties: + primaryDeviceClass: + minLength: 1 + type: string + secondaryDeviceClass: + minLength: 1 + type: string + required: + - primaryDeviceClass + - secondaryDeviceClass + type: object + replicasPerFailureDomain: + minimum: 1 + type: integer + requireSafeReplicaSize: + type: boolean + size: + minimum: 0 + type: integer + 
subFailureDomain: + type: string + targetSizeRatio: + type: number + required: + - size + type: object + statusCheck: + properties: + mirror: + nullable: true + properties: + disabled: + type: boolean + interval: + type: string + timeout: + type: string + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + type: object + metadataServer: + properties: + activeCount: + format: int32 + maximum: 50 + minimum: 1 + type: integer + activeStandby: + type: boolean + annotations: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + labels: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + livenessProbe: + properties: + disabled: + type: boolean + probe: + properties: + exec: + properties: + command: + items: + type: string + type: array + type: object + failureThreshold: + format: int32 + type: integer + grpc: + properties: + port: + format: int32 + type: integer + service: + type: string + required: + - port + type: object + httpGet: + properties: + host: + type: string + httpHeaders: + items: + properties: + name: + type: string + value: + type: string + required: + - name + - value + type: object + type: array + path: + type: string + port: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + type: string + required: + - port + type: object + initialDelaySeconds: + format: int32 + type: integer + periodSeconds: + format: int32 + type: integer + successThreshold: + format: int32 + type: integer + tcpSocket: + properties: + host: + type: string + port: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + required: + - port + type: object + terminationGracePeriodSeconds: + format: int64 + type: integer + timeoutSeconds: + format: int32 + type: integer + type: object + type: object + placement: + nullable: true + properties: + nodeAffinity: + properties: + 
preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + preference: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + weight: + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + properties: + nodeSelectorTerms: + items: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + type: array + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + 
namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + podAntiAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + 
type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: 
string + required: + - topologyKey + type: object + type: array + type: object + tolerations: + items: + properties: + effect: + type: string + key: + type: string + operator: + type: string + tolerationSeconds: + format: int64 + type: integer + value: + type: string + type: object + type: array + topologySpreadConstraints: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + maxSkew: + format: int32 + type: integer + minDomains: + format: int32 + type: integer + nodeAffinityPolicy: + type: string + nodeTaintsPolicy: + type: string + topologyKey: + type: string + whenUnsatisfiable: + type: string + required: + - maxSkew + - topologyKey + - whenUnsatisfiable + type: object + type: array + type: object + x-kubernetes-preserve-unknown-fields: true + priorityClassName: + type: string + resources: + nullable: true + properties: + claims: + items: + properties: + name: + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + 
startupProbe: + properties: + disabled: + type: boolean + probe: + properties: + exec: + properties: + command: + items: + type: string + type: array + type: object + failureThreshold: + format: int32 + type: integer + grpc: + properties: + port: + format: int32 + type: integer + service: + type: string + required: + - port + type: object + httpGet: + properties: + host: + type: string + httpHeaders: + items: + properties: + name: + type: string + value: + type: string + required: + - name + - value + type: object + type: array + path: + type: string + port: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + type: string + required: + - port + type: object + initialDelaySeconds: + format: int32 + type: integer + periodSeconds: + format: int32 + type: integer + successThreshold: + format: int32 + type: integer + tcpSocket: + properties: + host: + type: string + port: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + required: + - port + type: object + terminationGracePeriodSeconds: + format: int64 + type: integer + timeoutSeconds: + format: int32 + type: integer + type: object + type: object + required: + - activeCount + type: object + mirroring: + nullable: true + properties: + enabled: + type: boolean + peers: + nullable: true + properties: + secretNames: + items: + type: string + type: array + type: object + snapshotRetention: + items: + properties: + duration: + type: string + path: + type: string + type: object + type: array + snapshotSchedules: + items: + properties: + interval: + type: string + path: + type: string + startTime: + type: string + type: object + type: array + type: object + preserveFilesystemOnDelete: + type: boolean + preservePoolsOnDelete: + type: boolean + statusCheck: + properties: + mirror: + nullable: true + properties: + disabled: + type: boolean + interval: + type: string + timeout: + type: string + type: object + type: object + x-kubernetes-preserve-unknown-fields: 
true + required: + - dataPools + - metadataPool + - metadataServer + type: object + status: + properties: + conditions: + items: + properties: + lastHeartbeatTime: + format: date-time + type: string + lastTransitionTime: + format: date-time + type: string + message: + type: string + reason: + type: string + status: + type: string + type: + type: string + type: object + type: array + info: + additionalProperties: + type: string + nullable: true + type: object + mirroringStatus: + properties: + daemonsStatus: + items: + properties: + daemon_id: + type: integer + filesystems: + items: + properties: + directory_count: + type: integer + filesystem_id: + type: integer + name: + type: string + peers: + items: + properties: + remote: + properties: + client_name: + type: string + cluster_name: + type: string + fs_name: + type: string + type: object + stats: + properties: + failure_count: + type: integer + recovery_count: + type: integer + type: object + uuid: + type: string + type: object + type: array + type: object + type: array + type: object + nullable: true + type: array + details: + type: string + lastChanged: + type: string + lastChecked: + type: string + type: object + observedGeneration: + format: int64 + type: integer + phase: + type: string + snapshotScheduleStatus: + properties: + details: + type: string + lastChanged: + type: string + lastChecked: + type: string + snapshotSchedules: + items: + properties: + fs: + type: string + path: + type: string + rel_path: + type: string + retention: + properties: + active: + type: boolean + created: + type: string + created_count: + type: integer + first: + type: string + last: + type: string + last_pruned: + type: string + pruned_count: + type: integer + start: + type: string + type: object + schedule: + type: string + subvol: + type: string + type: object + nullable: true + type: array + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - metadata + - spec + type: object + served: 
true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephfilesystemsubvolumegroups.yaml b/build/csv/ceph/ceph.rook.io_cephfilesystemsubvolumegroups.yaml new file mode 100644 index 000000000000..5b4260610f62 --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephfilesystemsubvolumegroups.yaml @@ -0,0 +1,97 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephfilesystemsubvolumegroups.ceph.rook.io +spec: + group: ceph.rook.io + names: + kind: CephFilesystemSubVolumeGroup + listKind: CephFilesystemSubVolumeGroupList + plural: cephfilesystemsubvolumegroups + singular: cephfilesystemsubvolumegroup + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .status.phase + name: Phase + type: string + name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + filesystemName: + type: string + x-kubernetes-validations: + - message: filesystemName is immutable + rule: self == oldSelf + name: + type: string + x-kubernetes-validations: + - message: name is immutable + rule: self == oldSelf + pinning: + properties: + distributed: + maximum: 1 + minimum: 0 + nullable: true + type: integer + export: + maximum: 256 + minimum: -1 + nullable: true + type: integer + random: + maximum: 1 + minimum: 0 + nullable: true + type: number + type: object + x-kubernetes-validations: + - message: only one pinning type should be set + rule: (has(self.export) && !has(self.distributed) && !has(self.random)) + || (!has(self.export) && has(self.distributed) && !has(self.random)) + || (!has(self.export) && !has(self.distributed) && has(self.random)) + || (!has(self.export) && !has(self.distributed) && !has(self.random)) + required: + - 
filesystemName + type: object + status: + properties: + info: + additionalProperties: + type: string + nullable: true + type: object + observedGeneration: + format: int64 + type: integer + phase: + type: string + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - metadata + - spec + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephnfses.yaml b/build/csv/ceph/ceph.rook.io_cephnfses.yaml new file mode 100644 index 000000000000..ae57b3860f11 --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephnfses.yaml @@ -0,0 +1,1701 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephnfses.ceph.rook.io +spec: + group: ceph.rook.io + names: + kind: CephNFS + listKind: CephNFSList + plural: cephnfses + shortNames: + - nfs + singular: cephnfs + scope: Namespaced + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + rados: + nullable: true + properties: + namespace: + type: string + pool: + type: string + type: object + security: + nullable: true + properties: + kerberos: + nullable: true + properties: + configFiles: + properties: + volumeSource: + properties: + configMap: + properties: + defaultMode: + format: int32 + type: integer + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + name: + type: string + optional: + type: boolean + type: object + x-kubernetes-map-type: atomic + emptyDir: + properties: + medium: + type: string + sizeLimit: + anyOf: + - type: integer + - type: string + pattern: 
^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + hostPath: + properties: + path: + type: string + type: + type: string + required: + - path + type: object + persistentVolumeClaim: + properties: + claimName: + type: string + readOnly: + type: boolean + required: + - claimName + type: object + projected: + properties: + defaultMode: + format: int32 + type: integer + sources: + items: + properties: + clusterTrustBundle: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + name: + type: string + optional: + type: boolean + path: + type: string + signerName: + type: string + required: + - path + type: object + configMap: + properties: + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + name: + type: string + optional: + type: boolean + type: object + x-kubernetes-map-type: atomic + downwardAPI: + properties: + items: + items: + properties: + fieldRef: + properties: + apiVersion: + type: string + fieldPath: + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + mode: + format: int32 + type: integer + path: + type: string + resourceFieldRef: + properties: + containerName: + type: string + divisor: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic 
+ required: + - path + type: object + type: array + type: object + secret: + properties: + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + name: + type: string + optional: + type: boolean + type: object + x-kubernetes-map-type: atomic + serviceAccountToken: + properties: + audience: + type: string + expirationSeconds: + format: int64 + type: integer + path: + type: string + required: + - path + type: object + type: object + type: array + type: object + secret: + properties: + defaultMode: + format: int32 + type: integer + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + optional: + type: boolean + secretName: + type: string + type: object + type: object + type: object + domainName: + type: string + keytabFile: + properties: + volumeSource: + properties: + configMap: + properties: + defaultMode: + format: int32 + type: integer + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + name: + type: string + optional: + type: boolean + type: object + x-kubernetes-map-type: atomic + emptyDir: + properties: + medium: + type: string + sizeLimit: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + hostPath: + properties: + path: + type: string + type: + type: string + required: + - path + type: object + persistentVolumeClaim: + properties: + claimName: + type: string + readOnly: + type: boolean + required: + - claimName + type: object + projected: + properties: + defaultMode: + format: int32 + type: integer + sources: + items: + properties: 
+ clusterTrustBundle: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + name: + type: string + optional: + type: boolean + path: + type: string + signerName: + type: string + required: + - path + type: object + configMap: + properties: + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + name: + type: string + optional: + type: boolean + type: object + x-kubernetes-map-type: atomic + downwardAPI: + properties: + items: + items: + properties: + fieldRef: + properties: + apiVersion: + type: string + fieldPath: + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + mode: + format: int32 + type: integer + path: + type: string + resourceFieldRef: + properties: + containerName: + type: string + divisor: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + required: + - path + type: object + type: array + type: object + secret: + properties: + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + name: + type: string + optional: + type: boolean + type: object + x-kubernetes-map-type: atomic + serviceAccountToken: + properties: + audience: + type: string + expirationSeconds: + format: int64 + type: integer + path: + type: string + required: + 
- path + type: object + type: object + type: array + type: object + secret: + properties: + defaultMode: + format: int32 + type: integer + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + optional: + type: boolean + secretName: + type: string + type: object + type: object + type: object + principalName: + default: nfs + type: string + type: object + sssd: + nullable: true + properties: + sidecar: + properties: + additionalFiles: + items: + properties: + subPath: + minLength: 1 + pattern: ^[^:]+$ + type: string + volumeSource: + properties: + configMap: + properties: + defaultMode: + format: int32 + type: integer + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + name: + type: string + optional: + type: boolean + type: object + x-kubernetes-map-type: atomic + emptyDir: + properties: + medium: + type: string + sizeLimit: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + hostPath: + properties: + path: + type: string + type: + type: string + required: + - path + type: object + persistentVolumeClaim: + properties: + claimName: + type: string + readOnly: + type: boolean + required: + - claimName + type: object + projected: + properties: + defaultMode: + format: int32 + type: integer + sources: + items: + properties: + clusterTrustBundle: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: 
object + x-kubernetes-map-type: atomic + name: + type: string + optional: + type: boolean + path: + type: string + signerName: + type: string + required: + - path + type: object + configMap: + properties: + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + name: + type: string + optional: + type: boolean + type: object + x-kubernetes-map-type: atomic + downwardAPI: + properties: + items: + items: + properties: + fieldRef: + properties: + apiVersion: + type: string + fieldPath: + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + mode: + format: int32 + type: integer + path: + type: string + resourceFieldRef: + properties: + containerName: + type: string + divisor: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + required: + - path + type: object + type: array + type: object + secret: + properties: + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + name: + type: string + optional: + type: boolean + type: object + x-kubernetes-map-type: atomic + serviceAccountToken: + properties: + audience: + type: string + expirationSeconds: + format: int64 + type: integer + path: + type: string + required: + - path + type: object + type: object + type: array + type: object + secret: + properties: + defaultMode: + format: int32 + type: integer + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + optional: + type: boolean + 
secretName: + type: string + type: object + type: object + required: + - subPath + - volumeSource + type: object + type: array + debugLevel: + maximum: 10 + minimum: 0 + type: integer + image: + minLength: 1 + type: string + resources: + properties: + claims: + items: + properties: + name: + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + sssdConfigFile: + properties: + volumeSource: + properties: + configMap: + properties: + defaultMode: + format: int32 + type: integer + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + name: + type: string + optional: + type: boolean + type: object + x-kubernetes-map-type: atomic + emptyDir: + properties: + medium: + type: string + sizeLimit: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + hostPath: + properties: + path: + type: string + type: + type: string + required: + - path + type: object + persistentVolumeClaim: + properties: + claimName: + type: string + readOnly: + type: boolean + required: + - claimName + type: object + projected: + properties: + defaultMode: + format: int32 + type: integer + sources: + items: + properties: + 
clusterTrustBundle: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + name: + type: string + optional: + type: boolean + path: + type: string + signerName: + type: string + required: + - path + type: object + configMap: + properties: + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + name: + type: string + optional: + type: boolean + type: object + x-kubernetes-map-type: atomic + downwardAPI: + properties: + items: + items: + properties: + fieldRef: + properties: + apiVersion: + type: string + fieldPath: + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + mode: + format: int32 + type: integer + path: + type: string + resourceFieldRef: + properties: + containerName: + type: string + divisor: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + required: + - path + type: object + type: array + type: object + secret: + properties: + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + name: + type: string + optional: + type: boolean + type: object + x-kubernetes-map-type: atomic + serviceAccountToken: + properties: + audience: + type: string + expirationSeconds: + format: int64 + type: integer + path: + type: string + required: + - 
path + type: object + type: object + type: array + type: object + secret: + properties: + defaultMode: + format: int32 + type: integer + items: + items: + properties: + key: + type: string + mode: + format: int32 + type: integer + path: + type: string + required: + - key + - path + type: object + type: array + optional: + type: boolean + secretName: + type: string + type: object + type: object + type: object + required: + - image + type: object + type: object + type: object + server: + properties: + active: + type: integer + annotations: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + hostNetwork: + nullable: true + type: boolean + labels: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + livenessProbe: + properties: + disabled: + type: boolean + probe: + properties: + exec: + properties: + command: + items: + type: string + type: array + type: object + failureThreshold: + format: int32 + type: integer + grpc: + properties: + port: + format: int32 + type: integer + service: + type: string + required: + - port + type: object + httpGet: + properties: + host: + type: string + httpHeaders: + items: + properties: + name: + type: string + value: + type: string + required: + - name + - value + type: object + type: array + path: + type: string + port: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + type: string + required: + - port + type: object + initialDelaySeconds: + format: int32 + type: integer + periodSeconds: + format: int32 + type: integer + successThreshold: + format: int32 + type: integer + tcpSocket: + properties: + host: + type: string + port: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + required: + - port + type: object + terminationGracePeriodSeconds: + format: int64 + type: integer + timeoutSeconds: + format: int32 + type: integer + type: 
object + type: object + logLevel: + type: string + placement: + nullable: true + properties: + nodeAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + preference: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + weight: + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + properties: + nodeSelectorTerms: + items: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + type: array + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + 
x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + podAntiAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + 
matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + 
type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + tolerations: + items: + properties: + effect: + type: string + key: + type: string + operator: + type: string + tolerationSeconds: + format: int64 + type: integer + value: + type: string + type: object + type: array + topologySpreadConstraints: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + maxSkew: + format: int32 + type: integer + minDomains: + format: int32 + type: integer + nodeAffinityPolicy: + type: string + nodeTaintsPolicy: + type: string + topologyKey: + type: string + whenUnsatisfiable: + type: string + required: + - maxSkew + - topologyKey + - whenUnsatisfiable + type: object + type: array + type: object + x-kubernetes-preserve-unknown-fields: true + priorityClassName: + type: string + resources: + nullable: true + properties: + claims: + items: + properties: + name: + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + 
x-kubernetes-int-or-string: true + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - active + type: object + required: + - server + type: object + status: + properties: + conditions: + items: + properties: + lastHeartbeatTime: + format: date-time + type: string + lastTransitionTime: + format: date-time + type: string + message: + type: string + reason: + type: string + status: + type: string + type: + type: string + type: object + type: array + observedGeneration: + format: int64 + type: integer + phase: + type: string + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - metadata + - spec + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephobjectrealms.yaml b/build/csv/ceph/ceph.rook.io_cephobjectrealms.yaml new file mode 100644 index 000000000000..46c9d8e42311 --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephobjectrealms.yaml @@ -0,0 +1,77 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephobjectrealms.ceph.rook.io +spec: + group: ceph.rook.io + names: + kind: CephObjectRealm + listKind: CephObjectRealmList + plural: cephobjectrealms + singular: cephobjectrealm + scope: Namespaced + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + nullable: true + properties: + pull: + properties: + endpoint: + pattern: ^https*:// + type: string + type: object + type: object + status: + properties: + conditions: + items: + properties: + lastHeartbeatTime: + format: date-time + type: string + lastTransitionTime: + format: date-time + type: string + message: + type: string + reason: + type: string + status: + type: string + 
type: + type: string + type: object + type: array + observedGeneration: + format: int64 + type: integer + phase: + type: string + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - metadata + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephobjectstores.yaml b/build/csv/ceph/ceph.rook.io_cephobjectstores.yaml new file mode 100644 index 000000000000..b9d864abef78 --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephobjectstores.yaml @@ -0,0 +1,1173 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephobjectstores.ceph.rook.io +spec: + group: ceph.rook.io + names: + kind: CephObjectStore + listKind: CephObjectStoreList + plural: cephobjectstores + singular: cephobjectstore + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .status.phase + name: Phase + type: string + name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + allowUsersInNamespaces: + items: + type: string + type: array + dataPool: + nullable: true + properties: + application: + type: string + compressionMode: + enum: + - none + - passive + - aggressive + - force + - "" + nullable: true + type: string + crushRoot: + nullable: true + type: string + deviceClass: + nullable: true + type: string + enableRBDStats: + type: boolean + erasureCoded: + properties: + algorithm: + type: string + codingChunks: + minimum: 0 + type: integer + dataChunks: + minimum: 0 + type: integer + required: + - codingChunks + - dataChunks + type: object + failureDomain: + type: string + mirroring: + properties: + enabled: + type: boolean + mode: + type: string + peers: + nullable: true + properties: + 
secretNames: + items: + type: string + type: array + type: object + snapshotSchedules: + items: + properties: + interval: + type: string + path: + type: string + startTime: + type: string + type: object + type: array + type: object + parameters: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + quotas: + nullable: true + properties: + maxBytes: + format: int64 + type: integer + maxObjects: + format: int64 + type: integer + maxSize: + pattern: ^[0-9]+[\.]?[0-9]*([KMGTPE]i|[kMGTPE])?$ + type: string + type: object + replicated: + properties: + hybridStorage: + nullable: true + properties: + primaryDeviceClass: + minLength: 1 + type: string + secondaryDeviceClass: + minLength: 1 + type: string + required: + - primaryDeviceClass + - secondaryDeviceClass + type: object + replicasPerFailureDomain: + minimum: 1 + type: integer + requireSafeReplicaSize: + type: boolean + size: + minimum: 0 + type: integer + subFailureDomain: + type: string + targetSizeRatio: + type: number + required: + - size + type: object + statusCheck: + properties: + mirror: + nullable: true + properties: + disabled: + type: boolean + interval: + type: string + timeout: + type: string + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + type: object + gateway: + nullable: true + properties: + annotations: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + caBundleRef: + nullable: true + type: string + dashboardEnabled: + nullable: true + type: boolean + x-kubernetes-preserve-unknown-fields: true + disableMultisiteSyncTraffic: + type: boolean + externalRgwEndpoints: + items: + properties: + hostname: + type: string + ip: + type: string + type: object + x-kubernetes-map-type: atomic + nullable: true + type: array + hostNetwork: + nullable: true + type: boolean + x-kubernetes-preserve-unknown-fields: true + instances: + format: int32 + nullable: 
true + type: integer + labels: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + placement: + nullable: true + properties: + nodeAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + preference: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + weight: + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + properties: + nodeSelectorTerms: + items: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + type: array + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + 
x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + podAntiAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + 
items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: 
object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + tolerations: + items: + properties: + effect: + type: string + key: + type: string + operator: + type: string + tolerationSeconds: + format: int64 + type: integer + value: + type: string + type: object + type: array + topologySpreadConstraints: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + maxSkew: + format: int32 + type: integer + minDomains: + format: int32 + type: integer + nodeAffinityPolicy: + type: string + nodeTaintsPolicy: + type: string + topologyKey: + type: string + whenUnsatisfiable: + type: string + required: + - maxSkew + - topologyKey + - whenUnsatisfiable + type: object + type: array + type: object + x-kubernetes-preserve-unknown-fields: true + port: + format: int32 + type: integer + priorityClassName: + type: string + resources: + nullable: true + properties: + claims: + items: + properties: + name: + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: 
string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + securePort: + format: int32 + maximum: 65535 + minimum: 0 + nullable: true + type: integer + service: + nullable: true + properties: + annotations: + additionalProperties: + type: string + type: object + type: object + sslCertificateRef: + nullable: true + type: string + type: object + healthCheck: + nullable: true + properties: + readinessProbe: + properties: + disabled: + type: boolean + probe: + properties: + exec: + properties: + command: + items: + type: string + type: array + type: object + failureThreshold: + format: int32 + type: integer + grpc: + properties: + port: + format: int32 + type: integer + service: + type: string + required: + - port + type: object + httpGet: + properties: + host: + type: string + httpHeaders: + items: + properties: + name: + type: string + value: + type: string + required: + - name + - value + type: object + type: array + path: + type: string + port: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + type: string + required: + - port + type: object + initialDelaySeconds: + format: int32 + type: integer + periodSeconds: + format: int32 + type: integer + successThreshold: + format: int32 + type: integer + tcpSocket: + properties: + host: + type: string + port: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + required: + - port + type: object + terminationGracePeriodSeconds: + format: int64 + type: integer + timeoutSeconds: + format: int32 + type: integer + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + startupProbe: + properties: + disabled: + type: boolean + probe: + properties: + exec: + properties: + command: + items: + type: string + type: array + type: object + failureThreshold: + format: 
int32 + type: integer + grpc: + properties: + port: + format: int32 + type: integer + service: + type: string + required: + - port + type: object + httpGet: + properties: + host: + type: string + httpHeaders: + items: + properties: + name: + type: string + value: + type: string + required: + - name + - value + type: object + type: array + path: + type: string + port: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + type: string + required: + - port + type: object + initialDelaySeconds: + format: int32 + type: integer + periodSeconds: + format: int32 + type: integer + successThreshold: + format: int32 + type: integer + tcpSocket: + properties: + host: + type: string + port: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + required: + - port + type: object + terminationGracePeriodSeconds: + format: int64 + type: integer + timeoutSeconds: + format: int32 + type: integer + type: object + type: object + type: object + hosting: + properties: + dnsNames: + items: + type: string + type: array + type: object + metadataPool: + nullable: true + properties: + application: + type: string + compressionMode: + enum: + - none + - passive + - aggressive + - force + - "" + nullable: true + type: string + crushRoot: + nullable: true + type: string + deviceClass: + nullable: true + type: string + enableRBDStats: + type: boolean + erasureCoded: + properties: + algorithm: + type: string + codingChunks: + minimum: 0 + type: integer + dataChunks: + minimum: 0 + type: integer + required: + - codingChunks + - dataChunks + type: object + failureDomain: + type: string + mirroring: + properties: + enabled: + type: boolean + mode: + type: string + peers: + nullable: true + properties: + secretNames: + items: + type: string + type: array + type: object + snapshotSchedules: + items: + properties: + interval: + type: string + path: + type: string + startTime: + type: string + type: object + type: array + type: object + 
parameters: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + quotas: + nullable: true + properties: + maxBytes: + format: int64 + type: integer + maxObjects: + format: int64 + type: integer + maxSize: + pattern: ^[0-9]+[\.]?[0-9]*([KMGTPE]i|[kMGTPE])?$ + type: string + type: object + replicated: + properties: + hybridStorage: + nullable: true + properties: + primaryDeviceClass: + minLength: 1 + type: string + secondaryDeviceClass: + minLength: 1 + type: string + required: + - primaryDeviceClass + - secondaryDeviceClass + type: object + replicasPerFailureDomain: + minimum: 1 + type: integer + requireSafeReplicaSize: + type: boolean + size: + minimum: 0 + type: integer + subFailureDomain: + type: string + targetSizeRatio: + type: number + required: + - size + type: object + statusCheck: + properties: + mirror: + nullable: true + properties: + disabled: + type: boolean + interval: + type: string + timeout: + type: string + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + type: object + preservePoolsOnDelete: + type: boolean + security: + nullable: true + properties: + keyRotation: + nullable: true + properties: + enabled: + default: false + type: boolean + schedule: + type: string + type: object + kms: + nullable: true + properties: + connectionDetails: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + tokenSecretName: + type: string + type: object + s3: + nullable: true + properties: + connectionDetails: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + tokenSecretName: + type: string + type: object + type: object + sharedPools: + nullable: true + properties: + dataPoolName: + type: string + x-kubernetes-validations: + - message: object store shared data pool is immutable + rule: self == oldSelf + metadataPoolName: + type: string + 
x-kubernetes-validations: + - message: object store shared metadata pool is immutable + rule: self == oldSelf + preserveRadosNamespaceDataOnDelete: + type: boolean + required: + - dataPoolName + - metadataPoolName + type: object + zone: + nullable: true + properties: + name: + type: string + required: + - name + type: object + type: object + status: + properties: + conditions: + items: + properties: + lastHeartbeatTime: + format: date-time + type: string + lastTransitionTime: + format: date-time + type: string + message: + type: string + reason: + type: string + status: + type: string + type: + type: string + type: object + type: array + endpoints: + properties: + insecure: + items: + type: string + nullable: true + type: array + secure: + items: + type: string + nullable: true + type: array + type: object + info: + additionalProperties: + type: string + nullable: true + type: object + message: + type: string + observedGeneration: + format: int64 + type: integer + phase: + type: string + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - metadata + - spec + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephobjectstoreusers.yaml b/build/csv/ceph/ceph.rook.io_cephobjectstoreusers.yaml new file mode 100644 index 000000000000..2a9e5c1adef4 --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephobjectstoreusers.yaml @@ -0,0 +1,204 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephobjectstoreusers.ceph.rook.io +spec: + group: ceph.rook.io + names: + kind: CephObjectStoreUser + listKind: CephObjectStoreUserList + plural: cephobjectstoreusers + shortNames: + - rcou + - objectuser + singular: cephobjectstoreuser + scope: Namespaced + versions: + - 
additionalPrinterColumns: + - jsonPath: .status.phase + name: Phase + type: string + name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + capabilities: + nullable: true + properties: + amz-cache: + enum: + - '*' + - read + - write + - read, write + type: string + bilog: + enum: + - '*' + - read + - write + - read, write + type: string + bucket: + enum: + - '*' + - read + - write + - read, write + type: string + buckets: + enum: + - '*' + - read + - write + - read, write + type: string + datalog: + enum: + - '*' + - read + - write + - read, write + type: string + info: + enum: + - '*' + - read + - write + - read, write + type: string + mdlog: + enum: + - '*' + - read + - write + - read, write + type: string + metadata: + enum: + - '*' + - read + - write + - read, write + type: string + oidc-provider: + enum: + - '*' + - read + - write + - read, write + type: string + ratelimit: + enum: + - '*' + - read + - write + - read, write + type: string + roles: + enum: + - '*' + - read + - write + - read, write + type: string + usage: + enum: + - '*' + - read + - write + - read, write + type: string + user: + enum: + - '*' + - read + - write + - read, write + type: string + user-policy: + enum: + - '*' + - read + - write + - read, write + type: string + users: + enum: + - '*' + - read + - write + - read, write + type: string + zone: + enum: + - '*' + - read + - write + - read, write + type: string + type: object + clusterNamespace: + type: string + displayName: + type: string + quotas: + nullable: true + properties: + maxBuckets: + nullable: true + type: integer + maxObjects: + format: int64 + nullable: true + type: integer + maxSize: + anyOf: + - type: integer + - type: string + nullable: true + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + store: 
+ type: string + type: object + status: + properties: + info: + additionalProperties: + type: string + nullable: true + type: object + observedGeneration: + format: int64 + type: integer + phase: + type: string + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - metadata + - spec + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephobjectzonegroups.yaml b/build/csv/ceph/ceph.rook.io_cephobjectzonegroups.yaml new file mode 100644 index 000000000000..ed69601a1481 --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephobjectzonegroups.yaml @@ -0,0 +1,79 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephobjectzonegroups.ceph.rook.io +spec: + group: ceph.rook.io + names: + kind: CephObjectZoneGroup + listKind: CephObjectZoneGroupList + plural: cephobjectzonegroups + singular: cephobjectzonegroup + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .status.phase + name: Phase + type: string + name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + realm: + type: string + required: + - realm + type: object + status: + properties: + conditions: + items: + properties: + lastHeartbeatTime: + format: date-time + type: string + lastTransitionTime: + format: date-time + type: string + message: + type: string + reason: + type: string + status: + type: string + type: + type: string + type: object + type: array + observedGeneration: + format: int64 + type: integer + phase: + type: string + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - metadata + - spec + type: object + served: true + storage: true + subresources: + status: 
{} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephobjectzones.yaml b/build/csv/ceph/ceph.rook.io_cephobjectzones.yaml new file mode 100644 index 000000000000..20c2845a1d97 --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephobjectzones.yaml @@ -0,0 +1,364 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephobjectzones.ceph.rook.io +spec: + group: ceph.rook.io + names: + kind: CephObjectZone + listKind: CephObjectZoneList + plural: cephobjectzones + singular: cephobjectzone + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .status.phase + name: Phase + type: string + name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + customEndpoints: + items: + type: string + nullable: true + type: array + dataPool: + nullable: true + properties: + application: + type: string + compressionMode: + enum: + - none + - passive + - aggressive + - force + - "" + nullable: true + type: string + crushRoot: + nullable: true + type: string + deviceClass: + nullable: true + type: string + enableRBDStats: + type: boolean + erasureCoded: + properties: + algorithm: + type: string + codingChunks: + minimum: 0 + type: integer + dataChunks: + minimum: 0 + type: integer + required: + - codingChunks + - dataChunks + type: object + failureDomain: + type: string + mirroring: + properties: + enabled: + type: boolean + mode: + type: string + peers: + nullable: true + properties: + secretNames: + items: + type: string + type: array + type: object + snapshotSchedules: + items: + properties: + interval: + type: string + path: + type: string + startTime: + type: string + type: object + type: array + type: object + parameters: + additionalProperties: + type: 
string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + quotas: + nullable: true + properties: + maxBytes: + format: int64 + type: integer + maxObjects: + format: int64 + type: integer + maxSize: + pattern: ^[0-9]+[\.]?[0-9]*([KMGTPE]i|[kMGTPE])?$ + type: string + type: object + replicated: + properties: + hybridStorage: + nullable: true + properties: + primaryDeviceClass: + minLength: 1 + type: string + secondaryDeviceClass: + minLength: 1 + type: string + required: + - primaryDeviceClass + - secondaryDeviceClass + type: object + replicasPerFailureDomain: + minimum: 1 + type: integer + requireSafeReplicaSize: + type: boolean + size: + minimum: 0 + type: integer + subFailureDomain: + type: string + targetSizeRatio: + type: number + required: + - size + type: object + statusCheck: + properties: + mirror: + nullable: true + properties: + disabled: + type: boolean + interval: + type: string + timeout: + type: string + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + type: object + metadataPool: + nullable: true + properties: + application: + type: string + compressionMode: + enum: + - none + - passive + - aggressive + - force + - "" + nullable: true + type: string + crushRoot: + nullable: true + type: string + deviceClass: + nullable: true + type: string + enableRBDStats: + type: boolean + erasureCoded: + properties: + algorithm: + type: string + codingChunks: + minimum: 0 + type: integer + dataChunks: + minimum: 0 + type: integer + required: + - codingChunks + - dataChunks + type: object + failureDomain: + type: string + mirroring: + properties: + enabled: + type: boolean + mode: + type: string + peers: + nullable: true + properties: + secretNames: + items: + type: string + type: array + type: object + snapshotSchedules: + items: + properties: + interval: + type: string + path: + type: string + startTime: + type: string + type: object + type: array + type: object + parameters: + additionalProperties: + type: 
string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + quotas: + nullable: true + properties: + maxBytes: + format: int64 + type: integer + maxObjects: + format: int64 + type: integer + maxSize: + pattern: ^[0-9]+[\.]?[0-9]*([KMGTPE]i|[kMGTPE])?$ + type: string + type: object + replicated: + properties: + hybridStorage: + nullable: true + properties: + primaryDeviceClass: + minLength: 1 + type: string + secondaryDeviceClass: + minLength: 1 + type: string + required: + - primaryDeviceClass + - secondaryDeviceClass + type: object + replicasPerFailureDomain: + minimum: 1 + type: integer + requireSafeReplicaSize: + type: boolean + size: + minimum: 0 + type: integer + subFailureDomain: + type: string + targetSizeRatio: + type: number + required: + - size + type: object + statusCheck: + properties: + mirror: + nullable: true + properties: + disabled: + type: boolean + interval: + type: string + timeout: + type: string + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + type: object + preservePoolsOnDelete: + default: true + type: boolean + sharedPools: + nullable: true + properties: + dataPoolName: + type: string + x-kubernetes-validations: + - message: object store shared data pool is immutable + rule: self == oldSelf + metadataPoolName: + type: string + x-kubernetes-validations: + - message: object store shared metadata pool is immutable + rule: self == oldSelf + preserveRadosNamespaceDataOnDelete: + type: boolean + required: + - dataPoolName + - metadataPoolName + type: object + zoneGroup: + type: string + required: + - dataPool + - metadataPool + - zoneGroup + type: object + status: + properties: + conditions: + items: + properties: + lastHeartbeatTime: + format: date-time + type: string + lastTransitionTime: + format: date-time + type: string + message: + type: string + reason: + type: string + status: + type: string + type: + type: string + type: object + type: array + observedGeneration: + format: int64 + 
type: integer + phase: + type: string + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - metadata + - spec + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/ceph.rook.io_cephrbdmirrors.yaml b/build/csv/ceph/ceph.rook.io_cephrbdmirrors.yaml new file mode 100644 index 000000000000..39c840dc034a --- /dev/null +++ b/build/csv/ceph/ceph.rook.io_cephrbdmirrors.yaml @@ -0,0 +1,610 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + creationTimestamp: null + name: cephrbdmirrors.ceph.rook.io +spec: + group: ceph.rook.io + names: + kind: CephRBDMirror + listKind: CephRBDMirrorList + plural: cephrbdmirrors + singular: cephrbdmirror + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .status.phase + name: Phase + type: string + name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + type: string + kind: + type: string + metadata: + type: object + spec: + properties: + annotations: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + count: + minimum: 1 + type: integer + labels: + additionalProperties: + type: string + nullable: true + type: object + x-kubernetes-preserve-unknown-fields: true + peers: + nullable: true + properties: + secretNames: + items: + type: string + type: array + type: object + placement: + nullable: true + properties: + nodeAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + preference: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + 
type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + weight: + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + properties: + nodeSelectorTerms: + items: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchFields: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + type: object + x-kubernetes-map-type: atomic + type: array + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + 
namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + podAntiAffinity: + properties: + preferredDuringSchedulingIgnoredDuringExecution: + items: + properties: + podAffinityTerm: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + 
x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + weight: + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + requiredDuringSchedulingIgnoredDuringExecution: + items: + properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + items: + type: string + type: array + topologyKey: + type: string + required: + - topologyKey + type: object + type: array + type: object + tolerations: + items: + properties: + effect: + type: string + key: + type: string + operator: + type: string + tolerationSeconds: + format: int64 + type: integer + value: + type: string + type: object + type: array + topologySpreadConstraints: + items: + 
properties: + labelSelector: + properties: + matchExpressions: + items: + properties: + key: + type: string + operator: + type: string + values: + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + items: + type: string + type: array + x-kubernetes-list-type: atomic + maxSkew: + format: int32 + type: integer + minDomains: + format: int32 + type: integer + nodeAffinityPolicy: + type: string + nodeTaintsPolicy: + type: string + topologyKey: + type: string + whenUnsatisfiable: + type: string + required: + - maxSkew + - topologyKey + - whenUnsatisfiable + type: object + type: array + type: object + x-kubernetes-preserve-unknown-fields: true + priorityClassName: + type: string + resources: + nullable: true + properties: + claims: + items: + properties: + name: + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - count + type: object + status: + properties: + conditions: + items: + properties: + lastHeartbeatTime: + format: date-time + type: string + lastTransitionTime: + format: date-time + type: string + message: + type: string + reason: + type: string + status: + type: string + type: + type: string + type: object + type: array + 
observedGeneration: + format: int64 + type: integer + phase: + type: string + type: object + x-kubernetes-preserve-unknown-fields: true + required: + - metadata + - spec + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null diff --git a/build/csv/ceph/objectstorage-provisioner-role-binding_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml b/build/csv/ceph/objectstorage-provisioner-role-binding_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml new file mode 100644 index 000000000000..6d59e2c12a96 --- /dev/null +++ b/build/csv/ceph/objectstorage-provisioner-role-binding_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml @@ -0,0 +1,17 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + creationTimestamp: null + labels: + app.kubernetes.io/component: driver-ceph + app.kubernetes.io/name: cosi-driver-ceph + app.kubernetes.io/part-of: container-object-storage-interface + name: objectstorage-provisioner-role-binding +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: objectstorage-provisioner-role +subjects: +- kind: ServiceAccount + name: objectstorage-provisioner + namespace: rook-ceph diff --git a/build/csv/ceph/objectstorage-provisioner-role_rbac.authorization.k8s.io_v1_clusterrole.yaml b/build/csv/ceph/objectstorage-provisioner-role_rbac.authorization.k8s.io_v1_clusterrole.yaml new file mode 100644 index 000000000000..f0b6ec1e5845 --- /dev/null +++ b/build/csv/ceph/objectstorage-provisioner-role_rbac.authorization.k8s.io_v1_clusterrole.yaml @@ -0,0 +1,49 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + creationTimestamp: null + labels: + app.kubernetes.io/component: driver-ceph + app.kubernetes.io/name: cosi-driver-ceph + app.kubernetes.io/part-of: container-object-storage-interface + name: objectstorage-provisioner-role +rules: +- apiGroups: + - 
objectstorage.k8s.io + resources: + - buckets + - bucketaccesses + - bucketclaims + - bucketaccessclasses + - buckets/status + - bucketaccesses/status + - bucketclaims/status + - bucketaccessclasses/status + verbs: + - get + - list + - watch + - update + - create + - delete +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - get + - watch + - list + - delete + - update + - create +- apiGroups: + - "" + resources: + - secrets + - events + verbs: + - get + - delete + - update + - create diff --git a/build/csv/ceph/objectstorage-provisioner_v1_serviceaccount.yaml b/build/csv/ceph/objectstorage-provisioner_v1_serviceaccount.yaml new file mode 100644 index 000000000000..ddd7800c9b90 --- /dev/null +++ b/build/csv/ceph/objectstorage-provisioner_v1_serviceaccount.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + creationTimestamp: null + labels: + app.kubernetes.io/component: driver-ceph + app.kubernetes.io/name: cosi-driver-ceph + app.kubernetes.io/part-of: container-object-storage-interface + name: objectstorage-provisioner diff --git a/build/csv/rook-ceph.clusterserviceversion.yaml b/build/csv/rook-ceph.clusterserviceversion.yaml new file mode 100644 index 000000000000..b39e47e6a496 --- /dev/null +++ b/build/csv/rook-ceph.clusterserviceversion.yaml @@ -0,0 +1,3378 @@ +apiVersion: operators.coreos.com/v1alpha1 +kind: ClusterServiceVersion +metadata: + annotations: + alm-examples: |- + [ + { + "apiVersion": "ceph.rook.io/v1", + "kind": "CephCluster", + "metadata": { + "name": "my-rook-ceph", + "namespace": "my-rook-ceph" + }, + "spec": { + "cephVersion": { + "image": "quay.io/ceph/ceph:v17.2.6" + }, + "dataDirHostPath": "/var/lib/rook", + "mon": { + "count": 3 + }, + "dashboard": { + "enabled": true + }, + "network": { + "hostNetwork": false + }, + "rbdMirroring": { + "workers": 0 + }, + "storage": { + "useAllNodes": true, + "useAllDevices": true + } + } + }, + { + "apiVersion": "ceph.rook.io/v1", + "kind": "CephBlockPool", + 
"metadata": { + "name": "replicapool", + "namespace": "my-rook-ceph" + }, + "spec": { + "failureDomain": "host", + "replicated": { + "size": 3 + }, + "annotations": null + } + }, + { + "apiVersion": "ceph.rook.io/v1", + "kind": "CephObjectStore", + "metadata": { + "name": "my-store", + "namespace": "my-rook-ceph" + }, + "spec": { + "metadataPool": { + "failureDomain": "host", + "replicated": { + "size": 3 + } + }, + "dataPool": { + "failureDomain": "host", + "replicated": { + "size": 3 + } + }, + "gateway": { + "type": "s3", + "sslCertificateRef": null, + "port": 8080, + "securePort": null, + "instances": 1, + "placement": null, + "annotations": null, + "resources": null + } + } + }, + { + "apiVersion": "ceph.rook.io/v1", + "kind": "CephObjectStoreUser", + "metadata": { + "name": "my-user", + "namespace": "my-rook-ceph" + }, + "spec": { + "store": "my-store", + "displayName": "my display name" + } + }, + { + "apiVersion": "ceph.rook.io/v1", + "kind": "CephNFS", + "metadata": { + "name": "my-nfs", + "namespace": "rook-ceph" + }, + "spec": { + "rados": { + "pool": "myfs-data0", + "namespace": "nfs-ns" + }, + "server": { + "active": 3, + "placement": null, + "annotations": null, + "resources": null + } + } + }, + { + "apiVersion": "ceph.rook.io/v1", + "kind": "CephClient", + "metadata": { + "name": "cinder", + "namespace": "rook-ceph" + }, + "spec": { + "caps": { + "mon": "profile rbd", + "osd": "profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images" + } + } + }, + { + "apiVersion": "ceph.rook.io/v1", + "kind": "CephFilesystem", + "metadata": { + "name": "myfs", + "namespace": "rook-ceph" + }, + "spec": { + "dataPools": [ + { + "compressionMode": "", + "crushRoot": "", + "deviceClass": "", + "erasureCoded": { + "algorithm": "", + "codingChunks": 0, + "dataChunks": 0 + }, + "failureDomain": "host", + "replicated": { + "requireSafeReplicaSize": false, + "size": 1, + "targetSizeRatio": 0.5 + } + } + ], + "metadataPool": { + "compressionMode": 
"", + "crushRoot": "", + "deviceClass": "", + "erasureCoded": { + "algorithm": "", + "codingChunks": 0, + "dataChunks": 0 + }, + "failureDomain": "", + "replicated": { + "requireSafeReplicaSize": false, + "size": 1, + "targetSizeRatio": 0 + } + }, + "metadataServer": { + "activeCount": 1, + "activeStandby": true, + "placement": {}, + "resources": {} + }, + "preservePoolsOnDelete": false, + "preserveFilesystemOnDelete": false + } + }, + { + "apiVersion": "ceph.rook.io/v1", + "kind": "CephRBDMirror", + "metadata": { + "name": "my-rbd-mirror", + "namespace": "rook-ceph" + }, + "spec": { + "annotations": null, + "count": 1, + "placement": { + "topologyKey": "kubernetes.io/hostname" + }, + "resources": null + } + } + ] + capabilities: Basic Install + operators.operatorframework.io/builder: operator-sdk-v1.25.0 + operators.operatorframework.io/project_layout: unknown + tectonic-visibility: ocs + repository: https://github.com/red-hat-storage/rook + containerImage: '{{.RookOperatorImage}}' + externalClusterScript: |- + IiIiCkNvcHlyaWdodCAyMDIwIFRoZSBSb29rIEF1dGhvcnMuIEFsbCByaWdodHMgcmVzZXJ2ZWQu + CgpMaWNlbnNlZCB1bmRlciB0aGUgQXBhY2hlIExpY2Vuc2UsIFZlcnNpb24gMi4wICh0aGUgIkxp + Y2Vuc2UiKTsKeW91IG1heSBub3QgdXNlIHRoaXMgZmlsZSBleGNlcHQgaW4gY29tcGxpYW5jZSB3 + aXRoIHRoZSBMaWNlbnNlLgpZb3UgbWF5IG9idGFpbiBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQK + CglodHRwOi8vd3d3LmFwYWNoZS5vcmcvbGljZW5zZXMvTElDRU5TRS0yLjAKClVubGVzcyByZXF1 + aXJlZCBieSBhcHBsaWNhYmxlIGxhdyBvciBhZ3JlZWQgdG8gaW4gd3JpdGluZywgc29mdHdhcmUK + ZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElT + IiBCQVNJUywKV0lUSE9VVCBXQVJSQU5USUVTIE9SIENPTkRJVElPTlMgT0YgQU5ZIEtJTkQsIGVp + dGhlciBleHByZXNzIG9yIGltcGxpZWQuClNlZSB0aGUgTGljZW5zZSBmb3IgdGhlIHNwZWNpZmlj + IGxhbmd1YWdlIGdvdmVybmluZyBwZXJtaXNzaW9ucyBhbmQKbGltaXRhdGlvbnMgdW5kZXIgdGhl + IExpY2Vuc2UuCiIiIgoKaW1wb3J0IGVycm5vCmltcG9ydCBzeXMKaW1wb3J0IGpzb24KaW1wb3J0 + IGFyZ3BhcnNlCmltcG9ydCByZQppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgaG1hYwpmcm9tIGhh + 
c2hsaWIgaW1wb3J0IHNoYTEgYXMgc2hhCmZyb20gb3MgaW1wb3J0IGxpbmVzZXAgYXMgTElORVNF + UApmcm9tIG9zIGltcG9ydCBwYXRoCmZyb20gZW1haWwudXRpbHMgaW1wb3J0IGZvcm1hdGRhdGUK + aW1wb3J0IHJlcXVlc3RzCmZyb20gcmVxdWVzdHMuYXV0aCBpbXBvcnQgQXV0aEJhc2UKCnB5M2sg + PSBGYWxzZQppZiBzeXMudmVyc2lvbl9pbmZvLm1ham9yID49IDM6CiAgICBweTNrID0gVHJ1ZQog + ICAgaW1wb3J0IHVybGxpYi5wYXJzZQogICAgZnJvbSBpcGFkZHJlc3MgaW1wb3J0IGlwX2FkZHJl + c3MsIElQdjRBZGRyZXNzCgpNb2R1bGVOb3RGb3VuZEVycm9yID0gSW1wb3J0RXJyb3IKCnRyeToK + ICAgIGltcG9ydCByYWRvcwpleGNlcHQgTW9kdWxlTm90Rm91bmRFcnJvciBhcyBub01vZEVycjoK + ICAgIHByaW50KGYiRXJyb3I6IHtub01vZEVycn1cbkV4aXRpbmcgdGhlIHNjcmlwdC4uLiIpCiAg + ICBzeXMuZXhpdCgxKQoKdHJ5OgogICAgaW1wb3J0IHJiZApleGNlcHQgTW9kdWxlTm90Rm91bmRF + cnJvciBhcyBub01vZEVycjoKICAgIHByaW50KGYiRXJyb3I6IHtub01vZEVycn1cbkV4aXRpbmcg + dGhlIHNjcmlwdC4uLiIpCiAgICBzeXMuZXhpdCgxKQoKdHJ5OgogICAgIyBmb3IgMi43LngKICAg + IGZyb20gU3RyaW5nSU8gaW1wb3J0IFN0cmluZ0lPCmV4Y2VwdCBNb2R1bGVOb3RGb3VuZEVycm9y + OgogICAgIyBmb3IgMy54CiAgICBmcm9tIGlvIGltcG9ydCBTdHJpbmdJTwoKdHJ5OgogICAgIyBm + b3IgMi43LngKICAgIGZyb20gdXJscGFyc2UgaW1wb3J0IHVybHBhcnNlCiAgICBmcm9tIHVybGxp + YiBpbXBvcnQgdXJsZW5jb2RlIGFzIHVybGVuY29kZQpleGNlcHQgTW9kdWxlTm90Rm91bmRFcnJv + cjoKICAgICMgZm9yIDMueAogICAgZnJvbSB1cmxsaWIucGFyc2UgaW1wb3J0IHVybHBhcnNlCiAg + ICBmcm9tIHVybGxpYi5wYXJzZSBpbXBvcnQgdXJsZW5jb2RlIGFzIHVybGVuY29kZQoKdHJ5Ogog + ICAgZnJvbSBiYXNlNjQgaW1wb3J0IGVuY29kZXN0cmluZwpleGNlcHQ6CiAgICBmcm9tIGJhc2U2 + NCBpbXBvcnQgZW5jb2RlYnl0ZXMgYXMgZW5jb2Rlc3RyaW5nCgoKY2xhc3MgRXhlY3V0aW9uRmFp + bHVyZUV4Y2VwdGlvbihFeGNlcHRpb24pOgogICAgcGFzcwoKCiMjIyMjIyMjIyMjIyMjIyMjIyMj + IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwojIyMjIyMjIyMjIyMjIyMjIyMgRHVtbXlSYWRv + cyAjIyMjIyMjIyMjIyMjIyMjIyMKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj + IyMjIyMjIyMjIyMjCiMgdGhpcyBpcyBtYWlubHkgZm9yIHRlc3RpbmcgYW5kIGNvdWxkIGJlIHVz + ZWQgd2hlcmUgJ3JhZG9zJyBpcyBub3QgYXZhaWxhYmxlCgoKY2xhc3MgRHVtbXlSYWRvcyhvYmpl + Y3QpOgogICAgZGVmIF9faW5pdF9fKHNlbGYpOgogICAgICAgIHNlbGYucmV0dXJuX3ZhbCA9IDAK + 
ICAgICAgICBzZWxmLmVycl9tZXNzYWdlID0gIiIKICAgICAgICBzZWxmLnN0YXRlID0gImNvbm5l + Y3RlZCIKICAgICAgICBzZWxmLmNtZF9vdXRwdXRfbWFwID0ge30KICAgICAgICBzZWxmLmNtZF9u + YW1lcyA9IHt9CiAgICAgICAgc2VsZi5faW5pdF9jbWRfb3V0cHV0X21hcCgpCiAgICAgICAgc2Vs + Zi5kdW1teV9ob3N0X2lwX21hcCA9IHt9CgogICAgZGVmIF9pbml0X2NtZF9vdXRwdXRfbWFwKHNl + bGYpOgogICAgICAgIGpzb25fZmlsZV9uYW1lID0gInRlc3QtZGF0YS9jZXBoLXN0YXR1cy1vdXQi + CiAgICAgICAgc2NyaXB0X2RpciA9IHBhdGguYWJzcGF0aChwYXRoLmRpcm5hbWUoX19maWxlX18p + KQogICAgICAgIGNlcGhfc3RhdHVzX3N0ciA9ICIiCiAgICAgICAgd2l0aCBvcGVuKAogICAgICAg + ICAgICBwYXRoLmpvaW4oc2NyaXB0X2RpciwganNvbl9maWxlX25hbWUpLCBtb2RlPSJyIiwgZW5j + b2Rpbmc9IlVURi04IgogICAgICAgICkgYXMganNvbl9maWxlOgogICAgICAgICAgICBjZXBoX3N0 + YXR1c19zdHIgPSBqc29uX2ZpbGUucmVhZCgpCiAgICAgICAgc2VsZi5jbWRfbmFtZXNbImZzIGxz + Il0gPSAiIiJ7ImZvcm1hdCI6ICJqc29uIiwgInByZWZpeCI6ICJmcyBscyJ9IiIiCiAgICAgICAg + c2VsZi5jbWRfbmFtZXNbInF1b3J1bV9zdGF0dXMiXSA9ICgKICAgICAgICAgICAgIiIieyJmb3Jt + YXQiOiAianNvbiIsICJwcmVmaXgiOiAicXVvcnVtX3N0YXR1cyJ9IiIiCiAgICAgICAgKQogICAg + ICAgIHNlbGYuY21kX25hbWVzWyJtZ3Igc2VydmljZXMiXSA9ICgKICAgICAgICAgICAgIiIieyJm + b3JtYXQiOiAianNvbiIsICJwcmVmaXgiOiAibWdyIHNlcnZpY2VzIn0iIiIKICAgICAgICApCiAg + ICAgICAgIyBhbGwgdGhlIGNvbW1hbmRzIGFuZCB0aGVpciBvdXRwdXQKICAgICAgICBzZWxmLmNt + ZF9vdXRwdXRfbWFwW3NlbGYuY21kX25hbWVzWyJmcyBscyJdXSA9ICgKICAgICAgICAgICAgIiIi + W3sibmFtZSI6Im15ZnMiLCJtZXRhZGF0YV9wb29sIjoibXlmcy1tZXRhZGF0YSIsIm1ldGFkYXRh + X3Bvb2xfaWQiOjIsImRhdGFfcG9vbF9pZHMiOlszXSwiZGF0YV9wb29scyI6WyJteWZzLXJlcGxp + Y2F0ZWQiXX1dIiIiCiAgICAgICAgKQogICAgICAgIHNlbGYuY21kX291dHB1dF9tYXBbc2VsZi5j + bWRfbmFtZXNbInF1b3J1bV9zdGF0dXMiXV0gPSAoCiAgICAgICAgICAgICIiInsiZWxlY3Rpb25f + ZXBvY2giOjMsInF1b3J1bSI6WzBdLCJxdW9ydW1fbmFtZXMiOlsiYSJdLCJxdW9ydW1fbGVhZGVy + X25hbWUiOiJhIiwicXVvcnVtX2FnZSI6MTQzODUsImZlYXR1cmVzIjp7InF1b3J1bV9jb24iOiI0 + NTQwMTM4MjkyODM2Njk2MDYzIiwicXVvcnVtX21vbiI6WyJrcmFrZW4iLCJsdW1pbm91cyIsIm1p + bWljIiwib3NkbWFwLXBydW5lIiwibmF1dGlsdXMiLCJvY3RvcHVzIl19LCJtb25tYXAiOnsiZXBv + 
Y2giOjEsImZzaWQiOiJhZjRlMTY3My0wYjcyLTQwMmQtOTkwYS0yMmQyOTE5ZDBmMWMiLCJtb2Rp + ZmllZCI6IjIwMjAtMDUtMDdUMDM6MzY6MzkuOTE4MDM1WiIsImNyZWF0ZWQiOiIyMDIwLTA1LTA3 + VDAzOjM2OjM5LjkxODAzNVoiLCJtaW5fbW9uX3JlbGVhc2UiOjE1LCJtaW5fbW9uX3JlbGVhc2Vf + bmFtZSI6Im9jdG9wdXMiLCJmZWF0dXJlcyI6eyJwZXJzaXN0ZW50IjpbImtyYWtlbiIsImx1bWlu + b3VzIiwibWltaWMiLCJvc2RtYXAtcHJ1bmUiLCJuYXV0aWx1cyIsIm9jdG9wdXMiXSwib3B0aW9u + YWwiOltdfSwibW9ucyI6W3sicmFuayI6MCwibmFtZSI6ImEiLCJwdWJsaWNfYWRkcnMiOnsiYWRk + cnZlYyI6W3sidHlwZSI6InYyIiwiYWRkciI6IjEwLjExMC4yMDUuMTc0OjMzMDAiLCJub25jZSI6 + MH0seyJ0eXBlIjoidjEiLCJhZGRyIjoiMTAuMTEwLjIwNS4xNzQ6Njc4OSIsIm5vbmNlIjowfV19 + LCJhZGRyIjoiMTAuMTEwLjIwNS4xNzQ6Njc4OS8wIiwicHVibGljX2FkZHIiOiIxMC4xMTAuMjA1 + LjE3NDo2Nzg5LzAiLCJwcmlvcml0eSI6MCwid2VpZ2h0IjowfV19fSIiIgogICAgICAgICkKICAg + ICAgICBzZWxmLmNtZF9vdXRwdXRfbWFwW3NlbGYuY21kX25hbWVzWyJtZ3Igc2VydmljZXMiXV0g + PSAoCiAgICAgICAgICAgICIiInsiZGFzaGJvYXJkIjoiaHR0cHM6Ly9jZXBoLWRhc2hib2FyZDo4 + NDQzLyIsInByb21ldGhldXMiOiJodHRwOi8vY2VwaC1kYXNoYm9hcmQtZGI6OTI4My8ifSIiIgog + ICAgICAgICkKICAgICAgICBzZWxmLmNtZF9vdXRwdXRfbWFwWwogICAgICAgICAgICAiIiJ7ImNh + cHMiOiBbIm1vbiIsICJhbGxvdyByLCBhbGxvdyBjb21tYW5kIHF1b3J1bV9zdGF0dXMiLCAib3Nk + IiwgInByb2ZpbGUgcmJkLXJlYWQtb25seSwgYWxsb3cgcnd4IHBvb2w9ZGVmYXVsdC5yZ3cubWV0 + YSwgYWxsb3cgciBwb29sPS5yZ3cucm9vdCwgYWxsb3cgcncgcG9vbD1kZWZhdWx0LnJndy5jb250 + cm9sLCBhbGxvdyB4IHBvb2w9ZGVmYXVsdC5yZ3cuYnVja2V0cy5pbmRleCJdLCAiZW50aXR5Ijog + ImNsaWVudC5oZWFsdGhjaGVja2VyIiwgImZvcm1hdCI6ICJqc29uIiwgInByZWZpeCI6ICJhdXRo + IGdldC1vci1jcmVhdGUifSIiIgogICAgICAgIF0gPSAiIiJbeyJlbnRpdHkiOiJjbGllbnQuaGVh + bHRoY2hlY2tlciIsImtleSI6IkFRREZrYk5lZnQ1YkZSQUFUbmRMTlVTRUtydW96eGlaaTNscmRB + PT0iLCJjYXBzIjp7Im1vbiI6ImFsbG93IHIsIGFsbG93IGNvbW1hbmQgcXVvcnVtX3N0YXR1cyIs + Im9zZCI6InByb2ZpbGUgcmJkLXJlYWQtb25seSwgYWxsb3cgcnd4IHBvb2w9ZGVmYXVsdC5yZ3cu + bWV0YSwgYWxsb3cgciBwb29sPS5yZ3cucm9vdCwgYWxsb3cgcncgcG9vbD1kZWZhdWx0LnJndy5j + b250cm9sLCBhbGxvdyB4IHBvb2w9ZGVmYXVsdC5yZ3cuYnVja2V0cy5pbmRleCJ9fV0iIiIKICAg + 
ICAgICBzZWxmLmNtZF9vdXRwdXRfbWFwWwogICAgICAgICAgICAiIiJ7ImNhcHMiOiBbIm1vbiIs + ICJwcm9maWxlIHJiZCwgYWxsb3cgY29tbWFuZCAnb3NkIGJsb2NrbGlzdCciLCAib3NkIiwgInBy + b2ZpbGUgcmJkIl0sICJlbnRpdHkiOiAiY2xpZW50LmNzaS1yYmQtbm9kZSIsICJmb3JtYXQiOiAi + anNvbiIsICJwcmVmaXgiOiAiYXV0aCBnZXQtb3ItY3JlYXRlIn0iIiIKICAgICAgICBdID0gIiIi + W3siZW50aXR5IjoiY2xpZW50LmNzaS1yYmQtbm9kZSIsImtleSI6IkFRQk9nck5lSGJLMUF4QUF1 + YllCZVY4UzFVL0dQenE1U1ZlcTZnPT0iLCJjYXBzIjp7Im1vbiI6InByb2ZpbGUgcmJkLCBhbGxv + dyBjb21tYW5kICdvc2QgYmxvY2tsaXN0JyIsIm9zZCI6InByb2ZpbGUgcmJkIn19XSIiIgogICAg + ICAgIHNlbGYuY21kX291dHB1dF9tYXBbCiAgICAgICAgICAgICIiInsiY2FwcyI6IFsibW9uIiwg + InByb2ZpbGUgcmJkLCBhbGxvdyBjb21tYW5kICdvc2QgYmxvY2tsaXN0JyIsICJtZ3IiLCAiYWxs + b3cgcnciLCAib3NkIiwgInByb2ZpbGUgcmJkIl0sICJlbnRpdHkiOiAiY2xpZW50LmNzaS1yYmQt + cHJvdmlzaW9uZXIiLCAiZm9ybWF0IjogImpzb24iLCAicHJlZml4IjogImF1dGggZ2V0LW9yLWNy + ZWF0ZSJ9IiIiCiAgICAgICAgXSA9ICIiIlt7ImVudGl0eSI6ImNsaWVudC5jc2ktcmJkLXByb3Zp + c2lvbmVyIiwia2V5IjoiQVFCTmdyTmUxZ2V5S3hBQThla1ZpUmRFK2hzczVPd2VZQmt3Tmc9PSIs + ImNhcHMiOnsibWdyIjoiYWxsb3cgcnciLCJtb24iOiJwcm9maWxlIHJiZCwgYWxsb3cgY29tbWFu + ZCAnb3NkIGJsb2NrbGlzdCciLCJvc2QiOiJwcm9maWxlIHJiZCJ9fV0iIiIKICAgICAgICBzZWxm + LmNtZF9vdXRwdXRfbWFwWwogICAgICAgICAgICAiIiJ7ImNhcHMiOiBbIm1vbiIsICJhbGxvdyBy + LCBhbGxvdyBjb21tYW5kICdvc2QgYmxvY2tsaXN0JyIsICJtZ3IiLCAiYWxsb3cgcnciLCAib3Nk + IiwgImFsbG93IHJ3IHRhZyBjZXBoZnMgKj0qIiwgIm1kcyIsICJhbGxvdyBydyJdLCAiZW50aXR5 + IjogImNsaWVudC5jc2ktY2VwaGZzLW5vZGUiLCAiZm9ybWF0IjogImpzb24iLCAicHJlZml4Ijog + ImF1dGggZ2V0LW9yLWNyZWF0ZSJ9IiIiCiAgICAgICAgXSA9ICIiIlt7ImVudGl0eSI6ImNsaWVu + dC5jc2ktY2VwaGZzLW5vZGUiLCJrZXkiOiJBUUJPZ3JOZUVOdW5LeEFBUENtZ0U3UjZHOERjWG5h + SjFGMzJxZz09IiwiY2FwcyI6eyJtZHMiOiJhbGxvdyBydyIsIm1nciI6ImFsbG93IHJ3IiwibW9u + IjoiYWxsb3cgciwgYWxsb3cgY29tbWFuZCAnb3NkIGJsb2NrbGlzdCciLCJvc2QiOiJhbGxvdyBy + dyB0YWcgY2VwaGZzICo9KiJ9fV0iIiIKICAgICAgICBzZWxmLmNtZF9vdXRwdXRfbWFwWwogICAg + ICAgICAgICAiIiJ7ImNhcHMiOiBbIm1vbiIsICJhbGxvdyByLCBhbGxvdyBjb21tYW5kICdvc2Qg + 
YmxvY2tsaXN0JyIsICJtZ3IiLCAiYWxsb3cgcnciLCAib3NkIiwgImFsbG93IHJ3IHRhZyBjZXBo + ZnMgbWV0YWRhdGE9KiJdLCAiZW50aXR5IjogImNsaWVudC5jc2ktY2VwaGZzLXByb3Zpc2lvbmVy + IiwgImZvcm1hdCI6ICJqc29uIiwgInByZWZpeCI6ICJhdXRoIGdldC1vci1jcmVhdGUifSIiIgog + ICAgICAgIF0gPSAiIiJbeyJlbnRpdHkiOiJjbGllbnQuY3NpLWNlcGhmcy1wcm92aXNpb25lciIs + ImtleSI6IkFRQk9nck5lQUZnY0dCQUF2R3FLT0FEMEQzeHhtVlkwUjkxMmRnPT0iLCJjYXBzIjp7 + Im1nciI6ImFsbG93IHJ3IiwibW9uIjoiYWxsb3cgciwgYWxsb3cgY29tbWFuZCAnb3NkIGJsb2Nr + bGlzdCciLCJvc2QiOiJhbGxvdyBydyB0YWcgY2VwaGZzIG1ldGFkYXRhPSoifX1dIiIiCiAgICAg + ICAgc2VsZi5jbWRfb3V0cHV0X21hcFsKICAgICAgICAgICAgIiIieyJjYXBzIjogWyJtb24iLCAi + YWxsb3cgciwgYWxsb3cgY29tbWFuZCAnb3NkIGJsb2NrbGlzdCciLCAibWdyIiwgImFsbG93IHJ3 + IiwgIm9zZCIsICJhbGxvdyBydyB0YWcgY2VwaGZzIG1ldGFkYXRhPSoiXSwgImVudGl0eSI6ICJj + bGllbnQuY3NpLWNlcGhmcy1wcm92aXNpb25lci1vcGVuc2hpZnQtc3RvcmFnZSIsICJmb3JtYXQi + OiAianNvbiIsICJwcmVmaXgiOiAiYXV0aCBnZXQtb3ItY3JlYXRlIn0iIiIKICAgICAgICBdID0g + IiIiW3siZW50aXR5IjoiY2xpZW50LmNzaS1jZXBoZnMtcHJvdmlzaW9uZXItb3BlbnNoaWZ0LXN0 + b3JhZ2UiLCJrZXkiOiJCUUJPZ3JOZUFGZ2NHQkFBdkdxS09BRDBEM3h4bVZZMFI5MTJkZz09Iiwi + Y2FwcyI6eyJtZ3IiOiJhbGxvdyBydyIsIm1vbiI6ImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ29z + ZCBibG9ja2xpc3QnIiwib3NkIjoiYWxsb3cgcncgdGFnIGNlcGhmcyBtZXRhZGF0YT0qIn19XSIi + IgogICAgICAgIHNlbGYuY21kX291dHB1dF9tYXBbCiAgICAgICAgICAgICIiInsiY2FwcyI6IFsi + bW9uIiwgImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ29zZCBibG9ja2xpc3QnIiwgIm1nciIsICJh + bGxvdyBydyIsICJvc2QiLCAiYWxsb3cgcncgdGFnIGNlcGhmcyBtZXRhZGF0YT1teWZzIl0sICJl + bnRpdHkiOiAiY2xpZW50LmNzaS1jZXBoZnMtcHJvdmlzaW9uZXItb3BlbnNoaWZ0LXN0b3JhZ2Ut + bXlmcyIsICJmb3JtYXQiOiAianNvbiIsICJwcmVmaXgiOiAiYXV0aCBnZXQtb3ItY3JlYXRlIn0i + IiIKICAgICAgICBdID0gIiIiW3siZW50aXR5IjoiY2xpZW50LmNzaS1jZXBoZnMtcHJvdmlzaW9u + ZXItb3BlbnNoaWZ0LXN0b3JhZ2UtbXlmcyIsImtleSI6IkNRQk9nck5lQUZnY0dCQUF2R3FLT0FE + MEQzeHhtVlkwUjkxMmRnPT0iLCJjYXBzIjp7Im1nciI6ImFsbG93IHJ3IiwibW9uIjoiYWxsb3cg + ciwgYWxsb3cgY29tbWFuZCAnb3NkIGJsb2NrbGlzdCciLCJvc2QiOiJhbGxvdyBydyB0YWcgY2Vw + 
aGZzIG1ldGFkYXRhPW15ZnMifX1dIiIiCiAgICAgICAgc2VsZi5jbWRfb3V0cHV0X21hcFsKICAg + ICAgICAgICAgIiIieyJjYXBzIjogWyJtb24iLCAiYWxsb3cgciwgYWxsb3cgY29tbWFuZCBxdW9y + dW1fc3RhdHVzLCBhbGxvdyBjb21tYW5kIHZlcnNpb24iLCAibWdyIiwgImFsbG93IGNvbW1hbmQg + Y29uZmlnIiwgIm9zZCIsICJwcm9maWxlIHJiZC1yZWFkLW9ubHksIGFsbG93IHJ3eCBwb29sPWRl + ZmF1bHQucmd3Lm1ldGEsIGFsbG93IHIgcG9vbD0ucmd3LnJvb3QsIGFsbG93IHJ3IHBvb2w9ZGVm + YXVsdC5yZ3cuY29udHJvbCwgYWxsb3cgcnggcG9vbD1kZWZhdWx0LnJndy5sb2csIGFsbG93IHgg + cG9vbD1kZWZhdWx0LnJndy5idWNrZXRzLmluZGV4Il0sICJlbnRpdHkiOiAiY2xpZW50LmhlYWx0 + aGNoZWNrZXIiLCAiZm9ybWF0IjogImpzb24iLCAicHJlZml4IjogImF1dGggZ2V0LW9yLWNyZWF0 + ZSJ9IiIiCiAgICAgICAgXSA9ICIiIlt7ImVudGl0eSI6ImNsaWVudC5oZWFsdGhjaGVja2VyIiwi + a2V5IjoiQVFERmtiTmVmdDViRlJBQVRuZExOVVNFS3J1b3p4aVppM2xyZEE9PSIsImNhcHMiOnsi + bW9uIjogImFsbG93IHIsIGFsbG93IGNvbW1hbmQgcXVvcnVtX3N0YXR1cywgYWxsb3cgY29tbWFu + ZCB2ZXJzaW9uIiwgIm1nciI6ICJhbGxvdyBjb21tYW5kIGNvbmZpZyIsICJvc2QiOiAicHJvZmls + ZSByYmQtcmVhZC1vbmx5LCBhbGxvdyByd3ggcG9vbD1kZWZhdWx0LnJndy5tZXRhLCBhbGxvdyBy + IHBvb2w9LnJndy5yb290LCBhbGxvdyBydyBwb29sPWRlZmF1bHQucmd3LmNvbnRyb2wsIGFsbG93 + IHJ4IHBvb2w9ZGVmYXVsdC5yZ3cubG9nLCBhbGxvdyB4IHBvb2w9ZGVmYXVsdC5yZ3cuYnVja2V0 + cy5pbmRleCJ9fV0iIiIKICAgICAgICBzZWxmLmNtZF9vdXRwdXRfbWFwWwogICAgICAgICAgICAi + IiJ7ImNhcHMiOiBbIm1vbiIsICJhbGxvdyByLCBhbGxvdyBjb21tYW5kIHF1b3J1bV9zdGF0dXMs + IGFsbG93IGNvbW1hbmQgdmVyc2lvbiIsICJtZ3IiLCAiYWxsb3cgY29tbWFuZCBjb25maWciLCAi + b3NkIiwgInByb2ZpbGUgcmJkLXJlYWQtb25seSwgYWxsb3cgcnd4IHBvb2w9ZGVmYXVsdC5yZ3cu + bWV0YSwgYWxsb3cgciBwb29sPS5yZ3cucm9vdCwgYWxsb3cgcncgcG9vbD1kZWZhdWx0LnJndy5j + b250cm9sLCBhbGxvdyByeCBwb29sPWRlZmF1bHQucmd3LmxvZywgYWxsb3cgeCBwb29sPWRlZmF1 + bHQucmd3LmJ1Y2tldHMuaW5kZXgiXSwgImVudGl0eSI6ICJjbGllbnQuaGVhbHRoY2hlY2tlciIs + ICJmb3JtYXQiOiAianNvbiIsICJwcmVmaXgiOiAiYXV0aCBjYXBzIn0iIiIKICAgICAgICBdID0g + IiIiW3siZW50aXR5IjoiY2xpZW50LmhlYWx0aGNoZWNrZXIiLCJrZXkiOiJBUURGa2JOZWZ0NWJG + UkFBVG5kTE5VU1JLcnVvenhpWmkzbHJkQT09IiwiY2FwcyI6eyJtb24iOiAiYWxsb3cgciwgYWxs + 
b3cgY29tbWFuZCBxdW9ydW1fc3RhdHVzLCBhbGxvdyBjb21tYW5kIHZlcnNpb24iLCAibWdyIjog + ImFsbG93IGNvbW1hbmQgY29uZmlnIiwgIm9zZCI6ICJwcm9maWxlIHJiZC1yZWFkLW9ubHksIGFs + bG93IHJ3eCBwb29sPWRlZmF1bHQucmd3Lm1ldGEsIGFsbG93IHIgcG9vbD0ucmd3LnJvb3QsIGFs + bG93IHJ3IHBvb2w9ZGVmYXVsdC5yZ3cuY29udHJvbCwgYWxsb3cgcnggcG9vbD1kZWZhdWx0LnJn + dy5sb2csIGFsbG93IHggcG9vbD1kZWZhdWx0LnJndy5idWNrZXRzLmluZGV4In19XSIiIgogICAg + ICAgIHNlbGYuY21kX291dHB1dF9tYXBbIiIieyJmb3JtYXQiOiAianNvbiIsICJwcmVmaXgiOiAi + bWdyIHNlcnZpY2VzIn0iIiJdID0gKAogICAgICAgICAgICAiIiJ7ImRhc2hib2FyZCI6ICJodHRw + Oi8vcm9vay1jZXBoLW1nci1hLTU3Y2Y5Zjg0YmMtZjRqbmw6NzAwMC8iLCAicHJvbWV0aGV1cyI6 + ICJodHRwOi8vcm9vay1jZXBoLW1nci1hLTU3Y2Y5Zjg0YmMtZjRqbmw6OTI4My8ifSIiIgogICAg + ICAgICkKICAgICAgICBzZWxmLmNtZF9vdXRwdXRfbWFwWwogICAgICAgICAgICAiIiJ7ImVudGl0 + eSI6ICJjbGllbnQuaGVhbHRoY2hlY2tlciIsICJmb3JtYXQiOiAianNvbiIsICJwcmVmaXgiOiAi + YXV0aCBnZXQifSIiIgogICAgICAgIF0gPSAiIiJ7ImRhc2hib2FyZCI6ICJodHRwOi8vcm9vay1j + ZXBoLW1nci1hLTU3Y2Y5Zjg0YmMtZjRqbmw6NzAwMC8iLCAicHJvbWV0aGV1cyI6ICJodHRwOi8v + cm9vay1jZXBoLW1nci1hLTU3Y2Y5Zjg0YmMtZjRqbmw6OTI4My8ifSIiIgogICAgICAgIHNlbGYu + Y21kX291dHB1dF9tYXBbCiAgICAgICAgICAgICIiInsiZW50aXR5IjogImNsaWVudC5oZWFsdGhj + aGVja2VyIiwgImZvcm1hdCI6ICJqc29uIiwgInByZWZpeCI6ICJhdXRoIGdldCJ9IiIiCiAgICAg + ICAgXSA9ICIiIlt7ImVudGl0eSI6ImNsaWVudC5oZWFsdGhjaGVja2VyIiwia2V5IjoiQVFERmti + TmVmdDViRlJBQVRuZExOVVNFS3J1b3p4aVppM2xyZEE9PSIsImNhcHMiOnsibW9uIjogImFsbG93 + IHIsIGFsbG93IGNvbW1hbmQgcXVvcnVtX3N0YXR1cywgYWxsb3cgY29tbWFuZCB2ZXJzaW9uIiwg + Im1nciI6ICJhbGxvdyBjb21tYW5kIGNvbmZpZyIsICJvc2QiOiAicHJvZmlsZSByYmQtcmVhZC1v + bmx5LCBhbGxvdyByd3ggcG9vbD1kZWZhdWx0LnJndy5tZXRhLCBhbGxvdyByIHBvb2w9LnJndy5y + b290LCBhbGxvdyBydyBwb29sPWRlZmF1bHQucmd3LmNvbnRyb2wsIGFsbG93IHJ4IHBvb2w9ZGVm + YXVsdC5yZ3cubG9nLCBhbGxvdyB4IHBvb2w9ZGVmYXVsdC5yZ3cuYnVja2V0cy5pbmRleCJ9fV0i + IiIKICAgICAgICBzZWxmLmNtZF9vdXRwdXRfbWFwWwogICAgICAgICAgICAiIiJ7ImVudGl0eSI6 + ICJjbGllbnQuY3NpLWNlcGhmcy1ub2RlIiwgImZvcm1hdCI6ICJqc29uIiwgInByZWZpeCI6ICJh + 
dXRoIGdldCJ9IiIiCiAgICAgICAgXSA9ICIiIltdIiIiCiAgICAgICAgc2VsZi5jbWRfb3V0cHV0 + X21hcFsKICAgICAgICAgICAgIiIieyJlbnRpdHkiOiAiY2xpZW50LmNzaS1yYmQtbm9kZSIsICJm + b3JtYXQiOiAianNvbiIsICJwcmVmaXgiOiAiYXV0aCBnZXQifSIiIgogICAgICAgIF0gPSAiIiJb + XSIiIgogICAgICAgIHNlbGYuY21kX291dHB1dF9tYXBbCiAgICAgICAgICAgICIiInsiZW50aXR5 + IjogImNsaWVudC5jc2ktcmJkLXByb3Zpc2lvbmVyIiwgImZvcm1hdCI6ICJqc29uIiwgInByZWZp + eCI6ICJhdXRoIGdldCJ9IiIiCiAgICAgICAgXSA9ICIiIltdIiIiCiAgICAgICAgc2VsZi5jbWRf + b3V0cHV0X21hcFsKICAgICAgICAgICAgIiIieyJlbnRpdHkiOiAiY2xpZW50LmNzaS1jZXBoZnMt + cHJvdmlzaW9uZXIiLCAiZm9ybWF0IjogImpzb24iLCAicHJlZml4IjogImF1dGggZ2V0In0iIiIK + ICAgICAgICBdID0gIiIiW10iIiIKICAgICAgICBzZWxmLmNtZF9vdXRwdXRfbWFwWwogICAgICAg + ICAgICAiIiJ7ImVudGl0eSI6ICJjbGllbnQuY3NpLWNlcGhmcy1wcm92aXNpb25lci1vcGVuc2hp + ZnQtc3RvcmFnZSIsICJmb3JtYXQiOiAianNvbiIsICJwcmVmaXgiOiAiYXV0aCBnZXQifSIiIgog + ICAgICAgIF0gPSAiIiJbXSIiIgogICAgICAgIHNlbGYuY21kX291dHB1dF9tYXBbCiAgICAgICAg + ICAgICIiInsiZW50aXR5IjogImNsaWVudC5jc2ktY2VwaGZzLXByb3Zpc2lvbmVyLW9wZW5zaGlm + dC1zdG9yYWdlLW15ZnMiLCAiZm9ybWF0IjogImpzb24iLCAicHJlZml4IjogImF1dGggZ2V0In0i + IiIKICAgICAgICBdID0gIiIiW10iIiIKICAgICAgICBzZWxmLmNtZF9vdXRwdXRfbWFwWwogICAg + ICAgICAgICAiIiJ7ImVudGl0eSI6ICJjbGllbnQuY3NpLWNlcGhmcy1wcm92aXNpb25lciIsICJm + b3JtYXQiOiAianNvbiIsICJwcmVmaXgiOiAiYXV0aCBnZXQifSIiIgogICAgICAgIF0gPSAiIiJb + eyJlbnRpdHkiOiJjbGllbnQuY3NpLWNlcGhmcy1wcm92aXNpb25lciIsImtleSI6IkFRREZrYk5l + ZnQ1YkZSQUFUbmRMTlVTRUtydW96eGlaaTNscmRBPT0iLCJjYXBzIjp7Im1vbiI6ImFsbG93IHIi + LCAibWdyIjoiYWxsb3cgcnciLCAib3NkIjoiYWxsb3cgcncgdGFnIGNlcGhmcyBtZXRhZGF0YT0q + In19XSIiIgogICAgICAgIHNlbGYuY21kX291dHB1dF9tYXBbCiAgICAgICAgICAgICIiInsiY2Fw + cyI6IFsibW9uIiwgImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ29zZCBibG9ja2xpc3QnIiwgIm1n + ciIsICJhbGxvdyBydyIsICJvc2QiLCAiYWxsb3cgcncgdGFnIGNlcGhmcyBtZXRhZGF0YT0qIl0s + ICJlbnRpdHkiOiAiY2xpZW50LmNzaS1jZXBoZnMtcHJvdmlzaW9uZXIiLCAiZm9ybWF0IjogImpz + b24iLCAicHJlZml4IjogImF1dGggY2FwcyJ9IiIiCiAgICAgICAgXSA9ICIiIlt7ImVudGl0eSI6 + 
ImNsaWVudC5jc2ktY2VwaGZzLXByb3Zpc2lvbmVyIiwia2V5IjoiQVFERmtiTmVmdDViRlJBQVRu + ZExOVVNFS3J1b3p4aVppM2xyZEE9PSIsImNhcHMiOnsibW9uIjoiYWxsb3cgciwgIGFsbG93IGNv + bW1hbmQgJ29zZCBibG9ja2xpc3QnIiwgIm1nciI6ImFsbG93IHJ3IiwgIm9zZCI6ImFsbG93IHJ3 + IHRhZyBjZXBoZnMgbWV0YWRhdGE9KiJ9fV0iIiIKICAgICAgICBzZWxmLmNtZF9vdXRwdXRfbWFw + Wyd7ImZvcm1hdCI6ICJqc29uIiwgInByZWZpeCI6ICJzdGF0dXMifSddID0gY2VwaF9zdGF0dXNf + c3RyCgogICAgZGVmIHNodXRkb3duKHNlbGYpOgogICAgICAgIHBhc3MKCiAgICBkZWYgZ2V0X2Zz + aWQoc2VsZik6CiAgICAgICAgcmV0dXJuICJhZjRlMTY3My0wYjcyLTQwMmQtOTkwYS0yMmQyOTE5 + ZDBmMWMiCgogICAgZGVmIGNvbmZfcmVhZF9maWxlKHNlbGYpOgogICAgICAgIHBhc3MKCiAgICBk + ZWYgY29ubmVjdChzZWxmKToKICAgICAgICBwYXNzCgogICAgZGVmIHBvb2xfZXhpc3RzKHNlbGYs + IHBvb2xfbmFtZSk6CiAgICAgICAgcmV0dXJuIFRydWUKCiAgICBkZWYgbW9uX2NvbW1hbmQoc2Vs + ZiwgY21kLCBvdXQpOgogICAgICAgIGpzb25fY21kID0ganNvbi5sb2FkcyhjbWQpCiAgICAgICAg + anNvbl9jbWRfc3RyID0ganNvbi5kdW1wcyhqc29uX2NtZCwgc29ydF9rZXlzPVRydWUpCiAgICAg + ICAgY21kX291dHB1dCA9IHNlbGYuY21kX291dHB1dF9tYXBbanNvbl9jbWRfc3RyXQogICAgICAg + IHJldHVybiBzZWxmLnJldHVybl92YWwsIGNtZF9vdXRwdXQsIHN0cihzZWxmLmVycl9tZXNzYWdl + LmVuY29kZSgidXRmLTgiKSkKCiAgICBkZWYgX2NvbnZlcnRfaG9zdG5hbWVfdG9faXAoc2VsZiwg + aG9zdF9uYW1lKToKICAgICAgICBpcF9yZWdfeCA9IHJlLmNvbXBpbGUociJcZHsxLDN9LlxkezEs + M30uXGR7MSwzfS5cZHsxLDN9IikKICAgICAgICAjIGlmIHByb3ZpZGVkIGhvc3QgaXMgZGlyZWN0 + bHkgYW4gSVAgYWRkcmVzcywgcmV0dXJuIHRoZSBzYW1lCiAgICAgICAgaWYgaXBfcmVnX3gubWF0 + Y2goaG9zdF9uYW1lKToKICAgICAgICAgICAgcmV0dXJuIGhvc3RfbmFtZQogICAgICAgIGltcG9y + dCByYW5kb20KCiAgICAgICAgaG9zdF9pcCA9IHNlbGYuZHVtbXlfaG9zdF9pcF9tYXAuZ2V0KGhv + c3RfbmFtZSwgIiIpCiAgICAgICAgaWYgbm90IGhvc3RfaXA6CiAgICAgICAgICAgIGhvc3RfaXAg + PSBmIjE3Mi45LntyYW5kb20ucmFuZGludCgwLCAyNTQpfS57cmFuZG9tLnJhbmRpbnQoMCwgMjU0 + KX0iCiAgICAgICAgICAgIHNlbGYuZHVtbXlfaG9zdF9pcF9tYXBbaG9zdF9uYW1lXSA9IGhvc3Rf + aXAKICAgICAgICBkZWwgcmFuZG9tCiAgICAgICAgcmV0dXJuIGhvc3RfaXAKCiAgICBAY2xhc3Nt + ZXRob2QKICAgIGRlZiBSYWRvcyhjb25mZmlsZT1Ob25lKToKICAgICAgICByZXR1cm4gRHVtbXlS + 
YWRvcygpCgoKY2xhc3MgUzNBdXRoKEF1dGhCYXNlKToKICAgICIiIkF0dGFjaGVzIEFXUyBBdXRo + ZW50aWNhdGlvbiB0byB0aGUgZ2l2ZW4gUmVxdWVzdCBvYmplY3QuIiIiCgogICAgc2VydmljZV9i + YXNlX3VybCA9ICJzMy5hbWF6b25hd3MuY29tIgoKICAgIGRlZiBfX2luaXRfXyhzZWxmLCBhY2Nl + c3Nfa2V5LCBzZWNyZXRfa2V5LCBzZXJ2aWNlX3VybD1Ob25lKToKICAgICAgICBpZiBzZXJ2aWNl + X3VybDoKICAgICAgICAgICAgc2VsZi5zZXJ2aWNlX2Jhc2VfdXJsID0gc2VydmljZV91cmwKICAg + ICAgICBzZWxmLmFjY2Vzc19rZXkgPSBzdHIoYWNjZXNzX2tleSkKICAgICAgICBzZWxmLnNlY3Jl + dF9rZXkgPSBzdHIoc2VjcmV0X2tleSkKCiAgICBkZWYgX19jYWxsX18oc2VsZiwgcik6CiAgICAg + ICAgIyBDcmVhdGUgZGF0ZSBoZWFkZXIgaWYgaXQgaXMgbm90IGNyZWF0ZWQgeWV0LgogICAgICAg + IGlmICJkYXRlIiBub3QgaW4gci5oZWFkZXJzIGFuZCAieC1hbXotZGF0ZSIgbm90IGluIHIuaGVh + ZGVyczoKICAgICAgICAgICAgci5oZWFkZXJzWyJkYXRlIl0gPSBmb3JtYXRkYXRlKHRpbWV2YWw9 + Tm9uZSwgbG9jYWx0aW1lPUZhbHNlLCB1c2VnbXQ9VHJ1ZSkKICAgICAgICBzaWduYXR1cmUgPSBz + ZWxmLmdldF9zaWduYXR1cmUocikKICAgICAgICBpZiBweTNrOgogICAgICAgICAgICBzaWduYXR1 + cmUgPSBzaWduYXR1cmUuZGVjb2RlKCJ1dGYtOCIpCiAgICAgICAgci5oZWFkZXJzWyJBdXRob3Jp + emF0aW9uIl0gPSBmIkFXUyB7c2VsZi5hY2Nlc3Nfa2V5fTp7c2lnbmF0dXJlfSIKICAgICAgICBy + ZXR1cm4gcgoKICAgIGRlZiBnZXRfc2lnbmF0dXJlKHNlbGYsIHIpOgogICAgICAgIGNhbm9uaWNh + bF9zdHJpbmcgPSBzZWxmLmdldF9jYW5vbmljYWxfc3RyaW5nKHIudXJsLCByLmhlYWRlcnMsIHIu + bWV0aG9kKQogICAgICAgIGlmIHB5M2s6CiAgICAgICAgICAgIGtleSA9IHNlbGYuc2VjcmV0X2tl + eS5lbmNvZGUoInV0Zi04IikKICAgICAgICAgICAgbXNnID0gY2Fub25pY2FsX3N0cmluZy5lbmNv + ZGUoInV0Zi04IikKICAgICAgICBlbHNlOgogICAgICAgICAgICBrZXkgPSBzZWxmLnNlY3JldF9r + ZXkKICAgICAgICAgICAgbXNnID0gY2Fub25pY2FsX3N0cmluZwogICAgICAgIGggPSBobWFjLm5l + dyhrZXksIG1zZywgZGlnZXN0bW9kPXNoYSkKICAgICAgICByZXR1cm4gZW5jb2Rlc3RyaW5nKGgu + ZGlnZXN0KCkpLnN0cmlwKCkKCiAgICBkZWYgZ2V0X2Nhbm9uaWNhbF9zdHJpbmcoc2VsZiwgdXJs + LCBoZWFkZXJzLCBtZXRob2QpOgogICAgICAgIHBhcnNlZHVybCA9IHVybHBhcnNlKHVybCkKICAg + ICAgICBvYmplY3RrZXkgPSBwYXJzZWR1cmwucGF0aFsxOl0KCiAgICAgICAgYnVja2V0ID0gcGFy + c2VkdXJsLm5ldGxvY1s6IC1sZW4oc2VsZi5zZXJ2aWNlX2Jhc2VfdXJsKV0KICAgICAgICBpZiBs + 
ZW4oYnVja2V0KSA+IDE6CiAgICAgICAgICAgICMgcmVtb3ZlIGxhc3QgZG90CiAgICAgICAgICAg + IGJ1Y2tldCA9IGJ1Y2tldFs6LTFdCgogICAgICAgIGludGVyZXN0aW5nX2hlYWRlcnMgPSB7ImNv + bnRlbnQtbWQ1IjogIiIsICJjb250ZW50LXR5cGUiOiAiIiwgImRhdGUiOiAiIn0KICAgICAgICBm + b3Iga2V5IGluIGhlYWRlcnM6CiAgICAgICAgICAgIGxrID0ga2V5Lmxvd2VyKCkKICAgICAgICAg + ICAgdHJ5OgogICAgICAgICAgICAgICAgbGsgPSBsay5kZWNvZGUoInV0Zi04IikKICAgICAgICAg + ICAgZXhjZXB0OgogICAgICAgICAgICAgICAgcGFzcwogICAgICAgICAgICBpZiBoZWFkZXJzW2tl + eV0gYW5kICgKICAgICAgICAgICAgICAgIGxrIGluIGludGVyZXN0aW5nX2hlYWRlcnMua2V5cygp + IG9yIGxrLnN0YXJ0c3dpdGgoIngtYW16LSIpCiAgICAgICAgICAgICk6CiAgICAgICAgICAgICAg + ICBpbnRlcmVzdGluZ19oZWFkZXJzW2xrXSA9IGhlYWRlcnNba2V5XS5zdHJpcCgpCgogICAgICAg + ICMgSWYgeC1hbXotZGF0ZSBpcyB1c2VkIGl0IHN1cGVyc2VkZXMgdGhlIGRhdGUgaGVhZGVyLgog + ICAgICAgIGlmIG5vdCBweTNrOgogICAgICAgICAgICBpZiAieC1hbXotZGF0ZSIgaW4gaW50ZXJl + c3RpbmdfaGVhZGVyczoKICAgICAgICAgICAgICAgIGludGVyZXN0aW5nX2hlYWRlcnNbImRhdGUi + XSA9ICIiCiAgICAgICAgZWxzZToKICAgICAgICAgICAgaWYgIngtYW16LWRhdGUiIGluIGludGVy + ZXN0aW5nX2hlYWRlcnM6CiAgICAgICAgICAgICAgICBpbnRlcmVzdGluZ19oZWFkZXJzWyJkYXRl + Il0gPSAiIgoKICAgICAgICBidWYgPSBmInttZXRob2R9XG4iCiAgICAgICAgZm9yIGtleSBpbiBz + b3J0ZWQoaW50ZXJlc3RpbmdfaGVhZGVycy5rZXlzKCkpOgogICAgICAgICAgICB2YWwgPSBpbnRl + cmVzdGluZ19oZWFkZXJzW2tleV0KICAgICAgICAgICAgaWYga2V5LnN0YXJ0c3dpdGgoIngtYW16 + LSIpOgogICAgICAgICAgICAgICAgYnVmICs9IGYie2tleX06e3ZhbH1cbiIKICAgICAgICAgICAg + ZWxzZToKICAgICAgICAgICAgICAgIGJ1ZiArPSBmInt2YWx9XG4iCgogICAgICAgICMgYXBwZW5k + IHRoZSBidWNrZXQgaWYgaXQgZXhpc3RzCiAgICAgICAgaWYgYnVja2V0ICE9ICIiOgogICAgICAg + ICAgICBidWYgKz0gZiIve2J1Y2tldH0iCgogICAgICAgICMgYWRkIHRoZSBvYmplY3RrZXkuIGV2 + ZW4gaWYgaXQgZG9lc24ndCBleGlzdCwgYWRkIHRoZSBzbGFzaAogICAgICAgIGJ1ZiArPSBmIi97 + b2JqZWN0a2V5fSIKCiAgICAgICAgcmV0dXJuIGJ1ZgoKCmNsYXNzIFJhZG9zSlNPTjoKICAgIEVY + VEVSTkFMX1VTRVJfTkFNRSA9ICJjbGllbnQuaGVhbHRoY2hlY2tlciIKICAgIEVYVEVSTkFMX1JH + V19BRE1JTl9PUFNfVVNFUl9OQU1FID0gInJndy1hZG1pbi1vcHMtdXNlciIKICAgIEVNUFRZX09V + 
VFBVVF9MSVNUID0gIkVtcHR5IG91dHB1dCBsaXN0IgogICAgREVGQVVMVF9SR1dfUE9PTF9QUkVG + SVggPSAiZGVmYXVsdCIKICAgIERFRkFVTFRfTU9OSVRPUklOR19FTkRQT0lOVF9QT1JUID0gIjky + ODMiCgogICAgQGNsYXNzbWV0aG9kCiAgICBkZWYgZ2VuX2FyZ19wYXJzZXIoY2xzLCBhcmdzX3Rv + X3BhcnNlPU5vbmUpOgogICAgICAgIGFyZ1AgPSBhcmdwYXJzZS5Bcmd1bWVudFBhcnNlcigpCgog + ICAgICAgIGNvbW1vbl9ncm91cCA9IGFyZ1AuYWRkX2FyZ3VtZW50X2dyb3VwKCJjb21tb24iKQog + ICAgICAgIGNvbW1vbl9ncm91cC5hZGRfYXJndW1lbnQoIi0tdmVyYm9zZSIsICItdiIsIGFjdGlv + bj0ic3RvcmVfdHJ1ZSIsIGRlZmF1bHQ9RmFsc2UpCiAgICAgICAgY29tbW9uX2dyb3VwLmFkZF9h + cmd1bWVudCgKICAgICAgICAgICAgIi0tY2VwaC1jb25mIiwgIi1jIiwgaGVscD0iUHJvdmlkZSBh + IGNlcGggY29uZiBmaWxlLiIsIHR5cGU9c3RyCiAgICAgICAgKQogICAgICAgIGNvbW1vbl9ncm91 + cC5hZGRfYXJndW1lbnQoCiAgICAgICAgICAgICItLWtleXJpbmciLCAiLWsiLCBoZWxwPSJQYXRo + IHRvIGNlcGgga2V5cmluZyBmaWxlLiIsIHR5cGU9c3RyCiAgICAgICAgKQogICAgICAgIGNvbW1v + bl9ncm91cC5hZGRfYXJndW1lbnQoCiAgICAgICAgICAgICItLXJ1bi1hcy11c2VyIiwKICAgICAg + ICAgICAgIi11IiwKICAgICAgICAgICAgZGVmYXVsdD0iIiwKICAgICAgICAgICAgdHlwZT1zdHIs + CiAgICAgICAgICAgIGhlbHA9IlByb3ZpZGVzIGEgdXNlciBuYW1lIHRvIGNoZWNrIHRoZSBjbHVz + dGVyJ3MgaGVhbHRoIHN0YXR1cywgbXVzdCBiZSBwcmVmaXhlZCBieSAnY2xpZW50LiciLAogICAg + ICAgICkKICAgICAgICBjb21tb25fZ3JvdXAuYWRkX2FyZ3VtZW50KAogICAgICAgICAgICAiLS1j + bHVzdGVyLW5hbWUiLAogICAgICAgICAgICBkZWZhdWx0PSIiLAogICAgICAgICAgICBoZWxwPSJL + dWJlcm5ldGVzIGNsdXN0ZXIgbmFtZShsZWdhY3kgZmxhZyksIE5vdGU6IEVpdGhlciB1c2UgdGhp + cyBvciAtLWs4cy1jbHVzdGVyLW5hbWUiLAogICAgICAgICkKICAgICAgICBjb21tb25fZ3JvdXAu + YWRkX2FyZ3VtZW50KAogICAgICAgICAgICAiLS1rOHMtY2x1c3Rlci1uYW1lIiwgZGVmYXVsdD0i + IiwgaGVscD0iS3ViZXJuZXRlcyBjbHVzdGVyIG5hbWUiCiAgICAgICAgKQogICAgICAgIGNvbW1v + bl9ncm91cC5hZGRfYXJndW1lbnQoCiAgICAgICAgICAgICItLW5hbWVzcGFjZSIsCiAgICAgICAg + ICAgIGRlZmF1bHQ9IiIsCiAgICAgICAgICAgIGhlbHA9Ik5hbWVzcGFjZSB3aGVyZSBDZXBoQ2x1 + c3RlciBpcyBydW5uaW5nIiwKICAgICAgICApCiAgICAgICAgY29tbW9uX2dyb3VwLmFkZF9hcmd1 + bWVudCgKICAgICAgICAgICAgIi0tcmd3LXBvb2wtcHJlZml4IiwgZGVmYXVsdD0iIiwgaGVscD0i + 
UkdXIFBvb2wgcHJlZml4IgogICAgICAgICkKICAgICAgICBjb21tb25fZ3JvdXAuYWRkX2FyZ3Vt + ZW50KAogICAgICAgICAgICAiLS1yZXN0cmljdGVkLWF1dGgtcGVybWlzc2lvbiIsCiAgICAgICAg + ICAgIGRlZmF1bHQ9RmFsc2UsCiAgICAgICAgICAgIGhlbHA9IlJlc3RyaWN0IGNlcGhDU0lLZXly + aW5ncyBhdXRoIHBlcm1pc3Npb25zIHRvIHNwZWNpZmljIHBvb2xzLCBjbHVzdGVyLiIKICAgICAg + ICAgICAgKyAiTWFuZGF0b3J5IGZsYWdzIHRoYXQgbmVlZCB0byBiZSBzZXQgYXJlIC0tcmJkLWRh + dGEtcG9vbC1uYW1lLCBhbmQgLS1rOHMtY2x1c3Rlci1uYW1lLiIKICAgICAgICAgICAgKyAiLS1j + ZXBoZnMtZmlsZXN5c3RlbS1uYW1lIGZsYWcgY2FuIGFsc28gYmUgcGFzc2VkIGluIGNhc2Ugb2Yg + Y2VwaGZzIHVzZXIgcmVzdHJpY3Rpb24sIHNvIGl0IGNhbiByZXN0cmljdCB1c2VyIHRvIHBhcnRp + Y3VsYXIgY2VwaGZzIGZpbGVzeXN0ZW0iCiAgICAgICAgICAgICsgInNhbXBsZSBydW46IGBweXRo + b24zIC9ldGMvY2VwaC9jcmVhdGUtZXh0ZXJuYWwtY2x1c3Rlci1yZXNvdXJjZXMucHkgLS1jZXBo + ZnMtZmlsZXN5c3RlbS1uYW1lIG15ZnMgLS1yYmQtZGF0YS1wb29sLW5hbWUgcmVwbGljYXBvb2wg + LS1rOHMtY2x1c3Rlci1uYW1lIHJvb2tzdG9yYWdlIC0tcmVzdHJpY3RlZC1hdXRoLXBlcm1pc3Np + b24gdHJ1ZWAiCiAgICAgICAgICAgICsgIk5vdGU6IFJlc3RyaWN0aW5nIHRoZSBjc2ktdXNlcnMg + cGVyIHBvb2wsIGFuZCBwZXIgY2x1c3RlciB3aWxsIHJlcXVpcmUgY3JlYXRpbmcgbmV3IGNzaS11 + c2VycyBhbmQgbmV3IHNlY3JldHMgZm9yIHRoYXQgY3NpLXVzZXJzLiIKICAgICAgICAgICAgKyAi + U28gYXBwbHkgdGhlc2Ugc2VjcmV0cyBvbmx5IHRvIG5ldyBgQ29uc3VtZXIgY2x1c3RlcmAgZGVw + bG95bWVudCB3aGlsZSB1c2luZyB0aGUgc2FtZSBgU291cmNlIGNsdXN0ZXJgLiIsCiAgICAgICAg + KQogICAgICAgIGNvbW1vbl9ncm91cC5hZGRfYXJndW1lbnQoCiAgICAgICAgICAgICItLXYyLXBv + cnQtZW5hYmxlIiwKICAgICAgICAgICAgYWN0aW9uPSJzdG9yZV90cnVlIiwKICAgICAgICAgICAg + ZGVmYXVsdD1GYWxzZSwKICAgICAgICAgICAgaGVscD0iRW5hYmxlIHYyIG1vbiBwb3J0KDMzMDAp + IGZvciBtb25zIiwKICAgICAgICApCgogICAgICAgIG91dHB1dF9ncm91cCA9IGFyZ1AuYWRkX2Fy + Z3VtZW50X2dyb3VwKCJvdXRwdXQiKQogICAgICAgIG91dHB1dF9ncm91cC5hZGRfYXJndW1lbnQo + CiAgICAgICAgICAgICItLWZvcm1hdCIsCiAgICAgICAgICAgICItdCIsCiAgICAgICAgICAgIGNo + b2ljZXM9WyJqc29uIiwgImJhc2giXSwKICAgICAgICAgICAgZGVmYXVsdD0ianNvbiIsCiAgICAg + ICAgICAgIGhlbHA9IlByb3ZpZGVzIHRoZSBvdXRwdXQgZm9ybWF0IChqc29uIHwgYmFzaCkiLAog + 
ICAgICAgICkKICAgICAgICBvdXRwdXRfZ3JvdXAuYWRkX2FyZ3VtZW50KAogICAgICAgICAgICAi + LS1vdXRwdXQiLAogICAgICAgICAgICAiLW8iLAogICAgICAgICAgICBkZWZhdWx0PSIiLAogICAg + ICAgICAgICBoZWxwPSJPdXRwdXQgd2lsbCBiZSBzdG9yZWQgaW50byB0aGUgcHJvdmlkZWQgZmls + ZSIsCiAgICAgICAgKQogICAgICAgIG91dHB1dF9ncm91cC5hZGRfYXJndW1lbnQoCiAgICAgICAg + ICAgICItLWNlcGhmcy1maWxlc3lzdGVtLW5hbWUiLAogICAgICAgICAgICBkZWZhdWx0PSIiLAog + ICAgICAgICAgICBoZWxwPSJQcm92aWRlcyB0aGUgbmFtZSBvZiB0aGUgQ2VwaCBmaWxlc3lzdGVt + IiwKICAgICAgICApCiAgICAgICAgb3V0cHV0X2dyb3VwLmFkZF9hcmd1bWVudCgKICAgICAgICAg + ICAgIi0tY2VwaGZzLW1ldGFkYXRhLXBvb2wtbmFtZSIsCiAgICAgICAgICAgIGRlZmF1bHQ9IiIs + CiAgICAgICAgICAgIGhlbHA9IlByb3ZpZGVzIHRoZSBuYW1lIG9mIHRoZSBjZXBoZnMgbWV0YWRh + dGEgcG9vbCIsCiAgICAgICAgKQogICAgICAgIG91dHB1dF9ncm91cC5hZGRfYXJndW1lbnQoCiAg + ICAgICAgICAgICItLWNlcGhmcy1kYXRhLXBvb2wtbmFtZSIsCiAgICAgICAgICAgIGRlZmF1bHQ9 + IiIsCiAgICAgICAgICAgIGhlbHA9IlByb3ZpZGVzIHRoZSBuYW1lIG9mIHRoZSBjZXBoZnMgZGF0 + YSBwb29sIiwKICAgICAgICApCiAgICAgICAgb3V0cHV0X2dyb3VwLmFkZF9hcmd1bWVudCgKICAg + ICAgICAgICAgIi0tcmJkLWRhdGEtcG9vbC1uYW1lIiwKICAgICAgICAgICAgZGVmYXVsdD0iIiwK + ICAgICAgICAgICAgcmVxdWlyZWQ9RmFsc2UsCiAgICAgICAgICAgIGhlbHA9IlByb3ZpZGVzIHRo + ZSBuYW1lIG9mIHRoZSBSQkQgZGF0YXBvb2wiLAogICAgICAgICkKICAgICAgICBvdXRwdXRfZ3Jv + dXAuYWRkX2FyZ3VtZW50KAogICAgICAgICAgICAiLS1hbGlhcy1yYmQtZGF0YS1wb29sLW5hbWUi + LAogICAgICAgICAgICBkZWZhdWx0PSIiLAogICAgICAgICAgICByZXF1aXJlZD1GYWxzZSwKICAg + ICAgICAgICAgaGVscD0iUHJvdmlkZXMgYW4gYWxpYXMgZm9yIHRoZSAgUkJEIGRhdGEgcG9vbCBu + YW1lLCBuZWNlc3NhcnkgaWYgYSBzcGVjaWFsIGNoYXJhY3RlciBpcyBwcmVzZW50IGluIHRoZSBw + b29sIG5hbWUgc3VjaCBhcyBhIHBlcmlvZCBvciB1bmRlcnNjb3JlIiwKICAgICAgICApCiAgICAg + ICAgb3V0cHV0X2dyb3VwLmFkZF9hcmd1bWVudCgKICAgICAgICAgICAgIi0tcmd3LWVuZHBvaW50 + IiwKICAgICAgICAgICAgZGVmYXVsdD0iIiwKICAgICAgICAgICAgcmVxdWlyZWQ9RmFsc2UsCiAg + ICAgICAgICAgIGhlbHA9IlJBRE9TIEdhdGV3YXkgZW5kcG9pbnQgKGluIGA8SVB2ND46PFBPUlQ+ + YCBvciBgPFtJUHY2XT46PFBPUlQ+YCBvciBgPEZRRE4+OjxQT1JUPmAgZm9ybWF0KSIsCiAgICAg + 
ICAgKQogICAgICAgIG91dHB1dF9ncm91cC5hZGRfYXJndW1lbnQoCiAgICAgICAgICAgICItLXJn + dy10bHMtY2VydC1wYXRoIiwKICAgICAgICAgICAgZGVmYXVsdD0iIiwKICAgICAgICAgICAgcmVx + dWlyZWQ9RmFsc2UsCiAgICAgICAgICAgIGhlbHA9IlJBRE9TIEdhdGV3YXkgZW5kcG9pbnQgVExT + IGNlcnRpZmljYXRlIiwKICAgICAgICApCiAgICAgICAgb3V0cHV0X2dyb3VwLmFkZF9hcmd1bWVu + dCgKICAgICAgICAgICAgIi0tcmd3LXNraXAtdGxzIiwKICAgICAgICAgICAgcmVxdWlyZWQ9RmFs + c2UsCiAgICAgICAgICAgIGRlZmF1bHQ9RmFsc2UsCiAgICAgICAgICAgIGhlbHA9Iklnbm9yZSBU + TFMgY2VydGlmaWNhdGlvbiB2YWxpZGF0aW9uIHdoZW4gYSBzZWxmLXNpZ25lZCBjZXJ0aWZpY2F0 + ZSBpcyBwcm92aWRlZCAoTk9UIFJFQ09NTUVOREVEIiwKICAgICAgICApCiAgICAgICAgb3V0cHV0 + X2dyb3VwLmFkZF9hcmd1bWVudCgKICAgICAgICAgICAgIi0tbW9uaXRvcmluZy1lbmRwb2ludCIs + CiAgICAgICAgICAgIGRlZmF1bHQ9IiIsCiAgICAgICAgICAgIHJlcXVpcmVkPUZhbHNlLAogICAg + ICAgICAgICBoZWxwPSJDZXBoIE1hbmFnZXIgcHJvbWV0aGV1cyBleHBvcnRlciBlbmRwb2ludHMg + KGNvbW1hIHNlcGFyYXRlZCBsaXN0IG9mIChmb3JtYXQgYDxJUHY0PmAgb3IgYDxbSVB2Nl0+YCBv + ciBgPEZRRE4+YCkgZW50cmllcyBvZiBhY3RpdmUgYW5kIHN0YW5kYnkgbWdycykiLAogICAgICAg + ICkKICAgICAgICBvdXRwdXRfZ3JvdXAuYWRkX2FyZ3VtZW50KAogICAgICAgICAgICAiLS1tb25p + dG9yaW5nLWVuZHBvaW50LXBvcnQiLAogICAgICAgICAgICBkZWZhdWx0PSIiLAogICAgICAgICAg + ICByZXF1aXJlZD1GYWxzZSwKICAgICAgICAgICAgaGVscD0iQ2VwaCBNYW5hZ2VyIHByb21ldGhl + dXMgZXhwb3J0ZXIgcG9ydCIsCiAgICAgICAgKQogICAgICAgIG91dHB1dF9ncm91cC5hZGRfYXJn + dW1lbnQoCiAgICAgICAgICAgICItLXNraXAtbW9uaXRvcmluZy1lbmRwb2ludCIsCiAgICAgICAg + ICAgIGRlZmF1bHQ9RmFsc2UsCiAgICAgICAgICAgIGFjdGlvbj0ic3RvcmVfdHJ1ZSIsCiAgICAg + ICAgICAgIGhlbHA9IkRvIG5vdCBjaGVjayBmb3IgYSBtb25pdG9yaW5nIGVuZHBvaW50IGZvciB0 + aGUgQ2VwaCBjbHVzdGVyIiwKICAgICAgICApCiAgICAgICAgb3V0cHV0X2dyb3VwLmFkZF9hcmd1 + bWVudCgKICAgICAgICAgICAgIi0tcmJkLW1ldGFkYXRhLWVjLXBvb2wtbmFtZSIsCiAgICAgICAg + ICAgIGRlZmF1bHQ9IiIsCiAgICAgICAgICAgIHJlcXVpcmVkPUZhbHNlLAogICAgICAgICAgICBo + ZWxwPSJQcm92aWRlcyB0aGUgbmFtZSBvZiBlcmFzdXJlIGNvZGVkIFJCRCBtZXRhZGF0YSBwb29s + IiwKICAgICAgICApCiAgICAgICAgb3V0cHV0X2dyb3VwLmFkZF9hcmd1bWVudCgKICAgICAgICAg + 
ICAgIi0tZHJ5LXJ1biIsCiAgICAgICAgICAgIGRlZmF1bHQ9RmFsc2UsCiAgICAgICAgICAgIGFj + dGlvbj0ic3RvcmVfdHJ1ZSIsCiAgICAgICAgICAgIGhlbHA9IkRyeSBydW4gcHJpbnRzIHRoZSBl + eGVjdXRlZCBjb21tYW5kcyB3aXRob3V0IHJ1bm5pbmcgdGhlbSIsCiAgICAgICAgKQogICAgICAg + IG91dHB1dF9ncm91cC5hZGRfYXJndW1lbnQoCiAgICAgICAgICAgICItLXJhZG9zLW5hbWVzcGFj + ZSIsCiAgICAgICAgICAgIGRlZmF1bHQ9IiIsCiAgICAgICAgICAgIHJlcXVpcmVkPUZhbHNlLAog + ICAgICAgICAgICBoZWxwPSJEaXZpZGVzIGEgcG9vbCBpbnRvIHNlcGFyYXRlIGxvZ2ljYWwgbmFt + ZXNwYWNlcywgdXNlZCBmb3IgY3JlYXRpbmcgUkJEIFBWQyBpbiBhIENlcGhCbG9ja1Bvb2xSYWRv + c05hbWVzcGFjZSAoc2hvdWxkIGJlIGxvd2VyIGNhc2UpIiwKICAgICAgICApCiAgICAgICAgb3V0 + cHV0X2dyb3VwLmFkZF9hcmd1bWVudCgKICAgICAgICAgICAgIi0tc3Vidm9sdW1lLWdyb3VwIiwK + ICAgICAgICAgICAgZGVmYXVsdD0iIiwKICAgICAgICAgICAgcmVxdWlyZWQ9RmFsc2UsCiAgICAg + ICAgICAgIGhlbHA9InByb3ZpZGVzIHRoZSBuYW1lIG9mIHRoZSBzdWJ2b2x1bWUgZ3JvdXAiLAog + ICAgICAgICkKICAgICAgICBvdXRwdXRfZ3JvdXAuYWRkX2FyZ3VtZW50KAogICAgICAgICAgICAi + LS1yZ3ctcmVhbG0tbmFtZSIsCiAgICAgICAgICAgIGRlZmF1bHQ9IiIsCiAgICAgICAgICAgIHJl + cXVpcmVkPUZhbHNlLAogICAgICAgICAgICBoZWxwPSJwcm92aWRlcyB0aGUgbmFtZSBvZiB0aGUg + cmd3LXJlYWxtIiwKICAgICAgICApCiAgICAgICAgb3V0cHV0X2dyb3VwLmFkZF9hcmd1bWVudCgK + ICAgICAgICAgICAgIi0tcmd3LXpvbmUtbmFtZSIsCiAgICAgICAgICAgIGRlZmF1bHQ9IiIsCiAg + ICAgICAgICAgIHJlcXVpcmVkPUZhbHNlLAogICAgICAgICAgICBoZWxwPSJwcm92aWRlcyB0aGUg + bmFtZSBvZiB0aGUgcmd3LXpvbmUiLAogICAgICAgICkKICAgICAgICBvdXRwdXRfZ3JvdXAuYWRk + X2FyZ3VtZW50KAogICAgICAgICAgICAiLS1yZ3ctem9uZWdyb3VwLW5hbWUiLAogICAgICAgICAg + ICBkZWZhdWx0PSIiLAogICAgICAgICAgICByZXF1aXJlZD1GYWxzZSwKICAgICAgICAgICAgaGVs + cD0icHJvdmlkZXMgdGhlIG5hbWUgb2YgdGhlIHJndy16b25lZ3JvdXAiLAogICAgICAgICkKICAg + ICAgICBvdXRwdXRfZ3JvdXAuYWRkX2FyZ3VtZW50KAogICAgICAgICAgICAiLS10b3BvbG9neS1w + b29scyIsCiAgICAgICAgICAgIGRlZmF1bHQ9IiIsCiAgICAgICAgICAgIHJlcXVpcmVkPUZhbHNl + LAogICAgICAgICAgICBoZWxwPSJjb21tYS1zZXBhcmF0ZWQgbGlzdCBvZiB0b3BvbG9neS1jb25z + dHJhaW5lZCByYmQgcG9vbHMiLAogICAgICAgICkKICAgICAgICBvdXRwdXRfZ3JvdXAuYWRkX2Fy + 
Z3VtZW50KAogICAgICAgICAgICAiLS10b3BvbG9neS1mYWlsdXJlLWRvbWFpbi1sYWJlbCIsCiAg + ICAgICAgICAgIGRlZmF1bHQ9IiIsCiAgICAgICAgICAgIHJlcXVpcmVkPUZhbHNlLAogICAgICAg + ICAgICBoZWxwPSJrOHMgY2x1c3RlciBmYWlsdXJlIGRvbWFpbiBsYWJlbCAoZXhhbXBsZTogem9u + ZSwgcmFjaywgb3IgaG9zdCkgZm9yIHRoZSB0b3BvbG9neS1wb29scyB0aGF0IG1hdGNoIHRoZSBj + ZXBoIGRvbWFpbiIsCiAgICAgICAgKQogICAgICAgIG91dHB1dF9ncm91cC5hZGRfYXJndW1lbnQo + CiAgICAgICAgICAgICItLXRvcG9sb2d5LWZhaWx1cmUtZG9tYWluLXZhbHVlcyIsCiAgICAgICAg + ICAgIGRlZmF1bHQ9IiIsCiAgICAgICAgICAgIHJlcXVpcmVkPUZhbHNlLAogICAgICAgICAgICBo + ZWxwPSJjb21tYS1zZXBhcmF0ZWQgbGlzdCBvZiB0aGUgazhzIGNsdXN0ZXIgZmFpbHVyZSBkb21h + aW4gdmFsdWVzIGNvcnJlc3BvbmRpbmcgdG8gZWFjaCBvZiB0aGUgcG9vbHMgaW4gdGhlIGB0b3Bv + bG9neS1wb29sc2AgbGlzdCIsCiAgICAgICAgKQoKICAgICAgICB1cGdyYWRlX2dyb3VwID0gYXJn + UC5hZGRfYXJndW1lbnRfZ3JvdXAoInVwZ3JhZGUiKQogICAgICAgIHVwZ3JhZGVfZ3JvdXAuYWRk + X2FyZ3VtZW50KAogICAgICAgICAgICAiLS11cGdyYWRlIiwKICAgICAgICAgICAgYWN0aW9uPSJz + dG9yZV90cnVlIiwKICAgICAgICAgICAgZGVmYXVsdD1GYWxzZSwKICAgICAgICAgICAgaGVscD0i + VXBncmFkZXMgdGhlIGNlcGhDU0lLZXlyaW5ncyhGb3IgZXhhbXBsZTogY2xpZW50LmNzaS1jZXBo + ZnMtcHJvdmlzaW9uZXIpIGFuZCBjbGllbnQuaGVhbHRoY2hlY2tlciBjZXBoIHVzZXJzIHdpdGgg + bmV3IHBlcm1pc3Npb25zIG5lZWRlZCBmb3IgdGhlIG5ldyBjbHVzdGVyIHZlcnNpb24gYW5kIG9s + ZGVyIHBlcm1pc3Npb24gd2lsbCBzdGlsbCBiZSBhcHBsaWVkLiIKICAgICAgICAgICAgKyAiU2Ft + cGxlIHJ1bjogYHB5dGhvbjMgL2V0Yy9jZXBoL2NyZWF0ZS1leHRlcm5hbC1jbHVzdGVyLXJlc291 + cmNlcy5weSAtLXVwZ3JhZGVgLCB0aGlzIHdpbGwgdXBncmFkZSBhbGwgdGhlIGRlZmF1bHQgY3Np + IHVzZXJzKG5vbi1yZXN0cmljdGVkKSIKICAgICAgICAgICAgKyAiRm9yIHJlc3RyaWN0ZWQgdXNl + cnMoRm9yIGV4YW1wbGU6IGNsaWVudC5jc2ktY2VwaGZzLXByb3Zpc2lvbmVyLW9wZW5zaGlmdC1z + dG9yYWdlLW15ZnMpLCB1c2VycyBjcmVhdGVkIHVzaW5nIC0tcmVzdHJpY3RlZC1hdXRoLXBlcm1p + c3Npb24gZmxhZyBuZWVkIHRvIHBhc3MgbWFuZGF0b3J5IGZsYWdzIgogICAgICAgICAgICArICJt + YW5kYXRvcnkgZmxhZ3M6ICctLXJiZC1kYXRhLXBvb2wtbmFtZSwgLS1rOHMtY2x1c3Rlci1uYW1l + IGFuZCAtLXJ1bi1hcy11c2VyJyBmbGFncyB3aGlsZSB1cGdyYWRpbmciCiAgICAgICAgICAgICsg + 
ImluIGNhc2Ugb2YgY2VwaGZzIHVzZXJzIGlmIHlvdSBoYXZlIHBhc3NlZCAtLWNlcGhmcy1maWxl + c3lzdGVtLW5hbWUgZmxhZyB3aGlsZSBjcmVhdGluZyB1c2VyIHRoZW4gd2hpbGUgdXBncmFkaW5n + IGl0IHdpbGwgYmUgbWFuZGF0b3J5IHRvbyIKICAgICAgICAgICAgKyAiU2FtcGxlIHJ1bjogYHB5 + dGhvbjMgL2V0Yy9jZXBoL2NyZWF0ZS1leHRlcm5hbC1jbHVzdGVyLXJlc291cmNlcy5weSAtLXVw + Z3JhZGUgLS1yYmQtZGF0YS1wb29sLW5hbWUgcmVwbGljYXBvb2wgLS1rOHMtY2x1c3Rlci1uYW1l + IHJvb2tzdG9yYWdlICAtLXJ1bi1hcy11c2VyIGNsaWVudC5jc2ktcmJkLW5vZGUtcm9va3N0b3Jh + Z2UtcmVwbGljYXBvb2xgIgogICAgICAgICAgICArICJQUzogQW4gZXhpc3Rpbmcgbm9uLXJlc3Ry + aWN0ZWQgdXNlciBjYW5ub3QgYmUgY29udmVydGVkIHRvIGEgcmVzdHJpY3RlZCB1c2VyIGJ5IHVw + Z3JhZGluZy4iCiAgICAgICAgICAgICsgIlVwZ3JhZGUgZmxhZyBzaG91bGQgb25seSBiZSB1c2Vk + IHRvIGFwcGVuZCBuZXcgcGVybWlzc2lvbnMgdG8gdXNlcnMsIGl0IHNob3VsZG4ndCBiZSB1c2Vk + IGZvciBjaGFuZ2luZyB1c2VyIGFscmVhZHkgYXBwbGllZCBwZXJtaXNzaW9uLCBmb3IgZXhhbXBs + ZSB5b3Ugc2hvdWxkbid0IGNoYW5nZSBpbiB3aGljaCBwb29sIHVzZXIgaGFzIGFjY2VzcyIsCiAg + ICAgICAgKQoKICAgICAgICBpZiBhcmdzX3RvX3BhcnNlOgogICAgICAgICAgICBhc3NlcnQgKAog + ICAgICAgICAgICAgICAgdHlwZShhcmdzX3RvX3BhcnNlKSA9PSBsaXN0CiAgICAgICAgICAgICks + ICJBcmd1bWVudCB0byAnZ2VuX2FyZ19wYXJzZXInIHNob3VsZCBiZSBhIGxpc3QiCiAgICAgICAg + ZWxzZToKICAgICAgICAgICAgYXJnc190b19wYXJzZSA9IHN5cy5hcmd2WzE6XQogICAgICAgIHJl + dHVybiBhcmdQLnBhcnNlX2FyZ3MoYXJnc190b19wYXJzZSkKCiAgICBkZWYgdmFsaWRhdGVfcmJk + X21ldGFkYXRhX2VjX3Bvb2xfbmFtZShzZWxmKToKICAgICAgICBpZiBzZWxmLl9hcmdfcGFyc2Vy + LnJiZF9tZXRhZGF0YV9lY19wb29sX25hbWU6CiAgICAgICAgICAgIHJiZF9tZXRhZGF0YV9lY19w + b29sX25hbWUgPSBzZWxmLl9hcmdfcGFyc2VyLnJiZF9tZXRhZGF0YV9lY19wb29sX25hbWUKICAg + ICAgICAgICAgcmJkX3Bvb2xfbmFtZSA9IHNlbGYuX2FyZ19wYXJzZXIucmJkX2RhdGFfcG9vbF9u + YW1lCgogICAgICAgICAgICBpZiByYmRfcG9vbF9uYW1lID09ICIiOgogICAgICAgICAgICAgICAg + cmFpc2UgRXhlY3V0aW9uRmFpbHVyZUV4Y2VwdGlvbigKICAgICAgICAgICAgICAgICAgICAiRmxh + ZyAnLS1yYmQtZGF0YS1wb29sLW5hbWUnIHNob3VsZCBub3QgYmUgZW1wdHkiCiAgICAgICAgICAg + ICAgICApCgogICAgICAgICAgICBpZiByYmRfbWV0YWRhdGFfZWNfcG9vbF9uYW1lID09ICIiOgog + 
ICAgICAgICAgICAgICAgcmFpc2UgRXhlY3V0aW9uRmFpbHVyZUV4Y2VwdGlvbigKICAgICAgICAg + ICAgICAgICAgICAiRmxhZyAnLS1yYmQtbWV0YWRhdGEtZWMtcG9vbC1uYW1lJyBzaG91bGQgbm90 + IGJlIGVtcHR5IgogICAgICAgICAgICAgICAgKQoKICAgICAgICAgICAgY21kX2pzb24gPSB7InBy + ZWZpeCI6ICJvc2QgZHVtcCIsICJmb3JtYXQiOiAianNvbiJ9CiAgICAgICAgICAgIHJldF92YWws + IGpzb25fb3V0LCBlcnJfbXNnID0gc2VsZi5fY29tbW9uX2NtZF9qc29uX2dlbihjbWRfanNvbikK + ICAgICAgICAgICAgaWYgcmV0X3ZhbCAhPSAwIG9yIGxlbihqc29uX291dCkgPT0gMDoKICAgICAg + ICAgICAgICAgIHJhaXNlIEV4ZWN1dGlvbkZhaWx1cmVFeGNlcHRpb24oCiAgICAgICAgICAgICAg + ICAgICAgZiJ7Y21kX2pzb25bJ3ByZWZpeCddfSBjb21tYW5kIGZhaWxlZC5cbiIKICAgICAgICAg + ICAgICAgICAgICBmIkVycm9yOiB7ZXJyX21zZyBpZiByZXRfdmFsICE9IDAgZWxzZSBzZWxmLkVN + UFRZX09VVFBVVF9MSVNUfSIKICAgICAgICAgICAgICAgICkKICAgICAgICAgICAgbWV0YWRhdGFf + cG9vbF9leGlzdCwgcG9vbF9leGlzdCA9IEZhbHNlLCBGYWxzZQoKICAgICAgICAgICAgZm9yIGtl + eSBpbiBqc29uX291dFsicG9vbHMiXToKICAgICAgICAgICAgICAgICMgaWYgZXJhc3VyZV9jb2Rl + X3Byb2ZpbGUgaXMgZW1wdHkgYW5kIHBvb2wgbmFtZSBleGlzdHMgdGhlbiBpdCByZXBsaWNhIHBv + b2wKICAgICAgICAgICAgICAgIGlmICgKICAgICAgICAgICAgICAgICAgICBrZXlbImVyYXN1cmVf + Y29kZV9wcm9maWxlIl0gPT0gIiIKICAgICAgICAgICAgICAgICAgICBhbmQga2V5WyJwb29sX25h + bWUiXSA9PSByYmRfbWV0YWRhdGFfZWNfcG9vbF9uYW1lCiAgICAgICAgICAgICAgICApOgogICAg + ICAgICAgICAgICAgICAgIG1ldGFkYXRhX3Bvb2xfZXhpc3QgPSBUcnVlCiAgICAgICAgICAgICAg + ICAjIGlmIGVyYXN1cmVfY29kZV9wcm9maWxlIGlzIG5vdCBlbXB0eSBhbmQgcG9vbCBuYW1lIGV4 + aXN0cyB0aGVuIGl0IGlzIGVjIHBvb2wKICAgICAgICAgICAgICAgIGlmIGtleVsiZXJhc3VyZV9j + b2RlX3Byb2ZpbGUiXSBhbmQga2V5WyJwb29sX25hbWUiXSA9PSByYmRfcG9vbF9uYW1lOgogICAg + ICAgICAgICAgICAgICAgIHBvb2xfZXhpc3QgPSBUcnVlCgogICAgICAgICAgICBpZiBub3QgbWV0 + YWRhdGFfcG9vbF9leGlzdDoKICAgICAgICAgICAgICAgIHJhaXNlIEV4ZWN1dGlvbkZhaWx1cmVF + eGNlcHRpb24oCiAgICAgICAgICAgICAgICAgICAgIlByb3ZpZGVkIHJiZF9lY19tZXRhZGF0YV9w + b29sIG5hbWUsIgogICAgICAgICAgICAgICAgICAgIGYiIHtyYmRfbWV0YWRhdGFfZWNfcG9vbF9u + YW1lfSwgZG9lcyBub3QgZXhpc3QiCiAgICAgICAgICAgICAgICApCiAgICAgICAgICAgIGlmIG5v + 
dCBwb29sX2V4aXN0OgogICAgICAgICAgICAgICAgcmFpc2UgRXhlY3V0aW9uRmFpbHVyZUV4Y2Vw + dGlvbigKICAgICAgICAgICAgICAgICAgICBmIlByb3ZpZGVkIHJiZF9kYXRhX3Bvb2wgbmFtZSwg + e3JiZF9wb29sX25hbWV9LCBkb2VzIG5vdCBleGlzdCIKICAgICAgICAgICAgICAgICkKICAgICAg + ICAgICAgcmV0dXJuIHJiZF9tZXRhZGF0YV9lY19wb29sX25hbWUKCiAgICBkZWYgZHJ5X3J1bihz + ZWxmLCBtc2cpOgogICAgICAgIGlmIHNlbGYuX2FyZ19wYXJzZXIuZHJ5X3J1bjoKICAgICAgICAg + ICAgcHJpbnQoIkV4ZWN1dGU6ICIgKyAiJyIgKyBtc2cgKyAiJyIpCgogICAgZGVmIHZhbGlkYXRl + X3Jnd19lbmRwb2ludF90bHNfY2VydChzZWxmKToKICAgICAgICBpZiBzZWxmLl9hcmdfcGFyc2Vy + LnJnd190bHNfY2VydF9wYXRoOgogICAgICAgICAgICB3aXRoIG9wZW4oc2VsZi5fYXJnX3BhcnNl + ci5yZ3dfdGxzX2NlcnRfcGF0aCwgZW5jb2Rpbmc9InV0ZjgiKSBhcyBmOgogICAgICAgICAgICAg + ICAgY29udGVudHMgPSBmLnJlYWQoKQogICAgICAgICAgICAgICAgcmV0dXJuIGNvbnRlbnRzLnJz + dHJpcCgpCgogICAgZGVmIF9jaGVja19jb25mbGljdGluZ19vcHRpb25zKHNlbGYpOgogICAgICAg + IGlmIG5vdCBzZWxmLl9hcmdfcGFyc2VyLnVwZ3JhZGUgYW5kIG5vdCBzZWxmLl9hcmdfcGFyc2Vy + LnJiZF9kYXRhX3Bvb2xfbmFtZToKICAgICAgICAgICAgcmFpc2UgRXhlY3V0aW9uRmFpbHVyZUV4 + Y2VwdGlvbigKICAgICAgICAgICAgICAgICJFaXRoZXIgJy0tdXBncmFkZScgb3IgJy0tcmJkLWRh + dGEtcG9vbC1uYW1lIDxwb29sX25hbWU+JyBzaG91bGQgYmUgc3BlY2lmaWVkIgogICAgICAgICAg + ICApCgogICAgZGVmIF9pbnZhbGlkX2VuZHBvaW50KHNlbGYsIGVuZHBvaW50X3N0cik6CiAgICAg + ICAgIyBleHRyYWN0IHRoZSBwb3J0IGJ5IGdldHRpbmcgdGhlIGxhc3Qgc3BsaXQgb24gYDpgIGRl + bGltaXRlcgogICAgICAgIHRyeToKICAgICAgICAgICAgZW5kcG9pbnRfc3RyX2lwLCBwb3J0ID0g + ZW5kcG9pbnRfc3RyLnJzcGxpdCgiOiIsIDEpCiAgICAgICAgZXhjZXB0IFZhbHVlRXJyb3I6CiAg + ICAgICAgICAgIHJhaXNlIEV4ZWN1dGlvbkZhaWx1cmVFeGNlcHRpb24oZiJOb3QgYSBwcm9wZXIg + ZW5kcG9pbnQ6IHtlbmRwb2ludF9zdHJ9IikKCiAgICAgICAgdHJ5OgogICAgICAgICAgICBpZiBl + bmRwb2ludF9zdHJfaXBbMF0gPT0gIlsiOgogICAgICAgICAgICAgICAgZW5kcG9pbnRfc3RyX2lw + ID0gZW5kcG9pbnRfc3RyX2lwWzEgOiBsZW4oZW5kcG9pbnRfc3RyX2lwKSAtIDFdCiAgICAgICAg + ICAgIGlwX3R5cGUgPSAoCiAgICAgICAgICAgICAgICAiSVB2NCIgaWYgdHlwZShpcF9hZGRyZXNz + KGVuZHBvaW50X3N0cl9pcCkpIGlzIElQdjRBZGRyZXNzIGVsc2UgIklQdjYiCiAgICAgICAgICAg + 
ICkKICAgICAgICBleGNlcHQgVmFsdWVFcnJvcjoKICAgICAgICAgICAgaXBfdHlwZSA9ICJGUURO + IgogICAgICAgIGlmIG5vdCBwb3J0LmlzZGlnaXQoKToKICAgICAgICAgICAgcmFpc2UgRXhlY3V0 + aW9uRmFpbHVyZUV4Y2VwdGlvbihmIlBvcnQgbm90IHZhbGlkOiB7cG9ydH0iKQogICAgICAgIGlu + dFBvcnQgPSBpbnQocG9ydCkKICAgICAgICBpZiBpbnRQb3J0IDwgMSBvciBpbnRQb3J0ID4gMioq + MTYgLSAxOgogICAgICAgICAgICByYWlzZSBFeGVjdXRpb25GYWlsdXJlRXhjZXB0aW9uKGYiT3V0 + IG9mIHJhbmdlIHBvcnQgbnVtYmVyOiB7cG9ydH0iKQoKICAgICAgICByZXR1cm4gaXBfdHlwZQoK + ICAgIGRlZiBlbmRwb2ludF9kaWFsKHNlbGYsIGVuZHBvaW50X3N0ciwgaXBfdHlwZSwgdGltZW91 + dD0zLCBjZXJ0PU5vbmUpOgogICAgICAgICMgaWYgdGhlICdjbHVzdGVyJyBpbnN0YW5jZSBpcyBh + IGR1bW15IG9uZSwKICAgICAgICAjIGRvbid0IHRyeSB0byByZWFjaCBvdXQgdG8gdGhlIGVuZHBv + aW50CiAgICAgICAgaWYgaXNpbnN0YW5jZShzZWxmLmNsdXN0ZXIsIER1bW15UmFkb3MpOgogICAg + ICAgICAgICByZXR1cm4gIiIsICIiLCAiIgogICAgICAgIGlmIGlwX3R5cGUgPT0gIklQdjYiOgog + ICAgICAgICAgICB0cnk6CiAgICAgICAgICAgICAgICBlbmRwb2ludF9zdHJfaXAsIGVuZHBvaW50 + X3N0cl9wb3J0ID0gZW5kcG9pbnRfc3RyLnJzcGxpdCgiOiIsIDEpCiAgICAgICAgICAgIGV4Y2Vw + dCBWYWx1ZUVycm9yOgogICAgICAgICAgICAgICAgcmFpc2UgRXhlY3V0aW9uRmFpbHVyZUV4Y2Vw + dGlvbigKICAgICAgICAgICAgICAgICAgICBmIk5vdCBhIHByb3BlciBlbmRwb2ludDoge2VuZHBv + aW50X3N0cn0iCiAgICAgICAgICAgICAgICApCiAgICAgICAgICAgIGlmIGVuZHBvaW50X3N0cl9p + cFswXSAhPSAiWyI6CiAgICAgICAgICAgICAgICBlbmRwb2ludF9zdHJfaXAgPSAiWyIgKyBlbmRw + b2ludF9zdHJfaXAgKyAiXSIKICAgICAgICAgICAgZW5kcG9pbnRfc3RyID0gIjoiLmpvaW4oW2Vu + ZHBvaW50X3N0cl9pcCwgZW5kcG9pbnRfc3RyX3BvcnRdKQoKICAgICAgICBwcm90b2NvbHMgPSBb + Imh0dHAiLCAiaHR0cHMiXQogICAgICAgIHJlc3BvbnNlX2Vycm9yID0gTm9uZQogICAgICAgIGZv + ciBwcmVmaXggaW4gcHJvdG9jb2xzOgogICAgICAgICAgICB0cnk6CiAgICAgICAgICAgICAgICBl + cCA9IGYie3ByZWZpeH06Ly97ZW5kcG9pbnRfc3RyfSIKICAgICAgICAgICAgICAgIHZlcmlmeSA9 + IE5vbmUKICAgICAgICAgICAgICAgICMgSWYgdmVyaWZ5IGlzIHNldCB0byBhIHBhdGggdG8gYSBk + aXJlY3RvcnksCiAgICAgICAgICAgICAgICAjIHRoZSBkaXJlY3RvcnkgbXVzdCBoYXZlIGJlZW4g + cHJvY2Vzc2VkIHVzaW5nIHRoZSBjX3JlaGFzaCB1dGlsaXR5IHN1cHBsaWVkIHdpdGggT3BlblNT + 
TC4KICAgICAgICAgICAgICAgIGlmIHByZWZpeCA9PSAiaHR0cHMiIGFuZCBzZWxmLl9hcmdfcGFy + c2VyLnJnd19za2lwX3RsczoKICAgICAgICAgICAgICAgICAgICB2ZXJpZnkgPSBGYWxzZQogICAg + ICAgICAgICAgICAgICAgIHIgPSByZXF1ZXN0cy5oZWFkKGVwLCB0aW1lb3V0PXRpbWVvdXQsIHZl + cmlmeT1GYWxzZSkKICAgICAgICAgICAgICAgIGVsaWYgcHJlZml4ID09ICJodHRwcyIgYW5kIGNl + cnQ6CiAgICAgICAgICAgICAgICAgICAgdmVyaWZ5ID0gY2VydAogICAgICAgICAgICAgICAgICAg + IHIgPSByZXF1ZXN0cy5oZWFkKGVwLCB0aW1lb3V0PXRpbWVvdXQsIHZlcmlmeT1jZXJ0KQogICAg + ICAgICAgICAgICAgZWxzZToKICAgICAgICAgICAgICAgICAgICByID0gcmVxdWVzdHMuaGVhZChl + cCwgdGltZW91dD10aW1lb3V0KQogICAgICAgICAgICAgICAgaWYgci5zdGF0dXNfY29kZSA9PSAy + MDA6CiAgICAgICAgICAgICAgICAgICAgcmV0dXJuIHByZWZpeCwgdmVyaWZ5LCAiIgogICAgICAg + ICAgICBleGNlcHQgRXhjZXB0aW9uIGFzIGVycjoKICAgICAgICAgICAgICAgIHJlc3BvbnNlX2Vy + cm9yID0gZXJyCiAgICAgICAgICAgICAgICBjb250aW51ZQogICAgICAgIHN5cy5zdGRlcnIud3Jp + dGUoCiAgICAgICAgICAgIGYidW5hYmxlIHRvIGNvbm5lY3QgdG8gZW5kcG9pbnQ6IHtlbmRwb2lu + dF9zdHJ9LCBmYWlsZWQgZXJyb3I6IHtyZXNwb25zZV9lcnJvcn0iCiAgICAgICAgKQogICAgICAg + IHJldHVybiAoCiAgICAgICAgICAgICIiLAogICAgICAgICAgICAiIiwKICAgICAgICAgICAgKCIt + MSIpLAogICAgICAgICkKCiAgICBkZWYgX19pbml0X18oc2VsZiwgYXJnX2xpc3Q9Tm9uZSk6CiAg + ICAgICAgc2VsZi5vdXRfbWFwID0ge30KICAgICAgICBzZWxmLl9leGNsdWRlZF9rZXlzID0gc2V0 + KCkKICAgICAgICBzZWxmLl9hcmdfcGFyc2VyID0gc2VsZi5nZW5fYXJnX3BhcnNlcihhcmdzX3Rv + X3BhcnNlPWFyZ19saXN0KQogICAgICAgIHNlbGYuX2NoZWNrX2NvbmZsaWN0aW5nX29wdGlvbnMo + KQogICAgICAgIHNlbGYucnVuX2FzX3VzZXIgPSBzZWxmLl9hcmdfcGFyc2VyLnJ1bl9hc191c2Vy + CiAgICAgICAgc2VsZi5vdXRwdXRfZmlsZSA9IHNlbGYuX2FyZ19wYXJzZXIub3V0cHV0CiAgICAg + ICAgc2VsZi5jZXBoX2NvbmYgPSBzZWxmLl9hcmdfcGFyc2VyLmNlcGhfY29uZgogICAgICAgIHNl + bGYuY2VwaF9rZXlyaW5nID0gc2VsZi5fYXJnX3BhcnNlci5rZXlyaW5nCiAgICAgICAgIyBpZiB1 + c2VyIG5vdCBwcm92aWRlZCwgZ2l2ZSBhIGRlZmF1bHQgdXNlcgogICAgICAgIGlmIG5vdCBzZWxm + LnJ1bl9hc191c2VyIGFuZCBub3Qgc2VsZi5fYXJnX3BhcnNlci51cGdyYWRlOgogICAgICAgICAg + ICBzZWxmLnJ1bl9hc191c2VyID0gc2VsZi5FWFRFUk5BTF9VU0VSX05BTUUKICAgICAgICBpZiBu + 
b3Qgc2VsZi5fYXJnX3BhcnNlci5yZ3dfcG9vbF9wcmVmaXggYW5kIG5vdCBzZWxmLl9hcmdfcGFy + c2VyLnVwZ3JhZGU6CiAgICAgICAgICAgIHNlbGYuX2FyZ19wYXJzZXIucmd3X3Bvb2xfcHJlZml4 + ID0gc2VsZi5ERUZBVUxUX1JHV19QT09MX1BSRUZJWAogICAgICAgIGlmIHNlbGYuY2VwaF9jb25m + OgogICAgICAgICAgICBrd2FyZ3MgPSB7fQogICAgICAgICAgICBpZiBzZWxmLmNlcGhfa2V5cmlu + ZzoKICAgICAgICAgICAgICAgIGt3YXJnc1siY29uZiJdID0geyJrZXlyaW5nIjogc2VsZi5jZXBo + X2tleXJpbmd9CiAgICAgICAgICAgIHNlbGYuY2x1c3RlciA9IHJhZG9zLlJhZG9zKGNvbmZmaWxl + PXNlbGYuY2VwaF9jb25mLCAqKmt3YXJncykKICAgICAgICBlbHNlOgogICAgICAgICAgICBzZWxm + LmNsdXN0ZXIgPSByYWRvcy5SYWRvcygpCiAgICAgICAgICAgIHNlbGYuY2x1c3Rlci5jb25mX3Jl + YWRfZmlsZSgpCiAgICAgICAgc2VsZi5jbHVzdGVyLmNvbm5lY3QoKQoKICAgIGRlZiBzaHV0ZG93 + bihzZWxmKToKICAgICAgICBpZiBzZWxmLmNsdXN0ZXIuc3RhdGUgPT0gImNvbm5lY3RlZCI6CiAg + ICAgICAgICAgIHNlbGYuY2x1c3Rlci5zaHV0ZG93bigpCgogICAgZGVmIGdldF9mc2lkKHNlbGYp + OgogICAgICAgIGlmIHNlbGYuX2FyZ19wYXJzZXIuZHJ5X3J1bjoKICAgICAgICAgICAgcmV0dXJu + IHNlbGYuZHJ5X3J1bigiY2VwaCBmc2lkIikKICAgICAgICByZXR1cm4gc3RyKHNlbGYuY2x1c3Rl + ci5nZXRfZnNpZCgpKQoKICAgIGRlZiBfY29tbW9uX2NtZF9qc29uX2dlbihzZWxmLCBjbWRfanNv + bik6CiAgICAgICAgY21kID0ganNvbi5kdW1wcyhjbWRfanNvbiwgc29ydF9rZXlzPVRydWUpCiAg + ICAgICAgcmV0X3ZhbCwgY21kX291dCwgZXJyX21zZyA9IHNlbGYuY2x1c3Rlci5tb25fY29tbWFu + ZChjbWQsIGIiIikKICAgICAgICBpZiBzZWxmLl9hcmdfcGFyc2VyLnZlcmJvc2U6CiAgICAgICAg + ICAgIHByaW50KGYiQ29tbWFuZCBJbnB1dDoge2NtZH0iKQogICAgICAgICAgICBwcmludCgKICAg + ICAgICAgICAgICAgIGYiUmV0dXJuIFZhbDoge3JldF92YWx9XG5Db21tYW5kIE91dHB1dDoge2Nt + ZF9vdXR9XG4iCiAgICAgICAgICAgICAgICBmIkVycm9yIE1lc3NhZ2U6IHtlcnJfbXNnfVxuLS0t + LS0tLS0tLVxuIgogICAgICAgICAgICApCiAgICAgICAganNvbl9vdXQgPSB7fQogICAgICAgICMg + aWYgdGhlcmUgaXMgbm8gZXJyb3IgKGkuZTsgcmV0X3ZhbCBpcyBaRVJPKSBhbmQgJ2NtZF9vdXQn + IGlzIG5vdCBlbXB0eQogICAgICAgICMgdGhlbiBjb252ZXJ0ICdjbWRfb3V0JyB0byBhIGpzb24g + b3V0cHV0CiAgICAgICAgaWYgcmV0X3ZhbCA9PSAwIGFuZCBjbWRfb3V0OgogICAgICAgICAgICBq + c29uX291dCA9IGpzb24ubG9hZHMoY21kX291dCkKICAgICAgICByZXR1cm4gcmV0X3ZhbCwganNv + 
bl9vdXQsIGVycl9tc2cKCiAgICBkZWYgZ2V0X2NlcGhfZXh0ZXJuYWxfbW9uX2RhdGEoc2VsZik6 + CiAgICAgICAgY21kX2pzb24gPSB7InByZWZpeCI6ICJxdW9ydW1fc3RhdHVzIiwgImZvcm1hdCI6 + ICJqc29uIn0KICAgICAgICBpZiBzZWxmLl9hcmdfcGFyc2VyLmRyeV9ydW46CiAgICAgICAgICAg + IHJldHVybiBzZWxmLmRyeV9ydW4oImNlcGggIiArIGNtZF9qc29uWyJwcmVmaXgiXSkKICAgICAg + ICByZXRfdmFsLCBqc29uX291dCwgZXJyX21zZyA9IHNlbGYuX2NvbW1vbl9jbWRfanNvbl9nZW4o + Y21kX2pzb24pCiAgICAgICAgIyBpZiB0aGVyZSBpcyBhbiB1bnN1Y2Nlc3NmdWwgYXR0ZW1wdCwK + ICAgICAgICBpZiByZXRfdmFsICE9IDAgb3IgbGVuKGpzb25fb3V0KSA9PSAwOgogICAgICAgICAg + ICByYWlzZSBFeGVjdXRpb25GYWlsdXJlRXhjZXB0aW9uKAogICAgICAgICAgICAgICAgIidxdW9y + dW1fc3RhdHVzJyBjb21tYW5kIGZhaWxlZC5cbiIKICAgICAgICAgICAgICAgIGYiRXJyb3I6IHtl + cnJfbXNnIGlmIHJldF92YWwgIT0gMCBlbHNlIHNlbGYuRU1QVFlfT1VUUFVUX0xJU1R9IgogICAg + ICAgICAgICApCiAgICAgICAgcV9sZWFkZXJfbmFtZSA9IGpzb25fb3V0WyJxdW9ydW1fbGVhZGVy + X25hbWUiXQogICAgICAgIHFfbGVhZGVyX2RldGFpbHMgPSB7fQogICAgICAgIHFfbGVhZGVyX21h + dGNoaW5nX2xpc3QgPSBbCiAgICAgICAgICAgIGwgZm9yIGwgaW4ganNvbl9vdXRbIm1vbm1hcCJd + WyJtb25zIl0gaWYgbFsibmFtZSJdID09IHFfbGVhZGVyX25hbWUKICAgICAgICBdCiAgICAgICAg + aWYgbGVuKHFfbGVhZGVyX21hdGNoaW5nX2xpc3QpID09IDA6CiAgICAgICAgICAgIHJhaXNlIEV4 + ZWN1dGlvbkZhaWx1cmVFeGNlcHRpb24oIk5vIG1hdGNoaW5nICdtb24nIGRldGFpbHMgZm91bmQi + KQogICAgICAgIHFfbGVhZGVyX2RldGFpbHMgPSBxX2xlYWRlcl9tYXRjaGluZ19saXN0WzBdCiAg + ICAgICAgIyBnZXQgdGhlIGFkZHJlc3MgdmVjdG9yIG9mIHRoZSBxdW9ydW0tbGVhZGVyCiAgICAg + ICAgcV9sZWFkZXJfYWRkcnZlYyA9IHFfbGVhZGVyX2RldGFpbHMuZ2V0KCJwdWJsaWNfYWRkcnMi + LCB7fSkuZ2V0KCJhZGRydmVjIiwgW10pCiAgICAgICAgaXBfYWRkciA9IHN0cihxX2xlYWRlcl9k + ZXRhaWxzWyJwdWJsaWNfYWRkciJdLnNwbGl0KCIvIilbMF0pCgogICAgICAgIGlmIHNlbGYuX2Fy + Z19wYXJzZXIudjJfcG9ydF9lbmFibGU6CiAgICAgICAgICAgIGlmIGxlbihxX2xlYWRlcl9hZGRy + dmVjKSA+IDE6CiAgICAgICAgICAgICAgICBpZiBxX2xlYWRlcl9hZGRydmVjWzBdWyJ0eXBlIl0g + PT0gInYyIjoKICAgICAgICAgICAgICAgICAgICBpcF9hZGRyID0gcV9sZWFkZXJfYWRkcnZlY1sw + XVsiYWRkciJdCiAgICAgICAgICAgICAgICBlbGlmIHFfbGVhZGVyX2FkZHJ2ZWNbMV1bInR5cGUi + 
XSA9PSAidjIiOgogICAgICAgICAgICAgICAgICAgIGlwX2FkZHIgPSBxX2xlYWRlcl9hZGRydmVj + WzFdWyJhZGRyIl0KICAgICAgICAgICAgZWxzZToKICAgICAgICAgICAgICAgIHN5cy5zdGRlcnIu + d3JpdGUoCiAgICAgICAgICAgICAgICAgICAgIid2MicgYWRkcmVzcyB0eXBlIG5vdCBwcmVzZW50 + LCBhbmQgJ3YyLXBvcnQtZW5hYmxlJyBmbGFnIGlzIHByb3ZpZGVkIgogICAgICAgICAgICAgICAg + KQoKICAgICAgICByZXR1cm4gZiJ7c3RyKHFfbGVhZGVyX25hbWUpfT17aXBfYWRkcn0iCgogICAg + ZGVmIF9jb252ZXJ0X2hvc3RuYW1lX3RvX2lwKHNlbGYsIGhvc3RfbmFtZSwgcG9ydCwgaXBfdHlw + ZSk6CiAgICAgICAgIyBpZiAnY2x1c3RlcicgaW5zdGFuY2UgaXMgYSBkdW1teSB0eXBlLAogICAg + ICAgICMgY2FsbCB0aGUgZHVtbXkgaW5zdGFuY2UncyAiY29udmVydCIgbWV0aG9kCiAgICAgICAg + aWYgbm90IGhvc3RfbmFtZToKICAgICAgICAgICAgcmFpc2UgRXhlY3V0aW9uRmFpbHVyZUV4Y2Vw + dGlvbigiRW1wdHkgaG9zdG5hbWUgcHJvdmlkZWQiKQogICAgICAgIGlmIGlzaW5zdGFuY2Uoc2Vs + Zi5jbHVzdGVyLCBEdW1teVJhZG9zKToKICAgICAgICAgICAgcmV0dXJuIHNlbGYuY2x1c3Rlci5f + Y29udmVydF9ob3N0bmFtZV90b19pcChob3N0X25hbWUpCgogICAgICAgIGlmIGlwX3R5cGUgPT0g + IkZRRE4iOgogICAgICAgICAgICAjIGNoZWNrIHdoaWNoIGlwIEZRRE4gc2hvdWxkIGJlIGNvbnZl + cnRlZCB0bywgSVB2NCBvciBJUHY2CiAgICAgICAgICAgICMgY2hlY2sgdGhlIGhvc3QgaXAsIHRo + ZSBlbmRwb2ludCBpcCB0eXBlIHdvdWxkIGJlIHNpbWlsYXIgdG8gaG9zdCBpcAogICAgICAgICAg + ICBjbWRfanNvbiA9IHsicHJlZml4IjogIm9yY2ggaG9zdCBscyIsICJmb3JtYXQiOiAianNvbiJ9 + CiAgICAgICAgICAgIHJldF92YWwsIGpzb25fb3V0LCBlcnJfbXNnID0gc2VsZi5fY29tbW9uX2Nt + ZF9qc29uX2dlbihjbWRfanNvbikKICAgICAgICAgICAgIyBpZiB0aGVyZSBpcyBhbiB1bnN1Y2Nl + c3NmdWwgYXR0ZW1wdCwKICAgICAgICAgICAgaWYgcmV0X3ZhbCAhPSAwIG9yIGxlbihqc29uX291 + dCkgPT0gMDoKICAgICAgICAgICAgICAgIHJhaXNlIEV4ZWN1dGlvbkZhaWx1cmVFeGNlcHRpb24o + CiAgICAgICAgICAgICAgICAgICAgIidvcmNoIGhvc3QgbHMnIGNvbW1hbmQgZmFpbGVkLlxuIgog + ICAgICAgICAgICAgICAgICAgIGYiRXJyb3I6IHtlcnJfbXNnIGlmIHJldF92YWwgIT0gMCBlbHNl + IHNlbGYuRU1QVFlfT1VUUFVUX0xJU1R9IgogICAgICAgICAgICAgICAgKQogICAgICAgICAgICBo + b3N0X2FkZHIgPSBqc29uX291dFswXVsiYWRkciJdCiAgICAgICAgICAgICMgYWRkIDo4MCBzYW1w + bGUgcG9ydCBpbiBpcF90eXBlLCBhcyBfaW52YWxpZF9lbmRwb2ludCBhbHNvIHZlcmlmeSBwb3J0 + 
CiAgICAgICAgICAgIGhvc3RfaXBfdHlwZSA9IHNlbGYuX2ludmFsaWRfZW5kcG9pbnQoaG9zdF9h + ZGRyICsgIjo4MCIpCiAgICAgICAgICAgIGltcG9ydCBzb2NrZXQKCiAgICAgICAgICAgICMgZXhh + bXBsZSBvdXRwdXQgWyg8QWRkcmVzc0ZhbWlseS5BRl9JTkVUOiAyPiwgPFNvY2tldEtpbmQuU09D + S19TVFJFQU06IDE+LCA2LCAnJywgKCc5My4xODQuMjE2LjM0JywgODApKSwgLi4uXQogICAgICAg + ICAgICAjIHdlIG5lZWQgdG8gZ2V0IDkzLjE4NC4yMTYuMzQgc28gaXQgd291bGQgYmUgaXBbMF1b + NF1bMF0KICAgICAgICAgICAgaWYgaG9zdF9pcF90eXBlID09ICJJUHY2IjoKICAgICAgICAgICAg + ICAgIGlwID0gc29ja2V0LmdldGFkZHJpbmZvKAogICAgICAgICAgICAgICAgICAgIGhvc3RfbmFt + ZSwgcG9ydCwgZmFtaWx5PXNvY2tldC5BRl9JTkVUNiwgcHJvdG89c29ja2V0LklQUFJPVE9fVENQ + CiAgICAgICAgICAgICAgICApCiAgICAgICAgICAgIGVsaWYgaG9zdF9pcF90eXBlID09ICJJUHY0 + IjoKICAgICAgICAgICAgICAgIGlwID0gc29ja2V0LmdldGFkZHJpbmZvKAogICAgICAgICAgICAg + ICAgICAgIGhvc3RfbmFtZSwgcG9ydCwgZmFtaWx5PXNvY2tldC5BRl9JTkVULCBwcm90bz1zb2Nr + ZXQuSVBQUk9UT19UQ1AKICAgICAgICAgICAgICAgICkKICAgICAgICAgICAgZGVsIHNvY2tldAog + ICAgICAgICAgICByZXR1cm4gaXBbMF1bNF1bMF0KICAgICAgICByZXR1cm4gaG9zdF9uYW1lCgog + ICAgZGVmIGdldF9hY3RpdmVfYW5kX3N0YW5kYnlfbWdycyhzZWxmKToKICAgICAgICBpZiBzZWxm + Ll9hcmdfcGFyc2VyLmRyeV9ydW46CiAgICAgICAgICAgIHJldHVybiAiIiwgc2VsZi5kcnlfcnVu + KCJjZXBoIHN0YXR1cyIpCiAgICAgICAgbW9uaXRvcmluZ19lbmRwb2ludF9wb3J0ID0gc2VsZi5f + YXJnX3BhcnNlci5tb25pdG9yaW5nX2VuZHBvaW50X3BvcnQKICAgICAgICBtb25pdG9yaW5nX2Vu + ZHBvaW50X2lwX2xpc3QgPSBzZWxmLl9hcmdfcGFyc2VyLm1vbml0b3JpbmdfZW5kcG9pbnQKICAg + ICAgICBzdGFuZGJ5X21ncnMgPSBbXQogICAgICAgIGlmIG5vdCBtb25pdG9yaW5nX2VuZHBvaW50 + X2lwX2xpc3Q6CiAgICAgICAgICAgIGNtZF9qc29uID0geyJwcmVmaXgiOiAic3RhdHVzIiwgImZv + cm1hdCI6ICJqc29uIn0KICAgICAgICAgICAgcmV0X3ZhbCwganNvbl9vdXQsIGVycl9tc2cgPSBz + ZWxmLl9jb21tb25fY21kX2pzb25fZ2VuKGNtZF9qc29uKQogICAgICAgICAgICAjIGlmIHRoZXJl + IGlzIGFuIHVuc3VjY2Vzc2Z1bCBhdHRlbXB0LAogICAgICAgICAgICBpZiByZXRfdmFsICE9IDAg + b3IgbGVuKGpzb25fb3V0KSA9PSAwOgogICAgICAgICAgICAgICAgcmFpc2UgRXhlY3V0aW9uRmFp + bHVyZUV4Y2VwdGlvbigKICAgICAgICAgICAgICAgICAgICAiJ21nciBzZXJ2aWNlcycgY29tbWFu + 
ZCBmYWlsZWQuXG4iCiAgICAgICAgICAgICAgICAgICAgZiJFcnJvcjoge2Vycl9tc2cgaWYgcmV0 + X3ZhbCAhPSAwIGVsc2Ugc2VsZi5FTVBUWV9PVVRQVVRfTElTVH0iCiAgICAgICAgICAgICAgICAp + CiAgICAgICAgICAgIG1vbml0b3JpbmdfZW5kcG9pbnQgPSAoCiAgICAgICAgICAgICAgICBqc29u + X291dC5nZXQoIm1ncm1hcCIsIHt9KS5nZXQoInNlcnZpY2VzIiwge30pLmdldCgicHJvbWV0aGV1 + cyIsICIiKQogICAgICAgICAgICApCiAgICAgICAgICAgIGlmIG5vdCBtb25pdG9yaW5nX2VuZHBv + aW50OgogICAgICAgICAgICAgICAgcmFpc2UgRXhlY3V0aW9uRmFpbHVyZUV4Y2VwdGlvbigKICAg + ICAgICAgICAgICAgICAgICAiY2FuJ3QgZmluZCBtb25pdG9yaW5nX2VuZHBvaW50LCBwcm9tZXRo + ZXVzIG1vZHVsZSBtaWdodCBub3QgYmUgZW5hYmxlZCwgIgogICAgICAgICAgICAgICAgICAgICJl + bmFibGUgdGhlIG1vZHVsZSBieSBydW5uaW5nICdjZXBoIG1nciBtb2R1bGUgZW5hYmxlIHByb21l + dGhldXMnIgogICAgICAgICAgICAgICAgKQogICAgICAgICAgICAjIG5vdyBjaGVjayB0aGUgc3Rh + bmQtYnkgbWdyLXMKICAgICAgICAgICAgc3RhbmRieV9hcnIgPSBqc29uX291dC5nZXQoIm1ncm1h + cCIsIHt9KS5nZXQoInN0YW5kYnlzIiwgW10pCiAgICAgICAgICAgIGZvciBlYWNoX3N0YW5kYnkg + aW4gc3RhbmRieV9hcnI6CiAgICAgICAgICAgICAgICBpZiAibmFtZSIgaW4gZWFjaF9zdGFuZGJ5 + LmtleXMoKToKICAgICAgICAgICAgICAgICAgICBzdGFuZGJ5X21ncnMuYXBwZW5kKGVhY2hfc3Rh + bmRieVsibmFtZSJdKQogICAgICAgICAgICB0cnk6CiAgICAgICAgICAgICAgICBwYXJzZWRfZW5k + cG9pbnQgPSB1cmxwYXJzZShtb25pdG9yaW5nX2VuZHBvaW50KQogICAgICAgICAgICBleGNlcHQg + VmFsdWVFcnJvcjoKICAgICAgICAgICAgICAgIHJhaXNlIEV4ZWN1dGlvbkZhaWx1cmVFeGNlcHRp + b24oCiAgICAgICAgICAgICAgICAgICAgZiJpbnZhbGlkIGVuZHBvaW50OiB7bW9uaXRvcmluZ19l + bmRwb2ludH0iCiAgICAgICAgICAgICAgICApCiAgICAgICAgICAgIG1vbml0b3JpbmdfZW5kcG9p + bnRfaXBfbGlzdCA9IHBhcnNlZF9lbmRwb2ludC5ob3N0bmFtZQogICAgICAgICAgICBpZiBub3Qg + bW9uaXRvcmluZ19lbmRwb2ludF9wb3J0OgogICAgICAgICAgICAgICAgbW9uaXRvcmluZ19lbmRw + b2ludF9wb3J0ID0gc3RyKHBhcnNlZF9lbmRwb2ludC5wb3J0KQoKICAgICAgICAjIGlmIG1vbml0 + b3JpbmcgZW5kcG9pbnQgcG9ydCBpcyBub3Qgc2V0LCBwdXQgYSBkZWZhdWx0IG1vbiBwb3J0CiAg + ICAgICAgaWYgbm90IG1vbml0b3JpbmdfZW5kcG9pbnRfcG9ydDoKICAgICAgICAgICAgbW9uaXRv + cmluZ19lbmRwb2ludF9wb3J0ID0gc2VsZi5ERUZBVUxUX01PTklUT1JJTkdfRU5EUE9JTlRfUE9S + 
VAoKICAgICAgICAjIHVzZXIgY291bGQgZ2l2ZSBjb21tYSBhbmQgc3BhY2Ugc2VwYXJhdGVkIGlu + cHV0cyAobGlrZSAtLW1vbml0b3JpbmctZW5kcG9pbnQ9IjxpcDE+LCA8aXAyPiIpCiAgICAgICAg + bW9uaXRvcmluZ19lbmRwb2ludF9pcF9saXN0ID0gbW9uaXRvcmluZ19lbmRwb2ludF9pcF9saXN0 + LnJlcGxhY2UoIiwiLCAiICIpCiAgICAgICAgbW9uaXRvcmluZ19lbmRwb2ludF9pcF9saXN0X3Nw + bGl0ID0gbW9uaXRvcmluZ19lbmRwb2ludF9pcF9saXN0LnNwbGl0KCkKICAgICAgICAjIGlmIG1v + bml0b3JpbmctZW5kcG9pbnQgY291bGQgbm90IGJlIGZvdW5kLCByYWlzZSBhbiBlcnJvcgogICAg + ICAgIGlmIGxlbihtb25pdG9yaW5nX2VuZHBvaW50X2lwX2xpc3Rfc3BsaXQpID09IDA6CiAgICAg + ICAgICAgIHJhaXNlIEV4ZWN1dGlvbkZhaWx1cmVFeGNlcHRpb24oIk5vICdtb25pdG9yaW5nLWVu + ZHBvaW50JyBmb3VuZCIpCiAgICAgICAgIyBmaXJzdCBpcCBpcyB0cmVhdGVkIGFzIHRoZSBtYWlu + IG1vbml0b3JpbmctZW5kcG9pbnQKICAgICAgICBtb25pdG9yaW5nX2VuZHBvaW50X2lwID0gbW9u + aXRvcmluZ19lbmRwb2ludF9pcF9saXN0X3NwbGl0WzBdCiAgICAgICAgIyByZXN0IG9mIHRoZSBp + cC1zIGFyZSBhZGRlZCB0byB0aGUgJ3N0YW5kYnlfbWdycycgbGlzdAogICAgICAgIHN0YW5kYnlf + bWdycy5leHRlbmQobW9uaXRvcmluZ19lbmRwb2ludF9pcF9saXN0X3NwbGl0WzE6XSkKICAgICAg + ICBmYWlsZWRfaXAgPSBtb25pdG9yaW5nX2VuZHBvaW50X2lwCgogICAgICAgIG1vbml0b3Jpbmdf + ZW5kcG9pbnQgPSAiOiIuam9pbigKICAgICAgICAgICAgW21vbml0b3JpbmdfZW5kcG9pbnRfaXAs + IG1vbml0b3JpbmdfZW5kcG9pbnRfcG9ydF0KICAgICAgICApCiAgICAgICAgaXBfdHlwZSA9IHNl + bGYuX2ludmFsaWRfZW5kcG9pbnQobW9uaXRvcmluZ19lbmRwb2ludCkKICAgICAgICB0cnk6CiAg + ICAgICAgICAgIG1vbml0b3JpbmdfZW5kcG9pbnRfaXAgPSBzZWxmLl9jb252ZXJ0X2hvc3RuYW1l + X3RvX2lwKAogICAgICAgICAgICAgICAgbW9uaXRvcmluZ19lbmRwb2ludF9pcCwgbW9uaXRvcmlu + Z19lbmRwb2ludF9wb3J0LCBpcF90eXBlCiAgICAgICAgICAgICkKICAgICAgICAgICAgIyBjb2xs + ZWN0IGFsbCB0aGUgJ3N0YW5kLWJ5JyBtZ3IgaXBzCiAgICAgICAgICAgIG1ncl9pcHMgPSBbXQog + ICAgICAgICAgICBmb3IgZWFjaF9zdGFuZGJ5X21nciBpbiBzdGFuZGJ5X21ncnM6CiAgICAgICAg + ICAgICAgICBmYWlsZWRfaXAgPSBlYWNoX3N0YW5kYnlfbWdyCiAgICAgICAgICAgICAgICBtZ3Jf + aXBzLmFwcGVuZCgKICAgICAgICAgICAgICAgICAgICBzZWxmLl9jb252ZXJ0X2hvc3RuYW1lX3Rv + X2lwKAogICAgICAgICAgICAgICAgICAgICAgICBlYWNoX3N0YW5kYnlfbWdyLCBtb25pdG9yaW5n + 
X2VuZHBvaW50X3BvcnQsIGlwX3R5cGUKICAgICAgICAgICAgICAgICAgICApCiAgICAgICAgICAg + ICAgICApCiAgICAgICAgZXhjZXB0OgogICAgICAgICAgICByYWlzZSBFeGVjdXRpb25GYWlsdXJl + RXhjZXB0aW9uKAogICAgICAgICAgICAgICAgZiJDb252ZXJzaW9uIG9mIGhvc3Q6IHtmYWlsZWRf + aXB9IHRvIElQIGZhaWxlZC4gIgogICAgICAgICAgICAgICAgIlBsZWFzZSBlbnRlciB0aGUgSVAg + YWRkcmVzc2VzIG9mIGFsbCB0aGUgY2VwaC1tZ3JzIHdpdGggdGhlICctLW1vbml0b3JpbmctZW5k + cG9pbnQnIGZsYWciCiAgICAgICAgICAgICkKCiAgICAgICAgXywgXywgZXJyID0gc2VsZi5lbmRw + b2ludF9kaWFsKG1vbml0b3JpbmdfZW5kcG9pbnQsIGlwX3R5cGUpCiAgICAgICAgaWYgZXJyID09 + ICItMSI6CiAgICAgICAgICAgIHJhaXNlIEV4ZWN1dGlvbkZhaWx1cmVFeGNlcHRpb24oZXJyKQog + ICAgICAgICMgYWRkIHRoZSB2YWxpZGF0ZWQgYWN0aXZlIG1nciBJUCBpbnRvIHRoZSBmaXJzdCBp + bmRleAogICAgICAgIG1ncl9pcHMuaW5zZXJ0KDAsIG1vbml0b3JpbmdfZW5kcG9pbnRfaXApCiAg + ICAgICAgYWxsX21ncl9pcHNfc3RyID0gIiwiLmpvaW4obWdyX2lwcykKICAgICAgICByZXR1cm4g + YWxsX21ncl9pcHNfc3RyLCBtb25pdG9yaW5nX2VuZHBvaW50X3BvcnQKCiAgICBkZWYgY2hlY2tf + dXNlcl9leGlzdChzZWxmLCB1c2VyKToKICAgICAgICBjbWRfanNvbiA9IHsicHJlZml4IjogImF1 + dGggZ2V0IiwgImVudGl0eSI6IGYie3VzZXJ9IiwgImZvcm1hdCI6ICJqc29uIn0KICAgICAgICBy + ZXRfdmFsLCBqc29uX291dCwgXyA9IHNlbGYuX2NvbW1vbl9jbWRfanNvbl9nZW4oY21kX2pzb24p + CiAgICAgICAgaWYgcmV0X3ZhbCAhPSAwIG9yIGxlbihqc29uX291dCkgPT0gMDoKICAgICAgICAg + ICAgcmV0dXJuICIiCiAgICAgICAgcmV0dXJuIHN0cihqc29uX291dFswXVsia2V5Il0pCgogICAg + ZGVmIGdldF9jZXBoZnNfcHJvdmlzaW9uZXJfY2Fwc19hbmRfZW50aXR5KHNlbGYpOgogICAgICAg + IGVudGl0eSA9ICJjbGllbnQuY3NpLWNlcGhmcy1wcm92aXNpb25lciIKICAgICAgICBjYXBzID0g + ewogICAgICAgICAgICAibW9uIjogImFsbG93IHIsIGFsbG93IGNvbW1hbmQgJ29zZCBibG9ja2xp + c3QnIiwKICAgICAgICAgICAgIm1nciI6ICJhbGxvdyBydyIsCiAgICAgICAgICAgICJvc2QiOiAi + YWxsb3cgcncgdGFnIGNlcGhmcyBtZXRhZGF0YT0qIiwKICAgICAgICB9CiAgICAgICAgaWYgc2Vs + Zi5fYXJnX3BhcnNlci5yZXN0cmljdGVkX2F1dGhfcGVybWlzc2lvbjoKICAgICAgICAgICAgazhz + X2NsdXN0ZXJfbmFtZSA9IHNlbGYuX2FyZ19wYXJzZXIuazhzX2NsdXN0ZXJfbmFtZQogICAgICAg + ICAgICBpZiBrOHNfY2x1c3Rlcl9uYW1lID09ICIiOgogICAgICAgICAgICAgICAgcmFpc2UgRXhl + 
Y3V0aW9uRmFpbHVyZUV4Y2VwdGlvbigKICAgICAgICAgICAgICAgICAgICAiazhzX2NsdXN0ZXJf + bmFtZSBub3QgZm91bmQsIHBsZWFzZSBzZXQgdGhlICctLWs4cy1jbHVzdGVyLW5hbWUnIGZsYWci + CiAgICAgICAgICAgICAgICApCiAgICAgICAgICAgIGNlcGhmc19maWxlc3lzdGVtID0gc2VsZi5f + YXJnX3BhcnNlci5jZXBoZnNfZmlsZXN5c3RlbV9uYW1lCiAgICAgICAgICAgIGlmIGNlcGhmc19m + aWxlc3lzdGVtID09ICIiOgogICAgICAgICAgICAgICAgZW50aXR5ID0gZiJ7ZW50aXR5fS17azhz + X2NsdXN0ZXJfbmFtZX0iCiAgICAgICAgICAgIGVsc2U6CiAgICAgICAgICAgICAgICBlbnRpdHkg + PSBmIntlbnRpdHl9LXtrOHNfY2x1c3Rlcl9uYW1lfS17Y2VwaGZzX2ZpbGVzeXN0ZW19IgogICAg + ICAgICAgICAgICAgY2Fwc1sib3NkIl0gPSBmImFsbG93IHJ3IHRhZyBjZXBoZnMgbWV0YWRhdGE9 + e2NlcGhmc19maWxlc3lzdGVtfSIKCiAgICAgICAgcmV0dXJuIGNhcHMsIGVudGl0eQoKICAgIGRl + ZiBnZXRfY2VwaGZzX25vZGVfY2Fwc19hbmRfZW50aXR5KHNlbGYpOgogICAgICAgIGVudGl0eSA9 + ICJjbGllbnQuY3NpLWNlcGhmcy1ub2RlIgogICAgICAgIGNhcHMgPSB7CiAgICAgICAgICAgICJt + b24iOiAiYWxsb3cgciwgYWxsb3cgY29tbWFuZCAnb3NkIGJsb2NrbGlzdCciLAogICAgICAgICAg + ICAibWdyIjogImFsbG93IHJ3IiwKICAgICAgICAgICAgIm9zZCI6ICJhbGxvdyBydyB0YWcgY2Vw + aGZzICo9KiIsCiAgICAgICAgICAgICJtZHMiOiAiYWxsb3cgcnciLAogICAgICAgIH0KICAgICAg + ICBpZiBzZWxmLl9hcmdfcGFyc2VyLnJlc3RyaWN0ZWRfYXV0aF9wZXJtaXNzaW9uOgogICAgICAg + ICAgICBrOHNfY2x1c3Rlcl9uYW1lID0gc2VsZi5fYXJnX3BhcnNlci5rOHNfY2x1c3Rlcl9uYW1l + CiAgICAgICAgICAgIGlmIGs4c19jbHVzdGVyX25hbWUgPT0gIiI6CiAgICAgICAgICAgICAgICBy + YWlzZSBFeGVjdXRpb25GYWlsdXJlRXhjZXB0aW9uKAogICAgICAgICAgICAgICAgICAgICJrOHNf + Y2x1c3Rlcl9uYW1lIG5vdCBmb3VuZCwgcGxlYXNlIHNldCB0aGUgJy0tazhzLWNsdXN0ZXItbmFt + ZScgZmxhZyIKICAgICAgICAgICAgICAgICkKICAgICAgICAgICAgY2VwaGZzX2ZpbGVzeXN0ZW0g + PSBzZWxmLl9hcmdfcGFyc2VyLmNlcGhmc19maWxlc3lzdGVtX25hbWUKICAgICAgICAgICAgaWYg + Y2VwaGZzX2ZpbGVzeXN0ZW0gPT0gIiI6CiAgICAgICAgICAgICAgICBlbnRpdHkgPSBmIntlbnRp + dHl9LXtrOHNfY2x1c3Rlcl9uYW1lfSIKICAgICAgICAgICAgZWxzZToKICAgICAgICAgICAgICAg + IGVudGl0eSA9IGYie2VudGl0eX0te2s4c19jbHVzdGVyX25hbWV9LXtjZXBoZnNfZmlsZXN5c3Rl + bX0iCiAgICAgICAgICAgICAgICBjYXBzWyJvc2QiXSA9IGYiYWxsb3cgcncgdGFnIGNlcGhmcyAq + 
PXtjZXBoZnNfZmlsZXN5c3RlbX0iCgogICAgICAgIHJldHVybiBjYXBzLCBlbnRpdHkKCiAgICBk + ZWYgZ2V0X2VudGl0eSgKICAgICAgICBzZWxmLAogICAgICAgIGVudGl0eSwKICAgICAgICByYmRf + cG9vbF9uYW1lLAogICAgICAgIGFsaWFzX3JiZF9wb29sX25hbWUsCiAgICAgICAgazhzX2NsdXN0 + ZXJfbmFtZSwKICAgICAgICByYWRvc19uYW1lc3BhY2UsCiAgICApOgogICAgICAgIGlmICgKICAg + ICAgICAgICAgcmJkX3Bvb2xfbmFtZS5jb3VudCgiLiIpICE9IDAKICAgICAgICAgICAgb3IgcmJk + X3Bvb2xfbmFtZS5jb3VudCgiXyIpICE9IDAKICAgICAgICAgICAgb3IgYWxpYXNfcmJkX3Bvb2xf + bmFtZSAhPSAiIgogICAgICAgICAgICAjIGNoZWNraW5nIGFsaWFzX3JiZF9wb29sX25hbWUgaXMg + bm90IGVtcHR5IGFzIHRoZXJlIG1heWJlIGEgc3BlY2lhbCBjaGFyYWN0ZXIgdXNlZCBvdGhlciB0 + aGFuIC4gb3IgXwogICAgICAgICk6CiAgICAgICAgICAgIGlmIGFsaWFzX3JiZF9wb29sX25hbWUg + PT0gIiI6CiAgICAgICAgICAgICAgICByYWlzZSBFeGVjdXRpb25GYWlsdXJlRXhjZXB0aW9uKAog + ICAgICAgICAgICAgICAgICAgICJwbGVhc2Ugc2V0IHRoZSAnLS1hbGlhcy1yYmQtZGF0YS1wb29s + LW5hbWUnIGZsYWcgYXMgdGhlIHJiZCBkYXRhIHBvb2wgbmFtZSBjb250YWlucyAnLicgb3IgJ18n + IgogICAgICAgICAgICAgICAgKQogICAgICAgICAgICBpZiAoCiAgICAgICAgICAgICAgICBhbGlh + c19yYmRfcG9vbF9uYW1lLmNvdW50KCIuIikgIT0gMAogICAgICAgICAgICAgICAgb3IgYWxpYXNf + cmJkX3Bvb2xfbmFtZS5jb3VudCgiXyIpICE9IDAKICAgICAgICAgICAgKToKICAgICAgICAgICAg + ICAgIHJhaXNlIEV4ZWN1dGlvbkZhaWx1cmVFeGNlcHRpb24oCiAgICAgICAgICAgICAgICAgICAg + IictLWFsaWFzLXJiZC1kYXRhLXBvb2wtbmFtZScgZmxhZyB2YWx1ZSBzaG91bGQgbm90IGNvbnRh + aW4gJy4nIG9yICdfJyIKICAgICAgICAgICAgICAgICkKICAgICAgICAgICAgZW50aXR5ID0gZiJ7 + ZW50aXR5fS17azhzX2NsdXN0ZXJfbmFtZX0te2FsaWFzX3JiZF9wb29sX25hbWV9IgogICAgICAg + IGVsc2U6CiAgICAgICAgICAgIGVudGl0eSA9IGYie2VudGl0eX0te2s4c19jbHVzdGVyX25hbWV9 + LXtyYmRfcG9vbF9uYW1lfSIKCiAgICAgICAgaWYgcmFkb3NfbmFtZXNwYWNlOgogICAgICAgICAg + ICBlbnRpdHkgPSBmIntlbnRpdHl9LXtyYWRvc19uYW1lc3BhY2V9IgogICAgICAgIHJldHVybiBl + bnRpdHkKCiAgICBkZWYgZ2V0X3JiZF9wcm92aXNpb25lcl9jYXBzX2FuZF9lbnRpdHkoc2VsZik6 + CiAgICAgICAgZW50aXR5ID0gImNsaWVudC5jc2ktcmJkLXByb3Zpc2lvbmVyIgogICAgICAgIGNh + cHMgPSB7CiAgICAgICAgICAgICJtb24iOiAicHJvZmlsZSByYmQsIGFsbG93IGNvbW1hbmQgJ29z + 
ZCBibG9ja2xpc3QnIiwKICAgICAgICAgICAgIm1nciI6ICJhbGxvdyBydyIsCiAgICAgICAgICAg + ICJvc2QiOiAicHJvZmlsZSByYmQiLAogICAgICAgIH0KICAgICAgICBpZiBzZWxmLl9hcmdfcGFy + c2VyLnJlc3RyaWN0ZWRfYXV0aF9wZXJtaXNzaW9uOgogICAgICAgICAgICByYmRfcG9vbF9uYW1l + ID0gc2VsZi5fYXJnX3BhcnNlci5yYmRfZGF0YV9wb29sX25hbWUKICAgICAgICAgICAgYWxpYXNf + cmJkX3Bvb2xfbmFtZSA9IHNlbGYuX2FyZ19wYXJzZXIuYWxpYXNfcmJkX2RhdGFfcG9vbF9uYW1l + CiAgICAgICAgICAgIGs4c19jbHVzdGVyX25hbWUgPSBzZWxmLl9hcmdfcGFyc2VyLms4c19jbHVz + dGVyX25hbWUKICAgICAgICAgICAgcmFkb3NfbmFtZXNwYWNlID0gc2VsZi5fYXJnX3BhcnNlci5y + YWRvc19uYW1lc3BhY2UKICAgICAgICAgICAgaWYgcmJkX3Bvb2xfbmFtZSA9PSAiIjoKICAgICAg + ICAgICAgICAgIHJhaXNlIEV4ZWN1dGlvbkZhaWx1cmVFeGNlcHRpb24oCiAgICAgICAgICAgICAg + ICAgICAgIm1hbmRhdG9yeSBmbGFnIG5vdCBmb3VuZCwgcGxlYXNlIHNldCB0aGUgJy0tcmJkLWRh + dGEtcG9vbC1uYW1lJyBmbGFnIgogICAgICAgICAgICAgICAgKQogICAgICAgICAgICBpZiBrOHNf + Y2x1c3Rlcl9uYW1lID09ICIiOgogICAgICAgICAgICAgICAgcmFpc2UgRXhlY3V0aW9uRmFpbHVy + ZUV4Y2VwdGlvbigKICAgICAgICAgICAgICAgICAgICAibWFuZGF0b3J5IGZsYWcgbm90IGZvdW5k + LCBwbGVhc2Ugc2V0IHRoZSAnLS1rOHMtY2x1c3Rlci1uYW1lJyBmbGFnIgogICAgICAgICAgICAg + ICAgKQogICAgICAgICAgICBlbnRpdHkgPSBzZWxmLmdldF9lbnRpdHkoCiAgICAgICAgICAgICAg + ICBlbnRpdHksCiAgICAgICAgICAgICAgICByYmRfcG9vbF9uYW1lLAogICAgICAgICAgICAgICAg + YWxpYXNfcmJkX3Bvb2xfbmFtZSwKICAgICAgICAgICAgICAgIGs4c19jbHVzdGVyX25hbWUsCiAg + ICAgICAgICAgICAgICByYWRvc19uYW1lc3BhY2UsCiAgICAgICAgICAgICkKICAgICAgICAgICAg + aWYgcmFkb3NfbmFtZXNwYWNlICE9ICIiOgogICAgICAgICAgICAgICAgY2Fwc1sib3NkIl0gPSAo + CiAgICAgICAgICAgICAgICAgICAgZiJwcm9maWxlIHJiZCBwb29sPXtyYmRfcG9vbF9uYW1lfSBu + YW1lc3BhY2U9e3JhZG9zX25hbWVzcGFjZX0iCiAgICAgICAgICAgICAgICApCiAgICAgICAgICAg + IGVsc2U6CiAgICAgICAgICAgICAgICBjYXBzWyJvc2QiXSA9IGYicHJvZmlsZSByYmQgcG9vbD17 + cmJkX3Bvb2xfbmFtZX0iCgogICAgICAgIHJldHVybiBjYXBzLCBlbnRpdHkKCiAgICBkZWYgZ2V0 + X3JiZF9ub2RlX2NhcHNfYW5kX2VudGl0eShzZWxmKToKICAgICAgICBlbnRpdHkgPSAiY2xpZW50 + LmNzaS1yYmQtbm9kZSIKICAgICAgICBjYXBzID0gewogICAgICAgICAgICAibW9uIjogInByb2Zp + 
bGUgcmJkLCBhbGxvdyBjb21tYW5kICdvc2QgYmxvY2tsaXN0JyIsCiAgICAgICAgICAgICJvc2Qi + OiAicHJvZmlsZSByYmQiLAogICAgICAgIH0KICAgICAgICBpZiBzZWxmLl9hcmdfcGFyc2VyLnJl + c3RyaWN0ZWRfYXV0aF9wZXJtaXNzaW9uOgogICAgICAgICAgICByYmRfcG9vbF9uYW1lID0gc2Vs + Zi5fYXJnX3BhcnNlci5yYmRfZGF0YV9wb29sX25hbWUKICAgICAgICAgICAgYWxpYXNfcmJkX3Bv + b2xfbmFtZSA9IHNlbGYuX2FyZ19wYXJzZXIuYWxpYXNfcmJkX2RhdGFfcG9vbF9uYW1lCiAgICAg + ICAgICAgIGs4c19jbHVzdGVyX25hbWUgPSBzZWxmLl9hcmdfcGFyc2VyLms4c19jbHVzdGVyX25h + bWUKICAgICAgICAgICAgcmFkb3NfbmFtZXNwYWNlID0gc2VsZi5fYXJnX3BhcnNlci5yYWRvc19u + YW1lc3BhY2UKICAgICAgICAgICAgaWYgcmJkX3Bvb2xfbmFtZSA9PSAiIjoKICAgICAgICAgICAg + ICAgIHJhaXNlIEV4ZWN1dGlvbkZhaWx1cmVFeGNlcHRpb24oCiAgICAgICAgICAgICAgICAgICAg + Im1hbmRhdG9yeSBmbGFnIG5vdCBmb3VuZCwgcGxlYXNlIHNldCB0aGUgJy0tcmJkLWRhdGEtcG9v + bC1uYW1lJyBmbGFnIgogICAgICAgICAgICAgICAgKQogICAgICAgICAgICBpZiBrOHNfY2x1c3Rl + cl9uYW1lID09ICIiOgogICAgICAgICAgICAgICAgcmFpc2UgRXhlY3V0aW9uRmFpbHVyZUV4Y2Vw + dGlvbigKICAgICAgICAgICAgICAgICAgICAibWFuZGF0b3J5IGZsYWcgbm90IGZvdW5kLCBwbGVh + c2Ugc2V0IHRoZSAnLS1rOHMtY2x1c3Rlci1uYW1lJyBmbGFnIgogICAgICAgICAgICAgICAgKQog + ICAgICAgICAgICBlbnRpdHkgPSBzZWxmLmdldF9lbnRpdHkoCiAgICAgICAgICAgICAgICBlbnRp + dHksCiAgICAgICAgICAgICAgICByYmRfcG9vbF9uYW1lLAogICAgICAgICAgICAgICAgYWxpYXNf + cmJkX3Bvb2xfbmFtZSwKICAgICAgICAgICAgICAgIGs4c19jbHVzdGVyX25hbWUsCiAgICAgICAg + ICAgICAgICByYWRvc19uYW1lc3BhY2UsCiAgICAgICAgICAgICkKICAgICAgICAgICAgaWYgcmFk + b3NfbmFtZXNwYWNlICE9ICIiOgogICAgICAgICAgICAgICAgY2Fwc1sib3NkIl0gPSAoCiAgICAg + ICAgICAgICAgICAgICAgZiJwcm9maWxlIHJiZCBwb29sPXtyYmRfcG9vbF9uYW1lfSBuYW1lc3Bh + Y2U9e3JhZG9zX25hbWVzcGFjZX0iCiAgICAgICAgICAgICAgICApCiAgICAgICAgICAgIGVsc2U6 + CiAgICAgICAgICAgICAgICBjYXBzWyJvc2QiXSA9IGYicHJvZmlsZSByYmQgcG9vbD17cmJkX3Bv + b2xfbmFtZX0iCgogICAgICAgIHJldHVybiBjYXBzLCBlbnRpdHkKCiAgICBkZWYgZ2V0X2RlZmF1 + bHRVc2VyX2NhcHNfYW5kX2VudGl0eShzZWxmKToKICAgICAgICBlbnRpdHkgPSBzZWxmLnJ1bl9h + c191c2VyCiAgICAgICAgY2FwcyA9IHsKICAgICAgICAgICAgIm1vbiI6ICJhbGxvdyByLCBhbGxv + 
dyBjb21tYW5kIHF1b3J1bV9zdGF0dXMsIGFsbG93IGNvbW1hbmQgdmVyc2lvbiIsCiAgICAgICAg + ICAgICJtZ3IiOiAiYWxsb3cgY29tbWFuZCBjb25maWciLAogICAgICAgICAgICAib3NkIjogZiJw + cm9maWxlIHJiZC1yZWFkLW9ubHksIGFsbG93IHJ3eCBwb29sPXtzZWxmLl9hcmdfcGFyc2VyLnJn + d19wb29sX3ByZWZpeH0ucmd3Lm1ldGEsIGFsbG93IHIgcG9vbD0ucmd3LnJvb3QsIGFsbG93IHJ3 + IHBvb2w9e3NlbGYuX2FyZ19wYXJzZXIucmd3X3Bvb2xfcHJlZml4fS5yZ3cuY29udHJvbCwgYWxs + b3cgcnggcG9vbD17c2VsZi5fYXJnX3BhcnNlci5yZ3dfcG9vbF9wcmVmaXh9LnJndy5sb2csIGFs + bG93IHggcG9vbD17c2VsZi5fYXJnX3BhcnNlci5yZ3dfcG9vbF9wcmVmaXh9LnJndy5idWNrZXRz + LmluZGV4IiwKICAgICAgICB9CgogICAgICAgIHJldHVybiBjYXBzLCBlbnRpdHkKCiAgICBkZWYg + Z2V0X2NhcHNfYW5kX2VudGl0eShzZWxmLCB1c2VyX25hbWUpOgogICAgICAgIGlmICJjbGllbnQu + Y3NpLWNlcGhmcy1wcm92aXNpb25lciIgaW4gdXNlcl9uYW1lOgogICAgICAgICAgICBpZiAiY2xp + ZW50LmNzaS1jZXBoZnMtcHJvdmlzaW9uZXIiICE9IHVzZXJfbmFtZToKICAgICAgICAgICAgICAg + IHNlbGYuX2FyZ19wYXJzZXIucmVzdHJpY3RlZF9hdXRoX3Blcm1pc3Npb24gPSBUcnVlCiAgICAg + ICAgICAgIHJldHVybiBzZWxmLmdldF9jZXBoZnNfcHJvdmlzaW9uZXJfY2Fwc19hbmRfZW50aXR5 + KCkKICAgICAgICBpZiAiY2xpZW50LmNzaS1jZXBoZnMtbm9kZSIgaW4gdXNlcl9uYW1lOgogICAg + ICAgICAgICBpZiAiY2xpZW50LmNzaS1jZXBoZnMtbm9kZSIgIT0gdXNlcl9uYW1lOgogICAgICAg + ICAgICAgICAgc2VsZi5fYXJnX3BhcnNlci5yZXN0cmljdGVkX2F1dGhfcGVybWlzc2lvbiA9IFRy + dWUKICAgICAgICAgICAgcmV0dXJuIHNlbGYuZ2V0X2NlcGhmc19ub2RlX2NhcHNfYW5kX2VudGl0 + eSgpCiAgICAgICAgaWYgImNsaWVudC5jc2ktcmJkLXByb3Zpc2lvbmVyIiBpbiB1c2VyX25hbWU6 + CiAgICAgICAgICAgIGlmICJjbGllbnQuY3NpLXJiZC1wcm92aXNpb25lciIgIT0gdXNlcl9uYW1l + OgogICAgICAgICAgICAgICAgc2VsZi5fYXJnX3BhcnNlci5yZXN0cmljdGVkX2F1dGhfcGVybWlz + c2lvbiA9IFRydWUKICAgICAgICAgICAgcmV0dXJuIHNlbGYuZ2V0X3JiZF9wcm92aXNpb25lcl9j + YXBzX2FuZF9lbnRpdHkoKQogICAgICAgIGlmICJjbGllbnQuY3NpLXJiZC1ub2RlIiBpbiB1c2Vy + X25hbWU6CiAgICAgICAgICAgIGlmICJjbGllbnQuY3NpLXJiZC1ub2RlIiAhPSB1c2VyX25hbWU6 + CiAgICAgICAgICAgICAgICBzZWxmLl9hcmdfcGFyc2VyLnJlc3RyaWN0ZWRfYXV0aF9wZXJtaXNz + aW9uID0gVHJ1ZQogICAgICAgICAgICByZXR1cm4gc2VsZi5nZXRfcmJkX25vZGVfY2Fwc19hbmRf + 
ZW50aXR5KCkKICAgICAgICBpZiAiY2xpZW50LmhlYWx0aGNoZWNrZXIiIGluIHVzZXJfbmFtZToK + ICAgICAgICAgICAgaWYgImNsaWVudC5oZWFsdGhjaGVja2VyIiAhPSB1c2VyX25hbWU6CiAgICAg + ICAgICAgICAgICBzZWxmLl9hcmdfcGFyc2VyLnJlc3RyaWN0ZWRfYXV0aF9wZXJtaXNzaW9uID0g + VHJ1ZQogICAgICAgICAgICByZXR1cm4gc2VsZi5nZXRfZGVmYXVsdFVzZXJfY2Fwc19hbmRfZW50 + aXR5KCkKCiAgICAgICAgcmFpc2UgRXhlY3V0aW9uRmFpbHVyZUV4Y2VwdGlvbigKICAgICAgICAg + ICAgZiJubyB1c2VyIGZvdW5kIHdpdGggdXNlcl9uYW1lOiB7dXNlcl9uYW1lfSwgIgogICAgICAg + ICAgICAiZ2V0X2NhcHNfYW5kX2VudGl0eSBjb21tYW5kIGZhaWxlZC5cbiIKICAgICAgICApCgog + ICAgZGVmIGNyZWF0ZV9jZXBoQ1NJS2V5cmluZ191c2VyKHNlbGYsIHVzZXIpOgogICAgICAgICIi + IgogICAgICAgIGNvbW1hbmQ6IGNlcGggYXV0aCBnZXQtb3ItY3JlYXRlIGNsaWVudC5jc2ktY2Vw + aGZzLXByb3Zpc2lvbmVyIG1vbiAnYWxsb3cgcicgbWdyICdhbGxvdyBydycgb3NkICdhbGxvdyBy + dyB0YWcgY2VwaGZzIG1ldGFkYXRhPSonCiAgICAgICAgIiIiCiAgICAgICAgY2FwcywgZW50aXR5 + ID0gc2VsZi5nZXRfY2Fwc19hbmRfZW50aXR5KHVzZXIpCiAgICAgICAgY21kX2pzb24gPSB7CiAg + ICAgICAgICAgICJwcmVmaXgiOiAiYXV0aCBnZXQtb3ItY3JlYXRlIiwKICAgICAgICAgICAgImVu + dGl0eSI6IGVudGl0eSwKICAgICAgICAgICAgImNhcHMiOiBbY2FwIGZvciBjYXBfbGlzdCBpbiBs + aXN0KGNhcHMuaXRlbXMoKSkgZm9yIGNhcCBpbiBjYXBfbGlzdF0sCiAgICAgICAgICAgICJmb3Jt + YXQiOiAianNvbiIsCiAgICAgICAgfQoKICAgICAgICBpZiBzZWxmLl9hcmdfcGFyc2VyLmRyeV9y + dW46CiAgICAgICAgICAgIHJldHVybiAoCiAgICAgICAgICAgICAgICBzZWxmLmRyeV9ydW4oCiAg + ICAgICAgICAgICAgICAgICAgImNlcGggIgogICAgICAgICAgICAgICAgICAgICsgY21kX2pzb25b + InByZWZpeCJdCiAgICAgICAgICAgICAgICAgICAgKyAiICIKICAgICAgICAgICAgICAgICAgICAr + IGNtZF9qc29uWyJlbnRpdHkiXQogICAgICAgICAgICAgICAgICAgICsgIiAiCiAgICAgICAgICAg + ICAgICAgICAgKyAiICIuam9pbihjbWRfanNvblsiY2FwcyJdKQogICAgICAgICAgICAgICAgKSwK + ICAgICAgICAgICAgICAgICIiLAogICAgICAgICAgICApCiAgICAgICAgIyBjaGVjayBpZiB1c2Vy + IGFscmVhZHkgZXhpc3QKICAgICAgICB1c2VyX2tleSA9IHNlbGYuY2hlY2tfdXNlcl9leGlzdChl + bnRpdHkpCiAgICAgICAgaWYgdXNlcl9rZXkgIT0gIiI6CiAgICAgICAgICAgIHJldHVybiB1c2Vy + X2tleSwgZiJ7ZW50aXR5LnNwbGl0KCcuJywgMSlbMV19IgogICAgICAgICAgICAjIGVudGl0eS5z + 
cGxpdCgnLicsMSlbMV0gdG8gcmVuYW1lIGVudGl0eShjbGllbnQuY3NpLXJiZC1ub2RlKSBhcyBj + c2ktcmJkLW5vZGUKCiAgICAgICAgcmV0X3ZhbCwganNvbl9vdXQsIGVycl9tc2cgPSBzZWxmLl9j + b21tb25fY21kX2pzb25fZ2VuKGNtZF9qc29uKQogICAgICAgICMgaWYgdGhlcmUgaXMgYW4gdW5z + dWNjZXNzZnVsIGF0dGVtcHQsCiAgICAgICAgaWYgcmV0X3ZhbCAhPSAwIG9yIGxlbihqc29uX291 + dCkgPT0gMDoKICAgICAgICAgICAgcmFpc2UgRXhlY3V0aW9uRmFpbHVyZUV4Y2VwdGlvbigKICAg + ICAgICAgICAgICAgIGYiJ2F1dGggZ2V0LW9yLWNyZWF0ZSB7dXNlcn0nIGNvbW1hbmQgZmFpbGVk + LlxuIgogICAgICAgICAgICAgICAgZiJFcnJvcjoge2Vycl9tc2cgaWYgcmV0X3ZhbCAhPSAwIGVs + c2Ugc2VsZi5FTVBUWV9PVVRQVVRfTElTVH0iCiAgICAgICAgICAgICkKICAgICAgICByZXR1cm4g + c3RyKGpzb25fb3V0WzBdWyJrZXkiXSksIGYie2VudGl0eS5zcGxpdCgnLicsIDEpWzFdfSIKICAg + ICAgICAjIGVudGl0eS5zcGxpdCgnLicsMSlbMV0gdG8gcmVuYW1lIGVudGl0eShjbGllbnQuY3Np + LXJiZC1ub2RlKSBhcyBjc2ktcmJkLW5vZGUKCiAgICBkZWYgZ2V0X2NlcGhmc19kYXRhX3Bvb2xf + ZGV0YWlscyhzZWxmKToKICAgICAgICBjbWRfanNvbiA9IHsicHJlZml4IjogImZzIGxzIiwgImZv + cm1hdCI6ICJqc29uIn0KICAgICAgICBpZiBzZWxmLl9hcmdfcGFyc2VyLmRyeV9ydW46CiAgICAg + ICAgICAgIHJldHVybiBzZWxmLmRyeV9ydW4oImNlcGggIiArIGNtZF9qc29uWyJwcmVmaXgiXSkK + ICAgICAgICByZXRfdmFsLCBqc29uX291dCwgZXJyX21zZyA9IHNlbGYuX2NvbW1vbl9jbWRfanNv + bl9nZW4oY21kX2pzb24pCiAgICAgICAgIyBpZiB0aGVyZSBpcyBhbiB1bnN1Y2Nlc3NmdWwgYXR0 + ZW1wdCwgcmVwb3J0IGFuIGVycm9yCiAgICAgICAgaWYgcmV0X3ZhbCAhPSAwOgogICAgICAgICAg + ICAjIGlmIGZzIGFuZCBkYXRhX3Bvb2wgYXJndW1lbnRzIGFyZSBub3Qgc2V0LCBzaWxlbnRseSBy + ZXR1cm4KICAgICAgICAgICAgaWYgKAogICAgICAgICAgICAgICAgc2VsZi5fYXJnX3BhcnNlci5j + ZXBoZnNfZmlsZXN5c3RlbV9uYW1lID09ICIiCiAgICAgICAgICAgICAgICBhbmQgc2VsZi5fYXJn + X3BhcnNlci5jZXBoZnNfZGF0YV9wb29sX25hbWUgPT0gIiIKICAgICAgICAgICAgKToKICAgICAg + ICAgICAgICAgIHJldHVybgogICAgICAgICAgICAjIGlmIHVzZXIgaGFzIHByb3ZpZGVkIGFueSBv + ZiB0aGUKICAgICAgICAgICAgIyAnLS1jZXBoZnMtZmlsZXN5c3RlbS1uYW1lJyBvciAnLS1jZXBo + ZnMtZGF0YS1wb29sLW5hbWUnIGFyZ3VtZW50cywKICAgICAgICAgICAgIyByYWlzZSBhbiBleGNl + cHRpb24gYXMgd2UgYXJlIHVuYWJsZSB0byB2ZXJpZnkgdGhlIGFyZ3MKICAgICAgICAgICAgcmFp + 
c2UgRXhlY3V0aW9uRmFpbHVyZUV4Y2VwdGlvbigKICAgICAgICAgICAgICAgIGYiJ2ZzIGxzJyBj + ZXBoIGNhbGwgZmFpbGVkIHdpdGggZXJyb3I6IHtlcnJfbXNnfSIKICAgICAgICAgICAgKQoKICAg + ICAgICBtYXRjaGluZ19qc29uX291dCA9IHt9CiAgICAgICAgIyBpZiAnLS1jZXBoZnMtZmlsZXN5 + c3RlbS1uYW1lJyBhcmd1bWVudCBpcyBwcm92aWRlZCwKICAgICAgICAjIGNoZWNrIHdoZXRoZXIg + dGhlIHByb3ZpZGVkIGZpbGVzeXN0ZW0tbmFtZSBleGlzdHMgb3Igbm90CiAgICAgICAgaWYgc2Vs + Zi5fYXJnX3BhcnNlci5jZXBoZnNfZmlsZXN5c3RlbV9uYW1lOgogICAgICAgICAgICAjIGdldCB0 + aGUgbWF0Y2hpbmcgbGlzdAogICAgICAgICAgICBtYXRjaGluZ19qc29uX291dF9saXN0ID0gWwog + ICAgICAgICAgICAgICAgbWF0Y2hlZAogICAgICAgICAgICAgICAgZm9yIG1hdGNoZWQgaW4ganNv + bl9vdXQKICAgICAgICAgICAgICAgIGlmIHN0cihtYXRjaGVkWyJuYW1lIl0pID09IHNlbGYuX2Fy + Z19wYXJzZXIuY2VwaGZzX2ZpbGVzeXN0ZW1fbmFtZQogICAgICAgICAgICBdCiAgICAgICAgICAg + ICMgdW5hYmxlIHRvIGZpbmQgYSBtYXRjaGluZyBmcy1uYW1lLCByYWlzZSBhbiBlcnJvcgogICAg + ICAgICAgICBpZiBsZW4obWF0Y2hpbmdfanNvbl9vdXRfbGlzdCkgPT0gMDoKICAgICAgICAgICAg + ICAgIHJhaXNlIEV4ZWN1dGlvbkZhaWx1cmVFeGNlcHRpb24oCiAgICAgICAgICAgICAgICAgICAg + ZiJGaWxlc3lzdGVtIHByb3ZpZGVkLCAne3NlbGYuX2FyZ19wYXJzZXIuY2VwaGZzX2ZpbGVzeXN0 + ZW1fbmFtZX0nLCAiCiAgICAgICAgICAgICAgICAgICAgZiJpcyBub3QgZm91bmQgaW4gdGhlIGZz + LWxpc3Q6IHtbc3RyKHhbJ25hbWUnXSkgZm9yIHggaW4ganNvbl9vdXRdfSIKICAgICAgICAgICAg + ICAgICkKICAgICAgICAgICAgbWF0Y2hpbmdfanNvbl9vdXQgPSBtYXRjaGluZ19qc29uX291dF9s + aXN0WzBdCiAgICAgICAgIyBpZiBjZXBoZnMgZmlsZXN5c3RlbSBuYW1lIGlzIG5vdCBwcm92aWRl + ZCwKICAgICAgICAjIHRyeSB0byBnZXQgYSBkZWZhdWx0IGZzIG5hbWUgYnkgZG9pbmcgdGhlIGZv + bGxvd2luZwogICAgICAgIGVsc2U6CiAgICAgICAgICAgICMgYS4gY2hlY2sgaWYgdGhlcmUgaXMg + b25seSBvbmUgZmlsZXN5c3RlbSBpcyBwcmVzZW50CiAgICAgICAgICAgIGlmIGxlbihqc29uX291 + dCkgPT0gMToKICAgICAgICAgICAgICAgIG1hdGNoaW5nX2pzb25fb3V0ID0ganNvbl9vdXRbMF0K + ICAgICAgICAgICAgIyBiLiBvciBlbHNlLCBjaGVjayBpZiBkYXRhX3Bvb2wgbmFtZSBpcyBwcm92 + aWRlZAogICAgICAgICAgICBlbGlmIHNlbGYuX2FyZ19wYXJzZXIuY2VwaGZzX2RhdGFfcG9vbF9u + YW1lOgogICAgICAgICAgICAgICAgIyBhbmQgaWYgcHJlc2VudCwgY2hlY2sgd2hldGhlciB0aGVy + 
ZSBleGlzdHMgYSBmcyB3aGljaCBoYXMgdGhlIGRhdGFfcG9vbAogICAgICAgICAgICAgICAgZm9y + IGVhY2hKIGluIGpzb25fb3V0OgogICAgICAgICAgICAgICAgICAgIGlmIHNlbGYuX2FyZ19wYXJz + ZXIuY2VwaGZzX2RhdGFfcG9vbF9uYW1lIGluIGVhY2hKWyJkYXRhX3Bvb2xzIl06CiAgICAgICAg + ICAgICAgICAgICAgICAgIG1hdGNoaW5nX2pzb25fb3V0ID0gZWFjaEoKICAgICAgICAgICAgICAg + ICAgICAgICAgYnJlYWsKICAgICAgICAgICAgICAgICMgaWYgdGhlcmUgaXMgbm8gbWF0Y2hpbmcg + ZnMgZXhpc3RzLCB0aGF0IG1lYW5zIHByb3ZpZGVkIGRhdGFfcG9vbCBuYW1lIGlzIGludmFsaWQK + ICAgICAgICAgICAgICAgIGlmIG5vdCBtYXRjaGluZ19qc29uX291dDoKICAgICAgICAgICAgICAg + ICAgICByYWlzZSBFeGVjdXRpb25GYWlsdXJlRXhjZXB0aW9uKAogICAgICAgICAgICAgICAgICAg + ICAgICBmIlByb3ZpZGVkIGRhdGFfcG9vbCBuYW1lLCB7c2VsZi5fYXJnX3BhcnNlci5jZXBoZnNf + ZGF0YV9wb29sX25hbWV9LCIKICAgICAgICAgICAgICAgICAgICAgICAgIiBkb2VzIG5vdCBleGlz + dHMiCiAgICAgICAgICAgICAgICAgICAgKQogICAgICAgICAgICAjIGMuIGlmIG5vdGhpbmcgaXMg + c2V0IGFuZCBjb3VsZG4ndCBmaW5kIGEgZGVmYXVsdCwKICAgICAgICAgICAgZWxzZToKICAgICAg + ICAgICAgICAgICMganVzdCByZXR1cm4gc2lsZW50bHkKICAgICAgICAgICAgICAgIHJldHVybgoK + ICAgICAgICBpZiBtYXRjaGluZ19qc29uX291dDoKICAgICAgICAgICAgc2VsZi5fYXJnX3BhcnNl + ci5jZXBoZnNfZmlsZXN5c3RlbV9uYW1lID0gc3RyKG1hdGNoaW5nX2pzb25fb3V0WyJuYW1lIl0p + CiAgICAgICAgICAgIHNlbGYuX2FyZ19wYXJzZXIuY2VwaGZzX21ldGFkYXRhX3Bvb2xfbmFtZSA9 + IHN0cigKICAgICAgICAgICAgICAgIG1hdGNoaW5nX2pzb25fb3V0WyJtZXRhZGF0YV9wb29sIl0K + ICAgICAgICAgICAgKQoKICAgICAgICBpZiBpc2luc3RhbmNlKG1hdGNoaW5nX2pzb25fb3V0WyJk + YXRhX3Bvb2xzIl0sIGxpc3QpOgogICAgICAgICAgICAjIGlmIHRoZSB1c2VyIGhhcyBhbHJlYWR5 + IHByb3ZpZGVkIGRhdGEtcG9vbC1uYW1lLAogICAgICAgICAgICAjIHRocm91Z2ggLS1jZXBoZnMt + ZGF0YS1wb29sLW5hbWUKICAgICAgICAgICAgaWYgc2VsZi5fYXJnX3BhcnNlci5jZXBoZnNfZGF0 + YV9wb29sX25hbWU6CiAgICAgICAgICAgICAgICAjIGlmIHRoZSBwcm92aWRlZCBuYW1lIGlzIG5v + dCBtYXRjaGluZyB3aXRoIHRoZSBvbmUgaW4gdGhlIGxpc3QKICAgICAgICAgICAgICAgIGlmICgK + ICAgICAgICAgICAgICAgICAgICBzZWxmLl9hcmdfcGFyc2VyLmNlcGhmc19kYXRhX3Bvb2xfbmFt + ZQogICAgICAgICAgICAgICAgICAgIG5vdCBpbiBtYXRjaGluZ19qc29uX291dFsiZGF0YV9wb29s + 
cyJdCiAgICAgICAgICAgICAgICApOgogICAgICAgICAgICAgICAgICAgIHJhaXNlIEV4ZWN1dGlv + bkZhaWx1cmVFeGNlcHRpb24oCiAgICAgICAgICAgICAgICAgICAgICAgIGYiUHJvdmlkZWQgZGF0 + YS1wb29sLW5hbWU6ICd7c2VsZi5fYXJnX3BhcnNlci5jZXBoZnNfZGF0YV9wb29sX25hbWV9Jywg + IgogICAgICAgICAgICAgICAgICAgICAgICAiZG9lc24ndCBtYXRjaCBmcm9tIHRoZSBkYXRhLXBv + b2xzIGxpc3Q6ICIKICAgICAgICAgICAgICAgICAgICAgICAgZiJ7W3N0cih4KSBmb3IgeCBpbiBt + YXRjaGluZ19qc29uX291dFsnZGF0YV9wb29scyddXX0iCiAgICAgICAgICAgICAgICAgICAgKQog + ICAgICAgICAgICAjIGlmIGRhdGFfcG9vbCBuYW1lIGlzIG5vdCBwcm92aWRlZCwKICAgICAgICAg + ICAgIyB0aGVuIHRyeSB0byBmaW5kIGEgZGVmYXVsdCBkYXRhIHBvb2wgbmFtZQogICAgICAgICAg + ICBlbHNlOgogICAgICAgICAgICAgICAgIyBpZiBubyBkYXRhX3Bvb2xzIGV4aXN0LCBzaWxlbnRs + eSByZXR1cm4KICAgICAgICAgICAgICAgIGlmIGxlbihtYXRjaGluZ19qc29uX291dFsiZGF0YV9w + b29scyJdKSA9PSAwOgogICAgICAgICAgICAgICAgICAgIHJldHVybgogICAgICAgICAgICAgICAg + c2VsZi5fYXJnX3BhcnNlci5jZXBoZnNfZGF0YV9wb29sX25hbWUgPSBzdHIoCiAgICAgICAgICAg + ICAgICAgICAgbWF0Y2hpbmdfanNvbl9vdXRbImRhdGFfcG9vbHMiXVswXQogICAgICAgICAgICAg + ICAgKQogICAgICAgICAgICAjIGlmIHRoZXJlIGFyZSBtb3JlIHRoYW4gb25lICdkYXRhX3Bvb2xz + JyBleGlzdCwKICAgICAgICAgICAgIyB0aGVuIHdhcm4gdGhlIHVzZXIgdGhhdCB3ZSBhcmUgdXNp + bmcgdGhlIHNlbGVjdGVkIG5hbWUKICAgICAgICAgICAgaWYgbGVuKG1hdGNoaW5nX2pzb25fb3V0 + WyJkYXRhX3Bvb2xzIl0pID4gMToKICAgICAgICAgICAgICAgIHByaW50KAogICAgICAgICAgICAg + ICAgICAgICJXQVJOSU5HOiBNdWx0aXBsZSBkYXRhIHBvb2xzIGRldGVjdGVkOiAiCiAgICAgICAg + ICAgICAgICAgICAgZiJ7W3N0cih4KSBmb3IgeCBpbiBtYXRjaGluZ19qc29uX291dFsnZGF0YV9w + b29scyddXX1cbiIKICAgICAgICAgICAgICAgICAgICBmIlVzaW5nIHRoZSBkYXRhLXBvb2w6ICd7 + c2VsZi5fYXJnX3BhcnNlci5jZXBoZnNfZGF0YV9wb29sX25hbWV9J1xuIgogICAgICAgICAgICAg + ICAgKQoKICAgIGRlZiBjcmVhdGVfY2hlY2tlcktleShzZWxmLCB1c2VyKToKICAgICAgICBjYXBz + LCBlbnRpdHkgPSBzZWxmLmdldF9jYXBzX2FuZF9lbnRpdHkodXNlcikKICAgICAgICBjbWRfanNv + biA9IHsKICAgICAgICAgICAgInByZWZpeCI6ICJhdXRoIGdldC1vci1jcmVhdGUiLAogICAgICAg + ICAgICAiZW50aXR5IjogZW50aXR5LAogICAgICAgICAgICAiY2FwcyI6IFtjYXAgZm9yIGNhcF9s + 
aXN0IGluIGxpc3QoY2Fwcy5pdGVtcygpKSBmb3IgY2FwIGluIGNhcF9saXN0XSwKICAgICAgICAg + ICAgImZvcm1hdCI6ICJqc29uIiwKICAgICAgICB9CgogICAgICAgIGlmIHNlbGYuX2FyZ19wYXJz + ZXIuZHJ5X3J1bjoKICAgICAgICAgICAgcmV0dXJuIHNlbGYuZHJ5X3J1bigKICAgICAgICAgICAg + ICAgICJjZXBoICIKICAgICAgICAgICAgICAgICsgY21kX2pzb25bInByZWZpeCJdCiAgICAgICAg + ICAgICAgICArICIgIgogICAgICAgICAgICAgICAgKyBjbWRfanNvblsiZW50aXR5Il0KICAgICAg + ICAgICAgICAgICsgIiAiCiAgICAgICAgICAgICAgICArICIgIi5qb2luKGNtZF9qc29uWyJjYXBz + Il0pCiAgICAgICAgICAgICkKICAgICAgICAjIGNoZWNrIGlmIHVzZXIgYWxyZWFkeSBleGlzdAog + ICAgICAgIHVzZXJfa2V5ID0gc2VsZi5jaGVja191c2VyX2V4aXN0KGVudGl0eSkKICAgICAgICBp + ZiB1c2VyX2tleSAhPSAiIjoKICAgICAgICAgICAgcmV0dXJuIHVzZXJfa2V5CgogICAgICAgIHJl + dF92YWwsIGpzb25fb3V0LCBlcnJfbXNnID0gc2VsZi5fY29tbW9uX2NtZF9qc29uX2dlbihjbWRf + anNvbikKICAgICAgICAjIGlmIHRoZXJlIGlzIGFuIHVuc3VjY2Vzc2Z1bCBhdHRlbXB0LAogICAg + ICAgIGlmIHJldF92YWwgIT0gMCBvciBsZW4oanNvbl9vdXQpID09IDA6CiAgICAgICAgICAgIHJh + aXNlIEV4ZWN1dGlvbkZhaWx1cmVFeGNlcHRpb24oCiAgICAgICAgICAgICAgICBmIidhdXRoIGdl + dC1vci1jcmVhdGUge3NlbGYucnVuX2FzX3VzZXJ9JyBjb21tYW5kIGZhaWxlZFxuIgogICAgICAg + ICAgICAgICAgZiJFcnJvcjoge2Vycl9tc2cgaWYgcmV0X3ZhbCAhPSAwIGVsc2Ugc2VsZi5FTVBU + WV9PVVRQVVRfTElTVH0iCiAgICAgICAgICAgICkKICAgICAgICByZXR1cm4gc3RyKGpzb25fb3V0 + WzBdWyJrZXkiXSkKCiAgICBkZWYgZ2V0X2NlcGhfZGFzaGJvYXJkX2xpbmsoc2VsZik6CiAgICAg + ICAgY21kX2pzb24gPSB7InByZWZpeCI6ICJtZ3Igc2VydmljZXMiLCAiZm9ybWF0IjogImpzb24i + fQogICAgICAgIGlmIHNlbGYuX2FyZ19wYXJzZXIuZHJ5X3J1bjoKICAgICAgICAgICAgcmV0dXJu + IHNlbGYuZHJ5X3J1bigiY2VwaCAiICsgY21kX2pzb25bInByZWZpeCJdKQogICAgICAgIHJldF92 + YWwsIGpzb25fb3V0LCBfID0gc2VsZi5fY29tbW9uX2NtZF9qc29uX2dlbihjbWRfanNvbikKICAg + ICAgICAjIGlmIHRoZXJlIGlzIGFuIHVuc3VjY2Vzc2Z1bCBhdHRlbXB0LAogICAgICAgIGlmIHJl + dF92YWwgIT0gMCBvciBsZW4oanNvbl9vdXQpID09IDA6CiAgICAgICAgICAgIHJldHVybiBOb25l + CiAgICAgICAgaWYgImRhc2hib2FyZCIgbm90IGluIGpzb25fb3V0OgogICAgICAgICAgICByZXR1 + cm4gTm9uZQogICAgICAgIHJldHVybiBqc29uX291dFsiZGFzaGJvYXJkIl0KCiAgICBkZWYgY3Jl + 
YXRlX3Jnd19hZG1pbl9vcHNfdXNlcihzZWxmKToKICAgICAgICBjbWQgPSBbCiAgICAgICAgICAg + ICJyYWRvc2d3LWFkbWluIiwKICAgICAgICAgICAgInVzZXIiLAogICAgICAgICAgICAiY3JlYXRl + IiwKICAgICAgICAgICAgIi0tdWlkIiwKICAgICAgICAgICAgc2VsZi5FWFRFUk5BTF9SR1dfQURN + SU5fT1BTX1VTRVJfTkFNRSwKICAgICAgICAgICAgIi0tZGlzcGxheS1uYW1lIiwKICAgICAgICAg + ICAgIlJvb2sgUkdXIEFkbWluIE9wcyB1c2VyIiwKICAgICAgICAgICAgIi0tY2FwcyIsCiAgICAg + ICAgICAgICJidWNrZXRzPSo7dXNlcnM9Kjt1c2FnZT1yZWFkO21ldGFkYXRhPXJlYWQ7em9uZT1y + ZWFkIiwKICAgICAgICAgICAgIi0tcmd3LXJlYWxtIiwKICAgICAgICAgICAgc2VsZi5fYXJnX3Bh + cnNlci5yZ3dfcmVhbG1fbmFtZSwKICAgICAgICAgICAgIi0tcmd3LXpvbmVncm91cCIsCiAgICAg + ICAgICAgIHNlbGYuX2FyZ19wYXJzZXIucmd3X3pvbmVncm91cF9uYW1lLAogICAgICAgICAgICAi + LS1yZ3ctem9uZSIsCiAgICAgICAgICAgIHNlbGYuX2FyZ19wYXJzZXIucmd3X3pvbmVfbmFtZSwK + ICAgICAgICBdCiAgICAgICAgaWYgc2VsZi5fYXJnX3BhcnNlci5kcnlfcnVuOgogICAgICAgICAg + ICByZXR1cm4gc2VsZi5kcnlfcnVuKCJjZXBoICIgKyAiICIuam9pbihjbWQpKQogICAgICAgIHRy + eToKICAgICAgICAgICAgb3V0cHV0ID0gc3VicHJvY2Vzcy5jaGVja19vdXRwdXQoY21kLCBzdGRl + cnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGV4Y2VwdCBzdWJwcm9jZXNzLkNhbGxlZFByb2Nl + c3NFcnJvciBhcyBleGVjRXJyOgogICAgICAgICAgICAjIGlmIHRoZSB1c2VyIGFscmVhZHkgZXhp + c3RzLCB3ZSBqdXN0IHF1ZXJ5IGl0CiAgICAgICAgICAgIGlmIGV4ZWNFcnIucmV0dXJuY29kZSA9 + PSBlcnJuby5FRVhJU1Q6CiAgICAgICAgICAgICAgICBjbWQgPSBbCiAgICAgICAgICAgICAgICAg + ICAgInJhZG9zZ3ctYWRtaW4iLAogICAgICAgICAgICAgICAgICAgICJ1c2VyIiwKICAgICAgICAg + ICAgICAgICAgICAiaW5mbyIsCiAgICAgICAgICAgICAgICAgICAgIi0tdWlkIiwKICAgICAgICAg + ICAgICAgICAgICBzZWxmLkVYVEVSTkFMX1JHV19BRE1JTl9PUFNfVVNFUl9OQU1FLAogICAgICAg + ICAgICAgICAgICAgICItLXJndy1yZWFsbSIsCiAgICAgICAgICAgICAgICAgICAgc2VsZi5fYXJn + X3BhcnNlci5yZ3dfcmVhbG1fbmFtZSwKICAgICAgICAgICAgICAgICAgICAiLS1yZ3ctem9uZWdy + b3VwIiwKICAgICAgICAgICAgICAgICAgICBzZWxmLl9hcmdfcGFyc2VyLnJnd196b25lZ3JvdXBf + bmFtZSwKICAgICAgICAgICAgICAgICAgICAiLS1yZ3ctem9uZSIsCiAgICAgICAgICAgICAgICAg + ICAgc2VsZi5fYXJnX3BhcnNlci5yZ3dfem9uZV9uYW1lLAogICAgICAgICAgICAgICAgXQogICAg + 
ICAgICAgICAgICAgdHJ5OgogICAgICAgICAgICAgICAgICAgIG91dHB1dCA9IHN1YnByb2Nlc3Mu + Y2hlY2tfb3V0cHV0KGNtZCwgc3RkZXJyPXN1YnByb2Nlc3MuUElQRSkKICAgICAgICAgICAgICAg + IGV4Y2VwdCBzdWJwcm9jZXNzLkNhbGxlZFByb2Nlc3NFcnJvciBhcyBleGVjRXJyOgogICAgICAg + ICAgICAgICAgICAgIGVycl9tc2cgPSAoCiAgICAgICAgICAgICAgICAgICAgICAgIGYiZmFpbGVk + IHRvIGV4ZWN1dGUgY29tbWFuZCB7Y21kfS4gT3V0cHV0OiB7ZXhlY0Vyci5vdXRwdXR9LiAiCiAg + ICAgICAgICAgICAgICAgICAgICAgIGYiQ29kZToge2V4ZWNFcnIucmV0dXJuY29kZX0uIEVycm9y + OiB7ZXhlY0Vyci5zdGRlcnJ9IgogICAgICAgICAgICAgICAgICAgICkKICAgICAgICAgICAgICAg + ICAgICBzeXMuc3RkZXJyLndyaXRlKGVycl9tc2cpCiAgICAgICAgICAgICAgICAgICAgcmV0dXJu + IE5vbmUsIE5vbmUsIEZhbHNlLCAiLTEiCiAgICAgICAgICAgIGVsc2U6CiAgICAgICAgICAgICAg + ICBlcnJfbXNnID0gKAogICAgICAgICAgICAgICAgICAgIGYiZmFpbGVkIHRvIGV4ZWN1dGUgY29t + bWFuZCB7Y21kfS4gT3V0cHV0OiB7ZXhlY0Vyci5vdXRwdXR9LiAiCiAgICAgICAgICAgICAgICAg + ICAgZiJDb2RlOiB7ZXhlY0Vyci5yZXR1cm5jb2RlfS4gRXJyb3I6IHtleGVjRXJyLnN0ZGVycn0i + CiAgICAgICAgICAgICAgICApCiAgICAgICAgICAgICAgICBzeXMuc3RkZXJyLndyaXRlKGVycl9t + c2cpCiAgICAgICAgICAgICAgICByZXR1cm4gTm9uZSwgTm9uZSwgRmFsc2UsICItMSIKCiAgICAg + ICAgIyBpZiBpdCBpcyBweXRob24yLCBkb24ndCBjaGVjayBmb3IgY2VwaCB2ZXJzaW9uIGZvciBh + ZGRpbmcgYGluZm89cmVhZGAgY2FwKHJnd192YWxpZGF0aW9uKQogICAgICAgIGlmIHN5cy52ZXJz + aW9uX2luZm8ubWFqb3IgPCAzOgogICAgICAgICAgICBqc29ub3V0cHV0ID0ganNvbi5sb2Fkcyhv + dXRwdXQpCiAgICAgICAgICAgIHJldHVybiAoCiAgICAgICAgICAgICAgICBqc29ub3V0cHV0WyJr + ZXlzIl1bMF1bImFjY2Vzc19rZXkiXSwKICAgICAgICAgICAgICAgIGpzb25vdXRwdXRbImtleXMi + XVswXVsic2VjcmV0X2tleSJdLAogICAgICAgICAgICAgICAgRmFsc2UsCiAgICAgICAgICAgICAg + ICAiIiwKICAgICAgICAgICAgKQoKICAgICAgICAjIHNlcGFyYXRlbHkgYWRkIGluZm89cmVhZCBj + YXBzIGZvciByZ3ctZW5kcG9pbnQgaXAgdmFsaWRhdGlvbgogICAgICAgIGluZm9fY2FwX3N1cHBv + cnRlZCA9IFRydWUKICAgICAgICBjbWQgPSBbCiAgICAgICAgICAgICJyYWRvc2d3LWFkbWluIiwK + ICAgICAgICAgICAgImNhcHMiLAogICAgICAgICAgICAiYWRkIiwKICAgICAgICAgICAgIi0tdWlk + IiwKICAgICAgICAgICAgc2VsZi5FWFRFUk5BTF9SR1dfQURNSU5fT1BTX1VTRVJfTkFNRSwKICAg + 
ICAgICAgICAgIi0tY2FwcyIsCiAgICAgICAgICAgICJpbmZvPXJlYWQiLAogICAgICAgICAgICAi + LS1yZ3ctcmVhbG0iLAogICAgICAgICAgICBzZWxmLl9hcmdfcGFyc2VyLnJnd19yZWFsbV9uYW1l + LAogICAgICAgICAgICAiLS1yZ3ctem9uZWdyb3VwIiwKICAgICAgICAgICAgc2VsZi5fYXJnX3Bh + cnNlci5yZ3dfem9uZWdyb3VwX25hbWUsCiAgICAgICAgICAgICItLXJndy16b25lIiwKICAgICAg + ICAgICAgc2VsZi5fYXJnX3BhcnNlci5yZ3dfem9uZV9uYW1lLAogICAgICAgIF0KICAgICAgICB0 + cnk6CiAgICAgICAgICAgIG91dHB1dCA9IHN1YnByb2Nlc3MuY2hlY2tfb3V0cHV0KGNtZCwgc3Rk + ZXJyPXN1YnByb2Nlc3MuUElQRSkKICAgICAgICBleGNlcHQgc3VicHJvY2Vzcy5DYWxsZWRQcm9j + ZXNzRXJyb3IgYXMgZXhlY0VycjoKICAgICAgICAgICAgIyBpZiB0aGUgY2VwaCB2ZXJzaW9uIG5v + dCBzdXBwb3J0ZWQgZm9yIGFkZGluZyBgaW5mbz1yZWFkYCBjYXAocmd3X3ZhbGlkYXRpb24pCiAg + ICAgICAgICAgIGlmICgKICAgICAgICAgICAgICAgICJjb3VsZCBub3QgYWRkIGNhcHM6IHVuYWJs + ZSB0byBhZGQgY2FwczogaW5mbz1yZWFkXG4iCiAgICAgICAgICAgICAgICBpbiBleGVjRXJyLnN0 + ZGVyci5kZWNvZGUoInV0Zi04IikKICAgICAgICAgICAgICAgIGFuZCBleGVjRXJyLnJldHVybmNv + ZGUgPT0gMjQ0CiAgICAgICAgICAgICk6CiAgICAgICAgICAgICAgICBpbmZvX2NhcF9zdXBwb3J0 + ZWQgPSBGYWxzZQogICAgICAgICAgICBlbHNlOgogICAgICAgICAgICAgICAgZXJyX21zZyA9ICgK + ICAgICAgICAgICAgICAgICAgICBmImZhaWxlZCB0byBleGVjdXRlIGNvbW1hbmQge2NtZH0uIE91 + dHB1dDoge2V4ZWNFcnIub3V0cHV0fS4gIgogICAgICAgICAgICAgICAgICAgIGYiQ29kZToge2V4 + ZWNFcnIucmV0dXJuY29kZX0uIEVycm9yOiB7ZXhlY0Vyci5zdGRlcnJ9IgogICAgICAgICAgICAg + ICAgKQogICAgICAgICAgICAgICAgc3lzLnN0ZGVyci53cml0ZShlcnJfbXNnKQogICAgICAgICAg + ICAgICAgcmV0dXJuIE5vbmUsIE5vbmUsIEZhbHNlLCAiLTEiCgogICAgICAgIGpzb25vdXRwdXQg + PSBqc29uLmxvYWRzKG91dHB1dCkKICAgICAgICByZXR1cm4gKAogICAgICAgICAgICBqc29ub3V0 + cHV0WyJrZXlzIl1bMF1bImFjY2Vzc19rZXkiXSwKICAgICAgICAgICAganNvbm91dHB1dFsia2V5 + cyJdWzBdWyJzZWNyZXRfa2V5Il0sCiAgICAgICAgICAgIGluZm9fY2FwX3N1cHBvcnRlZCwKICAg + ICAgICAgICAgIiIsCiAgICAgICAgKQoKICAgIGRlZiB2YWxpZGF0ZV9yYmRfcG9vbChzZWxmLCBw + b29sX25hbWUpOgogICAgICAgIGlmIG5vdCBzZWxmLmNsdXN0ZXIucG9vbF9leGlzdHMocG9vbF9u + YW1lKToKICAgICAgICAgICAgcmFpc2UgRXhlY3V0aW9uRmFpbHVyZUV4Y2VwdGlvbigKICAgICAg + 
ICAgICAgICAgIGYiVGhlIHByb3ZpZGVkIHBvb2wsICd7cG9vbF9uYW1lfScsIGRvZXMgbm90IGV4 + aXN0IgogICAgICAgICAgICApCgogICAgZGVmIGluaXRfcmJkX3Bvb2woc2VsZiwgcmJkX3Bvb2xf + bmFtZSk6CiAgICAgICAgaWYgaXNpbnN0YW5jZShzZWxmLmNsdXN0ZXIsIER1bW15UmFkb3MpOgog + ICAgICAgICAgICByZXR1cm4KICAgICAgICBpb2N0eCA9IHNlbGYuY2x1c3Rlci5vcGVuX2lvY3R4 + KHJiZF9wb29sX25hbWUpCiAgICAgICAgcmJkX2luc3QgPSByYmQuUkJEKCkKICAgICAgICByYmRf + aW5zdC5wb29sX2luaXQoaW9jdHgsIFRydWUpCgogICAgZGVmIHZhbGlkYXRlX3JhZG9zX25hbWVz + cGFjZShzZWxmKToKICAgICAgICByYmRfcG9vbF9uYW1lID0gc2VsZi5fYXJnX3BhcnNlci5yYmRf + ZGF0YV9wb29sX25hbWUKICAgICAgICByYWRvc19uYW1lc3BhY2UgPSBzZWxmLl9hcmdfcGFyc2Vy + LnJhZG9zX25hbWVzcGFjZQogICAgICAgIGlmIHJhZG9zX25hbWVzcGFjZSA9PSAiIjoKICAgICAg + ICAgICAgcmV0dXJuCiAgICAgICAgaWYgcmFkb3NfbmFtZXNwYWNlLmlzbG93ZXIoKSA9PSBGYWxz + ZToKICAgICAgICAgICAgcmFpc2UgRXhlY3V0aW9uRmFpbHVyZUV4Y2VwdGlvbigKICAgICAgICAg + ICAgICAgIGYiVGhlIHByb3ZpZGVkIHJhZG9zIE5hbWVzcGFjZSwgJ3tyYWRvc19uYW1lc3BhY2V9 + JywgIgogICAgICAgICAgICAgICAgZiJjb250YWlucyB1cHBlciBjYXNlIgogICAgICAgICAgICAp + CiAgICAgICAgcmJkX2luc3QgPSByYmQuUkJEKCkKICAgICAgICBpb2N0eCA9IHNlbGYuY2x1c3Rl + ci5vcGVuX2lvY3R4KHJiZF9wb29sX25hbWUpCiAgICAgICAgaWYgcmJkX2luc3QubmFtZXNwYWNl + X2V4aXN0cyhpb2N0eCwgcmFkb3NfbmFtZXNwYWNlKSBpcyBGYWxzZToKICAgICAgICAgICAgcmFp + c2UgRXhlY3V0aW9uRmFpbHVyZUV4Y2VwdGlvbigKICAgICAgICAgICAgICAgIGYiVGhlIHByb3Zp + ZGVkIHJhZG9zIE5hbWVzcGFjZSwgJ3tyYWRvc19uYW1lc3BhY2V9JywgIgogICAgICAgICAgICAg + ICAgZiJpcyBub3QgZm91bmQgaW4gdGhlIHBvb2wgJ3tyYmRfcG9vbF9uYW1lfSciCiAgICAgICAg + ICAgICkKCiAgICBkZWYgZ2V0X29yX2NyZWF0ZV9zdWJ2b2x1bWVfZ3JvdXAoc2VsZiwgc3Vidm9s + dW1lX2dyb3VwLCBjZXBoZnNfZmlsZXN5c3RlbV9uYW1lKToKICAgICAgICBjbWQgPSBbCiAgICAg + ICAgICAgICJjZXBoIiwKICAgICAgICAgICAgImZzIiwKICAgICAgICAgICAgInN1YnZvbHVtZWdy + b3VwIiwKICAgICAgICAgICAgImdldHBhdGgiLAogICAgICAgICAgICBjZXBoZnNfZmlsZXN5c3Rl + bV9uYW1lLAogICAgICAgICAgICBzdWJ2b2x1bWVfZ3JvdXAsCiAgICAgICAgXQogICAgICAgIHRy + eToKICAgICAgICAgICAgXyA9IHN1YnByb2Nlc3MuY2hlY2tfb3V0cHV0KGNtZCwgc3RkZXJyPXN1 + 
YnByb2Nlc3MuUElQRSkKICAgICAgICBleGNlcHQgc3VicHJvY2Vzcy5DYWxsZWRQcm9jZXNzRXJy + b3I6CiAgICAgICAgICAgIGNtZCA9IFsKICAgICAgICAgICAgICAgICJjZXBoIiwKICAgICAgICAg + ICAgICAgICJmcyIsCiAgICAgICAgICAgICAgICAic3Vidm9sdW1lZ3JvdXAiLAogICAgICAgICAg + ICAgICAgImNyZWF0ZSIsCiAgICAgICAgICAgICAgICBjZXBoZnNfZmlsZXN5c3RlbV9uYW1lLAog + ICAgICAgICAgICAgICAgc3Vidm9sdW1lX2dyb3VwLAogICAgICAgICAgICBdCiAgICAgICAgICAg + IHRyeToKICAgICAgICAgICAgICAgIF8gPSBzdWJwcm9jZXNzLmNoZWNrX291dHB1dChjbWQsIHN0 + ZGVycj1zdWJwcm9jZXNzLlBJUEUpCiAgICAgICAgICAgIGV4Y2VwdCBzdWJwcm9jZXNzLkNhbGxl + ZFByb2Nlc3NFcnJvcjoKICAgICAgICAgICAgICAgIHJhaXNlIEV4ZWN1dGlvbkZhaWx1cmVFeGNl + cHRpb24oCiAgICAgICAgICAgICAgICAgICAgZiJzdWJ2b2x1bWUgZ3JvdXAge3N1YnZvbHVtZV9n + cm91cH0gaXMgbm90IGFibGUgdG8gZ2V0IGNyZWF0ZWQiCiAgICAgICAgICAgICAgICApCgogICAg + ZGVmIHBpbl9zdWJ2b2x1bWUoCiAgICAgICAgc2VsZiwgc3Vidm9sdW1lX2dyb3VwLCBjZXBoZnNf + ZmlsZXN5c3RlbV9uYW1lLCBwaW5fdHlwZSwgcGluX3NldHRpbmcKICAgICk6CiAgICAgICAgY21k + ID0gWwogICAgICAgICAgICAiY2VwaCIsCiAgICAgICAgICAgICJmcyIsCiAgICAgICAgICAgICJz + dWJ2b2x1bWVncm91cCIsCiAgICAgICAgICAgICJwaW4iLAogICAgICAgICAgICBjZXBoZnNfZmls + ZXN5c3RlbV9uYW1lLAogICAgICAgICAgICBzdWJ2b2x1bWVfZ3JvdXAsCiAgICAgICAgICAgIHBp + bl90eXBlLAogICAgICAgICAgICBwaW5fc2V0dGluZywKICAgICAgICBdCiAgICAgICAgdHJ5Ogog + ICAgICAgICAgICBfID0gc3VicHJvY2Vzcy5jaGVja19vdXRwdXQoY21kLCBzdGRlcnI9c3VicHJv + Y2Vzcy5QSVBFKQogICAgICAgIGV4Y2VwdCBzdWJwcm9jZXNzLkNhbGxlZFByb2Nlc3NFcnJvcjoK + ICAgICAgICAgICAgcmFpc2UgRXhlY3V0aW9uRmFpbHVyZUV4Y2VwdGlvbigKICAgICAgICAgICAg + ICAgIGYic3Vidm9sdW1lIGdyb3VwIHtzdWJ2b2x1bWVfZ3JvdXB9IGlzIG5vdCBhYmxlIHRvIGdl + dCBwaW5uZWQiCiAgICAgICAgICAgICkKCiAgICBkZWYgZ2V0X3Jnd19mc2lkKHNlbGYsIGJhc2Vf + dXJsLCB2ZXJpZnkpOgogICAgICAgIGFjY2Vzc19rZXkgPSBzZWxmLm91dF9tYXBbIlJHV19BRE1J + Tl9PUFNfVVNFUl9BQ0NFU1NfS0VZIl0KICAgICAgICBzZWNyZXRfa2V5ID0gc2VsZi5vdXRfbWFw + WyJSR1dfQURNSU5fT1BTX1VTRVJfU0VDUkVUX0tFWSJdCiAgICAgICAgcmd3X2VuZHBvaW50ID0g + c2VsZi5fYXJnX3BhcnNlci5yZ3dfZW5kcG9pbnQKICAgICAgICBiYXNlX3VybCA9IGJhc2VfdXJs + 
ICsgIjovLyIgKyByZ3dfZW5kcG9pbnQgKyAiL2FkbWluL2luZm8/IgogICAgICAgIHBhcmFtcyA9 + IHsiZm9ybWF0IjogImpzb24ifQogICAgICAgIHJlcXVlc3RfdXJsID0gYmFzZV91cmwgKyB1cmxl + bmNvZGUocGFyYW1zKQoKICAgICAgICB0cnk6CiAgICAgICAgICAgIHIgPSByZXF1ZXN0cy5nZXQo + CiAgICAgICAgICAgICAgICByZXF1ZXN0X3VybCwKICAgICAgICAgICAgICAgIGF1dGg9UzNBdXRo + KGFjY2Vzc19rZXksIHNlY3JldF9rZXksIHJnd19lbmRwb2ludCksCiAgICAgICAgICAgICAgICB2 + ZXJpZnk9dmVyaWZ5LAogICAgICAgICAgICApCiAgICAgICAgZXhjZXB0IHJlcXVlc3RzLmV4Y2Vw + dGlvbnMuVGltZW91dDoKICAgICAgICAgICAgc3lzLnN0ZGVyci53cml0ZSgKICAgICAgICAgICAg + ICAgIGYiaW52YWxpZCBlbmRwb2ludDosIG5vdCBhYmxlIHRvIGNhbGwgYWRtaW4tb3BzIGFwaXty + Z3dfZW5kcG9pbnR9IgogICAgICAgICAgICApCiAgICAgICAgICAgIHJldHVybiAiIiwgIi0xIgog + ICAgICAgIHIxID0gci5qc29uKCkKICAgICAgICBpZiByMSBpcyBOb25lIG9yIHIxLmdldCgiaW5m + byIpIGlzIE5vbmU6CiAgICAgICAgICAgIHN5cy5zdGRlcnIud3JpdGUoCiAgICAgICAgICAgICAg + ICBmIlRoZSBwcm92aWRlZCByZ3cgRW5kcG9pbnQsICd7c2VsZi5fYXJnX3BhcnNlci5yZ3dfZW5k + cG9pbnR9JywgaXMgaW52YWxpZC4iCiAgICAgICAgICAgICkKICAgICAgICAgICAgcmV0dXJuICgK + ICAgICAgICAgICAgICAgICIiLAogICAgICAgICAgICAgICAgIi0xIiwKICAgICAgICAgICAgKQoK + ICAgICAgICByZXR1cm4gcjFbImluZm8iXVsic3RvcmFnZV9iYWNrZW5kcyJdWzBdWyJjbHVzdGVy + X2lkIl0sICIiCgogICAgZGVmIHZhbGlkYXRlX3Jnd19lbmRwb2ludChzZWxmLCBpbmZvX2NhcF9z + dXBwb3J0ZWQpOgogICAgICAgICMgaWYgdGhlICdjbHVzdGVyJyBpbnN0YW5jZSBpcyBhIGR1bW15 + IG9uZSwKICAgICAgICAjIGRvbid0IHRyeSB0byByZWFjaCBvdXQgdG8gdGhlIGVuZHBvaW50CiAg + ICAgICAgaWYgaXNpbnN0YW5jZShzZWxmLmNsdXN0ZXIsIER1bW15UmFkb3MpOgogICAgICAgICAg + ICByZXR1cm4KCiAgICAgICAgcmd3X2VuZHBvaW50ID0gc2VsZi5fYXJnX3BhcnNlci5yZ3dfZW5k + cG9pbnQKCiAgICAgICAgIyB2YWxpZGF0ZSByZ3cgZW5kcG9pbnQgb25seSBpZiBpcCBhZGRyZXNz + IGlzIHBhc3NlZAogICAgICAgIGlwX3R5cGUgPSBzZWxmLl9pbnZhbGlkX2VuZHBvaW50KHJnd19l + bmRwb2ludCkKCiAgICAgICAgIyBjaGVjayBpZiB0aGUgcmd3IGVuZHBvaW50IGlzIHJlYWNoYWJs + ZQogICAgICAgIGNlcnQgPSBOb25lCiAgICAgICAgaWYgbm90IHNlbGYuX2FyZ19wYXJzZXIucmd3 + X3NraXBfdGxzIGFuZCBzZWxmLnZhbGlkYXRlX3Jnd19lbmRwb2ludF90bHNfY2VydCgpOgogICAg + 
ICAgICAgICBjZXJ0ID0gc2VsZi5fYXJnX3BhcnNlci5yZ3dfdGxzX2NlcnRfcGF0aAogICAgICAg + IGJhc2VfdXJsLCB2ZXJpZnksIGVyciA9IHNlbGYuZW5kcG9pbnRfZGlhbChyZ3dfZW5kcG9pbnQs + IGlwX3R5cGUsIGNlcnQ9Y2VydCkKICAgICAgICBpZiBlcnIgIT0gIiI6CiAgICAgICAgICAgIHJl + dHVybiAiLTEiCgogICAgICAgICMgY2hlY2sgaWYgdGhlIHJndyBlbmRwb2ludCBiZWxvbmdzIHRv + IHRoZSBzYW1lIGNsdXN0ZXIKICAgICAgICAjIG9ubHkgY2hlY2sgaWYgYGluZm9gIGNhcCBpcyBz + dXBwb3J0ZWQKICAgICAgICBpZiBpbmZvX2NhcF9zdXBwb3J0ZWQ6CiAgICAgICAgICAgIGZzaWQg + PSBzZWxmLmdldF9mc2lkKCkKICAgICAgICAgICAgcmd3X2ZzaWQsIGVyciA9IHNlbGYuZ2V0X3Jn + d19mc2lkKGJhc2VfdXJsLCB2ZXJpZnkpCiAgICAgICAgICAgIGlmIGVyciA9PSAiLTEiOgogICAg + ICAgICAgICAgICAgcmV0dXJuICItMSIKICAgICAgICAgICAgaWYgZnNpZCAhPSByZ3dfZnNpZDoK + ICAgICAgICAgICAgICAgIHN5cy5zdGRlcnIud3JpdGUoCiAgICAgICAgICAgICAgICAgICAgZiJU + aGUgcHJvdmlkZWQgcmd3IEVuZHBvaW50LCAne3NlbGYuX2FyZ19wYXJzZXIucmd3X2VuZHBvaW50 + fScsIGlzIGludmFsaWQuIFdlIGFyZSB2YWxpZGF0aW5nIGJ5IGNhbGxpbmcgdGhlIGFkbWlub3Bz + IGFwaSB0aHJvdWdoIHJndy1lbmRwb2ludCBhbmQgdmFsaWRhdGluZyB0aGUgY2x1c3Rlcl9pZCAn + e3Jnd19mc2lkfScgaXMgZXF1YWwgdG8gdGhlIGNlcGggY2x1c3RlciBmc2lkICd7ZnNpZH0nIgog + ICAgICAgICAgICAgICAgKQogICAgICAgICAgICAgICAgcmV0dXJuICItMSIKCiAgICAgICAgIyBj + aGVjayBpZiB0aGUgcmd3IGVuZHBvaW50IHBvb2wgZXhpc3QKICAgICAgICAjIG9ubHkgdmFsaWRh + dGUgaWYgcmd3X3Bvb2xfcHJlZml4IGlzIHBhc3NlZCBlbHNlIGl0IHdpbGwgdGFrZSBkZWZhdWx0 + IHZhbHVlIGFuZCB3ZSBkb24ndCBjcmVhdGUgdGhlc2UgZGVmYXVsdCBwb29scwogICAgICAgIGlm + IHNlbGYuX2FyZ19wYXJzZXIucmd3X3Bvb2xfcHJlZml4ICE9ICJkZWZhdWx0IjoKICAgICAgICAg + ICAgcmd3X3Bvb2xzX3RvX3ZhbGlkYXRlID0gWwogICAgICAgICAgICAgICAgZiJ7c2VsZi5fYXJn + X3BhcnNlci5yZ3dfcG9vbF9wcmVmaXh9LnJndy5tZXRhIiwKICAgICAgICAgICAgICAgICIucmd3 + LnJvb3QiLAogICAgICAgICAgICAgICAgZiJ7c2VsZi5fYXJnX3BhcnNlci5yZ3dfcG9vbF9wcmVm + aXh9LnJndy5jb250cm9sIiwKICAgICAgICAgICAgICAgIGYie3NlbGYuX2FyZ19wYXJzZXIucmd3 + X3Bvb2xfcHJlZml4fS5yZ3cubG9nIiwKICAgICAgICAgICAgXQogICAgICAgICAgICBmb3IgX3Jn + d19wb29sX3RvX3ZhbGlkYXRlIGluIHJnd19wb29sc190b192YWxpZGF0ZToKICAgICAgICAgICAg + 
ICAgIGlmIG5vdCBzZWxmLmNsdXN0ZXIucG9vbF9leGlzdHMoX3Jnd19wb29sX3RvX3ZhbGlkYXRl + KToKICAgICAgICAgICAgICAgICAgICBzeXMuc3RkZXJyLndyaXRlKAogICAgICAgICAgICAgICAg + ICAgICAgICBmIlRoZSBwcm92aWRlZCBwb29sLCAne19yZ3dfcG9vbF90b192YWxpZGF0ZX0nLCBk + b2VzIG5vdCBleGlzdCIKICAgICAgICAgICAgICAgICAgICApCiAgICAgICAgICAgICAgICAgICAg + cmV0dXJuICItMSIKCiAgICAgICAgcmV0dXJuICIiCgogICAgZGVmIHZhbGlkYXRlX3Jnd19tdWx0 + aXNpdGUoc2VsZiwgcmd3X211bHRpc2l0ZV9jb25maWdfbmFtZSwgcmd3X211bHRpc2l0ZV9jb25m + aWcpOgogICAgICAgIGlmIHJnd19tdWx0aXNpdGVfY29uZmlnICE9ICIiOgogICAgICAgICAgICBj + bWQgPSBbCiAgICAgICAgICAgICAgICAicmFkb3Nndy1hZG1pbiIsCiAgICAgICAgICAgICAgICBy + Z3dfbXVsdGlzaXRlX2NvbmZpZywKICAgICAgICAgICAgICAgICJnZXQiLAogICAgICAgICAgICAg + ICAgIi0tcmd3LSIgKyByZ3dfbXVsdGlzaXRlX2NvbmZpZywKICAgICAgICAgICAgICAgIHJnd19t + dWx0aXNpdGVfY29uZmlnX25hbWUsCiAgICAgICAgICAgIF0KICAgICAgICAgICAgdHJ5OgogICAg + ICAgICAgICAgICAgXyA9IHN1YnByb2Nlc3MuY2hlY2tfb3V0cHV0KGNtZCwgc3RkZXJyPXN1YnBy + b2Nlc3MuUElQRSkKICAgICAgICAgICAgZXhjZXB0IHN1YnByb2Nlc3MuQ2FsbGVkUHJvY2Vzc0Vy + cm9yIGFzIGV4ZWNFcnI6CiAgICAgICAgICAgICAgICBlcnJfbXNnID0gKAogICAgICAgICAgICAg + ICAgICAgIGYiZmFpbGVkIHRvIGV4ZWN1dGUgY29tbWFuZCB7Y21kfS4gT3V0cHV0OiB7ZXhlY0Vy + ci5vdXRwdXR9LiAiCiAgICAgICAgICAgICAgICAgICAgZiJDb2RlOiB7ZXhlY0Vyci5yZXR1cm5j + b2RlfS4gRXJyb3I6IHtleGVjRXJyLnN0ZGVycn0iCiAgICAgICAgICAgICAgICApCiAgICAgICAg + ICAgICAgICBzeXMuc3RkZXJyLndyaXRlKGVycl9tc2cpCiAgICAgICAgICAgICAgICByZXR1cm4g + Ii0xIgogICAgICAgIHJldHVybiAiIgoKICAgIGRlZiBjb252ZXJ0X2NvbW1hX3NlcGFyYXRlZF90 + b19hcnJheShzZWxmLCB2YWx1ZSk6CiAgICAgICAgcmV0dXJuIHZhbHVlLnNwbGl0KCIsIikKCiAg + ICBkZWYgcmFpc2VfZXhjZXB0aW9uX2lmX2FueV90b3BvbG9neV9mbGFnX2lzX21pc3Npbmcoc2Vs + Zik6CiAgICAgICAgaWYgKAogICAgICAgICAgICAoCiAgICAgICAgICAgICAgICBzZWxmLl9hcmdf + cGFyc2VyLnRvcG9sb2d5X3Bvb2xzICE9ICIiCiAgICAgICAgICAgICAgICBhbmQgKAogICAgICAg + ICAgICAgICAgICAgIHNlbGYuX2FyZ19wYXJzZXIudG9wb2xvZ3lfZmFpbHVyZV9kb21haW5fbGFi + ZWwgPT0gIiIKICAgICAgICAgICAgICAgICAgICBvciBzZWxmLl9hcmdfcGFyc2VyLnRvcG9sb2d5 + 
X2ZhaWx1cmVfZG9tYWluX3ZhbHVlcyA9PSAiIgogICAgICAgICAgICAgICAgKQogICAgICAgICAg + ICApCiAgICAgICAgICAgIG9yICgKICAgICAgICAgICAgICAgIHNlbGYuX2FyZ19wYXJzZXIudG9w + b2xvZ3lfZmFpbHVyZV9kb21haW5fbGFiZWwgIT0gIiIKICAgICAgICAgICAgICAgIGFuZCAoCiAg + ICAgICAgICAgICAgICAgICAgc2VsZi5fYXJnX3BhcnNlci50b3BvbG9neV9wb29scyA9PSAiIgog + ICAgICAgICAgICAgICAgICAgIG9yIHNlbGYuX2FyZ19wYXJzZXIudG9wb2xvZ3lfZmFpbHVyZV9k + b21haW5fdmFsdWVzID09ICIiCiAgICAgICAgICAgICAgICApCiAgICAgICAgICAgICkKICAgICAg + ICAgICAgb3IgKAogICAgICAgICAgICAgICAgc2VsZi5fYXJnX3BhcnNlci50b3BvbG9neV9mYWls + dXJlX2RvbWFpbl92YWx1ZXMgIT0gIiIKICAgICAgICAgICAgICAgIGFuZCAoCiAgICAgICAgICAg + ICAgICAgICAgc2VsZi5fYXJnX3BhcnNlci50b3BvbG9neV9wb29scyA9PSAiIgogICAgICAgICAg + ICAgICAgICAgIG9yIHNlbGYuX2FyZ19wYXJzZXIudG9wb2xvZ3lfZmFpbHVyZV9kb21haW5fbGFi + ZWwgPT0gIiIKICAgICAgICAgICAgICAgICkKICAgICAgICAgICAgKQogICAgICAgICk6CiAgICAg + ICAgICAgIHJhaXNlIEV4ZWN1dGlvbkZhaWx1cmVFeGNlcHRpb24oCiAgICAgICAgICAgICAgICAi + cHJvdmlkZSBhbGwgdGhlIHRvcG9sb2d5IGZsYWdzIC0tdG9wb2xvZ3ktcG9vbHMsIC0tdG9wb2xv + Z3ktZmFpbHVyZS1kb21haW4tbGFiZWwsIC0tdG9wb2xvZ3ktZmFpbHVyZS1kb21haW4tdmFsdWVz + IgogICAgICAgICAgICApCgogICAgZGVmIHZhbGlkYXRlX3RvcG9sb2d5X3ZhbHVlcyhzZWxmLCB0 + b3BvbG9neV9wb29scywgdG9wb2xvZ3lfZmQpOgogICAgICAgIGlmIGxlbih0b3BvbG9neV9wb29s + cykgIT0gbGVuKHRvcG9sb2d5X2ZkKToKICAgICAgICAgICAgcmFpc2UgRXhlY3V0aW9uRmFpbHVy + ZUV4Y2VwdGlvbigKICAgICAgICAgICAgICAgIGYiVGhlIHByb3ZpZGVkIHRvcG9sb2d5IHBvb2xz + LCAne3RvcG9sb2d5X3Bvb2xzfScsIGFuZCAiCiAgICAgICAgICAgICAgICBmInRvcG9sb2d5IGZh + aWx1cmUgZG9tYWluLCAne3RvcG9sb2d5X2ZkfScsIgogICAgICAgICAgICAgICAgZiJhcmUgb2Yg + ZGlmZmVyZW50IGxlbmd0aCwgJ3tsZW4odG9wb2xvZ3lfcG9vbHMpfScgYW5kICd7bGVuKHRvcG9s + b2d5X2ZkKX0nIHJlc3BjdGl2ZWx5IgogICAgICAgICAgICApCiAgICAgICAgcmV0dXJuCgogICAg + ZGVmIHZhbGlkYXRlX3RvcG9sb2d5X3JiZF9wb29scyhzZWxmLCB0b3BvbG9neV9yYmRfcG9vbHMp + OgogICAgICAgIGZvciBwb29sIGluIHRvcG9sb2d5X3JiZF9wb29sczoKICAgICAgICAgICAgc2Vs + Zi52YWxpZGF0ZV9yYmRfcG9vbChwb29sKQoKICAgIGRlZiBpbml0X3RvcG9sb2d5X3JiZF9wb29s + 
cyhzZWxmLCB0b3BvbG9neV9yYmRfcG9vbHMpOgogICAgICAgIGZvciBwb29sIGluIHRvcG9sb2d5 + X3JiZF9wb29sczoKICAgICAgICAgICAgc2VsZi5pbml0X3JiZF9wb29sKHBvb2wpCgogICAgZGVm + IF9nZW5fb3V0cHV0X21hcChzZWxmKToKICAgICAgICBpZiBzZWxmLm91dF9tYXA6CiAgICAgICAg + ICAgIHJldHVybgogICAgICAgICMgc3VwcG9ydCBsZWdhY3kgZmxhZyB3aXRoIHVwZ3JhZGVzCiAg + ICAgICAgaWYgc2VsZi5fYXJnX3BhcnNlci5jbHVzdGVyX25hbWU6CiAgICAgICAgICAgIHNlbGYu + X2FyZ19wYXJzZXIuazhzX2NsdXN0ZXJfbmFtZSA9IHNlbGYuX2FyZ19wYXJzZXIuY2x1c3Rlcl9u + YW1lCiAgICAgICAgc2VsZi5fYXJnX3BhcnNlci5rOHNfY2x1c3Rlcl9uYW1lID0gKAogICAgICAg + ICAgICBzZWxmLl9hcmdfcGFyc2VyLms4c19jbHVzdGVyX25hbWUubG93ZXIoKQogICAgICAgICkg + ICMgYWx3YXlzIGNvbnZlcnQgY2x1c3RlciBuYW1lIHRvIGxvd2VyY2FzZSBjaGFyYWN0ZXJzCiAg + ICAgICAgc2VsZi52YWxpZGF0ZV9yYmRfcG9vbChzZWxmLl9hcmdfcGFyc2VyLnJiZF9kYXRhX3Bv + b2xfbmFtZSkKICAgICAgICBzZWxmLmluaXRfcmJkX3Bvb2woc2VsZi5fYXJnX3BhcnNlci5yYmRf + ZGF0YV9wb29sX25hbWUpCiAgICAgICAgc2VsZi52YWxpZGF0ZV9yYWRvc19uYW1lc3BhY2UoKQog + ICAgICAgIHNlbGYuX2V4Y2x1ZGVkX2tleXMuYWRkKCJLOFNfQ0xVU1RFUl9OQU1FIikKICAgICAg + ICBzZWxmLmdldF9jZXBoZnNfZGF0YV9wb29sX2RldGFpbHMoKQogICAgICAgIHNlbGYub3V0X21h + cFsiTkFNRVNQQUNFIl0gPSBzZWxmLl9hcmdfcGFyc2VyLm5hbWVzcGFjZQogICAgICAgIHNlbGYu + b3V0X21hcFsiSzhTX0NMVVNURVJfTkFNRSJdID0gc2VsZi5fYXJnX3BhcnNlci5rOHNfY2x1c3Rl + cl9uYW1lCiAgICAgICAgc2VsZi5vdXRfbWFwWyJST09LX0VYVEVSTkFMX0ZTSUQiXSA9IHNlbGYu + Z2V0X2ZzaWQoKQogICAgICAgIHNlbGYub3V0X21hcFsiUk9PS19FWFRFUk5BTF9VU0VSTkFNRSJd + ID0gc2VsZi5ydW5fYXNfdXNlcgogICAgICAgIHNlbGYub3V0X21hcFsiUk9PS19FWFRFUk5BTF9D + RVBIX01PTl9EQVRBIl0gPSBzZWxmLmdldF9jZXBoX2V4dGVybmFsX21vbl9kYXRhKCkKICAgICAg + ICBzZWxmLm91dF9tYXBbIlJPT0tfRVhURVJOQUxfVVNFUl9TRUNSRVQiXSA9IHNlbGYuY3JlYXRl + X2NoZWNrZXJLZXkoCiAgICAgICAgICAgICJjbGllbnQuaGVhbHRoY2hlY2tlciIKICAgICAgICAp + CiAgICAgICAgc2VsZi5vdXRfbWFwWyJST09LX0VYVEVSTkFMX0RBU0hCT0FSRF9MSU5LIl0gPSBz + ZWxmLmdldF9jZXBoX2Rhc2hib2FyZF9saW5rKCkKICAgICAgICAoCiAgICAgICAgICAgIHNlbGYu + b3V0X21hcFsiQ1NJX1JCRF9OT0RFX1NFQ1JFVCJdLAogICAgICAgICAgICBzZWxmLm91dF9tYXBb + 
IkNTSV9SQkRfTk9ERV9TRUNSRVRfTkFNRSJdLAogICAgICAgICkgPSBzZWxmLmNyZWF0ZV9jZXBo + Q1NJS2V5cmluZ191c2VyKCJjbGllbnQuY3NpLXJiZC1ub2RlIikKICAgICAgICAoCiAgICAgICAg + ICAgIHNlbGYub3V0X21hcFsiQ1NJX1JCRF9QUk9WSVNJT05FUl9TRUNSRVQiXSwKICAgICAgICAg + ICAgc2VsZi5vdXRfbWFwWyJDU0lfUkJEX1BST1ZJU0lPTkVSX1NFQ1JFVF9OQU1FIl0sCiAgICAg + ICAgKSA9IHNlbGYuY3JlYXRlX2NlcGhDU0lLZXlyaW5nX3VzZXIoImNsaWVudC5jc2ktcmJkLXBy + b3Zpc2lvbmVyIikKICAgICAgICBzZWxmLm91dF9tYXBbIkNFUEhGU19QT09MX05BTUUiXSA9IHNl + bGYuX2FyZ19wYXJzZXIuY2VwaGZzX2RhdGFfcG9vbF9uYW1lCiAgICAgICAgc2VsZi5vdXRfbWFw + WyJDRVBIRlNfTUVUQURBVEFfUE9PTF9OQU1FIl0gPSAoCiAgICAgICAgICAgIHNlbGYuX2FyZ19w + YXJzZXIuY2VwaGZzX21ldGFkYXRhX3Bvb2xfbmFtZQogICAgICAgICkKICAgICAgICBzZWxmLm91 + dF9tYXBbIkNFUEhGU19GU19OQU1FIl0gPSBzZWxmLl9hcmdfcGFyc2VyLmNlcGhmc19maWxlc3lz + dGVtX25hbWUKICAgICAgICBzZWxmLm91dF9tYXBbIlJFU1RSSUNURURfQVVUSF9QRVJNSVNTSU9O + Il0gPSAoCiAgICAgICAgICAgIHNlbGYuX2FyZ19wYXJzZXIucmVzdHJpY3RlZF9hdXRoX3Blcm1p + c3Npb24KICAgICAgICApCiAgICAgICAgc2VsZi5vdXRfbWFwWyJSQURPU19OQU1FU1BBQ0UiXSA9 + IHNlbGYuX2FyZ19wYXJzZXIucmFkb3NfbmFtZXNwYWNlCiAgICAgICAgc2VsZi5vdXRfbWFwWyJT + VUJWT0xVTUVfR1JPVVAiXSA9IHNlbGYuX2FyZ19wYXJzZXIuc3Vidm9sdW1lX2dyb3VwCiAgICAg + ICAgc2VsZi5vdXRfbWFwWyJDU0lfQ0VQSEZTX05PREVfU0VDUkVUIl0gPSAiIgogICAgICAgIHNl + bGYub3V0X21hcFsiQ1NJX0NFUEhGU19QUk9WSVNJT05FUl9TRUNSRVQiXSA9ICIiCiAgICAgICAg + IyBjcmVhdGUgQ2VwaEZTIG5vZGUgYW5kIHByb3Zpc2lvbmVyIGtleXJpbmcgb25seSB3aGVuIE1E + UyBleGlzdHMKICAgICAgICBpZiBzZWxmLm91dF9tYXBbIkNFUEhGU19GU19OQU1FIl0gYW5kIHNl + bGYub3V0X21hcFsiQ0VQSEZTX1BPT0xfTkFNRSJdOgogICAgICAgICAgICAoCiAgICAgICAgICAg + ICAgICBzZWxmLm91dF9tYXBbIkNTSV9DRVBIRlNfTk9ERV9TRUNSRVQiXSwKICAgICAgICAgICAg + ICAgIHNlbGYub3V0X21hcFsiQ1NJX0NFUEhGU19OT0RFX1NFQ1JFVF9OQU1FIl0sCiAgICAgICAg + ICAgICkgPSBzZWxmLmNyZWF0ZV9jZXBoQ1NJS2V5cmluZ191c2VyKCJjbGllbnQuY3NpLWNlcGhm + cy1ub2RlIikKICAgICAgICAgICAgKAogICAgICAgICAgICAgICAgc2VsZi5vdXRfbWFwWyJDU0lf + Q0VQSEZTX1BST1ZJU0lPTkVSX1NFQ1JFVCJdLAogICAgICAgICAgICAgICAgc2VsZi5vdXRfbWFw + 
WyJDU0lfQ0VQSEZTX1BST1ZJU0lPTkVSX1NFQ1JFVF9OQU1FIl0sCiAgICAgICAgICAgICkgPSBz + ZWxmLmNyZWF0ZV9jZXBoQ1NJS2V5cmluZ191c2VyKCJjbGllbnQuY3NpLWNlcGhmcy1wcm92aXNp + b25lciIpCiAgICAgICAgICAgICMgY3JlYXRlIHRoZSBkZWZhdWx0ICJjc2kiIHN1YnZvbHVtZWdy + b3VwCiAgICAgICAgICAgIHNlbGYuZ2V0X29yX2NyZWF0ZV9zdWJ2b2x1bWVfZ3JvdXAoCiAgICAg + ICAgICAgICAgICAiY3NpIiwgc2VsZi5fYXJnX3BhcnNlci5jZXBoZnNfZmlsZXN5c3RlbV9uYW1l + CiAgICAgICAgICAgICkKICAgICAgICAgICAgIyBwaW4gdGhlIGRlZmF1bHQgImNzaSIgc3Vidm9s + dW1lZ3JvdXAKICAgICAgICAgICAgc2VsZi5waW5fc3Vidm9sdW1lKAogICAgICAgICAgICAgICAg + ImNzaSIsIHNlbGYuX2FyZ19wYXJzZXIuY2VwaGZzX2ZpbGVzeXN0ZW1fbmFtZSwgImRpc3RyaWJ1 + dGVkIiwgIjEiCiAgICAgICAgICAgICkKICAgICAgICAgICAgaWYgc2VsZi5vdXRfbWFwWyJTVUJW + T0xVTUVfR1JPVVAiXToKICAgICAgICAgICAgICAgIHNlbGYuZ2V0X29yX2NyZWF0ZV9zdWJ2b2x1 + bWVfZ3JvdXAoCiAgICAgICAgICAgICAgICAgICAgc2VsZi5fYXJnX3BhcnNlci5zdWJ2b2x1bWVf + Z3JvdXAsCiAgICAgICAgICAgICAgICAgICAgc2VsZi5fYXJnX3BhcnNlci5jZXBoZnNfZmlsZXN5 + c3RlbV9uYW1lLAogICAgICAgICAgICAgICAgKQogICAgICAgICAgICAgICAgc2VsZi5waW5fc3Vi + dm9sdW1lKAogICAgICAgICAgICAgICAgICAgIHNlbGYuX2FyZ19wYXJzZXIuc3Vidm9sdW1lX2dy + b3VwLAogICAgICAgICAgICAgICAgICAgIHNlbGYuX2FyZ19wYXJzZXIuY2VwaGZzX2ZpbGVzeXN0 + ZW1fbmFtZSwKICAgICAgICAgICAgICAgICAgICAiZGlzdHJpYnV0ZWQiLAogICAgICAgICAgICAg + ICAgICAgICIxIiwKICAgICAgICAgICAgICAgICkKICAgICAgICBzZWxmLm91dF9tYXBbIlJHV19U + TFNfQ0VSVCJdID0gIiIKICAgICAgICBzZWxmLm91dF9tYXBbIk1PTklUT1JJTkdfRU5EUE9JTlQi + XSA9ICIiCiAgICAgICAgc2VsZi5vdXRfbWFwWyJNT05JVE9SSU5HX0VORFBPSU5UX1BPUlQiXSA9 + ICIiCiAgICAgICAgaWYgbm90IHNlbGYuX2FyZ19wYXJzZXIuc2tpcF9tb25pdG9yaW5nX2VuZHBv + aW50OgogICAgICAgICAgICAoCiAgICAgICAgICAgICAgICBzZWxmLm91dF9tYXBbIk1PTklUT1JJ + TkdfRU5EUE9JTlQiXSwKICAgICAgICAgICAgICAgIHNlbGYub3V0X21hcFsiTU9OSVRPUklOR19F + TkRQT0lOVF9QT1JUIl0sCiAgICAgICAgICAgICkgPSBzZWxmLmdldF9hY3RpdmVfYW5kX3N0YW5k + YnlfbWdycygpCiAgICAgICAgc2VsZi5vdXRfbWFwWyJSQkRfUE9PTF9OQU1FIl0gPSBzZWxmLl9h + cmdfcGFyc2VyLnJiZF9kYXRhX3Bvb2xfbmFtZQogICAgICAgIHNlbGYub3V0X21hcFsiUkJEX01F + 
VEFEQVRBX0VDX1BPT0xfTkFNRSJdID0gKAogICAgICAgICAgICBzZWxmLnZhbGlkYXRlX3JiZF9t + ZXRhZGF0YV9lY19wb29sX25hbWUoKQogICAgICAgICkKICAgICAgICBzZWxmLm91dF9tYXBbIlRP + UE9MT0dZX1BPT0xTIl0gPSBzZWxmLl9hcmdfcGFyc2VyLnRvcG9sb2d5X3Bvb2xzCiAgICAgICAg + c2VsZi5vdXRfbWFwWyJUT1BPTE9HWV9GQUlMVVJFX0RPTUFJTl9MQUJFTCJdID0gKAogICAgICAg + ICAgICBzZWxmLl9hcmdfcGFyc2VyLnRvcG9sb2d5X2ZhaWx1cmVfZG9tYWluX2xhYmVsCiAgICAg + ICAgKQogICAgICAgIHNlbGYub3V0X21hcFsiVE9QT0xPR1lfRkFJTFVSRV9ET01BSU5fVkFMVUVT + Il0gPSAoCiAgICAgICAgICAgIHNlbGYuX2FyZ19wYXJzZXIudG9wb2xvZ3lfZmFpbHVyZV9kb21h + aW5fdmFsdWVzCiAgICAgICAgKQogICAgICAgIGlmICgKICAgICAgICAgICAgc2VsZi5fYXJnX3Bh + cnNlci50b3BvbG9neV9wb29scyAhPSAiIgogICAgICAgICAgICBhbmQgc2VsZi5fYXJnX3BhcnNl + ci50b3BvbG9neV9mYWlsdXJlX2RvbWFpbl9sYWJlbCAhPSAiIgogICAgICAgICAgICBhbmQgc2Vs + Zi5fYXJnX3BhcnNlci50b3BvbG9neV9mYWlsdXJlX2RvbWFpbl92YWx1ZXMgIT0gIiIKICAgICAg + ICApOgogICAgICAgICAgICBzZWxmLnZhbGlkYXRlX3RvcG9sb2d5X3ZhbHVlcygKICAgICAgICAg + ICAgICAgIHNlbGYuY29udmVydF9jb21tYV9zZXBhcmF0ZWRfdG9fYXJyYXkoc2VsZi5vdXRfbWFw + WyJUT1BPTE9HWV9QT09MUyJdKSwKICAgICAgICAgICAgICAgIHNlbGYuY29udmVydF9jb21tYV9z + ZXBhcmF0ZWRfdG9fYXJyYXkoCiAgICAgICAgICAgICAgICAgICAgc2VsZi5vdXRfbWFwWyJUT1BP + TE9HWV9GQUlMVVJFX0RPTUFJTl9WQUxVRVMiXQogICAgICAgICAgICAgICAgKSwKICAgICAgICAg + ICAgKQogICAgICAgICAgICBzZWxmLnZhbGlkYXRlX3RvcG9sb2d5X3JiZF9wb29scygKICAgICAg + ICAgICAgICAgIHNlbGYuY29udmVydF9jb21tYV9zZXBhcmF0ZWRfdG9fYXJyYXkoc2VsZi5vdXRf + bWFwWyJUT1BPTE9HWV9QT09MUyJdKQogICAgICAgICAgICApCiAgICAgICAgICAgIHNlbGYuaW5p + dF90b3BvbG9neV9yYmRfcG9vbHMoCiAgICAgICAgICAgICAgICBzZWxmLmNvbnZlcnRfY29tbWFf + c2VwYXJhdGVkX3RvX2FycmF5KHNlbGYub3V0X21hcFsiVE9QT0xPR1lfUE9PTFMiXSkKICAgICAg + ICAgICAgKQogICAgICAgIGVsc2U6CiAgICAgICAgICAgIHNlbGYucmFpc2VfZXhjZXB0aW9uX2lm + X2FueV90b3BvbG9neV9mbGFnX2lzX21pc3NpbmcoKQoKICAgICAgICBzZWxmLm91dF9tYXBbIlJH + V19QT09MX1BSRUZJWCJdID0gc2VsZi5fYXJnX3BhcnNlci5yZ3dfcG9vbF9wcmVmaXgKICAgICAg + ICBzZWxmLm91dF9tYXBbIlJHV19FTkRQT0lOVCJdID0gIiIKICAgICAgICBpZiBzZWxmLl9hcmdf + 
cGFyc2VyLnJnd19lbmRwb2ludDoKICAgICAgICAgICAgaWYgc2VsZi5fYXJnX3BhcnNlci5kcnlf + cnVuOgogICAgICAgICAgICAgICAgc2VsZi5jcmVhdGVfcmd3X2FkbWluX29wc191c2VyKCkKICAg + ICAgICAgICAgZWxzZToKICAgICAgICAgICAgICAgIGlmICgKICAgICAgICAgICAgICAgICAgICBz + ZWxmLl9hcmdfcGFyc2VyLnJnd19yZWFsbV9uYW1lICE9ICIiCiAgICAgICAgICAgICAgICAgICAg + YW5kIHNlbGYuX2FyZ19wYXJzZXIucmd3X3pvbmVncm91cF9uYW1lICE9ICIiCiAgICAgICAgICAg + ICAgICAgICAgYW5kIHNlbGYuX2FyZ19wYXJzZXIucmd3X3pvbmVfbmFtZSAhPSAiIgogICAgICAg + ICAgICAgICAgKToKICAgICAgICAgICAgICAgICAgICBlcnIgPSBzZWxmLnZhbGlkYXRlX3Jnd19t + dWx0aXNpdGUoCiAgICAgICAgICAgICAgICAgICAgICAgIHNlbGYuX2FyZ19wYXJzZXIucmd3X3Jl + YWxtX25hbWUsICJyZWFsbSIKICAgICAgICAgICAgICAgICAgICApCiAgICAgICAgICAgICAgICAg + ICAgZXJyID0gc2VsZi52YWxpZGF0ZV9yZ3dfbXVsdGlzaXRlKAogICAgICAgICAgICAgICAgICAg + ICAgICBzZWxmLl9hcmdfcGFyc2VyLnJnd196b25lZ3JvdXBfbmFtZSwgInpvbmVncm91cCIKICAg + ICAgICAgICAgICAgICAgICApCiAgICAgICAgICAgICAgICAgICAgZXJyID0gc2VsZi52YWxpZGF0 + ZV9yZ3dfbXVsdGlzaXRlKAogICAgICAgICAgICAgICAgICAgICAgICBzZWxmLl9hcmdfcGFyc2Vy + LnJnd196b25lX25hbWUsICJ6b25lIgogICAgICAgICAgICAgICAgICAgICkKCiAgICAgICAgICAg + ICAgICBpZiAoCiAgICAgICAgICAgICAgICAgICAgc2VsZi5fYXJnX3BhcnNlci5yZ3dfcmVhbG1f + bmFtZSA9PSAiIgogICAgICAgICAgICAgICAgICAgIGFuZCBzZWxmLl9hcmdfcGFyc2VyLnJnd196 + b25lZ3JvdXBfbmFtZSA9PSAiIgogICAgICAgICAgICAgICAgICAgIGFuZCBzZWxmLl9hcmdfcGFy + c2VyLnJnd196b25lX25hbWUgPT0gIiIKICAgICAgICAgICAgICAgICkgb3IgKAogICAgICAgICAg + ICAgICAgICAgIHNlbGYuX2FyZ19wYXJzZXIucmd3X3JlYWxtX25hbWUgIT0gIiIKICAgICAgICAg + ICAgICAgICAgICBhbmQgc2VsZi5fYXJnX3BhcnNlci5yZ3dfem9uZWdyb3VwX25hbWUgIT0gIiIK + ICAgICAgICAgICAgICAgICAgICBhbmQgc2VsZi5fYXJnX3BhcnNlci5yZ3dfem9uZV9uYW1lICE9 + ICIiCiAgICAgICAgICAgICAgICApOgogICAgICAgICAgICAgICAgICAgICgKICAgICAgICAgICAg + ICAgICAgICAgICAgc2VsZi5vdXRfbWFwWyJSR1dfQURNSU5fT1BTX1VTRVJfQUNDRVNTX0tFWSJd + LAogICAgICAgICAgICAgICAgICAgICAgICBzZWxmLm91dF9tYXBbIlJHV19BRE1JTl9PUFNfVVNF + Ul9TRUNSRVRfS0VZIl0sCiAgICAgICAgICAgICAgICAgICAgICAgIGluZm9fY2FwX3N1cHBvcnRl + 
ZCwKICAgICAgICAgICAgICAgICAgICAgICAgZXJyLAogICAgICAgICAgICAgICAgICAgICkgPSBz + ZWxmLmNyZWF0ZV9yZ3dfYWRtaW5fb3BzX3VzZXIoKQogICAgICAgICAgICAgICAgICAgIGVyciA9 + IHNlbGYudmFsaWRhdGVfcmd3X2VuZHBvaW50KGluZm9fY2FwX3N1cHBvcnRlZCkKICAgICAgICAg + ICAgICAgICAgICBpZiBzZWxmLl9hcmdfcGFyc2VyLnJnd190bHNfY2VydF9wYXRoOgogICAgICAg + ICAgICAgICAgICAgICAgICBzZWxmLm91dF9tYXBbIlJHV19UTFNfQ0VSVCJdID0gKAogICAgICAg + ICAgICAgICAgICAgICAgICAgICAgc2VsZi52YWxpZGF0ZV9yZ3dfZW5kcG9pbnRfdGxzX2NlcnQo + KQogICAgICAgICAgICAgICAgICAgICAgICApCiAgICAgICAgICAgICAgICAgICAgIyBpZiB0aGVy + ZSBpcyBubyBlcnJvciwgc2V0IHRoZSBSR1dfRU5EUE9JTlQKICAgICAgICAgICAgICAgICAgICBp + ZiBlcnIgIT0gIi0xIjoKICAgICAgICAgICAgICAgICAgICAgICAgc2VsZi5vdXRfbWFwWyJSR1df + RU5EUE9JTlQiXSA9IHNlbGYuX2FyZ19wYXJzZXIucmd3X2VuZHBvaW50CiAgICAgICAgICAgICAg + ICBlbHNlOgogICAgICAgICAgICAgICAgICAgIGVyciA9ICJQbGVhc2UgcHJvdmlkZSBhbGwgdGhl + IFJHVyBtdWx0aXNpdGUgcGFyYW1ldGVycyBvciBub25lIG9mIHRoZW0iCiAgICAgICAgICAgICAg + ICAgICAgc3lzLnN0ZGVyci53cml0ZShlcnIpCgogICAgZGVmIGdlbl9zaGVsbF9vdXQoc2VsZik6 + CiAgICAgICAgc2VsZi5fZ2VuX291dHB1dF9tYXAoKQogICAgICAgIHNoT3V0SU8gPSBTdHJpbmdJ + TygpCiAgICAgICAgZm9yIGssIHYgaW4gc2VsZi5vdXRfbWFwLml0ZW1zKCk6CiAgICAgICAgICAg + IGlmIHYgYW5kIGsgbm90IGluIHNlbGYuX2V4Y2x1ZGVkX2tleXM6CiAgICAgICAgICAgICAgICBz + aE91dElPLndyaXRlKGYiZXhwb3J0IHtrfT17dn17TElORVNFUH0iKQogICAgICAgIHNoT3V0ID0g + c2hPdXRJTy5nZXR2YWx1ZSgpCiAgICAgICAgc2hPdXRJTy5jbG9zZSgpCiAgICAgICAgcmV0dXJu + IHNoT3V0CgogICAgZGVmIGdlbl9qc29uX291dChzZWxmKToKICAgICAgICBzZWxmLl9nZW5fb3V0 + cHV0X21hcCgpCiAgICAgICAgaWYgc2VsZi5fYXJnX3BhcnNlci5kcnlfcnVuOgogICAgICAgICAg + ICByZXR1cm4gIiIKICAgICAgICBqc29uX291dCA9IFsKICAgICAgICAgICAgewogICAgICAgICAg + ICAgICAgIm5hbWUiOiAicm9vay1jZXBoLW1vbi1lbmRwb2ludHMiLAogICAgICAgICAgICAgICAg + ImtpbmQiOiAiQ29uZmlnTWFwIiwKICAgICAgICAgICAgICAgICJkYXRhIjogewogICAgICAgICAg + ICAgICAgICAgICJkYXRhIjogc2VsZi5vdXRfbWFwWyJST09LX0VYVEVSTkFMX0NFUEhfTU9OX0RB + VEEiXSwKICAgICAgICAgICAgICAgICAgICAibWF4TW9uSWQiOiAiMCIsCiAgICAgICAgICAgICAg + 
ICAgICAgIm1hcHBpbmciOiAie30iLAogICAgICAgICAgICAgICAgfSwKICAgICAgICAgICAgfSwK + ICAgICAgICAgICAgewogICAgICAgICAgICAgICAgIm5hbWUiOiAicm9vay1jZXBoLW1vbiIsCiAg + ICAgICAgICAgICAgICAia2luZCI6ICJTZWNyZXQiLAogICAgICAgICAgICAgICAgImRhdGEiOiB7 + CiAgICAgICAgICAgICAgICAgICAgImFkbWluLXNlY3JldCI6ICJhZG1pbi1zZWNyZXQiLAogICAg + ICAgICAgICAgICAgICAgICJmc2lkIjogc2VsZi5vdXRfbWFwWyJST09LX0VYVEVSTkFMX0ZTSUQi + XSwKICAgICAgICAgICAgICAgICAgICAibW9uLXNlY3JldCI6ICJtb24tc2VjcmV0IiwKICAgICAg + ICAgICAgICAgIH0sCiAgICAgICAgICAgIH0sCiAgICAgICAgICAgIHsKICAgICAgICAgICAgICAg + ICJuYW1lIjogInJvb2stY2VwaC1vcGVyYXRvci1jcmVkcyIsCiAgICAgICAgICAgICAgICAia2lu + ZCI6ICJTZWNyZXQiLAogICAgICAgICAgICAgICAgImRhdGEiOiB7CiAgICAgICAgICAgICAgICAg + ICAgInVzZXJJRCI6IHNlbGYub3V0X21hcFsiUk9PS19FWFRFUk5BTF9VU0VSTkFNRSJdLAogICAg + ICAgICAgICAgICAgICAgICJ1c2VyS2V5Ijogc2VsZi5vdXRfbWFwWyJST09LX0VYVEVSTkFMX1VT + RVJfU0VDUkVUIl0sCiAgICAgICAgICAgICAgICB9LAogICAgICAgICAgICB9LAogICAgICAgIF0K + CiAgICAgICAgIyBpZiAnTU9OSVRPUklOR19FTkRQT0lOVCcgZXhpc3RzLCB0aGVuIG9ubHkgYWRk + ICdtb25pdG9yaW5nLWVuZHBvaW50JyB0byBDbHVzdGVyCiAgICAgICAgaWYgKAogICAgICAgICAg + ICBzZWxmLm91dF9tYXBbIk1PTklUT1JJTkdfRU5EUE9JTlQiXQogICAgICAgICAgICBhbmQgc2Vs + Zi5vdXRfbWFwWyJNT05JVE9SSU5HX0VORFBPSU5UX1BPUlQiXQogICAgICAgICk6CiAgICAgICAg + ICAgIGpzb25fb3V0LmFwcGVuZCgKICAgICAgICAgICAgICAgIHsKICAgICAgICAgICAgICAgICAg + ICAibmFtZSI6ICJtb25pdG9yaW5nLWVuZHBvaW50IiwKICAgICAgICAgICAgICAgICAgICAia2lu + ZCI6ICJDZXBoQ2x1c3RlciIsCiAgICAgICAgICAgICAgICAgICAgImRhdGEiOiB7CiAgICAgICAg + ICAgICAgICAgICAgICAgICJNb25pdG9yaW5nRW5kcG9pbnQiOiBzZWxmLm91dF9tYXBbIk1PTklU + T1JJTkdfRU5EUE9JTlQiXSwKICAgICAgICAgICAgICAgICAgICAgICAgIk1vbml0b3JpbmdQb3J0 + Ijogc2VsZi5vdXRfbWFwWyJNT05JVE9SSU5HX0VORFBPSU5UX1BPUlQiXSwKICAgICAgICAgICAg + ICAgICAgICB9LAogICAgICAgICAgICAgICAgfQogICAgICAgICAgICApCgogICAgICAgICMgaWYg + J0NTSV9SQkRfTk9ERV9TRUNSRVQnIGV4aXN0cywgdGhlbiBvbmx5IGFkZCAncm9vay1jc2ktcmJk + LXByb3Zpc2lvbmVyJyBTZWNyZXQKICAgICAgICBpZiAoCiAgICAgICAgICAgIHNlbGYub3V0X21h + 
cFsiQ1NJX1JCRF9OT0RFX1NFQ1JFVCJdCiAgICAgICAgICAgIGFuZCBzZWxmLm91dF9tYXBbIkNT + SV9SQkRfTk9ERV9TRUNSRVRfTkFNRSJdCiAgICAgICAgKToKICAgICAgICAgICAganNvbl9vdXQu + YXBwZW5kKAogICAgICAgICAgICAgICAgewogICAgICAgICAgICAgICAgICAgICJuYW1lIjogZiJy + b29rLXtzZWxmLm91dF9tYXBbJ0NTSV9SQkRfTk9ERV9TRUNSRVRfTkFNRSddfSIsCiAgICAgICAg + ICAgICAgICAgICAgImtpbmQiOiAiU2VjcmV0IiwKICAgICAgICAgICAgICAgICAgICAiZGF0YSI6 + IHsKICAgICAgICAgICAgICAgICAgICAgICAgInVzZXJJRCI6IHNlbGYub3V0X21hcFsiQ1NJX1JC + RF9OT0RFX1NFQ1JFVF9OQU1FIl0sCiAgICAgICAgICAgICAgICAgICAgICAgICJ1c2VyS2V5Ijog + c2VsZi5vdXRfbWFwWyJDU0lfUkJEX05PREVfU0VDUkVUIl0sCiAgICAgICAgICAgICAgICAgICAg + fSwKICAgICAgICAgICAgICAgIH0KICAgICAgICAgICAgKQogICAgICAgICMgaWYgJ0NTSV9SQkRf + UFJPVklTSU9ORVJfU0VDUkVUJyBleGlzdHMsIHRoZW4gb25seSBhZGQgJ3Jvb2stY3NpLXJiZC1w + cm92aXNpb25lcicgU2VjcmV0CiAgICAgICAgaWYgKAogICAgICAgICAgICBzZWxmLm91dF9tYXBb + IkNTSV9SQkRfUFJPVklTSU9ORVJfU0VDUkVUIl0KICAgICAgICAgICAgYW5kIHNlbGYub3V0X21h + cFsiQ1NJX1JCRF9QUk9WSVNJT05FUl9TRUNSRVRfTkFNRSJdCiAgICAgICAgKToKICAgICAgICAg + ICAganNvbl9vdXQuYXBwZW5kKAogICAgICAgICAgICAgICAgewogICAgICAgICAgICAgICAgICAg + ICJuYW1lIjogZiJyb29rLXtzZWxmLm91dF9tYXBbJ0NTSV9SQkRfUFJPVklTSU9ORVJfU0VDUkVU + X05BTUUnXX0iLAogICAgICAgICAgICAgICAgICAgICJraW5kIjogIlNlY3JldCIsCiAgICAgICAg + ICAgICAgICAgICAgImRhdGEiOiB7CiAgICAgICAgICAgICAgICAgICAgICAgICJ1c2VySUQiOiBz + ZWxmLm91dF9tYXBbIkNTSV9SQkRfUFJPVklTSU9ORVJfU0VDUkVUX05BTUUiXSwKICAgICAgICAg + ICAgICAgICAgICAgICAgInVzZXJLZXkiOiBzZWxmLm91dF9tYXBbIkNTSV9SQkRfUFJPVklTSU9O + RVJfU0VDUkVUIl0sCiAgICAgICAgICAgICAgICAgICAgfSwKICAgICAgICAgICAgICAgIH0KICAg + ICAgICAgICAgKQogICAgICAgICMgaWYgJ0NTSV9DRVBIRlNfUFJPVklTSU9ORVJfU0VDUkVUJyBl + eGlzdHMsIHRoZW4gb25seSBhZGQgJ3Jvb2stY3NpLWNlcGhmcy1wcm92aXNpb25lcicgU2VjcmV0 + CiAgICAgICAgaWYgKAogICAgICAgICAgICBzZWxmLm91dF9tYXBbIkNTSV9DRVBIRlNfUFJPVklT + SU9ORVJfU0VDUkVUIl0KICAgICAgICAgICAgYW5kIHNlbGYub3V0X21hcFsiQ1NJX0NFUEhGU19Q + Uk9WSVNJT05FUl9TRUNSRVRfTkFNRSJdCiAgICAgICAgKToKICAgICAgICAgICAganNvbl9vdXQu + 
YXBwZW5kKAogICAgICAgICAgICAgICAgewogICAgICAgICAgICAgICAgICAgICJuYW1lIjogZiJy + b29rLXtzZWxmLm91dF9tYXBbJ0NTSV9DRVBIRlNfUFJPVklTSU9ORVJfU0VDUkVUX05BTUUnXX0i + LAogICAgICAgICAgICAgICAgICAgICJraW5kIjogIlNlY3JldCIsCiAgICAgICAgICAgICAgICAg + ICAgImRhdGEiOiB7CiAgICAgICAgICAgICAgICAgICAgICAgICJhZG1pbklEIjogc2VsZi5vdXRf + bWFwWyJDU0lfQ0VQSEZTX1BST1ZJU0lPTkVSX1NFQ1JFVF9OQU1FIl0sCiAgICAgICAgICAgICAg + ICAgICAgICAgICJhZG1pbktleSI6IHNlbGYub3V0X21hcFsiQ1NJX0NFUEhGU19QUk9WSVNJT05F + Ul9TRUNSRVQiXSwKICAgICAgICAgICAgICAgICAgICB9LAogICAgICAgICAgICAgICAgfQogICAg + ICAgICAgICApCiAgICAgICAgIyBpZiAnQ1NJX0NFUEhGU19OT0RFX1NFQ1JFVCcgZXhpc3RzLCB0 + aGVuIG9ubHkgYWRkICdyb29rLWNzaS1jZXBoZnMtbm9kZScgU2VjcmV0CiAgICAgICAgaWYgKAog + ICAgICAgICAgICBzZWxmLm91dF9tYXBbIkNTSV9DRVBIRlNfTk9ERV9TRUNSRVQiXQogICAgICAg + ICAgICBhbmQgc2VsZi5vdXRfbWFwWyJDU0lfQ0VQSEZTX05PREVfU0VDUkVUX05BTUUiXQogICAg + ICAgICk6CiAgICAgICAgICAgIGpzb25fb3V0LmFwcGVuZCgKICAgICAgICAgICAgICAgIHsKICAg + ICAgICAgICAgICAgICAgICAibmFtZSI6IGYicm9vay17c2VsZi5vdXRfbWFwWydDU0lfQ0VQSEZT + X05PREVfU0VDUkVUX05BTUUnXX0iLAogICAgICAgICAgICAgICAgICAgICJraW5kIjogIlNlY3Jl + dCIsCiAgICAgICAgICAgICAgICAgICAgImRhdGEiOiB7CiAgICAgICAgICAgICAgICAgICAgICAg + ICJhZG1pbklEIjogc2VsZi5vdXRfbWFwWyJDU0lfQ0VQSEZTX05PREVfU0VDUkVUX05BTUUiXSwK + ICAgICAgICAgICAgICAgICAgICAgICAgImFkbWluS2V5Ijogc2VsZi5vdXRfbWFwWyJDU0lfQ0VQ + SEZTX05PREVfU0VDUkVUIl0sCiAgICAgICAgICAgICAgICAgICAgfSwKICAgICAgICAgICAgICAg + IH0KICAgICAgICAgICAgKQogICAgICAgICMgaWYgJ1JPT0tfRVhURVJOQUxfREFTSEJPQVJEX0xJ + TksnIGV4aXN0cywgdGhlbiBvbmx5IGFkZCAncm9vay1jZXBoLWRhc2hib2FyZC1saW5rJyBTZWNy + ZXQKICAgICAgICBpZiBzZWxmLm91dF9tYXBbIlJPT0tfRVhURVJOQUxfREFTSEJPQVJEX0xJTksi + XToKICAgICAgICAgICAganNvbl9vdXQuYXBwZW5kKAogICAgICAgICAgICAgICAgewogICAgICAg + ICAgICAgICAgICAgICJuYW1lIjogInJvb2stY2VwaC1kYXNoYm9hcmQtbGluayIsCiAgICAgICAg + ICAgICAgICAgICAgImtpbmQiOiAiU2VjcmV0IiwKICAgICAgICAgICAgICAgICAgICAiZGF0YSI6 + IHsKICAgICAgICAgICAgICAgICAgICAgICAgInVzZXJJRCI6ICJjZXBoLWRhc2hib2FyZC1saW5r + 
IiwKICAgICAgICAgICAgICAgICAgICAgICAgInVzZXJLZXkiOiBzZWxmLm91dF9tYXBbIlJPT0tf + RVhURVJOQUxfREFTSEJPQVJEX0xJTksiXSwKICAgICAgICAgICAgICAgICAgICB9LAogICAgICAg + ICAgICAgICAgfQogICAgICAgICAgICApCiAgICAgICAgIyBpZiAnUkFET1NfTkFNRVNQQUNFJyBl + eGlzdHMsIHRoZW4gb25seSBhZGQgdGhlICJSQURPU19OQU1FU1BBQ0UiIG5hbWVzcGFjZQogICAg + ICAgIGlmICgKICAgICAgICAgICAgc2VsZi5vdXRfbWFwWyJSQURPU19OQU1FU1BBQ0UiXQogICAg + ICAgICAgICBhbmQgc2VsZi5vdXRfbWFwWyJSRVNUUklDVEVEX0FVVEhfUEVSTUlTU0lPTiJdCiAg + ICAgICAgICAgIGFuZCBub3Qgc2VsZi5vdXRfbWFwWyJSQkRfTUVUQURBVEFfRUNfUE9PTF9OQU1F + Il0KICAgICAgICApOgogICAgICAgICAgICBqc29uX291dC5hcHBlbmQoCiAgICAgICAgICAgICAg + ICB7CiAgICAgICAgICAgICAgICAgICAgIm5hbWUiOiAicmFkb3MtbmFtZXNwYWNlIiwKICAgICAg + ICAgICAgICAgICAgICAia2luZCI6ICJDZXBoQmxvY2tQb29sUmFkb3NOYW1lc3BhY2UiLAogICAg + ICAgICAgICAgICAgICAgICJkYXRhIjogewogICAgICAgICAgICAgICAgICAgICAgICAicmFkb3NO + YW1lc3BhY2VOYW1lIjogc2VsZi5vdXRfbWFwWyJSQURPU19OQU1FU1BBQ0UiXSwKICAgICAgICAg + ICAgICAgICAgICAgICAgInBvb2wiOiBzZWxmLm91dF9tYXBbIlJCRF9QT09MX05BTUUiXSwKICAg + ICAgICAgICAgICAgICAgICB9LAogICAgICAgICAgICAgICAgfQogICAgICAgICAgICApCiAgICAg + ICAgICAgIGpzb25fb3V0LmFwcGVuZCgKICAgICAgICAgICAgICAgIHsKICAgICAgICAgICAgICAg + ICAgICAibmFtZSI6ICJjZXBoLXJiZC1yYWRvcy1uYW1lc3BhY2UiLAogICAgICAgICAgICAgICAg + ICAgICJraW5kIjogIlN0b3JhZ2VDbGFzcyIsCiAgICAgICAgICAgICAgICAgICAgImRhdGEiOiB7 + CiAgICAgICAgICAgICAgICAgICAgICAgICJwb29sIjogc2VsZi5vdXRfbWFwWyJSQkRfUE9PTF9O + QU1FIl0sCiAgICAgICAgICAgICAgICAgICAgICAgICJjc2kuc3RvcmFnZS5rOHMuaW8vcHJvdmlz + aW9uZXItc2VjcmV0LW5hbWUiOiBmInJvb2ste3NlbGYub3V0X21hcFsnQ1NJX1JCRF9QUk9WSVNJ + T05FUl9TRUNSRVRfTkFNRSddfSIsCiAgICAgICAgICAgICAgICAgICAgICAgICJjc2kuc3RvcmFn + ZS5rOHMuaW8vY29udHJvbGxlci1leHBhbmQtc2VjcmV0LW5hbWUiOiBmInJvb2ste3NlbGYub3V0 + X21hcFsnQ1NJX1JCRF9QUk9WSVNJT05FUl9TRUNSRVRfTkFNRSddfSIsCiAgICAgICAgICAgICAg + ICAgICAgICAgICJjc2kuc3RvcmFnZS5rOHMuaW8vbm9kZS1zdGFnZS1zZWNyZXQtbmFtZSI6IGYi + cm9vay17c2VsZi5vdXRfbWFwWydDU0lfUkJEX05PREVfU0VDUkVUX05BTUUnXX0iLAogICAgICAg + 
ICAgICAgICAgICAgIH0sCiAgICAgICAgICAgICAgICB9CiAgICAgICAgICAgICkKICAgICAgICBl + bHNlOgogICAgICAgICAgICBpZiBzZWxmLm91dF9tYXBbIlJCRF9NRVRBREFUQV9FQ19QT09MX05B + TUUiXToKICAgICAgICAgICAgICAgIGpzb25fb3V0LmFwcGVuZCgKICAgICAgICAgICAgICAgICAg + ICB7CiAgICAgICAgICAgICAgICAgICAgICAgICJuYW1lIjogImNlcGgtcmJkIiwKICAgICAgICAg + ICAgICAgICAgICAgICAgImtpbmQiOiAiU3RvcmFnZUNsYXNzIiwKICAgICAgICAgICAgICAgICAg + ICAgICAgImRhdGEiOiB7CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAiZGF0YVBvb2wiOiBz + ZWxmLm91dF9tYXBbIlJCRF9QT09MX05BTUUiXSwKICAgICAgICAgICAgICAgICAgICAgICAgICAg + ICJwb29sIjogc2VsZi5vdXRfbWFwWyJSQkRfTUVUQURBVEFfRUNfUE9PTF9OQU1FIl0sCiAgICAg + ICAgICAgICAgICAgICAgICAgICAgICAiY3NpLnN0b3JhZ2UuazhzLmlvL3Byb3Zpc2lvbmVyLXNl + Y3JldC1uYW1lIjogZiJyb29rLXtzZWxmLm91dF9tYXBbJ0NTSV9SQkRfUFJPVklTSU9ORVJfU0VD + UkVUX05BTUUnXX0iLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgImNzaS5zdG9yYWdlLms4 + cy5pby9jb250cm9sbGVyLWV4cGFuZC1zZWNyZXQtbmFtZSI6IGYicm9vay17c2VsZi5vdXRfbWFw + WydDU0lfUkJEX1BST1ZJU0lPTkVSX1NFQ1JFVF9OQU1FJ119IiwKICAgICAgICAgICAgICAgICAg + ICAgICAgICAgICJjc2kuc3RvcmFnZS5rOHMuaW8vbm9kZS1zdGFnZS1zZWNyZXQtbmFtZSI6IGYi + cm9vay17c2VsZi5vdXRfbWFwWydDU0lfUkJEX05PREVfU0VDUkVUX05BTUUnXX0iLAogICAgICAg + ICAgICAgICAgICAgICAgICB9LAogICAgICAgICAgICAgICAgICAgIH0KICAgICAgICAgICAgICAg + ICkKICAgICAgICAgICAgZWxzZToKICAgICAgICAgICAgICAgIGpzb25fb3V0LmFwcGVuZCgKICAg + ICAgICAgICAgICAgICAgICB7CiAgICAgICAgICAgICAgICAgICAgICAgICJuYW1lIjogImNlcGgt + cmJkIiwKICAgICAgICAgICAgICAgICAgICAgICAgImtpbmQiOiAiU3RvcmFnZUNsYXNzIiwKICAg + ICAgICAgICAgICAgICAgICAgICAgImRhdGEiOiB7CiAgICAgICAgICAgICAgICAgICAgICAgICAg + ICAicG9vbCI6IHNlbGYub3V0X21hcFsiUkJEX1BPT0xfTkFNRSJdLAogICAgICAgICAgICAgICAg + ICAgICAgICAgICAgImNzaS5zdG9yYWdlLms4cy5pby9wcm92aXNpb25lci1zZWNyZXQtbmFtZSI6 + IGYicm9vay17c2VsZi5vdXRfbWFwWydDU0lfUkJEX1BST1ZJU0lPTkVSX1NFQ1JFVF9OQU1FJ119 + IiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICJjc2kuc3RvcmFnZS5rOHMuaW8vY29udHJv + bGxlci1leHBhbmQtc2VjcmV0LW5hbWUiOiBmInJvb2ste3NlbGYub3V0X21hcFsnQ1NJX1JCRF9Q + 
Uk9WSVNJT05FUl9TRUNSRVRfTkFNRSddfSIsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAi + Y3NpLnN0b3JhZ2UuazhzLmlvL25vZGUtc3RhZ2Utc2VjcmV0LW5hbWUiOiBmInJvb2ste3NlbGYu + b3V0X21hcFsnQ1NJX1JCRF9OT0RFX1NFQ1JFVF9OQU1FJ119IiwKICAgICAgICAgICAgICAgICAg + ICAgICAgfSwKICAgICAgICAgICAgICAgICAgICB9CiAgICAgICAgICAgICAgICApCgogICAgICAg + ICMgaWYgJ1RPUE9MT0dZX1BPT0xTJywgJ1RPUE9MT0dZX0ZBSUxVUkVfRE9NQUlOX0xBQkVMJywg + J1RPUE9MT0dZX0ZBSUxVUkVfRE9NQUlOX1ZBTFVFUycgIGV4aXN0cywKICAgICAgICAjIHRoZW4g + b25seSBhZGQgJ3RvcG9sb2d5JyBTdG9yYWdlQ2xhc3MKICAgICAgICBpZiAoCiAgICAgICAgICAg + IHNlbGYub3V0X21hcFsiVE9QT0xPR1lfUE9PTFMiXQogICAgICAgICAgICBhbmQgc2VsZi5vdXRf + bWFwWyJUT1BPTE9HWV9GQUlMVVJFX0RPTUFJTl9MQUJFTCJdCiAgICAgICAgICAgIGFuZCBzZWxm + Lm91dF9tYXBbIlRPUE9MT0dZX0ZBSUxVUkVfRE9NQUlOX1ZBTFVFUyJdCiAgICAgICAgKToKICAg + ICAgICAgICAganNvbl9vdXQuYXBwZW5kKAogICAgICAgICAgICAgICAgewogICAgICAgICAgICAg + ICAgICAgICJuYW1lIjogImNlcGgtcmJkLXRvcG9sb2d5IiwKICAgICAgICAgICAgICAgICAgICAi + a2luZCI6ICJTdG9yYWdlQ2xhc3MiLAogICAgICAgICAgICAgICAgICAgICJkYXRhIjogewogICAg + ICAgICAgICAgICAgICAgICAgICAidG9wb2xvZ3lGYWlsdXJlRG9tYWluTGFiZWwiOiBzZWxmLm91 + dF9tYXBbCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAiVE9QT0xPR1lfRkFJTFVSRV9ET01B + SU5fTEFCRUwiCiAgICAgICAgICAgICAgICAgICAgICAgIF0sCiAgICAgICAgICAgICAgICAgICAg + ICAgICJ0b3BvbG9neUZhaWx1cmVEb21haW5WYWx1ZXMiOiBzZWxmLm91dF9tYXBbCiAgICAgICAg + ICAgICAgICAgICAgICAgICAgICAiVE9QT0xPR1lfRkFJTFVSRV9ET01BSU5fVkFMVUVTIgogICAg + ICAgICAgICAgICAgICAgICAgICBdLAogICAgICAgICAgICAgICAgICAgICAgICAidG9wb2xvZ3lQ + b29scyI6IHNlbGYub3V0X21hcFsiVE9QT0xPR1lfUE9PTFMiXSwKICAgICAgICAgICAgICAgICAg + ICAgICAgInBvb2wiOiBzZWxmLm91dF9tYXBbIlJCRF9QT09MX05BTUUiXSwKICAgICAgICAgICAg + ICAgICAgICAgICAgImNzaS5zdG9yYWdlLms4cy5pby9wcm92aXNpb25lci1zZWNyZXQtbmFtZSI6 + IGYicm9vay17c2VsZi5vdXRfbWFwWydDU0lfUkJEX1BST1ZJU0lPTkVSX1NFQ1JFVF9OQU1FJ119 + IiwKICAgICAgICAgICAgICAgICAgICAgICAgImNzaS5zdG9yYWdlLms4cy5pby9jb250cm9sbGVy + LWV4cGFuZC1zZWNyZXQtbmFtZSI6IGYicm9vay17c2VsZi5vdXRfbWFwWydDU0lfUkJEX1BST1ZJ + 
U0lPTkVSX1NFQ1JFVF9OQU1FJ119IiwKICAgICAgICAgICAgICAgICAgICAgICAgImNzaS5zdG9y + YWdlLms4cy5pby9ub2RlLXN0YWdlLXNlY3JldC1uYW1lIjogZiJyb29rLXtzZWxmLm91dF9tYXBb + J0NTSV9SQkRfTk9ERV9TRUNSRVRfTkFNRSddfSIsCiAgICAgICAgICAgICAgICAgICAgfSwKICAg + ICAgICAgICAgICAgIH0KICAgICAgICAgICAgKQoKICAgICAgICAjIGlmICdDRVBIRlNfRlNfTkFN + RScgZXhpc3RzLCB0aGVuIG9ubHkgYWRkICdjZXBoZnMnIFN0b3JhZ2VDbGFzcwogICAgICAgIGlm + IHNlbGYub3V0X21hcFsiQ0VQSEZTX0ZTX05BTUUiXToKICAgICAgICAgICAganNvbl9vdXQuYXBw + ZW5kKAogICAgICAgICAgICAgICAgewogICAgICAgICAgICAgICAgICAgICJuYW1lIjogImNlcGhm + cyIsCiAgICAgICAgICAgICAgICAgICAgImtpbmQiOiAiU3RvcmFnZUNsYXNzIiwKICAgICAgICAg + ICAgICAgICAgICAiZGF0YSI6IHsKICAgICAgICAgICAgICAgICAgICAgICAgImZzTmFtZSI6IHNl + bGYub3V0X21hcFsiQ0VQSEZTX0ZTX05BTUUiXSwKICAgICAgICAgICAgICAgICAgICAgICAgInBv + b2wiOiBzZWxmLm91dF9tYXBbIkNFUEhGU19QT09MX05BTUUiXSwKICAgICAgICAgICAgICAgICAg + ICAgICAgImNzaS5zdG9yYWdlLms4cy5pby9wcm92aXNpb25lci1zZWNyZXQtbmFtZSI6IGYicm9v + ay17c2VsZi5vdXRfbWFwWydDU0lfQ0VQSEZTX1BST1ZJU0lPTkVSX1NFQ1JFVF9OQU1FJ119IiwK + ICAgICAgICAgICAgICAgICAgICAgICAgImNzaS5zdG9yYWdlLms4cy5pby9jb250cm9sbGVyLWV4 + cGFuZC1zZWNyZXQtbmFtZSI6IGYicm9vay17c2VsZi5vdXRfbWFwWydDU0lfQ0VQSEZTX1BST1ZJ + U0lPTkVSX1NFQ1JFVF9OQU1FJ119IiwKICAgICAgICAgICAgICAgICAgICAgICAgImNzaS5zdG9y + YWdlLms4cy5pby9ub2RlLXN0YWdlLXNlY3JldC1uYW1lIjogZiJyb29rLXtzZWxmLm91dF9tYXBb + J0NTSV9DRVBIRlNfTk9ERV9TRUNSRVRfTkFNRSddfSIsCiAgICAgICAgICAgICAgICAgICAgfSwK + ICAgICAgICAgICAgICAgIH0KICAgICAgICAgICAgKQogICAgICAgICMgaWYgJ1JHV19FTkRQT0lO + VCcgZXhpc3RzLCB0aGVuIG9ubHkgYWRkICdjZXBoLXJndycgU3RvcmFnZUNsYXNzCiAgICAgICAg + aWYgc2VsZi5vdXRfbWFwWyJSR1dfRU5EUE9JTlQiXToKICAgICAgICAgICAganNvbl9vdXQuYXBw + ZW5kKAogICAgICAgICAgICAgICAgewogICAgICAgICAgICAgICAgICAgICJuYW1lIjogImNlcGgt + cmd3IiwKICAgICAgICAgICAgICAgICAgICAia2luZCI6ICJTdG9yYWdlQ2xhc3MiLAogICAgICAg + ICAgICAgICAgICAgICJkYXRhIjogewogICAgICAgICAgICAgICAgICAgICAgICAiZW5kcG9pbnQi + OiBzZWxmLm91dF9tYXBbIlJHV19FTkRQT0lOVCJdLAogICAgICAgICAgICAgICAgICAgICAgICAi + 
cG9vbFByZWZpeCI6IHNlbGYub3V0X21hcFsiUkdXX1BPT0xfUFJFRklYIl0sCiAgICAgICAgICAg + ICAgICAgICAgfSwKICAgICAgICAgICAgICAgIH0KICAgICAgICAgICAgKQogICAgICAgICAgICBq + c29uX291dC5hcHBlbmQoCiAgICAgICAgICAgICAgICB7CiAgICAgICAgICAgICAgICAgICAgIm5h + bWUiOiAicmd3LWFkbWluLW9wcy11c2VyIiwKICAgICAgICAgICAgICAgICAgICAia2luZCI6ICJT + ZWNyZXQiLAogICAgICAgICAgICAgICAgICAgICJkYXRhIjogewogICAgICAgICAgICAgICAgICAg + ICAgICAiYWNjZXNzS2V5Ijogc2VsZi5vdXRfbWFwWyJSR1dfQURNSU5fT1BTX1VTRVJfQUNDRVNT + X0tFWSJdLAogICAgICAgICAgICAgICAgICAgICAgICAic2VjcmV0S2V5Ijogc2VsZi5vdXRfbWFw + WyJSR1dfQURNSU5fT1BTX1VTRVJfU0VDUkVUX0tFWSJdLAogICAgICAgICAgICAgICAgICAgIH0s + CiAgICAgICAgICAgICAgICB9CiAgICAgICAgICAgICkKICAgICAgICAjIGlmICdSR1dfVExTX0NF + UlQnIGV4aXN0cywgdGhlbiBvbmx5IGFkZCB0aGUgImNlcGgtcmd3LXRscy1jZXJ0IiBzZWNyZXQK + ICAgICAgICBpZiBzZWxmLm91dF9tYXBbIlJHV19UTFNfQ0VSVCJdOgogICAgICAgICAgICBqc29u + X291dC5hcHBlbmQoCiAgICAgICAgICAgICAgICB7CiAgICAgICAgICAgICAgICAgICAgIm5hbWUi + OiAiY2VwaC1yZ3ctdGxzLWNlcnQiLAogICAgICAgICAgICAgICAgICAgICJraW5kIjogIlNlY3Jl + dCIsCiAgICAgICAgICAgICAgICAgICAgImRhdGEiOiB7CiAgICAgICAgICAgICAgICAgICAgICAg + ICJjZXJ0Ijogc2VsZi5vdXRfbWFwWyJSR1dfVExTX0NFUlQiXSwKICAgICAgICAgICAgICAgICAg + ICB9LAogICAgICAgICAgICAgICAgfQogICAgICAgICAgICApCgogICAgICAgIHJldHVybiBqc29u + LmR1bXBzKGpzb25fb3V0KSArIExJTkVTRVAKCiAgICBkZWYgdXBncmFkZV91c2Vyc19wZXJtaXNz + aW9ucyhzZWxmKToKICAgICAgICB1c2VycyA9IFsKICAgICAgICAgICAgImNsaWVudC5jc2ktY2Vw + aGZzLW5vZGUiLAogICAgICAgICAgICAiY2xpZW50LmNzaS1jZXBoZnMtcHJvdmlzaW9uZXIiLAog + ICAgICAgICAgICAiY2xpZW50LmNzaS1yYmQtbm9kZSIsCiAgICAgICAgICAgICJjbGllbnQuY3Np + LXJiZC1wcm92aXNpb25lciIsCiAgICAgICAgICAgICJjbGllbnQuaGVhbHRoY2hlY2tlciIsCiAg + ICAgICAgXQogICAgICAgIGlmIHNlbGYucnVuX2FzX3VzZXIgIT0gIiIgYW5kIHNlbGYucnVuX2Fz + X3VzZXIgbm90IGluIHVzZXJzOgogICAgICAgICAgICB1c2Vycy5hcHBlbmQoc2VsZi5ydW5fYXNf + dXNlcikKICAgICAgICBmb3IgdXNlciBpbiB1c2VyczoKICAgICAgICAgICAgc2VsZi51cGdyYWRl + X3VzZXJfcGVybWlzc2lvbnModXNlcikKCiAgICBkZWYgZ2V0X3Jnd19wb29sX25hbWVfZHVyaW5n + 
X3VwZ3JhZGUoc2VsZiwgdXNlciwgY2Fwcyk6CiAgICAgICAgaWYgdXNlciA9PSAiY2xpZW50Lmhl + YWx0aGNoZWNrZXIiOgogICAgICAgICAgICAjIHdoZW4gYWRtaW4gaGFzIG5vdCBwcm92aWRlZCBy + Z3cgcG9vbCBuYW1lIGR1cmluZyB1cGdyYWRlLAogICAgICAgICAgICAjIGdldCB0aGUgcmd3IHBv + b2wgbmFtZSBmcm9tIGNsaWVudC5oZWFsdGhjaGVja2VyIHVzZXIgd2hpY2ggd2FzIHVzZWQgZHVy + aW5nIGNvbm5lY3Rpb24KICAgICAgICAgICAgaWYgbm90IHNlbGYuX2FyZ19wYXJzZXIucmd3X3Bv + b2xfcHJlZml4OgogICAgICAgICAgICAgICAgIyBUbyBnZXQgdmFsdWUgJ2RlZmF1bHQnIHdoaWNo + IGlzIHJndyBwb29sIG5hbWUgZnJvbSAnYWxsb3cgcnd4IHBvb2w9ZGVmYXVsdC5yZ3cubWV0YScK + ICAgICAgICAgICAgICAgIHBhdHRlcm4gPSByInBvb2w9KC4qPylcLnJnd1wubWV0YSIKICAgICAg + ICAgICAgICAgIG1hdGNoID0gcmUuc2VhcmNoKHBhdHRlcm4sIGNhcHMpCiAgICAgICAgICAgICAg + ICBpZiBtYXRjaDoKICAgICAgICAgICAgICAgICAgICBzZWxmLl9hcmdfcGFyc2VyLnJnd19wb29s + X3ByZWZpeCA9IG1hdGNoLmdyb3VwKDEpCiAgICAgICAgICAgICAgICBlbHNlOgogICAgICAgICAg + ICAgICAgICAgIHJhaXNlIEV4ZWN1dGlvbkZhaWx1cmVFeGNlcHRpb24oCiAgICAgICAgICAgICAg + ICAgICAgICAgICJmYWlsZWQgdG8gZ2V0IHJndyBwb29sIG5hbWUgZm9yIHVwZ3JhZGUiCiAgICAg + ICAgICAgICAgICAgICAgKQoKICAgIGRlZiB1cGdyYWRlX3VzZXJfcGVybWlzc2lvbnMoc2VsZiwg + dXNlcik6CiAgICAgICAgIyBjaGVjayB3aGV0aGVyIHRoZSBnaXZlbiB1c2VyIGV4aXN0cyBvciBu + b3QKICAgICAgICBjbWRfanNvbiA9IHsicHJlZml4IjogImF1dGggZ2V0IiwgImVudGl0eSI6IGYi + e3VzZXJ9IiwgImZvcm1hdCI6ICJqc29uIn0KICAgICAgICByZXRfdmFsLCBqc29uX291dCwgZXJy + X21zZyA9IHNlbGYuX2NvbW1vbl9jbWRfanNvbl9nZW4oY21kX2pzb24pCiAgICAgICAgaWYgcmV0 + X3ZhbCAhPSAwIG9yIGxlbihqc29uX291dCkgPT0gMDoKICAgICAgICAgICAgcHJpbnQoZiJ1c2Vy + IHt1c2VyfSBub3QgZm91bmQgZm9yIHVwZ3JhZGluZy4iKQogICAgICAgICAgICByZXR1cm4KICAg + ICAgICBleGlzdGluZ19jYXBzID0ganNvbl9vdXRbMF1bImNhcHMiXQogICAgICAgIHNlbGYuZ2V0 + X3Jnd19wb29sX25hbWVfZHVyaW5nX3VwZ3JhZGUodXNlciwgc3RyKGV4aXN0aW5nX2NhcHMpKQog + ICAgICAgIG5ld19jYXAsIF8gPSBzZWxmLmdldF9jYXBzX2FuZF9lbnRpdHkodXNlcikKICAgICAg + ICBjYXBfa2V5cyA9IFsibW9uIiwgIm1nciIsICJvc2QiLCAibWRzIl0KICAgICAgICBjYXBzID0g + W10KICAgICAgICBmb3IgZWFjaENhcCBpbiBjYXBfa2V5czoKICAgICAgICAgICAgY3VyX2NhcF92 + 
YWx1ZXMgPSBleGlzdGluZ19jYXBzLmdldChlYWNoQ2FwLCAiIikKICAgICAgICAgICAgbmV3X2Nh + cF92YWx1ZXMgPSBuZXdfY2FwLmdldChlYWNoQ2FwLCAiIikKICAgICAgICAgICAgY3VyX2NhcF9w + ZXJtX2xpc3QgPSBbCiAgICAgICAgICAgICAgICB4LnN0cmlwKCkgZm9yIHggaW4gY3VyX2NhcF92 + YWx1ZXMuc3BsaXQoIiwiKSBpZiB4LnN0cmlwKCkKICAgICAgICAgICAgXQogICAgICAgICAgICBu + ZXdfY2FwX3Blcm1fbGlzdCA9IFsKICAgICAgICAgICAgICAgIHguc3RyaXAoKSBmb3IgeCBpbiBu + ZXdfY2FwX3ZhbHVlcy5zcGxpdCgiLCIpIGlmIHguc3RyaXAoKQogICAgICAgICAgICBdCiAgICAg + ICAgICAgICMgYXBwZW5kIG5ld19jYXBfbGlzdCB0byBjdXJfY2FwX2xpc3QgdG8gbWFpbnRhaW4g + dGhlIG9yZGVyIG9mIGNhcHMKICAgICAgICAgICAgY3VyX2NhcF9wZXJtX2xpc3QuZXh0ZW5kKG5l + d19jYXBfcGVybV9saXN0KQogICAgICAgICAgICAjIGVsaW1pbmF0ZSBkdXBsaWNhdGVzIHdpdGhv + dXQgdXNpbmcgJ3NldCcKICAgICAgICAgICAgIyBzZXQgcmUtb3JkZXJzIGl0ZW1zIGluIHRoZSBs + aXN0IGFuZCB3ZSBoYXZlIHRvIGtlZXAgdGhlIG9yZGVyCiAgICAgICAgICAgIG5ld19jYXBfbGlz + dCA9IFtdCiAgICAgICAgICAgIFtuZXdfY2FwX2xpc3QuYXBwZW5kKHgpIGZvciB4IGluIGN1cl9j + YXBfcGVybV9saXN0IGlmIHggbm90IGluIG5ld19jYXBfbGlzdF0KICAgICAgICAgICAgZXhpc3Rp + bmdfY2Fwc1tlYWNoQ2FwXSA9ICIsICIuam9pbihuZXdfY2FwX2xpc3QpCiAgICAgICAgICAgIGlm + IGV4aXN0aW5nX2NhcHNbZWFjaENhcF06CiAgICAgICAgICAgICAgICBjYXBzLmFwcGVuZChlYWNo + Q2FwKQogICAgICAgICAgICAgICAgY2Fwcy5hcHBlbmQoZXhpc3RpbmdfY2Fwc1tlYWNoQ2FwXSkK + ICAgICAgICBjbWRfanNvbiA9IHsKICAgICAgICAgICAgInByZWZpeCI6ICJhdXRoIGNhcHMiLAog + ICAgICAgICAgICAiZW50aXR5IjogdXNlciwKICAgICAgICAgICAgImNhcHMiOiBjYXBzLAogICAg + ICAgICAgICAiZm9ybWF0IjogImpzb24iLAogICAgICAgIH0KICAgICAgICByZXRfdmFsLCBqc29u + X291dCwgZXJyX21zZyA9IHNlbGYuX2NvbW1vbl9jbWRfanNvbl9nZW4oY21kX2pzb24pCiAgICAg + ICAgaWYgcmV0X3ZhbCAhPSAwOgogICAgICAgICAgICByYWlzZSBFeGVjdXRpb25GYWlsdXJlRXhj + ZXB0aW9uKAogICAgICAgICAgICAgICAgZiInYXV0aCBjYXBzIHt1c2VyfScgY29tbWFuZCBmYWls + ZWQuXG4gRXJyb3I6IHtlcnJfbXNnfSIKICAgICAgICAgICAgKQogICAgICAgIHByaW50KGYiVXBk + YXRlZCB1c2VyIHt1c2VyfSBzdWNjZXNzZnVsbHkuIikKCiAgICBkZWYgbWFpbihzZWxmKToKICAg + ICAgICBnZW5lcmF0ZWRfb3V0cHV0ID0gIiIKICAgICAgICBpZiBzZWxmLl9hcmdfcGFyc2VyLnVw + 
Z3JhZGU6CiAgICAgICAgICAgIHNlbGYudXBncmFkZV91c2Vyc19wZXJtaXNzaW9ucygpCiAgICAg + ICAgZWxpZiBzZWxmLl9hcmdfcGFyc2VyLmZvcm1hdCA9PSAianNvbiI6CiAgICAgICAgICAgIGdl + bmVyYXRlZF9vdXRwdXQgPSBzZWxmLmdlbl9qc29uX291dCgpCiAgICAgICAgZWxpZiBzZWxmLl9h + cmdfcGFyc2VyLmZvcm1hdCA9PSAiYmFzaCI6CiAgICAgICAgICAgIGdlbmVyYXRlZF9vdXRwdXQg + PSBzZWxmLmdlbl9zaGVsbF9vdXQoKQogICAgICAgIGVsc2U6CiAgICAgICAgICAgIHJhaXNlIEV4 + ZWN1dGlvbkZhaWx1cmVFeGNlcHRpb24oCiAgICAgICAgICAgICAgICBmIlVuc3VwcG9ydGVkIGZv + cm1hdDoge3NlbGYuX2FyZ19wYXJzZXIuZm9ybWF0fSIKICAgICAgICAgICAgKQogICAgICAgIHBy + aW50KGdlbmVyYXRlZF9vdXRwdXQpCiAgICAgICAgaWYgc2VsZi5vdXRwdXRfZmlsZSBhbmQgZ2Vu + ZXJhdGVkX291dHB1dDoKICAgICAgICAgICAgZk91dCA9IG9wZW4oc2VsZi5vdXRwdXRfZmlsZSwg + bW9kZT0idyIsIGVuY29kaW5nPSJVVEYtOCIpCiAgICAgICAgICAgIGZPdXQud3JpdGUoZ2VuZXJh + dGVkX291dHB1dCkKICAgICAgICAgICAgZk91dC5jbG9zZSgpCgoKIyMjIyMjIyMjIyMjIyMjIyMj + IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjCiMjIyMjIyMjIyMjIyMjIyMjIyMjIyBNQUlO + ICMjIyMjIyMjIyMjIyMjIyMjIyMjIwojIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj + IyMjIyMjIyMjIyMjIyMKaWYgX19uYW1lX18gPT0gIl9fbWFpbl9fIjoKICAgIHJqT2JqID0gUmFk + b3NKU09OKCkKICAgIHRyeToKICAgICAgICByak9iai5tYWluKCkKICAgIGV4Y2VwdCBFeGVjdXRp + b25GYWlsdXJlRXhjZXB0aW9uIGFzIGVycjoKICAgICAgICBwcmludChmIkV4ZWN1dGlvbiBGYWls + ZWQ6IHtlcnJ9IikKICAgICAgICByYWlzZSBlcnIKICAgIGV4Y2VwdCBLZXlFcnJvciBhcyBrRXJy + OgogICAgICAgIHByaW50KGYiS2V5RXJyb3I6IHtrRXJyfSIpCiAgICBleGNlcHQgT1NFcnJvciBh + cyBvc0VycjoKICAgICAgICBwcmludChmIkVycm9yIHdoaWxlIHRyeWluZyB0byBvdXRwdXQgdGhl + IGRhdGE6IHtvc0Vycn0iKQogICAgZmluYWxseToKICAgICAgICByak9iai5zaHV0ZG93bigpCg== + name: rook-ceph.v{{.RookOperatorCsvVersion}} + namespace: placeholder + relatedImages: + - image: docker.io/rook/ceph:master + name: rook-container + - image: quay.io/ceph/ceph:v18.2.0 + name: ceph-container + - image: quay.io/csiaddons/k8s-sidecar:v0.8.0 + name: csiaddons-sidecar +spec: + apiservicedefinitions: {} + customresourcedefinitions: + owned: + - kind: CephCluster + name: cephclusters.ceph.rook.io + 
version: v1 + displayName: Ceph Cluster + description: Represents a Ceph cluster. + - kind: CephBlockPool + name: cephblockpools.ceph.rook.io + version: v1 + displayName: Ceph Block Pool + description: Represents a Ceph Block Pool. + - kind: CephObjectStore + name: cephobjectstores.ceph.rook.io + version: v1 + displayName: Ceph Object Store + description: Represents a Ceph Object Store. + specDescriptors: + - description: Coding Chunks + displayName: Coding Chunks + path: dataPool.erasureCoded.codingChunks + x-descriptors: + - urn:alm:descriptor:com.tectonic.ui:fieldGroup:dataPool + - urn:alm:descriptor:com.tectonic.ui:number + - description: Data Chunks + displayName: Data Chunks + path: dataPool.erasureCoded.dataChunks + x-descriptors: + - urn:alm:descriptor:com.tectonic.ui:fieldGroup:dataPool + - urn:alm:descriptor:com.tectonic.ui:number + - description: failureDomain + displayName: failureDomain + path: dataPool.failureDomain + x-descriptors: + - urn:alm:descriptor:com.tectonic.ui:fieldGroup:dataPool + - urn:alm:descriptor:com.tectonic.ui:text + - description: Size + displayName: Size + path: dataPool.replicated.size + x-descriptors: + - urn:alm:descriptor:com.tectonic.ui:fieldGroup:dataPool + - urn:alm:descriptor:com.tectonic.ui:number + - description: Annotations + displayName: Annotations + path: gateway.annotations + x-descriptors: + - urn:alm:descriptor:com.tectonic.ui:fieldGroup:gateway + - urn:alm:descriptor:io.kubernetes:annotations + - description: Instances + displayName: Instances + path: gateway.instances + x-descriptors: + - urn:alm:descriptor:com.tectonic.ui:fieldGroup:gateway + - urn:alm:descriptor:com.tectonic.ui:number + - description: Resources + displayName: Resources + path: gateway.resources + x-descriptors: + - urn:alm:descriptor:com.tectonic.ui:fieldGroup:gateway + - urn:alm:descriptor:com.tectonic.ui:resourceRequirements + - description: placement + displayName: placement + path: gateway.placement + x-descriptors: + - 
urn:alm:descriptor:com.tectonic.ui:fieldGroup:gateway + - urn:alm:descriptor:io.kubernetes:placement + - description: securePort + displayName: securePort + path: gateway.securePort + x-descriptors: + - urn:alm:descriptor:com.tectonic.ui:fieldGroup:gateway + - urn:alm:descriptor:io.kubernetes:securePort + - description: sslCertificateRef + displayName: sslCertificateRef + path: gateway.sslCertificateRef + x-descriptors: + - urn:alm:descriptor:com.tectonic.ui:fieldGroup:gateway + - urn:alm:descriptor:io.kubernetes:sslCertificateRef + - description: Type + displayName: Type + path: gateway.type + x-descriptors: + - urn:alm:descriptor:com.tectonic.ui:fieldGroup:gateway + - urn:alm:descriptor:com.tectonic.ui:text + - description: Coding Chunks + displayName: Coding Chunks + path: metadataPool.erasureCoded.codingChunks + x-descriptors: + - urn:alm:descriptor:com.tectonic.ui:fieldGroup:metadataPool + - urn:alm:descriptor:com.tectonic.ui:number + - description: Data Chunks + displayName: Data Chunks + path: metadataPool.erasureCoded.dataChunks + x-descriptors: + - urn:alm:descriptor:com.tectonic.ui:fieldGroup:metadataPool + - urn:alm:descriptor:com.tectonic.ui:number + - description: failureDomain + displayName: failureDomain + path: metadataPool.failureDomain + x-descriptors: + - urn:alm:descriptor:com.tectonic.ui:fieldGroup:metadataPool + - urn:alm:descriptor:com.tectonic.ui:text + - description: Size + displayName: Size + path: metadataPool.replicated.size + x-descriptors: + - urn:alm:descriptor:com.tectonic.ui:fieldGroup:metadataPool + - urn:alm:descriptor:com.tectonic.ui:number + - kind: CephObjectStoreUser + name: cephobjectstoreusers.ceph.rook.io + version: v1 + displayName: Ceph Object Store User + description: Represents a Ceph Object Store User. + - kind: CephNFS + name: cephnfses.ceph.rook.io + version: v1 + displayName: Ceph NFS + description: Represents a cluster of Ceph NFS ganesha gateways. 
+ - kind: CephClient + name: cephclients.ceph.rook.io + version: v1 + displayName: Ceph Client + description: Represents a Ceph User. + - kind: CephFilesystem + name: cephfilesystems.ceph.rook.io + version: v1 + displayName: Ceph Filesystem + description: Represents a Ceph Filesystem. + - kind: CephFilesystemMirror + name: cephfilesystemmirrors.ceph.rook.io + version: v1 + displayName: Ceph Filesystem Mirror + description: Represents a Ceph Filesystem Mirror. + - kind: CephRBDMirror + name: cephrbdmirrors.ceph.rook.io + version: v1 + displayName: Ceph RBD Mirror + description: Represents a Ceph RBD Mirror. + - kind: CephObjectRealm + name: cephobjectrealms.ceph.rook.io + version: v1 + displayName: Ceph Object Store Realm + description: Represents a Ceph Object Store Realm. + - kind: CephObjectZoneGroup + name: cephobjectzonegroups.ceph.rook.io + version: v1 + displayName: Ceph Object Store Zone Group + description: Represents a Ceph Object Store Zone Group. + - kind: CephObjectZone + name: cephobjectzones.ceph.rook.io + version: v1 + displayName: Ceph Object Store Zone + description: Represents a Ceph Object Store Zone. + - kind: CephBucketNotification + name: cephbucketnotifications.ceph.rook.io + version: v1 + displayName: Ceph Bucket Notification + description: Represents a Ceph Bucket Notification. + - kind: CephBucketTopic + name: cephbuckettopics.ceph.rook.io + version: v1 + displayName: Ceph Bucket Topic + description: Represents a Ceph Bucket Topic. + - kind: CephFilesystemSubVolumeGroup + name: cephfilesystemsubvolumegroups.ceph.rook.io + version: v1 + displayName: Ceph Filesystem SubVolumeGroup + description: Represents a Ceph Filesystem SubVolumeGroup. + - kind: CephBlockPoolRadosNamespace + name: cephblockpoolradosnamespaces.ceph.rook.io + version: v1 + displayName: Ceph BlockPool Rados Namespace + description: Represents a Ceph BlockPool Rados Namespace. 
+ - kind: CephCOSIDriver + name: cephcosidrivers.ceph.rook.io + version: v1 + displayName: Ceph COSI Driver + description: Represents a Ceph COSI Driver. + description: |2 + + The Rook-Ceph storage operator packages, deploys, manages, upgrades and scales Ceph storage for providing persistent storage to infrastructure services (Logging, Metrics, Registry) as well as stateful applications in Kubernetes clusters. + + ## Rook-Ceph Storage Operator + + Rook runs as a cloud-native service in Kubernetes clusters for optimal integration with applications in need of storage, and handles the heavy-lifting behind the scenes such as provisioning and management. + Rook orchestrates the battle-tested open-source storage technology Ceph, which has years of production deployments and runs some of the world's largest clusters. + + Ceph is a massively scalable, software-defined, cloud native storage platform that offers block, file and object storage services. + Ceph can be used to back a wide variety of applications including relational databases, NoSQL databases, CI/CD tool-sets, messaging, AI/ML and analytics applications. + Ceph is a proven storage platform that backs some of the world's largest storage deployments and has a large, vibrant open source community backing the project. 
+ + ## Supported features + * **High Availability and resiliency** - Ceph has no single point of failure (SPOF) and all its components work natively in a highly available fashion + * **Data Protection** - Ceph periodically scrubs for inconsistent objects and repairs them if necessary, making sure your replicas are always coherent + * **Consistent storage platform across hybrid cloud** - Ceph can be deployed anywhere (on-premises or bare metal) and thus offers a similar experience regardless + * **Block, File & Object storage service** - Ceph can expose your data through several storage interfaces, solving all the application use cases + * **Scale up/down** - addition and removal of storage is fully covered by the operator. + * **Dashboard** - The Operator deploys a dashboard for monitoring and introspecting your cluster. + + ## Before you start + https://rook.io/docs/rook/v1.0/k8s-pre-reqs.html + displayName: Rook-Ceph + icon: + - base64data: PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiPz4KPCEtLSBHZW5lcmF0b3I6IEFkb2JlIElsbHVzdHJhdG9yIDIzLjAuMiwgU1ZHIEV4cG9ydCBQbHVnLUluIC4gU1ZHIFZlcnNpb246IDYuMDAgQnVpbGQgMCkgIC0tPgo8c3ZnIHZlcnNpb249IjEuMSIgaWQ9IkxheWVyXzEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgeG1sbnM6eGxpbms9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkveGxpbmsiIHg9IjBweCIgeT0iMHB4IgoJIHZpZXdCb3g9IjAgMCA3MCA3MCIgc3R5bGU9ImVuYWJsZS1iYWNrZ3JvdW5kOm5ldyAwIDAgNzAgNzA7IiB4bWw6c3BhY2U9InByZXNlcnZlIj4KPHN0eWxlIHR5cGU9InRleHQvY3NzIj4KCS5zdDB7ZmlsbDojMkIyQjJCO30KPC9zdHlsZT4KPGc+Cgk8Zz4KCQk8Zz4KCQkJPHBhdGggY2xhc3M9InN0MCIgZD0iTTUwLjUsNjcuNkgxOS45Yy04LDAtMTQuNS02LjUtMTQuNS0xNC41VjI5LjJjMC0xLjEsMC45LTIuMSwyLjEtMi4xaDU1LjRjMS4xLDAsMi4xLDAuOSwyLjEsMi4xdjIzLjkKCQkJCUM2NSw2MS4xLDU4LjUsNjcuNiw1MC41LDY3LjZ6IE05LjYsMzEuMnYyMS45YzAsNS43LDQuNiwxMC4zLDEwLjMsMTAuM2gzMC42YzUuNywwLDEwLjMtNC42LDEwLjMtMTAuM1YzMS4ySDkuNnoiLz4KCQk8L2c+CgkJPGc+CgkJCTxwYXRoIGNsYXNzPSJzdDAiIGQ9Ik00Mi40LDU2LjdIMjhjLTEuMSwwLTIuMS0wLjktMi4xLTIuMXYtNy4yYzAtNS4xLDQuMi05LjMsOS4zLTkuM3M5LjMsNC4yLDkuMyw5LjN2Ny4yCgkJCQlDNDQuN
Sw1NS43LDQzLjYsNTYuNyw0Mi40LDU2Ljd6IE0zMCw1Mi41aDEwLjN2LTUuMmMwLTIuOS0yLjMtNS4yLTUuMi01LjJjLTIuOSwwLTUuMiwyLjMtNS4yLDUuMlY1Mi41eiIvPgoJCTwvZz4KCQk8Zz4KCQkJPHBhdGggY2xhc3M9InN0MCIgZD0iTTYyLjksMjMuMkM2Mi45LDIzLjIsNjIuOSwyMy4yLDYyLjksMjMuMmwtMTEuMSwwYy0xLjEsMC0yLjEtMC45LTIuMS0yLjFjMC0xLjEsMC45LTIuMSwyLjEtMi4xCgkJCQljMCwwLDAsMCwwLDBsOS4xLDBWNi43aC02Ljl2My41YzAsMC41LTAuMiwxLjEtMC42LDEuNWMtMC40LDAuNC0wLjksMC42LTEuNSwwLjZsMCwwbC0xMS4xLDBjLTEuMSwwLTIuMS0wLjktMi4xLTIuMVY2LjdoLTYuOQoJCQkJdjMuNWMwLDEuMS0wLjksMi4xLTIuMSwyLjFsLTExLjEsMGMtMC41LDAtMS4xLTAuMi0xLjUtMC42Yy0wLjQtMC40LTAuNi0wLjktMC42LTEuNVY2LjdIOS42djEyLjRoOWMxLjEsMCwyLjEsMC45LDIuMSwyLjEKCQkJCXMtMC45LDIuMS0yLjEsMi4xaC0xMWMtMS4xLDAtMi4xLTAuOS0yLjEtMi4xVjQuNmMwLTEuMSwwLjktMi4xLDIuMS0yLjFoMTEuMWMxLjEsMCwyLjEsMC45LDIuMSwyLjF2My41bDcsMFY0LjYKCQkJCWMwLTEuMSwwLjktMi4xLDIuMS0yLjFoMTEuMWMxLjEsMCwyLjEsMC45LDIuMSwyLjF2My41bDYuOSwwVjQuNmMwLTEuMSwwLjktMi4xLDIuMS0yLjFoMTEuMUM2NCwyLjYsNjUsMy41LDY1LDQuNnYxNi41CgkJCQljMCwwLjUtMC4yLDEuMS0wLjYsMS41QzY0LDIzLDYzLjQsMjMuMiw2Mi45LDIzLjJ6Ii8+CgkJPC9nPgoJPC9nPgo8L2c+Cjwvc3ZnPg== + mediatype: image/svg+xml + install: + spec: + clusterPermissions: + - rules: + - apiGroups: + - "" + resources: + - configmaps + - nodes + - nodes/proxy + - persistentvolumes + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - events + verbs: + - create + - patch + - list + - get + - watch + - apiGroups: + - storage.k8s.io + resources: + - storageclasses + verbs: + - get + - list + - watch + serviceAccountName: rook-ceph-mgr + - rules: + - apiGroups: + - "" + resources: + - nodes + verbs: + - get + - list + serviceAccountName: rook-ceph-osd + - rules: + - apiGroups: + - "" + resources: + - pods + - nodes + - nodes/proxy + - secrets + - configmaps + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - events + - persistentvolumes + - persistentvolumeclaims + - endpoints + - services + verbs: + - get + - list + - watch + - patch + - create + - update + - delete + - apiGroups: 
+ - storage.k8s.io + resources: + - storageclasses + verbs: + - get + - list + - watch + - apiGroups: + - batch + resources: + - jobs + - cronjobs + verbs: + - get + - list + - watch + - create + - update + - delete + - deletecollection + - apiGroups: + - ceph.rook.io + resources: + - cephclients + - cephclusters + - cephblockpools + - cephfilesystems + - cephnfses + - cephobjectstores + - cephobjectstoreusers + - cephobjectrealms + - cephobjectzonegroups + - cephobjectzones + - cephbuckettopics + - cephbucketnotifications + - cephrbdmirrors + - cephfilesystemmirrors + - cephfilesystemsubvolumegroups + - cephblockpoolradosnamespaces + - cephcosidrivers + verbs: + - get + - list + - watch + - update + - apiGroups: + - ceph.rook.io + resources: + - cephclients/status + - cephclusters/status + - cephblockpools/status + - cephfilesystems/status + - cephnfses/status + - cephobjectstores/status + - cephobjectstoreusers/status + - cephobjectrealms/status + - cephobjectzonegroups/status + - cephobjectzones/status + - cephbuckettopics/status + - cephbucketnotifications/status + - cephrbdmirrors/status + - cephfilesystemmirrors/status + - cephfilesystemsubvolumegroups/status + - cephblockpoolradosnamespaces/status + verbs: + - update + - apiGroups: + - ceph.rook.io + resources: + - cephclients/finalizers + - cephclusters/finalizers + - cephblockpools/finalizers + - cephfilesystems/finalizers + - cephnfses/finalizers + - cephobjectstores/finalizers + - cephobjectstoreusers/finalizers + - cephobjectrealms/finalizers + - cephobjectzonegroups/finalizers + - cephobjectzones/finalizers + - cephbuckettopics/finalizers + - cephbucketnotifications/finalizers + - cephrbdmirrors/finalizers + - cephfilesystemmirrors/finalizers + - cephfilesystemsubvolumegroups/finalizers + - cephblockpoolradosnamespaces/finalizers + verbs: + - update + - apiGroups: + - policy + - apps + - extensions + resources: + - poddisruptionbudgets + - deployments + - replicasets + verbs: + - get + - list + - watch 
+ - create + - update + - delete + - deletecollection + - apiGroups: + - apps + resources: + - deployments/finalizers + verbs: + - update + - apiGroups: + - healthchecking.openshift.io + resources: + - machinedisruptionbudgets + verbs: + - get + - list + - watch + - create + - update + - delete + - apiGroups: + - machine.openshift.io + resources: + - machines + verbs: + - get + - list + - watch + - create + - update + - delete + - apiGroups: + - storage.k8s.io + resources: + - csidrivers + verbs: + - create + - delete + - get + - update + - apiGroups: + - k8s.cni.cncf.io + resources: + - network-attachment-definitions + verbs: + - get + - apiGroups: + - "" + resources: + - secrets + - configmaps + verbs: + - get + - create + - update + - delete + - apiGroups: + - storage.k8s.io + resources: + - storageclasses + verbs: + - get + - apiGroups: + - objectbucket.io + resources: + - objectbucketclaims + verbs: + - list + - watch + - get + - update + - apiGroups: + - objectbucket.io + resources: + - objectbuckets + verbs: + - list + - watch + - get + - create + - update + - delete + - apiGroups: + - objectbucket.io + resources: + - objectbucketclaims/status + - objectbuckets/status + verbs: + - update + - apiGroups: + - objectbucket.io + resources: + - objectbucketclaims/finalizers + - objectbuckets/finalizers + verbs: + - update + - apiGroups: + - "" + resources: + - pods + - pods/log + verbs: + - get + - list + - apiGroups: + - "" + resources: + - pods/exec + verbs: + - create + - apiGroups: + - csiaddons.openshift.io + resources: + - networkfences + verbs: + - create + - get + - update + - delete + - watch + - list + - deletecollection + - apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + verbs: + - get + serviceAccountName: rook-ceph-system + - rules: + - apiGroups: + - "" + resources: + - nodes + verbs: + - get + serviceAccountName: rook-csi-cephfs-plugin-sa + - rules: + - apiGroups: + - "" + resources: + - secrets + verbs: + - get + - 
list + - apiGroups: + - "" + resources: + - nodes + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - watch + - create + - update + - delete + - patch + - apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - get + - list + - watch + - patch + - update + - apiGroups: + - storage.k8s.io + resources: + - storageclasses + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - events + verbs: + - list + - watch + - create + - update + - patch + - apiGroups: + - storage.k8s.io + resources: + - volumeattachments + verbs: + - get + - list + - watch + - patch + - apiGroups: + - storage.k8s.io + resources: + - volumeattachments/status + verbs: + - patch + - apiGroups: + - "" + resources: + - persistentvolumeclaims/status + verbs: + - patch + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshots + verbs: + - get + - list + - watch + - update + - patch + - create + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshotclasses + verbs: + - get + - list + - watch + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshotcontents + verbs: + - get + - list + - watch + - patch + - update + - create + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshotcontents/status + verbs: + - update + - patch + - apiGroups: + - groupsnapshot.storage.k8s.io + resources: + - volumegroupsnapshotclasses + verbs: + - get + - list + - watch + - apiGroups: + - groupsnapshot.storage.k8s.io + resources: + - volumegroupsnapshotcontents + verbs: + - get + - list + - watch + - update + - patch + - apiGroups: + - groupsnapshot.storage.k8s.io + resources: + - volumegroupsnapshotcontents/status + verbs: + - update + - patch + serviceAccountName: rook-csi-cephfs-provisioner-sa + - rules: + - apiGroups: + - "" + resources: + - nodes + verbs: + - get + serviceAccountName: rook-csi-nfs-plugin-sa + - rules: + - apiGroups: + - "" + 
resources: + - persistentvolumes + verbs: + - get + - list + - watch + - create + - update + - delete + - patch + - apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - get + - list + - watch + - patch + - update + - apiGroups: + - storage.k8s.io + resources: + - storageclasses + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - events + verbs: + - get + - list + - watch + - create + - update + - patch + - apiGroups: + - storage.k8s.io + resources: + - csinodes + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - nodes + verbs: + - get + - list + - watch + - apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - get + - list + - watch + - create + - update + - patch + - apiGroups: + - "" + resources: + - secrets + verbs: + - get + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshotclasses + verbs: + - get + - list + - watch + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshotcontents + verbs: + - get + - list + - watch + - update + - patch + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshotcontents/status + verbs: + - update + - patch + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshots + verbs: + - get + - list + - apiGroups: + - "" + resources: + - persistentvolumeclaims/status + verbs: + - patch + - apiGroups: + - storage.k8s.io + resources: + - volumeattachments + verbs: + - get + - list + - watch + - patch + - apiGroups: + - storage.k8s.io + resources: + - volumeattachments/status + verbs: + - patch + serviceAccountName: rook-csi-nfs-provisioner-sa + - rules: + - apiGroups: + - "" + resources: + - secrets + verbs: + - get + - list + - apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - apiGroups: + - storage.k8s.io + resources: + - volumeattachments + verbs: + - get + - list + - apiGroups: + - "" + resources: + - configmaps + verbs: + - get + - apiGroups: + - "" + 
resources: + - serviceaccounts + verbs: + - get + - apiGroups: + - "" + resources: + - serviceaccounts/token + verbs: + - create + - apiGroups: + - "" + resources: + - nodes + verbs: + - get + serviceAccountName: rook-csi-rbd-plugin-sa + - rules: + - apiGroups: + - "" + resources: + - secrets + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - watch + - create + - update + - delete + - patch + - apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - get + - list + - watch + - update + - apiGroups: + - storage.k8s.io + resources: + - storageclasses + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - events + verbs: + - list + - watch + - create + - update + - patch + - apiGroups: + - storage.k8s.io + resources: + - volumeattachments + verbs: + - get + - list + - watch + - patch + - apiGroups: + - storage.k8s.io + resources: + - volumeattachments/status + verbs: + - patch + - apiGroups: + - "" + resources: + - nodes + verbs: + - get + - list + - watch + - apiGroups: + - storage.k8s.io + resources: + - csinodes + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - persistentvolumeclaims/status + verbs: + - patch + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshots + verbs: + - get + - list + - watch + - update + - patch + - create + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshotclasses + verbs: + - get + - list + - watch + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshotcontents + verbs: + - get + - list + - watch + - patch + - update + - create + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshotcontents/status + verbs: + - update + - patch + - apiGroups: + - groupsnapshot.storage.k8s.io + resources: + - volumegroupsnapshotclasses + verbs: + - get + - list + - watch + - apiGroups: + - groupsnapshot.storage.k8s.io + resources: + - 
volumegroupsnapshotcontents + verbs: + - get + - list + - watch + - update + - patch + - apiGroups: + - groupsnapshot.storage.k8s.io + resources: + - volumegroupsnapshotcontents/status + verbs: + - update + - patch + - apiGroups: + - "" + resources: + - configmaps + verbs: + - get + - apiGroups: + - "" + resources: + - serviceaccounts + verbs: + - get + - apiGroups: + - "" + resources: + - serviceaccounts/token + verbs: + - create + - apiGroups: + - "" + resources: + - nodes + verbs: + - get + - list + - watch + - apiGroups: + - storage.k8s.io + resources: + - csinodes + verbs: + - get + - list + - watch + serviceAccountName: rook-csi-rbd-provisioner-sa + - rules: + - verbs: + - use + apiGroups: + - security.openshift.io + resources: + - securitycontextconstraints + resourceNames: + - privileged + serviceAccountName: rook-ceph-system + deployments: + - label: + app.kubernetes.io/component: rook-ceph-operator + app.kubernetes.io/instance: rook-ceph + app.kubernetes.io/name: rook-ceph + app.kubernetes.io/part-of: rook-ceph-operator + operator: rook + storage-backend: ceph + name: rook-ceph-operator + spec: + replicas: 1 + selector: + matchLabels: + app: rook-ceph-operator + strategy: + type: Recreate + template: + metadata: + labels: + app: rook-ceph-operator + spec: + containers: + - args: + - ceph + - operator + env: + - name: ROOK_CURRENT_NAMESPACE_ONLY + valueFrom: + configMapKeyRef: + key: ROOK_CURRENT_NAMESPACE_ONLY + name: ocs-operator-config + - name: CSI_REMOVE_HOLDER_PODS + valueFrom: + configMapKeyRef: + key: CSI_REMOVE_HOLDER_PODS + name: ocs-operator-config + - name: ROOK_ALLOW_MULTIPLE_FILESYSTEMS + value: "false" + - name: ROOK_LOG_LEVEL + value: INFO + - name: ROOK_CEPH_STATUS_CHECK_INTERVAL + value: 60s + - name: ROOK_MON_HEALTHCHECK_INTERVAL + value: 45s + - name: ROOK_MON_OUT_TIMEOUT + value: 600s + - name: ROOK_DISCOVER_DEVICES_INTERVAL + value: 60m + - name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED + value: "true" + - name: 
ROOK_ENABLE_SELINUX_RELABELING + value: "true" + - name: ROOK_ENABLE_FSGROUP + value: "true" + - name: ROOK_ENABLE_FLEX_DRIVER + value: "false" + - name: ROOK_ENABLE_DISCOVERY_DAEMON + value: "false" + - name: ROOK_ENABLE_MACHINE_DISRUPTION_BUDGET + value: "false" + - name: ROOK_DISABLE_DEVICE_HOTPLUG + value: "true" + - name: ROOK_CSI_ALLOW_UNSUPPORTED_VERSION + value: "true" + - name: ROOK_DISABLE_ADMISSION_CONTROLLER + value: "true" + - name: ROOK_CSIADDONS_IMAGE + value: quay.io/csiaddons/k8s-sidecar:v0.6.0 + - name: ROOK_OBC_PROVISIONER_NAME_PREFIX + value: openshift-storage + - name: CSI_ENABLE_METADATA + value: "false" + - name: CSI_PLUGIN_PRIORITY_CLASSNAME + value: system-node-critical + - name: CSI_PROVISIONER_PRIORITY_CLASSNAME + value: system-cluster-critical + - name: CSI_CLUSTER_NAME + valueFrom: + configMapKeyRef: + key: CSI_CLUSTER_NAME + name: ocs-operator-config + - name: CSI_DRIVER_NAME_PREFIX + value: openshift-storage + - name: CSI_ENABLE_TOPOLOGY + valueFrom: + configMapKeyRef: + key: CSI_ENABLE_TOPOLOGY + name: ocs-operator-config + - name: CSI_TOPOLOGY_DOMAIN_LABELS + valueFrom: + configMapKeyRef: + key: CSI_TOPOLOGY_DOMAIN_LABELS + name: ocs-operator-config + - name: ROOK_CSI_ENABLE_NFS + valueFrom: + configMapKeyRef: + key: ROOK_CSI_ENABLE_NFS + name: ocs-operator-config + - name: CSI_PROVISIONER_TOLERATIONS + value: |2- + + - key: node.ocs.openshift.io/storage + operator: Equal + value: "true" + effect: NoSchedule + - name: CSI_PLUGIN_TOLERATIONS + value: |2- + + - key: node.ocs.openshift.io/storage + operator: Equal + value: "true" + effect: NoSchedule + - name: CSI_LOG_LEVEL + value: "5" + - name: CSI_SIDECAR_LOG_LEVEL + value: "1" + - name: CSI_ENABLE_CSIADDONS + value: "true" + - name: NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: 
ROOK_OBC_WATCH_OPERATOR_NAMESPACE + value: "true" + image: {{.RookOperatorImage}} + name: rook-ceph-operator + resources: {} + securityContext: + runAsGroup: 2016 + runAsNonRoot: true + runAsUser: 2016 + volumeMounts: + - mountPath: /var/lib/rook + name: rook-config + - mountPath: /etc/ceph + name: default-config-dir + serviceAccountName: rook-ceph-system + tolerations: + - effect: NoSchedule + key: node.ocs.openshift.io/storage + operator: Equal + value: "true" + volumes: + - emptyDir: {} + name: rook-config + - emptyDir: {} + name: default-config-dir + permissions: + - rules: + - apiGroups: + - "" + resources: + - pods + - configmaps + verbs: + - get + - list + - watch + - create + - update + - delete + serviceAccountName: rook-ceph-cmd-reporter + - rules: + - apiGroups: + - "" + resources: + - "" + verbs: + - "" + serviceAccountName: rook-ceph-default + - rules: + - apiGroups: + - "" + resources: + - pods + - services + - pods/log + verbs: + - get + - list + - watch + - create + - update + - delete + - apiGroups: + - batch + resources: + - jobs + verbs: + - get + - list + - watch + - create + - update + - delete + - apiGroups: + - ceph.rook.io + resources: + - cephclients + - cephclusters + - cephblockpools + - cephfilesystems + - cephnfses + - cephobjectstores + - cephobjectstoreusers + - cephobjectrealms + - cephobjectzonegroups + - cephobjectzones + - cephbuckettopics + - cephbucketnotifications + - cephrbdmirrors + - cephfilesystemmirrors + - cephfilesystemsubvolumegroups + - cephblockpoolradosnamespaces + - cephcosidrivers + verbs: + - get + - list + - watch + - create + - update + - delete + - patch + - apiGroups: + - apps + resources: + - deployments/scale + - deployments + verbs: + - patch + - delete + - apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - delete + - apiGroups: + - "" + resources: + - configmaps + verbs: + - get + - list + - watch + serviceAccountName: rook-ceph-mgr + - rules: + - apiGroups: + - "" + resources: + - 
secrets + verbs: + - get + - update + - apiGroups: + - "" + resources: + - configmaps + verbs: + - get + - list + - watch + - create + - update + - delete + - apiGroups: + - ceph.rook.io + resources: + - cephclusters + - cephclusters/finalizers + verbs: + - get + - list + - create + - update + - delete + serviceAccountName: rook-ceph-osd + - rules: + - apiGroups: + - "" + resources: + - configmaps + verbs: + - get + - apiGroups: + - apps + resources: + - deployments + verbs: + - get + - delete + - apiGroups: + - batch + resources: + - jobs + verbs: + - get + - list + - delete + - apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - get + - update + - delete + - list + serviceAccountName: rook-ceph-purge-osd + - rules: + - apiGroups: + - "" + resources: + - configmaps + verbs: + - get + serviceAccountName: rook-ceph-rgw + - rules: + - apiGroups: + - "" + - apps + - extensions + resources: + - secrets + - pods + - pods/log + - services + - configmaps + - deployments + - daemonsets + verbs: + - get + - list + - watch + - patch + - create + - update + - delete + - apiGroups: + - "" + resources: + - pods + - configmaps + - services + verbs: + - get + - list + - watch + - patch + - create + - update + - delete + - apiGroups: + - apps + - extensions + resources: + - daemonsets + - statefulsets + - deployments + verbs: + - get + - list + - watch + - create + - update + - delete + - deletecollection + - apiGroups: + - batch + resources: + - cronjobs + verbs: + - delete + - apiGroups: + - cert-manager.io + resources: + - certificates + - issuers + verbs: + - get + - create + - delete + - apiGroups: + - multicluster.x-k8s.io + resources: + - serviceexports + verbs: + - get + - create + serviceAccountName: rook-ceph-system + - rules: + - apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - get + - watch + - list + - delete + - update + - create + - apiGroups: + - csiaddons.openshift.io + resources: + - csiaddonsnodes + verbs: + - create + 
serviceAccountName: rook-csi-cephfs-provisioner-sa + - rules: + - apiGroups: + - csiaddons.openshift.io + resources: + - csiaddonsnodes + verbs: + - create + serviceAccountName: rook-csi-rbd-plugin-sa + - rules: + - apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - get + - watch + - list + - delete + - update + - create + - apiGroups: + - csiaddons.openshift.io + resources: + - csiaddonsnodes + verbs: + - create + serviceAccountName: rook-csi-rbd-provisioner-sa + strategy: deployment + installModes: + - type: OwnNamespace + supported: true + - type: SingleNamespace + supported: true + - type: MultiNamespace + supported: false + - type: AllNamespaces + supported: false + keywords: + - rook + - ceph + - storage + - object storage + - open source + - block storage + - shared filesystem + links: + - name: Source Code + url: https://github.com/red-hat-storage/rook + maintainers: + - name: Red Hat Support + email: ocs-support@redhat.com + maturity: alpha + provider: + name: Provider Name + url: https://your.domain + version: {{.RookOperatorCsvVersion}} + minKubeVersion: 1.16.0 From 50cce657dc94be625fe96a7399d2c5402fd91dd2 Mon Sep 17 00:00:00 2001 From: Nikhil-Ladha Date: Mon, 4 Mar 2024 13:44:02 +0530 Subject: [PATCH 55/65] build: added new rook bundle creation script Added a Dockerfile to create the rook-ceph-operator-bundle under the rook repo. Updated the Makefile with a 'bundle' target that references the 'gen-bundle.sh' script under `build/bundle`.
Moved the rook CSV under the build/csv/ceph directory to keep all the manifests in one location for bundle creation. Signed-off-by: Nikhil-Ladha --- Dockerfile.bundle | 13 ++++++++++++ Makefile | 4 ++++ build/bundle/annotations.yaml | 6 ++++++ build/bundle/gen-bundle.sh | 21 +++++++++++++++++++ build/common.sh | 9 ++++++++ ...-ceph-operator.clusterserviceversion.yaml} | 8 +++---- build/csv/csv-gen.sh | 16 +++++++------- deploy/olm/assemble/metadata-common.yaml | 2 +- 8 files changed, 67 insertions(+), 12 deletions(-) create mode 100644 Dockerfile.bundle create mode 100644 build/bundle/annotations.yaml create mode 100755 build/bundle/gen-bundle.sh rename build/csv/{rook-ceph.clusterserviceversion.yaml => ceph/rook-ceph-operator.clusterserviceversion.yaml} (99%) diff --git a/Dockerfile.bundle b/Dockerfile.bundle new file mode 100644 index 000000000000..30fbb1adc42b --- /dev/null +++ b/Dockerfile.bundle @@ -0,0 +1,13 @@ +FROM scratch + +# Core bundle labels. +LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1 +LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/ +LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/ +LABEL operators.operatorframework.io.bundle.package.v1=rook-ceph-operator +LABEL operators.operatorframework.io.bundle.channels.v1=alpha +LABEL operators.operatorframework.io.bundle.channel.default.v1=alpha + +# Copy files to locations specified by labels. +COPY build/csv/ceph /manifests/ +COPY build/bundle/annotations.yaml /metadata/ diff --git a/Makefile b/Makefile index 574a585aed8f..5d310ea4ff5c 100644 --- a/Makefile +++ b/Makefile @@ -184,6 +184,10 @@ gen-csv: export NO_OB_OBC_VOL_GEN=true gen-csv: csv-clean crds ## Generate a CSV file for OLM. $(MAKE) -C images/ceph csv +bundle: + @echo generate rook bundle + @build/bundle/gen-bundle.sh + csv-clean: ## Remove existing OLM files.
@$(MAKE) -C images/ceph csv-clean diff --git a/build/bundle/annotations.yaml b/build/bundle/annotations.yaml new file mode 100644 index 000000000000..1d626e5c76ac --- /dev/null +++ b/build/bundle/annotations.yaml @@ -0,0 +1,6 @@ +annotations: + operators.operatorframework.io.bundle.mediatype.v1: registry+v1 + operators.operatorframework.io.bundle.manifests.v1: manifests/ + operators.operatorframework.io.bundle.metadata.v1: metadata/ + operators.operatorframework.io.bundle.package.v1: rook-ceph-operator + operators.operatorframework.io.bundle.channels.v1: alpha diff --git a/build/bundle/gen-bundle.sh b/build/bundle/gen-bundle.sh new file mode 100755 index 000000000000..3fdaa4c7ab06 --- /dev/null +++ b/build/bundle/gen-bundle.sh @@ -0,0 +1,21 @@ +#!/usr/bin/env bash +set -e + +source "build/common.sh" + +# Use the available container management tool +if [ -z "$DOCKERCMD" ]; then + DOCKERCMD=$(command -v docker || echo "") +fi +if [ -z "$DOCKERCMD" ]; then + DOCKERCMD=$(command -v podman || echo "") +fi + +if [ -z "$DOCKERCMD" ]; then + echo -e '\033[1;31m' "podman or docker not found on system" '\033[0m' + exit 1 +fi + +${DOCKERCMD} build --platform="${GOOS}"/"${GOARCH}" --no-cache -t "$BUNDLE_IMAGE" -f Dockerfile.bundle . +echo +echo "Run '${DOCKERCMD} push ${BUNDLE_IMAGE}' to push operator bundle to image registry." 
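The gen-bundle.sh script above takes its image name from environment-variable defaults defined in build/common.sh, so callers can override `BUNDLE_IMAGE` (and likewise `CSV_VERSION`) at invocation time. A minimal standalone sketch of that `${VAR:-default}` override pattern (the default value below is illustrative, not the real tag):

```shell
#!/usr/bin/env bash
# Sketch of the default/override pattern used by build/common.sh:
# if the caller exported BUNDLE_IMAGE, keep it; otherwise fall back
# to the built-in default.
set -e

DEFAULT_BUNDLE_IMAGE="rook/rook-ceph-operator-bundle:v0.0.0"  # illustrative default
BUNDLE_IMAGE="${BUNDLE_IMAGE:-${DEFAULT_BUNDLE_IMAGE}}"

echo "building bundle image: ${BUNDLE_IMAGE}"
```

With this pattern, something like `BUNDLE_IMAGE=quay.io/example/bundle:dev make bundle` (registry and tag here are hypothetical) tags the bundle with the custom name, while a plain `make bundle` uses the default.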
diff --git a/build/common.sh b/build/common.sh index 719c2de969f8..57c7285bd5eb 100644 --- a/build/common.sh +++ b/build/common.sh @@ -24,6 +24,15 @@ scriptdir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" export OUTPUT_DIR=${BUILD_ROOT}/_output export WORK_DIR=${BUILD_ROOT}/.work export CACHE_DIR=${BUILD_ROOT}/.cache +export GOOS +GOOS=$(go env GOOS) +export GOARCH +GOARCH=$(go env GOARCH) +DEFAULT_CSV_VERSION="4.15.0" +CSV_VERSION="${CSV_VERSION:-${DEFAULT_CSV_VERSION}}" +export ROOK_IMAGE="docker.io/rook/ceph:v1.13.0.399.g9c0d795e2" +DEFAULT_BUNDLE_IMAGE=rook/rook-ceph-operator-bundle:"${VERSION}" +BUNDLE_IMAGE="${BUNDLE_IMAGE:-${DEFAULT_BUNDLE_IMAGE}}" function ver() { local full_ver maj min bug build diff --git a/build/csv/rook-ceph.clusterserviceversion.yaml b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml similarity index 99% rename from build/csv/rook-ceph.clusterserviceversion.yaml rename to build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml index b39e47e6a496..055a64898576 100644 --- a/build/csv/rook-ceph.clusterserviceversion.yaml +++ b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml @@ -201,7 +201,7 @@ metadata: operators.operatorframework.io/project_layout: unknown tectonic-visibility: ocs repository: https://github.com/red-hat-storage/rook - containerImage: '{{.RookOperatorImage}}' + containerImage: docker.io/rook/ceph:v1.13.0.399.g9c0d795e2 externalClusterScript: |- IiIiCkNvcHlyaWdodCAyMDIwIFRoZSBSb29rIEF1dGhvcnMuIEFsbCByaWdodHMgcmVzZXJ2ZWQu CgpMaWNlbnNlZCB1bmRlciB0aGUgQXBhY2hlIExpY2Vuc2UsIFZlcnNpb24gMi4wICh0aGUgIkxp @@ -1860,7 +1860,7 @@ metadata: OgogICAgICAgIHByaW50KGYiS2V5RXJyb3I6IHtrRXJyfSIpCiAgICBleGNlcHQgT1NFcnJvciBh cyBvc0VycjoKICAgICAgICBwcmludChmIkVycm9yIHdoaWxlIHRyeWluZyB0byBvdXRwdXQgdGhl IGRhdGE6IHtvc0Vycn0iKQogICAgZmluYWxseToKICAgICAgICByak9iai5zaHV0ZG93bigpCg== - name: rook-ceph.v{{.RookOperatorCsvVersion}} + name: rook-ceph-operator.v4.15.0 namespace: placeholder relatedImages: - image: 
docker.io/rook/ceph:master @@ -3034,7 +3034,7 @@ spec: fieldPath: metadata.namespace - name: ROOK_OBC_WATCH_OPERATOR_NAMESPACE value: "true" - image: {{.RookOperatorImage}} + image: docker.io/rook/ceph:v1.13.0.399.g9c0d795e2 name: rook-ceph-operator resources: {} securityContext: @@ -3374,5 +3374,5 @@ spec: provider: name: Provider Name url: https://your.domain - version: {{.RookOperatorCsvVersion}} + version: 4.15.0 minKubeVersion: 1.16.0 diff --git a/build/csv/csv-gen.sh b/build/csv/csv-gen.sh index cab017e01dc4..3589c9ca3831 100755 --- a/build/csv/csv-gen.sh +++ b/build/csv/csv-gen.sh @@ -1,6 +1,8 @@ #!/usr/bin/env bash set -e +source "../../build/common.sh" + ############# # VARIABLES # ############# @@ -13,7 +15,7 @@ YQ_CMD_DELETE=("$yq" delete -i) YQ_CMD_MERGE_OVERWRITE=("$yq" merge --inplace --overwrite --prettyPrint) YQ_CMD_MERGE=("$yq" merge --arrays=append --inplace) YQ_CMD_WRITE=("$yq" write --inplace -P) -CSV_FILE_NAME="../../build/csv/ceph/$PLATFORM/manifests/rook-ceph.clusterserviceversion.yaml" +CSV_FILE_NAME="../../build/csv/ceph/$PLATFORM/manifests/rook-ceph-operator.clusterserviceversion.yaml" CEPH_EXTERNAL_SCRIPT_FILE="../../deploy/examples/create-external-cluster-resources.py" ASSEMBLE_FILE_COMMON="../../deploy/olm/assemble/metadata-common.yaml" ASSEMBLE_FILE_OCP="../../deploy/olm/assemble/metadata-ocp.yaml" @@ -23,7 +25,7 @@ ASSEMBLE_FILE_OCP="../../deploy/olm/assemble/metadata-ocp.yaml" ############# function generate_csv() { - kubectl kustomize ../../deploy/examples/ | "$operator_sdk" generate bundle --package="rook-ceph" --output-dir="../../build/csv/ceph/$PLATFORM" --extra-service-accounts=rook-ceph-default,rook-csi-rbd-provisioner-sa,rook-csi-rbd-plugin-sa,rook-csi-cephfs-provisioner-sa,rook-csi-nfs-provisioner-sa,rook-csi-nfs-plugin-sa,rook-csi-cephfs-plugin-sa,rook-ceph-system,rook-ceph-rgw,rook-ceph-purge-osd,rook-ceph-osd,rook-ceph-mgr,rook-ceph-cmd-reporter + kubectl kustomize ../../deploy/examples/ | "$operator_sdk" generate bundle 
--package="rook-ceph-operator" --output-dir="../../build/csv/ceph/$PLATFORM" --extra-service-accounts=rook-ceph-default,rook-csi-rbd-provisioner-sa,rook-csi-rbd-plugin-sa,rook-csi-cephfs-provisioner-sa,rook-csi-nfs-provisioner-sa,rook-csi-nfs-plugin-sa,rook-csi-cephfs-plugin-sa,rook-ceph-system,rook-ceph-rgw,rook-ceph-purge-osd,rook-ceph-osd,rook-ceph-mgr,rook-ceph-cmd-reporter # cleanup to get the expected state before merging the real data from assembles "${YQ_CMD_DELETE[@]}" "$CSV_FILE_NAME" 'spec.icon[*]' @@ -33,7 +35,7 @@ function generate_csv() { "${YQ_CMD_MERGE_OVERWRITE[@]}" "$CSV_FILE_NAME" "$ASSEMBLE_FILE_COMMON" "${YQ_CMD_WRITE[@]}" "$CSV_FILE_NAME" metadata.annotations.externalClusterScript "$(base64 <$CEPH_EXTERNAL_SCRIPT_FILE)" - "${YQ_CMD_WRITE[@]}" "$CSV_FILE_NAME" metadata.name "rook-ceph.v${VERSION}" + "${YQ_CMD_WRITE[@]}" "$CSV_FILE_NAME" metadata.name "rook-ceph-operator.v${CSV_VERSION}" "${YQ_CMD_MERGE[@]}" "$CSV_FILE_NAME" "$ASSEMBLE_FILE_OCP" @@ -47,11 +49,11 @@ function generate_csv() { return fi - sed -i 's/image: rook\/ceph:.*/image: {{.RookOperatorImage}}/g' "$CSV_FILE_NAME" - sed -i 's/name: rook-ceph.v.*/name: rook-ceph.v{{.RookOperatorCsvVersion}}/g' "$CSV_FILE_NAME" - sed -i 's/version: 0.0.0/version: {{.RookOperatorCsvVersion}}/g' "$CSV_FILE_NAME" + sed -i "s|containerImage: rook/ceph:.*|containerImage: $ROOK_IMAGE|" "$CSV_FILE_NAME" + sed -i "s|image: rook/ceph:.*|image: $ROOK_IMAGE|" "$CSV_FILE_NAME" + sed -i "s/name: rook-ceph.v.*/name: rook-ceph-operator.v$CSV_VERSION/g" "$CSV_FILE_NAME" + sed -i "s/version: 0.0.0/version: $CSV_VERSION/g" "$CSV_FILE_NAME" - mv "$CSV_FILE_NAME" "../../build/csv/" mv "../../build/csv/ceph/$PLATFORM/manifests/"* "../../build/csv/ceph/" rm -rf "../../build/csv/ceph/$PLATFORM" } diff --git a/deploy/olm/assemble/metadata-common.yaml b/deploy/olm/assemble/metadata-common.yaml index af4a42c22a93..03aa760c8ec8 100644 --- a/deploy/olm/assemble/metadata-common.yaml +++ 
b/deploy/olm/assemble/metadata-common.yaml @@ -233,7 +233,7 @@ metadata: annotations: tectonic-visibility: ocs repository: https://github.com/red-hat-storage/rook - containerImage: "{{.RookOperatorImage}}" + containerImage: rook/ceph:master alm-examples: |- [ { From 3da2302f42899880438e23004ba17e66c58e9506 Mon Sep 17 00:00:00 2001 From: Nikhil-Ladha Date: Thu, 21 Mar 2024 14:04:10 +0530 Subject: [PATCH 56/65] csi: update csiaddons/k8s-sidecar image Update csiadons/k8s-sidecar image to 0.8.0 in csv Signed-off-by: Nikhil-Ladha --- build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml | 2 +- deploy/examples/operator-openshift.yaml | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml index 055a64898576..bc32a2ed88ff 100644 --- a/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml +++ b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml @@ -2969,7 +2969,7 @@ spec: - name: ROOK_DISABLE_ADMISSION_CONTROLLER value: "true" - name: ROOK_CSIADDONS_IMAGE - value: quay.io/csiaddons/k8s-sidecar:v0.6.0 + value: quay.io/csiaddons/k8s-sidecar:v0.8.0 - name: ROOK_OBC_PROVISIONER_NAME_PREFIX value: openshift-storage - name: CSI_ENABLE_METADATA diff --git a/deploy/examples/operator-openshift.yaml b/deploy/examples/operator-openshift.yaml index 21a7a0340786..63fa24c2d2aa 100644 --- a/deploy/examples/operator-openshift.yaml +++ b/deploy/examples/operator-openshift.yaml @@ -718,7 +718,7 @@ spec: - name: ROOK_DISABLE_ADMISSION_CONTROLLER value: "true" - name: ROOK_CSIADDONS_IMAGE - value: quay.io/csiaddons/k8s-sidecar:v0.6.0 + value: quay.io/csiaddons/k8s-sidecar:v0.8.0 - name: ROOK_OBC_PROVISIONER_NAME_PREFIX value: openshift-storage - name: CSI_ENABLE_METADATA From af9b17f780a615dc15e23ec46d1f22b508067f88 Mon Sep 17 00:00:00 2001 From: Leela Venkaiah G Date: Thu, 21 Mar 2024 18:45:18 +0530 Subject: [PATCH 57/65] build: add 
new env value ref for operator Added the new stanza to `deploy/examples/operator-openshift.yaml` and ran `make gen-csv` We are going to update the configmap from ocs-operator to disable the CSI during runtime and these values are required for rook to realize the update. Signed-off-by: Leela Venkaiah G --- .../ceph/rook-ceph-operator.clusterserviceversion.yaml | 10 ++++++++++ deploy/examples/operator-openshift.yaml | 10 ++++++++++ 2 files changed, 20 insertions(+) diff --git a/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml index bc32a2ed88ff..c49c1760384a 100644 --- a/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml +++ b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml @@ -3000,6 +3000,16 @@ spec: configMapKeyRef: key: ROOK_CSI_ENABLE_NFS name: ocs-operator-config + - name: ROOK_CSI_ENABLE_CEPHFS + valueFrom: + configMapKeyRef: + key: ROOK_CSI_ENABLE_CEPHFS + name: ocs-operator-config + - name: ROOK_CSI_ENABLE_RBD + valueFrom: + configMapKeyRef: + key: ROOK_CSI_ENABLE_RBD + name: ocs-operator-config - name: CSI_PROVISIONER_TOLERATIONS value: |2- diff --git a/deploy/examples/operator-openshift.yaml b/deploy/examples/operator-openshift.yaml index 63fa24c2d2aa..015c2748cbf7 100644 --- a/deploy/examples/operator-openshift.yaml +++ b/deploy/examples/operator-openshift.yaml @@ -749,6 +749,16 @@ spec: configMapKeyRef: key: ROOK_CSI_ENABLE_NFS name: ocs-operator-config + - name: ROOK_CSI_ENABLE_CEPHFS + valueFrom: + configMapKeyRef: + key: ROOK_CSI_ENABLE_CEPHFS + name: ocs-operator-config + - name: ROOK_CSI_ENABLE_RBD + valueFrom: + configMapKeyRef: + key: ROOK_CSI_ENABLE_RBD + name: ocs-operator-config - name: CSI_PROVISIONER_TOLERATIONS value: |2- From a8ec71646accbfc58b8abafc95c8328a91b752d0 Mon Sep 17 00:00:00 2001 From: Nikhil-Ladha Date: Fri, 22 Mar 2024 09:19:55 +0530 Subject: [PATCH 58/65] build: export ROOK_IMAGE env variable Update script to export ROOK_IMAGE env 
variable so that it can be updated with custom values during the build Signed-off-by: Nikhil-Ladha --- build/common.sh | 3 ++- deploy/olm/assemble/metadata-common.yaml | 2 +- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/build/common.sh b/build/common.sh index 57c7285bd5eb..978fda9fedbb 100644 --- a/build/common.sh +++ b/build/common.sh @@ -30,7 +30,8 @@ export GOARCH GOARCH=$(go env GOARCH) DEFAULT_CSV_VERSION="4.15.0" CSV_VERSION="${CSV_VERSION:-${DEFAULT_CSV_VERSION}}" -export ROOK_IMAGE="docker.io/rook/ceph:v1.13.0.399.g9c0d795e2" +LATEST_ROOK_IMAGE="docker.io/rook/ceph:v1.13.0.399.g9c0d795e2" +ROOK_IMAGE=${ROOK_IMAGE:-${LATEST_ROOK_IMAGE}} DEFAULT_BUNDLE_IMAGE=rook/rook-ceph-operator-bundle:"${VERSION}" BUNDLE_IMAGE="${BUNDLE_IMAGE:-${DEFAULT_BUNDLE_IMAGE}}" diff --git a/deploy/olm/assemble/metadata-common.yaml b/deploy/olm/assemble/metadata-common.yaml index 03aa760c8ec8..833543c0f725 100644 --- a/deploy/olm/assemble/metadata-common.yaml +++ b/deploy/olm/assemble/metadata-common.yaml @@ -429,7 +429,7 @@ metadata: } ] relatedImages: - - image: docker.io/rook/ceph:master + - image: rook/ceph:master name: rook-container - image: quay.io/ceph/ceph:v18.2.0 name: ceph-container From 1398c5ac78c78085df3b85fd7b8c101a57e9b525 Mon Sep 17 00:00:00 2001 From: Nikhil-Ladha Date: Fri, 22 Mar 2024 09:49:51 +0530 Subject: [PATCH 59/65] build: add generated changes for csv add generated changes for csv Signed-off-by: Nikhil-Ladha --- build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml index bc32a2ed88ff..02cc7384f0ba 100644 --- a/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml +++ b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml @@ -1863,7 +1863,7 @@ metadata: name: rook-ceph-operator.v4.15.0 namespace: placeholder relatedImages: - - 
image: docker.io/rook/ceph:master + - image: docker.io/rook/ceph:v1.13.0.399.g9c0d795e2 name: rook-container - image: quay.io/ceph/ceph:v18.2.0 name: ceph-container From 3dcd61192fc3d7c8b9b602fceb4d7a5ad1f4f7e6 Mon Sep 17 00:00:00 2001 From: subhamkrai Date: Fri, 22 Mar 2024 17:14:32 +0530 Subject: [PATCH 60/65] build: add all the csi image version in csv add all the csi related images version in csv which ocs-operator csv merger tool was changing. Signed-off-by: subhamkrai --- build/csv/csv-gen.sh | 25 ++++++++++++++++++++++++ deploy/examples/operator-openshift.yaml | 12 ++++++++++++ deploy/olm/assemble/metadata-common.yaml | 24 ++++++++++++++++------- 3 files changed, 54 insertions(+), 7 deletions(-) diff --git a/build/csv/csv-gen.sh b/build/csv/csv-gen.sh index 3589c9ca3831..68e4169af88c 100755 --- a/build/csv/csv-gen.sh +++ b/build/csv/csv-gen.sh @@ -20,6 +20,22 @@ CEPH_EXTERNAL_SCRIPT_FILE="../../deploy/examples/create-external-cluster-resourc ASSEMBLE_FILE_COMMON="../../deploy/olm/assemble/metadata-common.yaml" ASSEMBLE_FILE_OCP="../../deploy/olm/assemble/metadata-ocp.yaml" +LATEST_ROOK_CSI_CEPH_IMAGE="quay.io/cephcsi/cephcsi:v3.10.2" +LATEST_ROOK_CSI_REGISTRAR_IMAGE="registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0" +LATEST_ROOK_CSI_RESIZER_IMAGE="registry.k8s.io/sig-storage/csi-resizer:v1.10.0" +LATEST_ROOK_CSI_PROVISIONER_IMAGE="registry.k8s.io/sig-storage/csi-provisioner:v4.0.0" +LATEST_ROOK_CSI_SNAPSHOTTER_IMAGE="registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1" +LATEST_ROOK_CSI_ATTACHER_IMAGE="registry.k8s.io/sig-storage/csi-attacher:v4.5.0" +LATEST_ROOK_CSIADDONS_IMAGE="quay.io/csiaddons/k8s-sidecar:v0.8.0" + +ROOK_CSI_CEPH_IMAGE=${ROOK_CSI_CEPH_IMAGE:-${LATEST_ROOK_CSI_CEPH_IMAGE}} +ROOK_CSI_REGISTRAR_IMAGE=${ROOK_CSI_REGISTRAR_IMAGE:-${LATEST_ROOK_CSI_REGISTRAR_IMAGE}} +ROOK_CSI_RESIZER_IMAGE=${ROOK_CSI_RESIZER_IMAGE:-${LATEST_ROOK_CSI_RESIZER_IMAGE}} 
+ROOK_CSI_PROVISIONER_IMAGE=${ROOK_CSI_PROVISIONER_IMAGE:-${LATEST_ROOK_CSI_PROVISIONER_IMAGE}} +ROOK_CSI_SNAPSHOTTER_IMAGE=${ROOK_CSI_SNAPSHOTTER_IMAGE:-${LATEST_ROOK_CSI_SNAPSHOTTER_IMAGE}} +ROOK_CSI_ATTACHER_IMAGE=${ROOK_CSI_ATTACHER_IMAGE:-${LATEST_ROOK_CSI_ATTACHER_IMAGE}} +ROOK_CSIADDONS_IMAGE=${ROOK_CSIADDONS_IMAGE:-${LATEST_ROOK_CSIADDONS_IMAGE}} + ############# # FUNCTIONS # ############# @@ -54,6 +70,15 @@ function generate_csv() { sed -i "s/name: rook-ceph.v.*/name: rook-ceph-operator.v$CSV_VERSION/g" "$CSV_FILE_NAME" sed -i "s/version: 0.0.0/version: $CSV_VERSION/g" "$CSV_FILE_NAME" + # Update the csi version according to the downstream build env change + sed -i "s|$LATEST_ROOK_CSI_CEPH_IMAGE|$ROOK_CSI_CEPH_IMAGE|g" "$CSV_FILE_NAME" + sed -i "s|$LATEST_ROOK_CSI_REGISTRAR_IMAGE|$ROOK_CSI_REGISTRAR_IMAGE|g" "$CSV_FILE_NAME" + sed -i "s|$LATEST_ROOK_CSI_RESIZER_IMAGE|$ROOK_CSI_RESIZER_IMAGE|g" "$CSV_FILE_NAME" + sed -i "s|$LATEST_ROOK_CSI_PROVISIONER_IMAGE|$ROOK_CSI_PROVISIONER_IMAGE|g" "$CSV_FILE_NAME" + sed -i "s|$LATEST_ROOK_CSI_SNAPSHOTTER_IMAGE|$ROOK_CSI_SNAPSHOTTER_IMAGE|g" "$CSV_FILE_NAME" + sed -i "s|$LATEST_ROOK_CSI_ATTACHER_IMAGE|$ROOK_CSI_ATTACHER_IMAGE|g" "$CSV_FILE_NAME" + sed -i "s|$LATEST_ROOK_CSIADDONS_IMAGE|$ROOK_CSIADDONS_IMAGE|g" "$CSV_FILE_NAME" + mv "../../build/csv/ceph/$PLATFORM/manifests/"* "../../build/csv/ceph/" rm -rf "../../build/csv/ceph/$PLATFORM" } diff --git a/deploy/examples/operator-openshift.yaml b/deploy/examples/operator-openshift.yaml index 015c2748cbf7..cff5fe43a960 100644 --- a/deploy/examples/operator-openshift.yaml +++ b/deploy/examples/operator-openshift.yaml @@ -719,6 +719,18 @@ spec: value: "true" - name: ROOK_CSIADDONS_IMAGE value: quay.io/csiaddons/k8s-sidecar:v0.8.0 + - name: ROOK_CSI_CEPH_IMAGE + value: quay.io/cephcsi/cephcsi:v3.10.2 + - name: ROOK_CSI_REGISTRAR_IMAGE + value: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0 + - name: ROOK_CSI_RESIZER_IMAGE + value: 
registry.k8s.io/sig-storage/csi-resizer:v1.10.0 + - name: ROOK_CSI_PROVISIONER_IMAGE + value: registry.k8s.io/sig-storage/csi-provisioner:v4.0.0 + - name: ROOK_CSI_SNAPSHOTTER_IMAGE + value: registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1 + - name: ROOK_CSI_ATTACHER_IMAGE + value: registry.k8s.io/sig-storage/csi-attacher:v4.5.0 - name: ROOK_OBC_PROVISIONER_NAME_PREFIX value: openshift-storage - name: CSI_ENABLE_METADATA diff --git a/deploy/olm/assemble/metadata-common.yaml b/deploy/olm/assemble/metadata-common.yaml index 833543c0f725..ced7d6df07af 100644 --- a/deploy/olm/assemble/metadata-common.yaml +++ b/deploy/olm/assemble/metadata-common.yaml @@ -228,6 +228,23 @@ spec: supported: false - type: AllNamespaces supported: false + relatedImages: + - image: rook/ceph:master + name: rook-container + - image: quay.io/csiaddons/k8s-sidecar:v0.8.0 + name: csiaddons-sidecar + - image: quay.io/cephcsi/cephcsi:v3.10.2 + name: ceph-csi + - image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0 + name: csi-node-driver-registrar + - image: registry.k8s.io/sig-storage/csi-resizer:v1.10.0 + name: csi-resizer + - image: registry.k8s.io/sig-storage/csi-provisioner:v4.0.0 + name: csi-provisioner + - image: registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1 + name: csi-snapshotter + - image: registry.k8s.io/sig-storage/csi-attacher:v4.5.0 + name: csi-attacher metadata: annotations: @@ -428,10 +445,3 @@ metadata: } } ] - relatedImages: - - image: rook/ceph:master - name: rook-container - - image: quay.io/ceph/ceph:v18.2.0 - name: ceph-container - - image: quay.io/csiaddons/k8s-sidecar:v0.8.0 - name: csiaddons-sidecar From 03829444a3fd73c5948adec7d010c765e064be90 Mon Sep 17 00:00:00 2001 From: subhamkrai Date: Fri, 22 Mar 2024 17:17:20 +0530 Subject: [PATCH 61/65] build: these are auto generated csv changes these are auto generated csv changes when run `make gen-csv`. 
Signed-off-by: subhamkrai --- ...k-ceph-operator.clusterserviceversion.yaml | 36 +++++++++++++++---- 1 file changed, 29 insertions(+), 7 deletions(-) diff --git a/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml index b425d8cac20a..41bd0db7abb6 100644 --- a/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml +++ b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml @@ -1862,13 +1862,6 @@ metadata: IGRhdGE6IHtvc0Vycn0iKQogICAgZmluYWxseToKICAgICAgICByak9iai5zaHV0ZG93bigpCg== name: rook-ceph-operator.v4.15.0 namespace: placeholder - relatedImages: - - image: docker.io/rook/ceph:v1.13.0.399.g9c0d795e2 - name: rook-container - - image: quay.io/ceph/ceph:v18.2.0 - name: ceph-container - - image: quay.io/csiaddons/k8s-sidecar:v0.8.0 - name: csiaddons-sidecar spec: apiservicedefinitions: {} customresourcedefinitions: @@ -2970,6 +2963,18 @@ spec: value: "true" - name: ROOK_CSIADDONS_IMAGE value: quay.io/csiaddons/k8s-sidecar:v0.8.0 + - name: ROOK_CSI_CEPH_IMAGE + value: quay.io/cephcsi/cephcsi:v3.10.2 + - name: ROOK_CSI_REGISTRAR_IMAGE + value: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0 + - name: ROOK_CSI_RESIZER_IMAGE + value: registry.k8s.io/sig-storage/csi-resizer:v1.10.0 + - name: ROOK_CSI_PROVISIONER_IMAGE + value: registry.k8s.io/sig-storage/csi-provisioner:v4.0.0 + - name: ROOK_CSI_SNAPSHOTTER_IMAGE + value: registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1 + - name: ROOK_CSI_ATTACHER_IMAGE + value: registry.k8s.io/sig-storage/csi-attacher:v4.5.0 - name: ROOK_OBC_PROVISIONER_NAME_PREFIX value: openshift-storage - name: CSI_ENABLE_METADATA @@ -3386,3 +3391,20 @@ spec: url: https://your.domain version: 4.15.0 minKubeVersion: 1.16.0 + relatedImages: + - image: docker.io/rook/ceph:v1.13.0.399.g9c0d795e2 + name: rook-container + - image: quay.io/csiaddons/k8s-sidecar:v0.8.0 + name: csiaddons-sidecar + - image: quay.io/cephcsi/cephcsi:v3.10.2 + name: ceph-csi + 
- image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0 + name: csi-node-driver-registrar + - image: registry.k8s.io/sig-storage/csi-resizer:v1.10.0 + name: csi-resizer + - image: registry.k8s.io/sig-storage/csi-provisioner:v4.0.0 + name: csi-provisioner + - image: registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1 + name: csi-snapshotter + - image: registry.k8s.io/sig-storage/csi-attacher:v4.5.0 + name: csi-attacher From 798d0816170d2eff6f8b620c35e10cdf5d1e78ee Mon Sep 17 00:00:00 2001 From: parth-gr Date: Thu, 21 Mar 2024 20:39:35 +0530 Subject: [PATCH 62/65] csi: add a new flag to disable csi driver added a new flag ROOK_CSI_DISABLE_DRIVER to disable csi controller. Signed-off-by: parth-gr (cherry picked from commit a72e02945978a2a115207d6ea40e14223d8af4a6) --- Documentation/Helm-Charts/operator-chart.md | 1 + .../charts/rook-ceph/templates/configmap.yaml | 1 + deploy/charts/rook-ceph/values.yaml | 3 ++ deploy/examples/operator-openshift.yaml | 2 + deploy/examples/operator.yaml | 2 + pkg/operator/ceph/csi/controller.go | 46 +++++++++++-------- 6 files changed, 37 insertions(+), 18 deletions(-) diff --git a/Documentation/Helm-Charts/operator-chart.md b/Documentation/Helm-Charts/operator-chart.md index 1b7eb2ad5a78..8e190a2cb151 100644 --- a/Documentation/Helm-Charts/operator-chart.md +++ b/Documentation/Helm-Charts/operator-chart.md @@ -80,6 +80,7 @@ The following table lists the configurable parameters of the rook-operator chart | `csi.csiRBDPluginVolume` | The volume of the CephCSI RBD plugin DaemonSet | `nil` | | `csi.csiRBDPluginVolumeMount` | The volume mounts of the CephCSI RBD plugin DaemonSet | `nil` | | `csi.csiRBDProvisionerResource` | CEPH CSI RBD provisioner resource requirement list csi-omap-generator resources will be applied only if `enableOMAPGenerator` is set to `true` | see values.yaml | +| `csi.disableCsiDriver` | Disable the CSI driver. 
| `"false"` | | `csi.disableHolderPods` | Deprecation note: Rook uses "holder" pods to allow CSI to connect to the multus public network without needing hosts to the network. Holder pods are being deprecated. See issue for details: https://github.com/rook/rook/issues/13055. New Rook deployments should set this to "true". | `true` | | `csi.enableCSIEncryption` | Enable Ceph CSI PVC encryption support | `false` | | `csi.enableCSIHostNetwork` | Enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary in some network configurations where the SDN does not provide access to an external cluster or there is significant drop in read/write performance | `true` | diff --git a/deploy/charts/rook-ceph/templates/configmap.yaml b/deploy/charts/rook-ceph/templates/configmap.yaml index e38ddc9f22a9..c6f385451421 100644 --- a/deploy/charts/rook-ceph/templates/configmap.yaml +++ b/deploy/charts/rook-ceph/templates/configmap.yaml @@ -20,6 +20,7 @@ data: {{- if .Values.csi }} ROOK_CSI_ENABLE_RBD: {{ .Values.csi.enableRbdDriver | quote }} ROOK_CSI_ENABLE_CEPHFS: {{ .Values.csi.enableCephfsDriver | quote }} + ROOK_CSI_DISABLE_DRIVER: {{ .Values.csi.disableCsiDriver | quote }} CSI_ENABLE_CEPHFS_SNAPSHOTTER: {{ .Values.csi.enableCephfsSnapshotter | quote }} CSI_ENABLE_NFS_SNAPSHOTTER: {{ .Values.csi.enableNFSSnapshotter | quote }} CSI_ENABLE_RBD_SNAPSHOTTER: {{ .Values.csi.enableRBDSnapshotter | quote }} diff --git a/deploy/charts/rook-ceph/values.yaml b/deploy/charts/rook-ceph/values.yaml index 0493361a803e..abbfba44a949 100644 --- a/deploy/charts/rook-ceph/values.yaml +++ b/deploy/charts/rook-ceph/values.yaml @@ -81,6 +81,9 @@ csi: enableRbdDriver: true # -- Enable Ceph CSI CephFS driver enableCephfsDriver: true + # -- Disable the CSI driver. + disableCsiDriver: "false" + # -- Enable host networking for CSI CephFS and RBD nodeplugins. 
This may be necessary # in some network configurations where the SDN does not provide access to an external cluster or # there is significant drop in read/write performance diff --git a/deploy/examples/operator-openshift.yaml b/deploy/examples/operator-openshift.yaml index cff5fe43a960..09f9544da852 100644 --- a/deploy/examples/operator-openshift.yaml +++ b/deploy/examples/operator-openshift.yaml @@ -121,6 +121,8 @@ data: ROOK_CSI_ENABLE_RBD: "true" # Enable the CSI NFS driver. To start another version of the CSI driver, see image properties below. ROOK_CSI_ENABLE_NFS: "false" + # Disable the CSI driver. + ROOK_CSI_DISABLE_DRIVER: "false" # Set to true to enable Ceph CSI pvc encryption support. CSI_ENABLE_ENCRYPTION: "false" diff --git a/deploy/examples/operator.yaml b/deploy/examples/operator.yaml index bdc0b53103e9..9a0d35cf8c68 100644 --- a/deploy/examples/operator.yaml +++ b/deploy/examples/operator.yaml @@ -35,6 +35,8 @@ data: ROOK_CSI_ENABLE_RBD: "true" # Enable the CSI NFS driver. To start another version of the CSI driver, see image properties below. ROOK_CSI_ENABLE_NFS: "false" + # Disable the CSI driver. + ROOK_CSI_DISABLE_DRIVER: "false" # Set to true to enable Ceph CSI pvc encryption support. CSI_ENABLE_ENCRYPTION: "false" diff --git a/pkg/operator/ceph/csi/controller.go b/pkg/operator/ceph/csi/controller.go index 1c56183c2981..8a31f36f7905 100644 --- a/pkg/operator/ceph/csi/controller.go +++ b/pkg/operator/ceph/csi/controller.go @@ -137,6 +137,34 @@ func (r *ReconcileCSI) reconcile(request reconcile.Request) (reconcile.Result, e // reconcileResult is used to communicate the result of the reconciliation back to the caller var reconcileResult reconcile.Result + // Fetch the operator's configmap. We force the NamespaceName to the operator since the request + // could be a CephCluster. 
If so the NamespaceName will be the one from the cluster and thus the + CM won't be found + opNamespaceName := types.NamespacedName{Name: opcontroller.OperatorSettingConfigMapName, Namespace: r.opConfig.OperatorNamespace} + opConfig := &v1.ConfigMap{} + err := r.client.Get(r.opManagerContext, opNamespaceName, opConfig) + if err != nil { + if kerrors.IsNotFound(err) { + logger.Debug("operator's configmap resource not found. will use default value or env var.") + r.opConfig.Parameters = make(map[string]string) + } else { + // Error reading the object - requeue the request. + return opcontroller.ImmediateRetryResult, errors.Wrap(err, "failed to get operator's configmap") + } + } else { + // Populate the operator's config + r.opConfig.Parameters = opConfig.Data + } + + // do not reconcile if the csi driver is disabled + disableCSI, err := strconv.ParseBool(k8sutil.GetValue(r.opConfig.Parameters, "ROOK_CSI_DISABLE_DRIVER", "false")) + if err != nil { + return reconcile.Result{}, errors.Wrap(err, "unable to parse value for 'ROOK_CSI_DISABLE_DRIVER'") + } else if disableCSI { + logger.Info("ceph csi driver is disabled") + return reconcile.Result{}, nil + } + serverVersion, err := r.context.Clientset.Discovery().ServerVersion() if err != nil { return opcontroller.ImmediateRetryResult, errors.Wrap(err, "failed to get server version") @@ -171,24 +199,6 @@ func (r *ReconcileCSI) reconcile(request reconcile.Request) (reconcile.Result, e return reconcile.Result{}, nil } - // Fetch the operator's configmap. We force the NamespaceName to the operator since the request - could be a CephCluster.
If so the NamespaceName will be the one from the cluster and thus the - // CM won't be found - opNamespaceName := types.NamespacedName{Name: opcontroller.OperatorSettingConfigMapName, Namespace: r.opConfig.OperatorNamespace} - opConfig := &v1.ConfigMap{} - err = r.client.Get(r.opManagerContext, opNamespaceName, opConfig) - if err != nil { - if kerrors.IsNotFound(err) { - logger.Debug("operator's configmap resource not found. will use default value or env var.") - r.opConfig.Parameters = make(map[string]string) - } else { - // Error reading the object - requeue the request. - return opcontroller.ImmediateRetryResult, errors.Wrap(err, "failed to get operator's configmap") - } - } else { - // Populate the operator's config - r.opConfig.Parameters = opConfig.Data - } csiHostNetworkEnabled, err := strconv.ParseBool(k8sutil.GetValue(r.opConfig.Parameters, "CSI_ENABLE_HOST_NETWORK", "true")) if err != nil { From 92c8d3d8fbd8b5a9592b2e098aaa9eb1f791cc9e Mon Sep 17 00:00:00 2001 From: parth-gr Date: Tue, 26 Mar 2024 17:01:24 +0530 Subject: [PATCH 63/65] build: add new env value ref for operator Signed-off-by: Leela Venkaiah G (cherry picked from commit 1bb9ccedd1181a5b697ce9f73396a0c62e3c4267) Signed-off-by: parth-gr --- .../ceph/rook-ceph-operator.clusterserviceversion.yaml | 9 ++------- deploy/examples/operator-openshift.yaml | 9 ++------- 2 files changed, 4 insertions(+), 14 deletions(-) diff --git a/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml index 41bd0db7abb6..86829b82a277 100644 --- a/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml +++ b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml @@ -3005,15 +3005,10 @@ spec: configMapKeyRef: key: ROOK_CSI_ENABLE_NFS name: ocs-operator-config - - name: ROOK_CSI_ENABLE_CEPHFS + - name: ROOK_CSI_DISABLE_DRIVER valueFrom: configMapKeyRef: - key: ROOK_CSI_ENABLE_CEPHFS - name: ocs-operator-config - - name: ROOK_CSI_ENABLE_RBD - 
valueFrom: - configMapKeyRef: - key: ROOK_CSI_ENABLE_RBD + key: ROOK_CSI_DISABLE_DRIVER name: ocs-operator-config - name: CSI_PROVISIONER_TOLERATIONS value: |2- diff --git a/deploy/examples/operator-openshift.yaml b/deploy/examples/operator-openshift.yaml index 09f9544da852..5b080a14726a 100644 --- a/deploy/examples/operator-openshift.yaml +++ b/deploy/examples/operator-openshift.yaml @@ -763,15 +763,10 @@ spec: configMapKeyRef: key: ROOK_CSI_ENABLE_NFS name: ocs-operator-config - - name: ROOK_CSI_ENABLE_CEPHFS + - name: ROOK_CSI_DISABLE_DRIVER valueFrom: configMapKeyRef: - key: ROOK_CSI_ENABLE_CEPHFS - name: ocs-operator-config - - name: ROOK_CSI_ENABLE_RBD - valueFrom: - configMapKeyRef: - key: ROOK_CSI_ENABLE_RBD + key: ROOK_CSI_DISABLE_DRIVER name: ocs-operator-config - name: CSI_PROVISIONER_TOLERATIONS value: |2- From fcf6d67540b2089d1cba300bfe65b148ccaecfb9 Mon Sep 17 00:00:00 2001 From: Nikhil-Ladha Date: Wed, 27 Mar 2024 11:11:51 +0530 Subject: [PATCH 64/65] build: fix provider name in csv Moved provider field to metadata-common.yaml file, such that it overrides the default in the rook csv Signed-off-by: Nikhil-Ladha --- deploy/olm/assemble/metadata-common.yaml | 3 +++ deploy/olm/assemble/metadata-ocp.yaml | 2 -- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/deploy/olm/assemble/metadata-common.yaml b/deploy/olm/assemble/metadata-common.yaml index ced7d6df07af..cfe0ca85dab8 100644 --- a/deploy/olm/assemble/metadata-common.yaml +++ b/deploy/olm/assemble/metadata-common.yaml @@ -212,6 +212,9 @@ spec: "block storage", "shared filesystem", ] + provider: + name: Red Hat + url: https://www.redhat.com minKubeVersion: 1.16.0 links: - name: Source Code diff --git a/deploy/olm/assemble/metadata-ocp.yaml b/deploy/olm/assemble/metadata-ocp.yaml index e6f1971144de..6cc211d98421 100644 --- a/deploy/olm/assemble/metadata-ocp.yaml +++ b/deploy/olm/assemble/metadata-ocp.yaml @@ -15,5 +15,3 @@ spec: maintainers: - name: Red Hat Support email: 
ocs-support@redhat.com - provider: - name: Red Hat From 33cc17b533cd9deb523560d119ae88979ac3af91 Mon Sep 17 00:00:00 2001 From: Nikhil-Ladha Date: Wed, 27 Mar 2024 11:12:40 +0530 Subject: [PATCH 65/65] build: add generated changes add generated changes for csv Signed-off-by: Nikhil-Ladha --- build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml index 86829b82a277..7ee310336b33 100644 --- a/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml +++ b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml @@ -3382,8 +3382,8 @@ spec: email: ocs-support@redhat.com maturity: alpha provider: - name: Provider Name - url: https://your.domain + name: Red Hat + url: https://www.redhat.com version: 4.15.0 minKubeVersion: 1.16.0 relatedImages: