From f3834444be8e420269ef70ba25cef44b70e9eca8 Mon Sep 17 00:00:00 2001
From: Corentin Debains
Date: Mon, 9 Dec 2024 13:44:24 -0800
Subject: [PATCH 1/4] Create management-cluster-position-statement.md

---
 .../management-cluster-position-statement.md | 46 +++++++++++++++++++
 1 file changed, 46 insertions(+)
 create mode 100644 sig-multicluster/management-cluster-position-statement.md

diff --git a/sig-multicluster/management-cluster-position-statement.md b/sig-multicluster/management-cluster-position-statement.md
new file mode 100644
index 00000000000..f3c2beedcc9
--- /dev/null
+++ b/sig-multicluster/management-cluster-position-statement.md
@@ -0,0 +1,46 @@
+# Management Cluster - SIG Multicluster Position Statement
+
+Author: Corentin Debains (**[@corentone](https://github.com/corentone)**), Google
+Last Edit: 2024/12/09
+Status: DRAFT
+
+## Goal
+To establish a standard definition for a central cluster that is leveraged by multicluster
+controllers to manage multicluster applications or features across an inventory of clusters.
+
+## Context
+Multicluster controllers have always needed a place to run. This may happen in external
+proprietary control-planes but for more generic platforms, it has been natural for the
+Kubernetes community to leverage a Kubernetes Cluster and the existing api-machinery
+available. There has been a variety of examples of which we can quote ArgoCD, MultiKueue
+or any of the Federation effort (Karmada, KubeAdmiral), all of them not-naming the "location"
+where they run or not aligning on the name (Admin cluster, Hub Cluster, Manager Cluster...).
+The [ClusterInventory](https://github.com/kubernetes/enhancements/blob/master/keps/sig-multicluster/4322-cluster-inventory/README.md)
+(ClusterProfile CRDs) is also the starting point for a lot of multicluster controllers and,
+being a CRD, it requires an api-machinery to host it.
+
+## Definition
+
+A (multicluster) management cluster is a Kubernetes cluster that acts as a
+control-plane for other Kubernetes clusters (named Workload Clusters to differentiate
+them). It MUST have visibility over the available clusters and MAY have administrative
+privileges over them. It SHOULD not be part of workload clusters to provide a better
+security isolation, especially when it has any administrative privileges over them.
+There MAY be multiple management clusters overseeing the same set of Workload Clusters
+and it is left to the administrator to guarantee that they don't compete in their
+management tasks. There SHOULD be a single clusterset managed by a management cluster.
+Management clusters can be used for both control-plane or data-plane features.
+
+
+### Rationale on some specific points of the definition
+
+* Multiple management clusters: While it often makes sense to have a single "Brain" overseeing
+  a Fleet of Clusters, there is a need for flexibility over the number of management clusters. To
+  allow redundancy to improve reliability, to allow sharding of responsibility (for regionalized
+  controllers), to allow for separation of functionality (security-enforcer management cluster vs
+  config-delivery management cluster), to allow for migrations (from old management cluster to new
+  management cluster) and likely more.
+* Management cluster also being part of the workload-running Fleet: We do recommend that the
+  management cluster(s) be isolated from the running Workload Fleet for security and management
+  concerns. But there may be specific cases or applications that require to mix the two. For example,
+  controllers that take a "leader-election" approach and want a smaller footprint.
From 90a99ebecf46a2bab487216fdd378f5296bb7979 Mon Sep 17 00:00:00 2001
From: Corentin Debains
Date: Tue, 28 Jan 2025 11:35:29 -0800
Subject: [PATCH 2/4] Update and rename management-cluster-position-statement.md
 to hub-cluster-position-statement.md

* added link to clusterset doc
* added a context sentence on workload vs hub
* removed "visibility" to make it about api, metrics or workload access.
* Added a rationale on infra vs app controllers. There may be hub clusters that
  run infra multicluster controllers, but there may also be some that run
  application infra things. That means the personas accessing them (and their
  permissions) may be different.
---
 .../hub-cluster-position-statement.md         | 57 +++++++++++++++++++
 .../management-cluster-position-statement.md  | 46 ---------------
 2 files changed, 57 insertions(+), 46 deletions(-)
 create mode 100644 sig-multicluster/hub-cluster-position-statement.md
 delete mode 100644 sig-multicluster/management-cluster-position-statement.md

diff --git a/sig-multicluster/hub-cluster-position-statement.md b/sig-multicluster/hub-cluster-position-statement.md
new file mode 100644
index 00000000000..c6f06e8b4b6
--- /dev/null
+++ b/sig-multicluster/hub-cluster-position-statement.md
@@ -0,0 +1,57 @@
+# Hub Cluster - SIG Multicluster Position Statement
+
+Author: Corentin Debains (**[@corentone](https://github.com/corentone)**), Google
+Last Edit: 2025/01/25
+Status: DRAFT
+
+## Goal
+To establish a standard definition for a central cluster that is leveraged by multicluster
+controllers to manage multicluster applications or features across an inventory of clusters.
+
+## Context
+Multicluster controllers have always needed a place to run. This may happen in external
+proprietary control-planes but for more generic platforms, it has been natural for the
+Kubernetes community to leverage a Kubernetes Cluster and the existing api-machinery
+available. There have been a variety of examples, among which ArgoCD, MultiKueue,
+or any of the federation efforts (Karmada, KubeAdmiral), none of which name the "location"
+where they run, or at least do not align on a name (Admin cluster, Management Cluster, Command Cluster, Manager Cluster...).
+The [ClusterInventory](https://github.com/kubernetes/enhancements/blob/master/keps/sig-multicluster/4322-cluster-inventory/README.md)
+(ClusterProfile CRDs) is also the starting point for a lot of multicluster controllers and,
+being a CRD, it requires api-machinery to host it. The functionality of this cluster is also
+defined in contrast to what a "workload" cluster does, which is to run the business applications,
+while the hub runs infrastructure components.
+
+## Definition
+
+A (multicluster) hub cluster is a Kubernetes cluster that acts as a
+control-plane for other Kubernetes clusters (named Workload Clusters to differentiate
+them). It MUST have the ClusterProfiles written to it, MAY have access to the API, metrics, or
+workloads of the workload clusters, and MAY have administrative privileges over them. It
+SHOULD NOT be part of the workload clusters or run in mixed mode (workload and hub), to provide better
+security isolation, especially when it has any administrative privileges over them.
+There MAY be multiple hub clusters overseeing the same set of Workload Clusters,
+and it is left to the administrator to guarantee that they don't compete in their
+management tasks. There SHOULD be a single [clusterset](https://multicluster.sigs.k8s.io/api-types/cluster-set/)
+managed by a hub cluster. Hub clusters can be used for multicluster controllers that provide platform-running features,
+for example managing the clusters, or application-running features, for example scheduling business
+applications dynamically.
+
+### Rationale on some specific points of the definition
+
+* Multiple hub clusters: While it often makes sense to have a single "Brain" overseeing
+  a Fleet of Clusters, there is a need for flexibility over the number of hub clusters: to
+  allow redundancy to improve reliability, to allow sharding of responsibility (for regionalized
+  controllers), to allow for separation of functionality (security-enforcer hub cluster vs
+  config-delivery hub cluster), to allow for migrations (from an old hub cluster to a new
+  hub cluster), and likely more.
+* Hub cluster also being part of the workload-running Fleet: We do recommend that the
+  hub cluster(s) be isolated from the running Workload Fleet for security and management
+  concerns. But there may be specific cases or applications that require mixing the two. For example,
+  controllers that take a "leader-election" approach and want a smaller footprint.
+* Application-running features vs platform-running features: Hub clusters can runcontrollers
+  that are catering to a "Platform" type of user, effectively using a central cluster to manage other clusters and
+  other infrastructure. For example, centrally monitoring health of clusters of a clusterset. It can also run
+  controllers that are helping run business applications globally. For example, having a definition of a multicluster
+  application and scheduling replicas of the application to the different clusters of the clusterset.
+  This means that access control to the hub cluster and permissions given to controllers on the hub
+  clusters must be carefully designed.
diff --git a/sig-multicluster/management-cluster-position-statement.md b/sig-multicluster/management-cluster-position-statement.md
deleted file mode 100644
index f3c2beedcc9..00000000000
--- a/sig-multicluster/management-cluster-position-statement.md
+++ /dev/null
@@ -1,46 +0,0 @@
-# Management Cluster - SIG Multicluster Position Statement
-
-Author: Corentin Debains (**[@corentone](https://github.com/corentone)**), Google
-Last Edit: 2024/12/09
-Status: DRAFT
-
-## Goal
-To establish a standard definition for a central cluster that is leveraged by multicluster
-controllers to manage multicluster applications or features across an inventory of clusters.
-
-## Context
-Multicluster controllers have always needed a place to run. This may happen in external
-proprietary control-planes but for more generic platforms, it has been natural for the
-Kubernetes community to leverage a Kubernetes Cluster and the existing api-machinery
-available. There has been a variety of examples of which we can quote ArgoCD, MultiKueue
-or any of the Federation effort (Karmada, KubeAdmiral), all of them not-naming the "location"
-where they run or not aligning on the name (Admin cluster, Hub Cluster, Manager Cluster...).
-The [ClusterInventory](https://github.com/kubernetes/enhancements/blob/master/keps/sig-multicluster/4322-cluster-inventory/README.md)
-(ClusterProfile CRDs) is also the starting point for a lot of multicluster controllers and,
-being a CRD, it requires an api-machinery to host it.
-
-## Definition
-
-A (multicluster) management cluster is a Kubernetes cluster that acts as a
-control-plane for other Kubernetes clusters (named Workload Clusters to differentiate
-them). It MUST have visibility over the available clusters and MAY have administrative
-privileges over them. It SHOULD not be part of workload clusters to provide a better
-security isolation, especially when it has any administrative privileges over them.
-There MAY be multiple management clusters overseeing the same set of Workload Clusters
-and it is left to the administrator to guarantee that they don't compete in their
-management tasks. There SHOULD be a single clusterset managed by a management cluster.
-Management clusters can be used for both control-plane or data-plane features.
-
-
-### Rationale on some specific points of the definition
-
-* Multiple management clusters: While it often makes sense to have a single "Brain" overseeing
-  a Fleet of Clusters, there is a need for flexibility over the number of management clusters. To
-  allow redundancy to improve reliability, to allow sharding of responsibility (for regionalized
-  controllers), to allow for separation of functionality (security-enforcer management cluster vs
-  config-delivery management cluster), to allow for migrations (from old management cluster to new
-  management cluster) and likely more.
-* Management cluster also being part of the workload-running Fleet: We do recommend that the
-  management cluster(s) be isolated from the running Workload Fleet for security and management
-  concerns. But there may be specific cases or applications that require to mix the two. For example,
-  controllers that take a "leader-election" approach and want a smaller footprint.

From d0fbe3d418f63e3aebf4484cc30773b424a50adb Mon Sep 17 00:00:00 2001
From: Corentin Debains
Date: Tue, 4 Feb 2025 09:35:20 -0800
Subject: [PATCH 3/4] Update sig-multicluster/hub-cluster-position-statement.md

Co-authored-by: Ryan Zhang
---
 sig-multicluster/hub-cluster-position-statement.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sig-multicluster/hub-cluster-position-statement.md b/sig-multicluster/hub-cluster-position-statement.md
index c6f06e8b4b6..a782bd9564d 100644
--- a/sig-multicluster/hub-cluster-position-statement.md
+++ b/sig-multicluster/hub-cluster-position-statement.md
@@ -48,7 +48,7 @@ applications dynamically.
   hub cluster(s) be isolated from the running Workload Fleet for security and management
   concerns. But there may be specific cases or applications that require mixing the two. For example,
   controllers that take a "leader-election" approach and want a smaller footprint.
-* Application-running features vs platform-running features: Hub clusters can runcontrollers
+* Application-running features vs platform-running features: Hub clusters can run controllers
   that are catering to a "Platform" type of user, effectively using a central cluster to manage other clusters and
   other infrastructure. For example, centrally monitoring health of clusters of a clusterset. It can also run
   controllers that are helping run business applications globally. For example, having a definition of a multicluster

From f5dc4558c6915590786ce9b9212d27d74a738bee Mon Sep 17 00:00:00 2001
From: Corentin Debains
Date: Tue, 18 Feb 2025 10:59:17 -0800
Subject: [PATCH 4/4] Update hub-cluster-position-statement.md

---
 sig-multicluster/hub-cluster-position-statement.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/sig-multicluster/hub-cluster-position-statement.md b/sig-multicluster/hub-cluster-position-statement.md
index a782bd9564d..752aa508cba 100644
--- a/sig-multicluster/hub-cluster-position-statement.md
+++ b/sig-multicluster/hub-cluster-position-statement.md
@@ -12,7 +12,7 @@ controllers to manage multicluster applications or features across an inventory
 Multicluster controllers have always needed a place to run. This may happen in external
 proprietary control-planes but for more generic platforms, it has been natural for the
 Kubernetes community to leverage a Kubernetes Cluster and the existing api-machinery
-available. There have been a variety of examples, among which ArgoCD, MultiKueue,
+available. There have been a variety of examples, among which ArgoCD, MultiKueue, Open Cluster Management,
 or any of the federation efforts (Karmada, KubeAdmiral), none of which name the "location"
 where they run, or at least do not align on a name (Admin cluster, Management Cluster, Command Cluster, Manager Cluster...).
 The [ClusterInventory](https://github.com/kubernetes/enhancements/blob/master/keps/sig-multicluster/4322-cluster-inventory/README.md)
@@ -24,7 +24,7 @@ while the hub runs infrastructure components.
 ## Definition
 
 A (multicluster) hub cluster is a Kubernetes cluster that acts as a
-control-plane for other Kubernetes clusters (named Workload Clusters to differentiate
+control-plane for other Kubernetes clusters (named Workload [Execution] Clusters to differentiate
 them). It MUST have the ClusterProfiles written to it, MAY have access to the API, metrics, or
 workloads of the workload clusters, and MAY have administrative privileges over them. It
 SHOULD NOT be part of the workload clusters or run in mixed mode (workload and hub), to provide better
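
---

Reviewer note, for illustration only (not part of the patches above): the position statement says ClusterProfiles MUST be written to the hub cluster. A minimal ClusterProfile object as it might appear on a hub cluster is sketched below, following the alpha API shape proposed in KEP-4322 (ClusterInventory); the object name, namespace, and cluster-manager name are placeholder values, not anything mandated by this document.

```yaml
# Hypothetical ClusterProfile registered on a hub cluster.
# Field names follow the alpha multicluster.x-k8s.io API from KEP-4322;
# "fleet-system", "cluster-us-east-1", and "fleet-manager" are placeholders.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ClusterProfile
metadata:
  name: cluster-us-east-1
  namespace: fleet-system
  labels:
    x-k8s.io/cluster-manager: fleet-manager
spec:
  displayName: cluster-us-east-1
  clusterManager:
    name: fleet-manager
```

A multicluster controller running on the hub would then list these objects to discover its inventory of workload clusters, rather than being configured with per-cluster kubeconfigs directly.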