**`docs/design/custom-container-images.md`** (+9 −9)
@@ -5,7 +5,7 @@
The existing AKS Engine Kubernetes component container image configuration surface area presents obstacles in the way of:
1. quickly testing/validating specific container images across the set of Kubernetes components in a working cluster; and
- 2. using Azure Container Compute Upstream-curated MCR container images instead of Kubernetes SIG-Release-curated k8s.gcr.io container images.
+ 2. using Azure Container Compute Upstream-curated MCR container images instead of Kubernetes SIG-Release-curated registry.k8s.io container images.
## Proximate Problem Statements
@@ -14,15 +14,15 @@ The existing AKS Engine Kubernetes component container image configuration surfa
  - https://github.com/Azure/aks-engine/issues/2378
2. At present, the "blessed" component configuration image URIs are maintained via a concatenation of two properties:
- A "base URI" property (`KubernetesImageBase` is the property that has the widest impact across the set of component images)
-   - e.g., `"k8s.gcr.io/"`
+   - e.g., `"registry.k8s.io/"`
- A hardcoded string that represents the right-most concatenation substring of the fully qualified image reference URI
- e.g., `"kube-proxy:v1.16.1"`
- In summary, in order to render `"k8s.gcr.io/kube-proxy:v1.16.1"` as the desired container image reference to derive the kube-proxy runtime, we set the KubernetesImageBase property to `"k8s.gcr.io/"`, and rely upon AKS Engine to append `"kube-proxy:v1.16.1"` by way of its hardcoded authority in the codebase for the particular version of Kubernetes in the cluster configuration (1.16.1 in this example).
+ In summary, in order to render `"registry.k8s.io/kube-proxy:v1.16.1"` as the desired container image reference to derive the kube-proxy runtime, we set the KubernetesImageBase property to `"registry.k8s.io/"`, and rely upon AKS Engine to append `"kube-proxy:v1.16.1"` by way of its hardcoded authority in the codebase for the particular version of Kubernetes in the cluster configuration (1.16.1 in this example).
In practice, this means that the `KubernetesImageBase` property is effectively a "Kubernetes component image registry mirror base URI" property, and in fact this is exactly how that property is leveraged, to redirect container image references to proximate origin URIs when building clusters in non-public cloud environments (e.g., China Cloud, Azure Stack).
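For illustration, a minimal API model fragment (abbreviated; field placement per the cluster definition docs) that exercises this property today — the base URI is the only knob available:

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.16",
      "kubernetesConfig": {
        "kubernetesImageBase": "registry.k8s.io/"
      }
    }
  }
}
```

With this configuration, AKS Engine renders `registry.k8s.io/kube-proxy:v1.16.1` (and the rest of the component set) by appending its hardcoded, per-Kubernetes-version suffixes to the configured base.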
- To conclude with a concrete problem statement, it is this: the current accommodations that AKS Engine provides for redirecting Kubernetes component container images to another origin assume a k8s.gcr.io container registry mirror. This presents a problem w/ respect to migrating container image configuration to an entirely different container registry URI reference specification, which is what the MCR container image migration effort effectively does.
+ To conclude with a concrete problem statement, it is this: the current accommodations that AKS Engine provides for redirecting Kubernetes component container images to another origin assume a registry.k8s.io container registry mirror. This presents a problem w/ respect to migrating container image configuration to an entirely different container registry URI reference specification, which is what the MCR container image migration effort effectively does.
# A Proposed Solution
@@ -98,9 +98,9 @@ In summary, we will introduce a new "components" configuration interface (a sibl
~
- Now we have addressed the problem of "how to quickly test and validate specific container images across the set of Kubernetes components in a working cluster", which is a critical requirement for the Azure Container Compute Upstream effort to maintain and curate Kubernetes component container images for AKS and AKS Engine. Next we have to address the problem of "how to re-use existing AKS Engine code to introduce a novel mirror specification (MCR) while maintaining backwards compatibility with existing clusters running images from gcr; and without breaking any existing users who are not able to convert to MCR (or don’t want to), and must rely upon the k8s.gcr.io container registry origin, or a mirror that follows its specification".
+ Now we have addressed the problem of "how to quickly test and validate specific container images across the set of Kubernetes components in a working cluster", which is a critical requirement for the Azure Container Compute Upstream effort to maintain and curate Kubernetes component container images for AKS and AKS Engine. Next we have to address the problem of "how to re-use existing AKS Engine code to introduce a novel mirror specification (MCR) while maintaining backwards compatibility with existing clusters running images from gcr; and without breaking any existing users who are not able to convert to MCR (or don’t want to), and must rely upon the registry.k8s.io container registry origin, or a mirror that follows its specification".
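As a sketch of the kind of override the "components" interface (a sibling to addons, per the hunk header above) would unlock — the exact schema lives in the full proposal, not in this excerpt, so names here are illustrative — a user could pin a custom kube-proxy image directly:

```json
{
  "kubernetesConfig": {
    "components": [
      {
        "name": "kube-proxy",
        "containers": [
          {
            "name": "kube-proxy",
            "image": "myregistry.example.com/kube-proxy:v1.16.1-test"
          }
        ]
      }
    ]
  }
}
```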
- As stated above, the main point of friction is that the configuration vector currently available to "redirect" the base URI of the origin for sourcing Kubernetes component images assumes, in practice, a "k8s.gcr.io mirror". The MCR container registry origin that is being bootstrapped by the Azure Container Compute Upstream team right now does not match that assumption, and thus we can’t simply re-use the existing configurable space to "migrate to MCR images" (e.g., we cannot simply change the value of `KubernetesImageBase` to `"mcr.microsoft.com/oss/kubernetes/"`, because "mcr.microsoft.com/oss/kubernetes/" is not a mirror of k8s.gcr.io).
+ As stated above, the main point of friction is that the configuration vector currently available to "redirect" the base URI of the origin for sourcing Kubernetes component images assumes, in practice, a "registry.k8s.io mirror". The MCR container registry origin that is being bootstrapped by the Azure Container Compute Upstream team right now does not match that assumption, and thus we can’t simply re-use the existing configurable space to "migrate to MCR images" (e.g., we cannot simply change the value of `KubernetesImageBase` to `"mcr.microsoft.com/oss/kubernetes/"`, because "mcr.microsoft.com/oss/kubernetes/" is not a mirror of registry.k8s.io).
What we can do is add a "mirror type" (or "mirror flavor", if you prefer) configuration context to the existing `KubernetesImageBase` property, allowing us to maintain easy backwards-compatibility (by keeping that property valid), and then adapt the underlying hardcoded "image URI substring" values to be sensitive to that context.
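Sketching what that could look like (the property name and enum value below are assumptions for illustration; the concrete proposal follows):

```json
{
  "kubernetesConfig": {
    "kubernetesImageBase": "mcr.microsoft.com/oss/kubernetes/",
    "kubernetesImageBaseType": "mcr"
  }
}
```

Existing configurations that omit the new property would keep the current registry.k8s.io-style behavior, which is what makes backwards compatibility cheap.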
@@ -111,12 +111,12 @@ Concretely, we could add a new sibling (of KubernetesImageBase) configuration pr
The value of that property tells the template generation code flows to generate container image reference URI strings according to one of the known specifications supported by AKS Engine:
- The above solution would support a per-environment migration from the current, known-working k8s.gcr.io mirrors (including the origin) to the newly created MCR mirror specification (including unlocking the creation of new MCR mirrors, e.g., in China Cloud, usgov cloud, etc). This refactor phase we’ll call **Enable MCR as an Additive Kubernetes Container Image Registry Mirror**.
+ The above solution would support a per-environment migration from the current, known-working registry.k8s.io mirrors (including the origin) to the newly created MCR mirror specification (including unlocking the creation of new MCR mirrors, e.g., in China Cloud, usgov cloud, etc). This refactor phase we’ll call **Enable MCR as an Additive Kubernetes Container Image Registry Mirror**.
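For example, a China Cloud deployment could then point the same machinery at a region-local MCR mirror (the registry host shown is an assumption for illustration):

```json
{
  "kubernetesConfig": {
    "kubernetesImageBase": "mcr.azk8s.cn/oss/kubernetes/",
    "kubernetesImageBaseType": "mcr"
  }
}
```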
azureuser@k8s-master-31453872-0:~$ for control_plane_vm in $(kubectl get nodes | grep k8s-master | awk '{print $1}'); do ssh $control_plane_vm "sudo sed -i 's|v1.15.3|v1.15.6|g' /etc/kubernetes/addons/cluster-autoscaler-deployment.yaml"; done
Authorized uses only. All activity may be monitored and reported.
@@ -144,7 +144,7 @@ Authorized uses only. All activity may be monitored and reported.
Authorized uses only. All activity may be monitored and reported.
The above validated that we *weren't* using the latest `cluster-autoscaler`, and so we changed the addon spec on each control plane VM in the `/etc/kubernetes/addons/` directory so that we would load 1.15.6 instead.
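The validation output that "the above" refers to was lost from this excerpt; it presumably resembled the following (a hypothetical reconstruction mirroring the sed loop shown earlier, grepping each control plane VM for the cluster-autoscaler image tag actually loaded):

```console
azureuser@k8s-master-31453872-0:~$ for control_plane_vm in $(kubectl get nodes | grep k8s-master | awk '{print $1}'); do ssh $control_plane_vm "sudo grep 'image:' /etc/kubernetes/addons/cluster-autoscaler-deployment.yaml"; done
```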
**`docs/topics/clusterdefinitions.md`** (+1 −1)
@@ -69,7 +69,7 @@ $ aks-engine get-versions
| gcLowThreshold | no | Sets the --image-gc-low-threshold value on the kubelet configuration. Default is 80. [See kubelet Garbage Collection](https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/)|
| kubeletConfig | no | Configure various runtime configurations for kubelet. See `kubeletConfig` [below](#feat-kubelet-config)|
| kubeReservedCgroup | no | The name of a systemd slice to create for containment of both kubelet and the container runtime. When this value is a non-empty string, a file will be dropped at `/etc/systemd/system/$KUBE_RESERVED_CGROUP.slice` creating a systemd slice. Both kubelet and docker will run in this slice. This should not point to an existing systemd slice. If this value is unspecified or specified as the empty string, kubelet and the container runtime will run in the system slice by default. |
- | kubernetesImageBase | no | Specifies the default image base URL (everything preceding the actual image filename) to be used for all kubernetes-related containers such as hyperkube, cloud-controller-manager, kube-addon-manager, etc. e.g., `k8s.gcr.io/`|
+ | kubernetesImageBase | no | Specifies the default image base URL (everything preceding the actual image filename) to be used for all kubernetes-related containers such as hyperkube, cloud-controller-manager, kube-addon-manager, etc. e.g., `registry.k8s.io/`|
| loadBalancerSku | no | Sku of Load Balancer and Public IP. Candidate values are: `basic` and `standard`. If not set, it will default to "standard". NOTE: Because VMs behind standard SKU load balancer will not be able to access the internet without an outbound rule configured with at least one frontend IP, AKS Engine creates a Load Balancer with an outbound rule and with agent nodes added to the backend pool during cluster creation, as described in the [Outbound NAT for internal Standard Load Balancer scenarios doc](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-rules-overview#outbound-nat-for-internal-standard-load-balancer-scenarios)|
| loadBalancerOutboundIPs | no | Number of outbound IP addresses (e.g., 3) to use in Standard LoadBalancer configuration. If not set, AKS Engine will configure a single outbound IP address. You may want more than one outbound IP address if you are running a large cluster that is processing lots of connections. See [here](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections#multifesnat) for more documentation about how adding more outbound IP addresses can increase the number of SNAT ports available for use by the Standard Load Balancer in your cluster. Note: this value is only configurable at cluster creation time, it cannot be changed using `aks-engine upgrade`.|
| networkPlugin | no | Specifies the network plugin implementation for the cluster. Valid values are:<br>`"azure"` (default), which provides an Azure native networking experience <br>`"kubenet"` for k8s software networking implementation. <br> `"cilium"` for using the default Cilium CNI IPAM (requires the `"cilium"` networkPolicy as well)<br> `"antrea"` for using the Antrea network plugin (requires the `"antrea"` networkPolicy as well) |