This repository was archived by the owner on Oct 24, 2023. It is now read-only.

Commit fb9d128: Use registry.k8s.io for components (#5071)

1 parent 4abc935

34 files changed (+106 -106 lines)

docs/design/custom-container-images.md

Lines changed: 9 additions & 9 deletions
@@ -5,7 +5,7 @@
 The existing AKS Engine Kubernetes component container image configuration surface area presents obstacles in the way of:
 
 1. quickly testing/validating specific container images across the set of Kubernetes components in a working cluster; and
-2. using Azure Container Compute Upstream-curated MCR container images instead of Kubernetes SIG-Release-curated k8s.gcr.io container images.
+2. using Azure Container Compute Upstream-curated MCR container images instead of Kubernetes SIG-Release-curated registry.k8s.io container images.
 
 ## Proximate Problem Statements
 

@@ -14,15 +14,15 @@ The existing AKS Engine Kubernetes component container image configuration surfa
   - https://github.com/Azure/aks-engine/issues/2378
 2. At present, the "blessed" component configuration image URIs are maintained via a concatenation of two properties:
   - A "base URI" property (`KubernetesImageBase` is the property that has the widest impact across the set of component images)
-    - e.g., `"k8s.gcr.io/"`
+    - e.g., `"registry.k8s.io/"`
   - A hardcoded string that represents the right-most concatenation substring of the fully qualified image reference URI
     - e.g., `"kube-proxy:v1.16.1"`
 
-In summary, in order to render `"k8s.gcr.io/kube-proxy:v1.16.1"` as the desired container image reference to derive the kube-proxy runtime, we set the KubernetesImageBase property to `"k8s.gcr.io/"`, and rely upon AKS Engine to append `"kube-proxy:v1.16.1"` by way of its hardcoded authority in the codebase for the particular version of Kubernetes in the cluster configuration (1.16.1 in this example).
+In summary, in order to render `"registry.k8s.io/kube-proxy:v1.16.1"` as the desired container image reference to derive the kube-proxy runtime, we set the KubernetesImageBase property to `"registry.k8s.io/"`, and rely upon AKS Engine to append `"kube-proxy:v1.16.1"` by way of its hardcoded authority in the codebase for the particular version of Kubernetes in the cluster configuration (1.16.1 in this example).
 
 In practice, this means that the `KubernetesImageBase` property is effectively a "Kubernetes component image registry mirror base URI" property, and in fact this is exactly how that property is leveraged, to redirect container image references to proximate origin URIs when building clusters in non-public cloud environments (e.g., China Cloud, Azure Stack).
 
-To conclude with a concrete problem statement, it is this: the current accommodations that AKS Engine provides for redirecting Kubernetes component container images to another origin assume a k8s.gcr.io container registry mirror. This presents a problem with respect to migrating container image configuration to an entirely different container registry URI reference specification, which is what the MCR container image migration effort effectively does.
+To conclude with a concrete problem statement, it is this: the current accommodations that AKS Engine provides for redirecting Kubernetes component container images to another origin assume a registry.k8s.io container registry mirror. This presents a problem with respect to migrating container image configuration to an entirely different container registry URI reference specification, which is what the MCR container image migration effort effectively does.
 
 # A Proposed Solution
 
@@ -98,9 +98,9 @@ In summary, we will introduce a new "components" configuration interface (a sibl
 
 ~
 
-Now we have addressed the problem of "how to quickly test and validate specific container images across the set of Kubernetes components in a working cluster", which is a critical requirement for the Azure Container Compute Upstream effort to maintain and curate Kubernetes component container images for AKS and AKS Engine. Next we have to address the problem of "how to re-use existing AKS Engine code to introduce a novel mirror specification (MCR) while maintaining backwards compatibility with existing clusters running images from gcr; and without breaking any existing users who are not able to convert to MCR (or don’t want to), and must rely upon the k8s.gcr.io container registry origin, or a mirror that follows its specification".
+Now we have addressed the problem of "how to quickly test and validate specific container images across the set of Kubernetes components in a working cluster", which is a critical requirement for the Azure Container Compute Upstream effort to maintain and curate Kubernetes component container images for AKS and AKS Engine. Next we have to address the problem of "how to re-use existing AKS Engine code to introduce a novel mirror specification (MCR) while maintaining backwards compatibility with existing clusters running images from gcr; and without breaking any existing users who are not able to convert to MCR (or don’t want to), and must rely upon the registry.k8s.io container registry origin, or a mirror that follows its specification".
 
-As stated above, the main point of friction is that the configuration vector currently available to "redirect" the base URI of the origin for sourcing Kubernetes component images assumes, in practice, a "k8s.gcr.io mirror". The MCR container registry origin that is being bootstrapped by the Azure Container Compute Upstream team right now does not match that assumption, and thus we can’t simply re-use the existing configurable space to "migrate to MCR images" (e.g., we cannot simply change the value of `KubernetesImageBase` to `"mcr.microsoft.com/oss/kubernetes/"`, because "mcr.microsoft.com/oss/kubernetes/" is not a mirror of k8s.gcr.io).
+As stated above, the main point of friction is that the configuration vector currently available to "redirect" the base URI of the origin for sourcing Kubernetes component images assumes, in practice, a "registry.k8s.io mirror". The MCR container registry origin that is being bootstrapped by the Azure Container Compute Upstream team right now does not match that assumption, and thus we can’t simply re-use the existing configurable space to "migrate to MCR images" (e.g., we cannot simply change the value of `KubernetesImageBase` to `"mcr.microsoft.com/oss/kubernetes/"`, because "mcr.microsoft.com/oss/kubernetes/" is not a mirror of registry.k8s.io).
 
 What we can do is add a "mirror type" (or "mirror flavor", if you prefer) configuration context to the existing `KubernetesImageBase` property, allowing us to maintain easy backwards-compatibility (by keeping that property valid), and then adapt the underlying hardcoded "image URI substring" values to be sensitive to that context.
 
@@ -111,12 +111,12 @@ Concretely, we could add a new sibling (of KubernetesImageBase) configuration pr
 
 The value of that property tells the template generation code flows to generate container image reference URI strings according to one of the known specifications supported by AKS Engine:
 
-- k8s.gcr.io
-  - e.g., `"k8s.gcr.io/kube-addon-manager-amd64:v9.0.2"`
+- registry.k8s.io
+  - e.g., `"registry.k8s.io/kube-addon-manager-amd64:v9.0.2"`
 - mcr.microsoft.com/oss/kubernetes
   - e.g., `"mcr.microsoft.com/oss/kubernetes/kube-addon-manager:v9.0.2"`
 
-The above solution would support a per-environment migration from the current, known-working k8s.gcr.io mirrors (including the origin) to the newly created MCR mirror specification (including unlocking the creation of new MCR mirrors, e.g., in China Cloud, usgov cloud, etc). This refactor phase we’ll call **Enable MCR as an Additive Kubernetes Container Image Registry Mirror**.
+The above solution would support a per-environment migration from the current, known-working registry.k8s.io mirrors (including the origin) to the newly created MCR mirror specification (including unlocking the creation of new MCR mirrors, e.g., in China Cloud, usgov cloud, etc). This refactor phase we’ll call **Enable MCR as an Additive Kubernetes Container Image Registry Mirror**.
 
 # A Proposed Implementation
 
docs/topics/azure-api-throttling.md

Lines changed: 4 additions & 4 deletions
@@ -110,7 +110,7 @@ So, assuming we've waited 30 minutes or so, let's update the controller-manager
 
 ```
 azureuser@k8s-master-31453872-0:~$ grep 1.15.7 /opt/azure/kube-controller-manager.yaml
-    image: k8s.gcr.io/hyperkube-amd64:v1.15.7
+    image: registry.k8s.io/hyperkube-amd64:v1.15.7
 ```
 
 Let's update the spec on all control plane VMs:
@@ -124,7 +124,7 @@ Authorized uses only. All activity may be monitored and reported.
 
 Authorized uses only. All activity may be monitored and reported.
 azureuser@k8s-master-31453872-0:~$ grep 1.15.12 /opt/azure/kube-controller-manager.yaml
-    image: k8s.gcr.io/hyperkube-amd64:v1.15.12
+    image: registry.k8s.io/hyperkube-amd64:v1.15.12
 ```
 
 (Again, if you're using `cloud-controller-manager`, substitute the correct `cloud-controller-manager.yaml` file name.)
@@ -135,7 +135,7 @@ Now, if we're running the `cluster-autoscaler` addon on this cluster let's make
 
 ```
 azureuser@k8s-master-31453872-0:~$ grep 'cluster-autoscaler:v' /etc/kubernetes/addons/cluster-autoscaler-deployment.yaml
-        - image: k8s.gcr.io/cluster-autoscaler:v1.15.3
+        - image: registry.k8s.io/cluster-autoscaler:v1.15.3
 azureuser@k8s-master-31453872-0:~$ for control_plane_vm in $(kubectl get nodes | grep k8s-master | awk '{print $1}'); do ssh $control_plane_vm "sudo sed -i 's|v1.15.3|v1.15.6|g' /etc/kubernetes/addons/cluster-autoscaler-deployment.yaml"; done
 
 Authorized uses only. All activity may be monitored and reported.
@@ -144,7 +144,7 @@ Authorized uses only. All activity may be monitored and reported.
 
 Authorized uses only. All activity may be monitored and reported.
 azureuser@k8s-master-31453872-0:~$ grep 'cluster-autoscaler:v' /etc/kubernetes/addons/cluster-autoscaler-deployment.yaml
-        - image: k8s.gcr.io/cluster-autoscaler:v1.15.6
+        - image: registry.k8s.io/cluster-autoscaler:v1.15.6
 ```
 
 The above validated that we *weren't* using the latest `cluster-autoscaler`, and so we changed the addon spec on each control plane VM in the `/etc/kubernetes/addons/` directory so that we would load 1.15.6 instead.
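Before fanning a `sed` rewrite out over ssh as the walkthrough above does, it can be worth dry-running the expression against a local copy of the addon spec. A small sketch (file contents and versions taken from the session above; GNU `sed` assumed, as on the Ubuntu control plane VMs):

```shell
# Dry-run the tag bump against a throwaway copy of the addon spec, so the
# sed expression can be verified before it touches every control plane VM.
spec=$(mktemp)
printf '        - image: registry.k8s.io/cluster-autoscaler:v1.15.3\n' > "$spec"
sed -i 's|v1.15.3|v1.15.6|g' "$spec"
# Confirm the rewrite took effect before running the real ssh loop.
grep 'cluster-autoscaler:v' "$spec"
```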

docs/topics/calico-3.3.1-cleanup-after-upgrade.yaml

Lines changed: 1 addition & 1 deletion
@@ -337,7 +337,7 @@ spec:
       supplementalGroups: [ 65534 ]
       fsGroup: 65534
       containers:
-        - image: k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.1.2-r2
+        - image: registry.k8s.io/cluster-proportional-autoscaler-amd64:1.1.2-r2
           name: autoscaler
           command:
             - /cluster-proportional-autoscaler

docs/topics/clusterdefinitions.md

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@ $ aks-engine get-versions
 | gcLowThreshold | no | Sets the --image-gc-low-threshold value on the kubelet configuration. Default is 80. [See kubelet Garbage Collection](https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/) |
 | kubeletConfig | no | Configure various runtime configuration for kubelet. See `kubeletConfig` [below](#feat-kubelet-config) |
 | kubeReservedCgroup | no | The name of a systemd slice to create for containment of both kubelet and the container runtime. When this value is a non-empty string, a file will be dropped at `/etc/systemd/system/$KUBE_RESERVED_CGROUP.slice` creating a systemd slice. Both kubelet and docker will run in this slice. This should not point to an existing systemd slice. If this value is unspecified or specified as the empty string, kubelet and the container runtime will run in the system slice by default. |
-| kubernetesImageBase | no | Specifies the default image base URL (everything preceding the actual image filename) to be used for all kubernetes-related containers such as hyperkube, cloud-controller-manager, kube-addon-manager, etc. e.g., `k8s.gcr.io/` |
+| kubernetesImageBase | no | Specifies the default image base URL (everything preceding the actual image filename) to be used for all kubernetes-related containers such as hyperkube, cloud-controller-manager, kube-addon-manager, etc. e.g., `registry.k8s.io/` |
 | loadBalancerSku | no | Sku of Load Balancer and Public IP. Candidate values are: `basic` and `standard`. If not set, it will default to "standard". NOTE: Because VMs behind standard SKU load balancer will not be able to access the internet without an outbound rule configured with at least one frontend IP, AKS Engine creates a Load Balancer with an outbound rule and with agent nodes added to the backend pool during cluster creation, as described in the [Outbound NAT for internal Standard Load Balancer scenarios doc](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-rules-overview#outbound-nat-for-internal-standard-load-balancer-scenarios) |
 | loadBalancerOutboundIPs | no | Number of outbound IP addresses (e.g., 3) to use in Standard LoadBalancer configuration. If not set, AKS Engine will configure a single outbound IP address. You may want more than one outbound IP address if you are running a large cluster that is processing lots of connections. See [here](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections#multifesnat) for more documentation about how adding more outbound IP addresses can increase the number of SNAT ports available for use by the Standard Load Balancer in your cluster. Note: this value is only configurable at cluster creation time, it can not be changed using `aks-engine upgrade`.|
 | networkPlugin | no | Specifies the network plugin implementation for the cluster. Valid values are:<br>`"azure"` (default), which provides an Azure native networking experience <br>`"kubenet"` for k8s software networking implementation. <br> `"cilium"` for using the default Cilium CNI IPAM (requires the `"cilium"` networkPolicy as well)<br> `"antrea"` for using the Antrea network plugin (requires the `"antrea"` networkPolicy as well) |
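For illustration, the `kubernetesImageBase` setting sits in the `kubernetesConfig` block of an AKS Engine api model; a minimal sketch with all other fields omitted:

```json
{
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "kubernetesImageBase": "registry.k8s.io/"
      }
    }
  }
}
```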

examples/addons/node-problem-detector/README.md

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@ To test node-problem-detector in a running cluster, you can inject messages into
 | Name | Required | Description | Default Value |
 | -------------- | -------- | --------------------------------- | ----------------------------------------- |
 | name | no | container name | "node-problem-detector" |
-| image | no | image | "k8s.gcr.io/node-problem-detector:v0.8.1" |
+| image | no | image | "registry.k8s.io/node-problem-detector:v0.8.1" |
 | cpuRequests | no | cpu requests for the container | "20m" |
 | memoryRequests | no | memory requests for the container | "20Mi" |
 | cpuLimits | no | cpu limits for the container | "200m" |

extensions/prometheus-grafana-k8s/v1/prometheus_values.yaml

Lines changed: 1 addition & 1 deletion
@@ -194,7 +194,7 @@ kubeStateMetrics:
   ## kube-state-metrics container image
   ##
   image:
-    repository: k8s.gcr.io/kube-state-metrics
+    repository: registry.k8s.io/kube-state-metrics
     tag: v1.2.0
     pullPolicy: IfNotPresent
 

pkg/api/azenvtypes.go

Lines changed: 2 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -14,7 +14,7 @@ type AzureEnvironmentSpecConfig struct {
 // KubernetesSpecConfig is the kubernetes container images used.
 type KubernetesSpecConfig struct {
 	AzureTelemetryPID string `json:"azureTelemetryPID,omitempty"`
-	// KubernetesImageBase defines a base image URL substring to source images that originate from upstream k8s.gcr.io
+	// KubernetesImageBase defines a base image URL substring to source images that originate from upstream registry.k8s.io
 	KubernetesImageBase   string `json:"kubernetesImageBase,omitempty"`
 	TillerImageBase       string `json:"tillerImageBase,omitempty"`
 	ACIConnectorImageBase string `json:"aciConnectorImageBase,omitempty"` // Deprecated
@@ -66,7 +66,7 @@ const (
 var (
 	// DefaultKubernetesSpecConfig is the default Docker image source of Kubernetes
 	DefaultKubernetesSpecConfig = KubernetesSpecConfig{
-		KubernetesImageBase: "k8s.gcr.io/",
+		KubernetesImageBase: "registry.k8s.io/",
 		TillerImageBase:     "mcr.microsoft.com/",
 		NVIDIAImageBase:     "mcr.microsoft.com/",
 		CalicoImageBase:     "mcr.microsoft.com/oss/calico/",

pkg/api/defaults_test.go

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -1198,7 +1198,7 @@ func TestKubernetesImageBase(t *testing.T) {
11981198
mockCS.Location = "westus2"
11991199
cloudSpecConfig = mockCS.GetCloudSpecConfig()
12001200
properties = mockCS.Properties
1201-
properties.OrchestratorProfile.KubernetesConfig.KubernetesImageBase = "k8s.gcr.io/"
1201+
properties.OrchestratorProfile.KubernetesConfig.KubernetesImageBase = "registry.k8s.io/"
12021202
properties.OrchestratorProfile.KubernetesConfig.KubernetesImageBaseType = ""
12031203
mockCS.setOrchestratorDefaults(true, false)
12041204
if properties.OrchestratorProfile.KubernetesConfig.KubernetesImageBase != cloudSpecConfig.KubernetesSpecConfig.MCRKubernetesImageBase {
