
[WIP] Add Windows support to kruise-daemon #1909

Draft
wants to merge 5 commits into master

Conversation


@ppbits ppbits commented Feb 10, 2025

Ⅰ. Describe what this PR does

Update kruise-daemon to support Windows

Ⅱ. Does this pull request fix one issue?

NONE

Ⅲ. Describe how to verify it

Tested on an AKS cluster with both Linux and Windows nodes:

  • CBL-Mariner/Linux
  • Windows Server 2022 Datacenter
  • Windows Server 2019 Datacenter
PS C:\Git\kruise> kubectl get nodes -o wide
NAME                                STATUS                     ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION     CONTAINER-RUNTIME
aks-agentpool-87866486-vmss000000   Ready                      <none>   60s     v1.31.3   10.224.0.5    <none>        CBL-Mariner/Linux                5.15.173.1-1.cm2   containerd://1.6.26
aks-agentpool-87866486-vmss000001   Ready,SchedulingDisabled   <none>   13d     v1.31.3   10.224.0.4    <none>        CBL-Mariner/Linux                5.15.173.1-1.cm2   containerd://1.6.26
aks-agentpool-87866486-vmss000007   Ready                      <none>   3m16s   v1.31.3   10.224.0.8    <none>        CBL-Mariner/Linux                5.15.173.1-1.cm2   containerd://1.6.26
akswin000000                        Ready                      <none>   7d      v1.31.3   10.224.0.6    <none>        Windows Server 2022 Datacenter   10.0.20348.3091    containerd://1.7.20+azure
akswin19000000                      Ready                      <none>   7d      v1.31.3   10.224.0.7    <none>        Windows Server 2019 Datacenter   10.0.17763.6775    containerd://1.7.20+azure

PS C:\Git\kruise> kubectl get pods -n kruise-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP             NODE                                NOMINATED NODE   READINESS GATES
kruise-controller-manager-7874cf74-rn8tp   1/1     Running   0          6m18s   10.244.0.102   aks-agentpool-87866486-vmss000000   <none>           <none>
kruise-controller-manager-7874cf74-wr7qh   1/1     Running   0          6m18s   10.244.0.238   aks-agentpool-87866486-vmss000000   <none>           <none>
kruise-daemon-linux-dg77p                  1/1     Running   0          2m41s   10.224.0.4     aks-agentpool-87866486-vmss000001   <none>           <none>
kruise-daemon-linux-kjfzb                  1/1     Running   0          6m56s   10.224.0.5     aks-agentpool-87866486-vmss000000   <none>           <none>
kruise-daemon-win-9d5th                    1/1     Running   0          2d1h    10.224.0.7     akswin19000000                      <none>           <none>
kruise-daemon-win-c5zq2                    1/1     Running   0          2d1h    10.224.0.6     akswin000000                        <none>           <none>
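
The separate kruise-daemon-linux and kruise-daemon-win DaemonSets above are scheduled per OS. A minimal sketch of the Windows-side scheduling constraint, assuming the standard well-known `kubernetes.io/os` node label (this excerpt is illustrative and not taken from this PR's charts; image tag and labels are placeholders):

```yaml
# Hypothetical excerpt: pin the Windows DaemonSet to Windows nodes only.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kruise-daemon-win
  namespace: kruise-system
spec:
  selector:
    matchLabels:
      app: kruise-daemon-win
  template:
    metadata:
      labels:
        app: kruise-daemon-win
    spec:
      nodeSelector:
        kubernetes.io/os: windows        # well-known Kubernetes node label
      containers:
        - name: daemon
          image: openkruise/kruise-daemon:latest   # placeholder tag
```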

Logs from Windows kruise-daemon look good:

PS C:\Git\kruise> kubectl logs kruise-daemon-win-9d5th -n kruise-system
I0207 22:46:07.514158    8972 feature_gate.go:254] feature gates: {map[ImagePullJobGate:true]}
I0207 22:46:08.695570    8972 daemon.go:97] "Starting daemon" nodeName="akswin19000000"
I0207 22:46:09.286534    8972 cri.go:45] "Connecting to image service" endpoint="npipe://./pipe/containerd-containerd"
I0207 22:46:09.287831    8972 clientconn.go:305] "[core] [Channel #1]Channel created\n"
I0207 22:46:09.288810    8972 logging.go:39] "[core] [Channel #1]original dial target is: \"//./pipe/containerd-containerd\"\n"
I0207 22:46:09.288861    8972 logging.go:39] "[core] [Channel #1]parsed dial target is: resolver.Target{URL:url.URL{Scheme:\"\", Opaque:\"\", User:(*url.Userinfo)(nil), Host:\".\", Path:\"/pipe/containerd-containerd\", RawPath:\"\", OmitHost:false, ForceQuery:false, RawQuery:\"\", Fragment:\"\", RawFragment:\"\"}}\n"
I0207 22:46:09.288924    8972 logging.go:39] "[core] [Channel #1]fallback to scheme \"passthrough\"\n"
I0207 22:46:09.288924    8972 logging.go:39] "[core] [Channel #1]parsed dial target is: passthrough://///./pipe/containerd-containerd\n"
I0207 22:46:09.288924    8972 logging.go:39] "[core] [Channel #1]Channel authority set to \"localhost\"\n"
I0207 22:46:09.291272    8972 logging.go:39] "[core] [Channel #1]Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"//./pipe/containerd-containerd\",\n      \"ServerName\": \"\",\n      \"Attributes\": null,\n      \"BalancerAttributes\": null,\n      \"Metadata\": null\n    }\n  ],\n  \"Endpoints\": [\n    {\n      \"Addresses\": [\n        {\n          \"Addr\": \"//./pipe/containerd-containerd\",\n          \"ServerName\": \"\",\n          \"Attributes\": null,\n          \"BalancerAttributes\": null,\n          \"Metadata\": null\n        }\n      ],\n      \"Attributes\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)\n"
I0207 22:46:09.291272    8972 logging.go:39] "[core] [Channel #1]Channel switches to new LB policy \"pick_first\"\n"
I0207 22:46:09.292444    8972 pickfirst.go:123] "[core] [pick-first-lb 0xc000821d40] Received new config {\n  \"shuffleAddressList\": false\n}, resolver state {\n  \"Addresses\": [\n    {\n      \"Addr\": \"//./pipe/containerd-containerd\",\n      \"ServerName\": \"\",\n      \"Attributes\": null,\n      \"BalancerAttributes\": null,\n      \"Metadata\": null\n    }\n  ],\n  \"Endpoints\": [\n    {\n      \"Addresses\": [\n        {\n          \"Addr\": \"//./pipe/containerd-containerd\",\n          \"ServerName\": \"\",\n          \"Attributes\": null,\n          \"BalancerAttributes\": null,\n          \"Metadata\": null\n        }\n      ],\n      \"Attributes\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n}\n"
I0207 22:46:09.292521    8972 clientconn.go:846] "[core] [<nil> SubChannel #2]Subchannel created\n"
I0207 22:46:09.292521    8972 logging.go:39] "[core] [Channel #1]Channel Connectivity change to CONNECTING\n"
I0207 22:46:09.292521    8972 clientconn.go:305] "[core] [Channel #1]Channel exiting idle mode\n"
I0207 22:46:09.292521    8972 helpers.go:258] "Finding the CRI API image version"
I0207 22:46:09.292521    8972 logging.go:39] "[core] [<nil> SubChannel #2]Subchannel Connectivity change to CONNECTING\n"
I0207 22:46:09.292521    8972 logging.go:39] "[core] [<nil> SubChannel #2]Subchannel picks a new address \"//./pipe/containerd-containerd\" to connect\n"
I0207 22:46:09.293057    8972 pickfirst.go:166] "[core] [pick-first-lb 0xc000821d40] Received SubConn state update: 0xc000821dd0, {ConnectivityState:CONNECTING ConnectionError:<nil>}\n"
I0207 22:46:09.294921    8972 logging.go:39] "[core] [<nil> SubChannel #2]Subchannel Connectivity change to READY\n"
I0207 22:46:09.294921    8972 pickfirst.go:166] "[core] [pick-first-lb 0xc000821d40] Received SubConn state update: 0xc000821dd0, {ConnectivityState:READY ConnectionError:<nil>}\n"
I0207 22:46:09.294921    8972 logging.go:39] "[core] [Channel #1]Channel Connectivity change to READY\n"
I0207 22:46:09.303478    8972 helpers.go:264] "Using CRI v1 image API"
I0207 22:46:09.306263    8972 remote_runtime.go:80] "Connecting to runtime service" endpoint="npipe://./pipe/containerd-containerd"
I0207 22:46:09.306825    8972 clientconn.go:305] "[core] [Channel #4]Channel created\n"
I0207 22:46:09.306825    8972 logging.go:39] "[core] [Channel #4]original dial target is: \"//./pipe/containerd-containerd\"\n"
I0207 22:46:09.306825    8972 logging.go:39] "[core] [Channel #4]parsed dial target is: resolver.Target{URL:url.URL{Scheme:\"\", Opaque:\"\", User:(*url.Userinfo)(nil), Host:\".\", Path:\"/pipe/containerd-containerd\", RawPath:\"\", OmitHost:false, ForceQuery:false, RawQuery:\"\", Fragment:\"\", RawFragment:\"\"}}\n"
I0207 22:46:09.306825    8972 logging.go:39] "[core] [Channel #4]fallback to scheme \"passthrough\"\n"
I0207 22:46:09.306825    8972 logging.go:39] "[core] [Channel #4]parsed dial target is: passthrough://///./pipe/containerd-containerd\n"
I0207 22:46:09.306825    8972 logging.go:39] "[core] [Channel #4]Channel authority set to \"%2F%2F.%2Fpipe%2Fcontainerd-containerd\"\n"
I0207 22:46:09.307350    8972 logging.go:39] "[core] [Channel #4]Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"//./pipe/containerd-containerd\",\n      \"ServerName\": \"\",\n      \"Attributes\": null,\n      \"BalancerAttributes\": null,\n      \"Metadata\": null\n    }\n  ],\n  \"Endpoints\": [\n    {\n      \"Addresses\": [\n        {\n          \"Addr\": \"//./pipe/containerd-containerd\",\n          \"ServerName\": \"\",\n          \"Attributes\": null,\n          \"BalancerAttributes\": null,\n          \"Metadata\": null\n        }\n      ],\n      \"Attributes\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)\n"
I0207 22:46:09.307487    8972 logging.go:39] "[core] [Channel #4]Channel switches to new LB policy \"pick_first\"\n"
I0207 22:46:09.307521    8972 pickfirst.go:123] "[core] [pick-first-lb 0xc00054e690] Received new config {\n  \"shuffleAddressList\": false\n}, resolver state {\n  \"Addresses\": [\n    {\n      \"Addr\": \"//./pipe/containerd-containerd\",\n      \"ServerName\": \"\",\n      \"Attributes\": null,\n      \"BalancerAttributes\": null,\n      \"Metadata\": null\n    }\n  ],\n  \"Endpoints\": [\n    {\n      \"Addresses\": [\n        {\n          \"Addr\": \"//./pipe/containerd-containerd\",\n          \"ServerName\": \"\",\n          \"Attributes\": null,\n          \"BalancerAttributes\": null,\n          \"Metadata\": null\n        }\n      ],\n      \"Attributes\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n}\n"
I0207 22:46:09.307521    8972 clientconn.go:846] "[core] [<nil> SubChannel #5]Subchannel created\n"
I0207 22:46:09.307521    8972 logging.go:39] "[core] [Channel #4]Channel Connectivity change to CONNECTING\n"
I0207 22:46:09.307521    8972 clientconn.go:305] "[core] [Channel #4]Channel exiting idle mode\n"
I0207 22:46:09.307521    8972 remote_runtime.go:136] "Validating the CRI v1 API runtime version"
I0207 22:46:09.307521    8972 logging.go:39] "[core] [<nil> SubChannel #5]Subchannel Connectivity change to CONNECTING\n"
I0207 22:46:09.307521    8972 pickfirst.go:166] "[core] [pick-first-lb 0xc00054e690] Received SubConn state update: 0xc00054e6f0, {ConnectivityState:CONNECTING ConnectionError:<nil>}\n"
I0207 22:46:09.307521    8972 logging.go:39] "[core] [<nil> SubChannel #5]Subchannel picks a new address \"//./pipe/containerd-containerd\" to connect\n"
I0207 22:46:09.308172    8972 logging.go:39] "[core] [<nil> SubChannel #5]Subchannel Connectivity change to READY\n"
I0207 22:46:09.308235    8972 pickfirst.go:166] "[core] [pick-first-lb 0xc00054e690] Received SubConn state update: 0xc00054e6f0, {ConnectivityState:READY ConnectionError:<nil>}\n"
I0207 22:46:09.308287    8972 logging.go:39] "[core] [Channel #4]Channel Connectivity change to READY\n"
I0207 22:46:09.309221    8972 remote_runtime.go:143] "Validated CRI v1 runtime API"
I0207 22:46:09.309831    8972 factory.go:113] "Add runtime" runtimeName="containerd" runtimeURI="" runtimeRemoteURI="npipe://./pipe/containerd-containerd"
I0207 22:46:09.316845    8972 main.go:85] "No plugin config file found, skipping" configFile="/kruise/CredentialProviderPlugin.yaml"
I0207 22:46:09.317491    8972 parse.go:31] make and set new docker keyring
I0207 22:46:09.317491    8972 plugins.go:73] Registering credential provider: .dockercfg
I0207 22:46:09.874252    8972 reflector.go:296] Starting reflector *v1.Pod (0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232
I0207 22:46:09.874349    8972 reflector.go:332] Listing and watching *v1.Pod from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232
I0207 22:46:09.911272    8972 reflector.go:359] Caches populated for *v1.Pod from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232
I0207 22:46:09.930810    8972 pod_probe_controller.go:196] Starting informer for NodePodProbe
I0207 22:46:09.930810    8972 container_meta_controller.go:202] Starting containermeta Controller
I0207 22:46:09.931484    8972 reflector.go:296] Starting reflector *v1alpha1.NodePodProbe (0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232
I0207 22:46:09.931484    8972 reflector.go:332] Listing and watching *v1alpha1.NodePodProbe from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232
I0207 22:46:09.931484    8972 imagepuller_controller.go:153] Starting informer for NodeImage
I0207 22:46:09.931521    8972 reflector.go:296] Starting reflector *v1alpha1.NodeImage (0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232
I0207 22:46:09.931521    8972 reflector.go:332] Listing and watching *v1alpha1.NodeImage from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232
I0207 22:46:09.930810    8972 crr_daemon_controller.go:168] Starting informer for ContainerRecreateRequest
I0207 22:46:09.931521    8972 container_meta_controller.go:211] Started containermeta Controller successfully
I0207 22:46:09.931521    8972 reflector.go:296] Starting reflector *v1alpha1.ContainerRecreateRequest (0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232
I0207 22:46:09.931521    8972 reflector.go:332] Listing and watching *v1alpha1.ContainerRecreateRequest from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232
I0207 22:46:09.948700    8972 reflector.go:359] Caches populated for *v1alpha1.NodePodProbe from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232
I0207 22:46:09.948823    8972 reflector.go:359] Caches populated for *v1alpha1.NodeImage from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232
I0207 22:46:09.949364    8972 reflector.go:359] Caches populated for *v1alpha1.ContainerRecreateRequest from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232
I0207 22:46:10.460160    8972 crr_daemon_controller.go:174] Starting crr daemon controller
I0207 22:46:10.460160    8972 crr_daemon_controller.go:182] Started crr daemon controller successfully
I0207 22:46:10.460160    8972 pod_probe_controller.go:202] Starting NodePodProbe controller
I0207 22:46:10.460160    8972 pod_probe_controller.go:214] Started NodePodProbe controller successfully
I0207 22:46:10.460160    8972 imagepuller_controller.go:159] Starting puller controller
I0207 22:46:10.460160    8972 imagepuller_controller.go:166] Started puller controller successfully
I0207 22:46:10.460160    8972 imagepuller_controller.go:209] "Start syncing" name="akswin19000000"
I0207 22:46:10.469992    8972 imagepuller_worker.go:78] "sync puller" spec="{\"kind\":\"NodeImage\",\"apiVersion\":\"apps.kruise.io/v1alpha1\",\"metadata\":{\"name\":\"akswin19000000\",\"uid\":\"998606ba-a971-4fed-b625-b9a7a0f78637\",\"resourceVersion\":\"31017335\",\"generation\":1,\"creationTimestamp\":\"2025-02-03T00:29:27Z\",\"labels\":{\"agentpool\":\"win19\",\"beta.kubernetes.io/arch\":\"amd64\",\"beta.kubernetes.io/os\":\"windows\",\"kubernetes.azure.com/agentpool\":\"win19\",\"kubernetes.azure.com/azure-cni-overlay\":\"true\",\"kubernetes.azure.com/cluster\":\"MC_pepeng-aks-1_pepeng-aks-1_centralus\",\"kubernetes.azure.com/consolidated-additional-properties\":\"2a233691-e1c5-11ef-9664-1a9800288062\",\"kubernetes.azure.com/kubelet-identity-client-id\":\"ceff8493-3adf-48dd-b8f6-a771cc585f8d\",\"kubernetes.azure.com/mode\":\"user\",\"kubernetes.azure.com/network-name\":\"aks-vnet-32584312\",\"kubernetes.azure.com/network-policy\":\"none\",\"kubernetes.azure.com/network-resourcegroup\":\"pepeng-aks-1\",\"kubernetes.azure.com/network-subnet\":\"aks-subnet\",\"kubernetes.azure.com/network-subscription\":\"39675fbf-5b47-472e-9bb9-5570c6edbd4f\",\"kubernetes.azure.com/node-image-version\":\"AKSWindows-2019-containerd-17763.6775.250117\",\"kubernetes.azure.com/nodenetwork-vnetguid\":\"a77c0001-505b-425f-8612-28afe0e102fc\",\"kubernetes.azure.com/nodepool-type\":\"VirtualMachineScaleSets\",\"kubernetes.azure.com/os-sku\":\"Windows2019\",\"kubernetes.azure.com/podnetwork-type\":\"overlay\",\"kubernetes.azure.com/role\":\"agent\",\"kubernetes.io/arch\":\"amd64\",\"kubernetes.io/hostname\":\"akswin19000000\",\"kubernetes.io/os\":\"windows\",\"node.kubernetes.io/instance-type\":\"Standard_D8s_v3\",\"node.kubernetes.io/windows-build\":\"10.0.17763\",\"topology.disk.csi.azure.com/zone\":\"\",\"topology.kubernetes.io/region\":\"centralus\",\"topology.kubernetes.io/zone\":\"0\"},\"managedFields\":[{\"manager\":\"kruise-manager\",\"operation\":\"Update\",\"apiVersion\":\"apps.
kruise.io/v1alpha1\",\"time\":\"2025-02-03T00:29:43Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:labels\":{\".\":{},\"f:agentpool\":{},\"f:beta.kubernetes.io/arch\":{},\"f:beta.kubernetes.io/os\":{},\"f:kubernetes.azure.com/agentpool\":{},\"f:kubernetes.azure.com/azure-cni-overlay\":{},\"f:kubernetes.azure.com/cluster\":{},\"f:kubernetes.azure.com/consolidated-additional-properties\":{},\"f:kubernetes.azure.com/kubelet-identity-client-id\":{},\"f:kubernetes.azure.com/mode\":{},\"f:kubernetes.azure.com/network-name\":{},\"f:kubernetes.azure.com/network-policy\":{},\"f:kubernetes.azure.com/network-resourcegroup\":{},\"f:kubernetes.azure.com/network-subnet\":{},\"f:kubernetes.azure.com/network-subscription\":{},\"f:kubernetes.azure.com/node-image-version\":{},\"f:kubernetes.azure.com/nodenetwork-vnetguid\":{},\"f:kubernetes.azure.com/nodepool-type\":{},\"f:kubernetes.azure.com/os-sku\":{},\"f:kubernetes.azure.com/podnetwork-type\":{},\"f:kubernetes.azure.com/role\":{},\"f:kubernetes.io/arch\":{},\"f:kubernetes.io/hostname\":{},\"f:kubernetes.io/os\":{},\"f:node.kubernetes.io/instance-type\":{},\"f:node.kubernetes.io/windows-build\":{},\"f:topology.disk.csi.azure.com/zone\":{},\"f:topology.kubernetes.io/region\":{},\"f:topology.kubernetes.io/zone\":{}}},\"f:spec\":{}}},{\"manager\":\"kruise-daemon\",\"operation\":\"Update\",\"apiVersion\":\"apps.kruise.io/v1alpha1\",\"time\":\"2025-02-07T00:01:53Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:status\":{\".\":{},\"f:desired\":{},\"f:failed\":{},\"f:pulling\":{},\"f:succeeded\":{}}},\"subresource\":\"status\"}]},\"spec\":{},\"status\":{\"desired\":0,\"succeeded\":0,\"failed\":0,\"pulling\":0}}"
I0207 22:46:10.469992    8972 utils.go:116] "Updating status" status="{\"desired\":0,\"succeeded\":0,\"failed\":0,\"pulling\":0}"
I0207 22:46:10.489270    8972 pod_probe_controller.go:366] "NodePodProbe(%s) update status success" nodeName="akswin19000000" from="{}" to="{}"
I0207 22:46:10.491657    8972 imagepuller_controller.go:214] "Finished syncing" name="akswin19000000"
I0207 22:54:29.952755    8972 reflector.go:808] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232: Watch close - *v1alpha1.ContainerRecreateRequest total 10 items received
I0207 22:54:33.951486    8972 reflector.go:808] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232: Watch close - *v1alpha1.NodePodProbe total 8 items received
I0207 22:54:36.915054    8972 reflector.go:808] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232: Watch close - *v1.Pod total 14 items received
I0207 22:55:48.952404    8972 reflector.go:808] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232: Watch close - *v1alpha1.NodeImage total 5 items received
I0207 23:01:29.942127    8972 reflector.go:808] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232: Watch close - *v1alpha1.NodePodProbe total 5 items received
I0207 23:02:25.918376    8972 reflector.go:808] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232: Watch close - *v1.Pod total 15 items received
I0207 23:03:07.944845    8972 reflector.go:808] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232: Watch close - *v1alpha1.ContainerRecreateRequest total 8 items received
I0207 23:05:30.952611    8972 reflector.go:808] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232: Watch close - *v1alpha1.NodeImage total 4 items received
I0207 23:09:45.916013    8972 reflector.go:808] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232: Watch close - *v1.Pod total 15 items received
I0207 23:10:45.954331    8972 reflector.go:808] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232: Watch close - *v1alpha1.NodePodProbe total 3 items received
I0207 23:11:29.967417    8972 reflector.go:808] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232: Watch close - *v1alpha1.ContainerRecreateRequest total 6 items received
I0207 23:12:24.947925    8972 reflector.go:808] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232: Watch close - *v1alpha1.NodeImage total 1 items received
I0207 23:12:26.479159    8972 imagepuller_controller.go:209] "Start syncing" name="akswin19000000"
...
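
The log lines above show the daemon dialing containerd over a Windows named pipe (`npipe://./pipe/containerd-containerd`) rather than a Unix socket. A minimal sketch of per-OS endpoint selection, using the npipe address from the logs and a common containerd socket path on Linux (the helper name `defaultCRIEndpoint` is illustrative, not from this PR):

```go
package main

import (
	"fmt"
	"runtime"
)

// defaultCRIEndpoint picks a containerd CRI endpoint for the current OS.
// The Windows named-pipe address matches the one in the logs above; the
// Linux socket path is a common containerd default (an assumption here).
func defaultCRIEndpoint() string {
	if runtime.GOOS == "windows" {
		return "npipe://./pipe/containerd-containerd"
	}
	return "unix:///run/containerd/containerd.sock"
}

func main() {
	fmt.Println(defaultCRIEndpoint())
}
```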

We also tested with a ContainerRecreateRequest CR targeting a Windows container, and the container was successfully recreated by the Windows kruise-daemon.
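
For reference, a minimal ContainerRecreateRequest of the kind used in that test might look like the following (the pod and container names are hypothetical; only the basic required fields of the apps.kruise.io/v1alpha1 API are shown):

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: ContainerRecreateRequest
metadata:
  name: recreate-win-container      # hypothetical name
  namespace: default
spec:
  podName: sample-windows-pod       # hypothetical pod on a Windows node
  containers:
    - name: app                     # hypothetical container name
```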

Ⅳ. Special notes for reviewers


codecov bot commented Feb 10, 2025

Codecov Report

Attention: Patch coverage is 0% with 63 lines in your changes missing coverage. Please review.

Project coverage is 37.35%. Comparing base (0d0031a) to head (5e45dd7).
Report is 152 commits behind head on master.

Files with missing lines                      Patch %   Lines
pkg/daemon/criruntime/factory_unix.go         0.00%     56 Missing ⚠️
pkg/daemon/criruntime/factory.go              0.00%     3 Missing ⚠️
pkg/daemon/criruntime/imageruntime/cri.go     0.00%     3 Missing ⚠️
pkg/daemon/daemon.go                          0.00%     1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           master    #1909       +/-   ##
===========================================
- Coverage   47.91%   37.35%   -10.57%     
===========================================
  Files         162      408      +246     
  Lines       23491    36149    +12658     
===========================================
+ Hits        11256    13503     +2247     
- Misses      11014    21256    +10242     
- Partials     1221     1390      +169     
Flag        Coverage Δ
unittests   37.35% <0.00%> (-10.57%) ⬇️


@ppbits ppbits marked this pull request as draft February 10, 2025 01:07
@ppbits ppbits changed the title Add Windows support to kruise-daemon [WIP] Add Windows support to kruise-daemon Feb 11, 2025