
watch of *v1alpha1.BundleDeployment ended with: an error #3343

Open

afahmy11 opened this issue Feb 13, 2025 · 2 comments

@afahmy11

The two log entries below keep appearing in my rancher/fleet-agent:v0.10.1 container.

I am not sure why they happen or how to fix them.

{"level":"info","ts":"2025-02-13T11:05:23Z","logger":"bundledeployment.DeployBundle","msg":"Deployed bundle","controller":"bundledeployment","controllerGroup":"fleet.cattle.io","controllerKind":"BundleDeployment","BundleDeployment":{"name":"fleet-agent-apps-eks-dev","namespace":"cluster-fleet-default-apps-eks-dev-2c944470dd33"},"namespace":"cluster-fleet-default-apps-eks-dev-2c944470dd33","name":"fleet-agent-apps-eks-dev","reconcileID":"2ef1f393-bb88-4cf6-8c49-3329d744c79f","deploymentID":"s-065a84ed35f19515dcdc494b3a6eddd16b313e4b5c615459e1caa5029d876:8eaf3c183fe289136e7a2da3ceafa5c435e8f098e386324237baeb8811cf8b21","appliedDeploymentID":"s-065a84ed35f19515dcdc494b3a6eddd16b313e4b5c615459e1caa5029d876:8eaf3c183fe289136e7a2da3ceafa5c435e8f098e386324237baeb8811cf8b21","release":"cattle-fleet-system/fleet-agent-apps-eks-dev:7","appliedDeploymentID":"s-065a84ed35f19515dcdc494b3a6eddd16b313e4b5c615459e1caa5029d876:8eaf3c183fe289136e7a2da3ceafa5c435e8f098e386324237baeb8811cf8b21"}

W0213 11:06:23.440904 1 reflector.go:470] pkg/mod/k8s.io/client-go@…/tools/cache/reflector.go:232: watch of *v1alpha1.BundleDeployment ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 79; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
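
For reference, this warning comes from client-go's reflector, which re-establishes the watch whenever it ends, so each occurrence is a recovery rather than a crash. Below is a minimal sketch (not the fleet-agent code itself; the bundledeployments GVR in fleet.cattle.io/v1alpha1 is inferred from the type name in the warning) of watching the same resource with client-go's dynamic client:

```go
// Minimal sketch, not the fleet-agent code itself: watch the resource
// named in the warning with client-go's dynamic client and restart the
// watch whenever it ends, which is roughly what the reflector does.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// GVR inferred from the type name in the warning (*v1alpha1.BundleDeployment).
	gvr := schema.GroupVersionResource{
		Group:    "fleet.cattle.io",
		Version:  "v1alpha1",
		Resource: "bundledeployments",
	}

	for {
		w, err := client.Resource(gvr).Watch(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Printf("starting watch failed: %v", err)
			time.Sleep(5 * time.Second)
			continue
		}
		for ev := range w.ResultChan() {
			log.Printf("event: %s", ev.Type)
		}
		// The result channel closing is normal: server timeouts and HTTP/2
		// stream resets (such as INTERNAL_ERROR from a load balancer) end
		// the watch, and the client simply re-establishes it.
		log.Println("watch ended, re-establishing")
	}
}
```

A watch ending with a stream error like INTERNAL_ERROR typically means something between the agent and the API server (for example a load balancer idle timeout on EKS) reset the HTTP/2 stream; the reflector logs the warning and retries.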

@kkaempf
Collaborator

kkaempf commented Feb 13, 2025

Please tell us more about your environment, like the Rancher version in use.

Can you share steps to help us reproduce your issue?

@afahmy11
Author

Thanks.

The error happens in the fleet-agent-0 ("rancher/fleet-agent:v0.11.3") pod in the managed Kubernetes cluster. The managing Rancher instance ("v2.10.2") is running in another cluster.

Both are AWS EKS Kubernetes clusters, version 1.31.

As for the steps, I am not really sure what triggers it, but the error appears every 2-4 minutes in the fleet-agent-0 log.
