I'll grant that the need for this isn't immediately obvious, but I think I can cite some defensible reasons for it...
I've been investigating how best to enable functional and integration tests for the router. These really cannot be accomplished without a k8s cluster to test against, so I have also investigated what it would take to run a containerized k8s cluster. It's not that hard to do, but there are certain difficulties one encounters. See deis/router#140 for some discussion of that. One such difficulty is that the k8s apiserver binds only to `localhost:8080` on the docker host (not `0.0.0.0:8080`). This is problematic for anyone using `docker-machine`, since their docker host is essentially remote and their `kubectl` client would be unable to talk to the apiserver.
By including the `kubectl` binary in the containerized development environment, any test targets or other processes that must manipulate the containerized k8s test cluster can also be containerized. For anyone running `docker-machine`, this moves those processes onto the "remote" docker host, where the apiserver being bound to `localhost:8080` is no longer a problem because it's locally accessible.
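To make that a bit more concrete, here's a rough sketch (just illustrative, not an actual test from this repo) of what a containerized functional test could look like once `kubectl` is available inside the dev image. The manifest path, namespace, and test name are hypothetical; the only thing taken from the discussion above is the apiserver address on the docker host.

```go
// Sketch of a functional test that shells out to the kubectl binary
// bundled in the containerized development environment.
package router_test

import (
	"os/exec"
	"testing"
)

// kubectl invokes the bundled kubectl client against the containerized
// test cluster's apiserver, which is only reachable as localhost:8080
// from the docker host itself.
func kubectl(t *testing.T, args ...string) string {
	t.Helper()
	cmd := exec.Command("kubectl",
		append([]string{"--server=http://localhost:8080"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("kubectl %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func TestRouterDeploys(t *testing.T) {
	// Hypothetical manifest; real tests would ship their own fixtures.
	kubectl(t, "create", "-f", "manifests/deis-router-rc.yaml")
	defer kubectl(t, "delete", "-f", "manifests/deis-router-rc.yaml")

	out := kubectl(t, "get", "pods", "--namespace=deis")
	t.Logf("pods:\n%s", out)
}
```

Because the test itself runs in a container on the docker host, the `localhost`-only binding of the apiserver stops being an obstacle.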
Including `kubectl` in the dev environment isn't any kind of magic bullet, but it's a good step toward making these sorts of tests more feasible at the component level.
I'm interested to know what others think about possibly including `kubectl`.