Support TCP load balancing #151
Comments
At one time there was a branch that enabled TCP support. However, most people don't realize the implications of TCP load balancing: for example, you have to dedicate ports on specific machines because you cannot use HTTP Host headers. It's doable, but it limits what Interlock can offer in terms of ease of use, and typically you don't want to expose those ports to the end user. Is the intent here to expose services externally, or just to load balance over internal TCP services?
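To illustrate the port-dedication point: with plain TCP there is no Host header to route on, so every exposed service needs its own frontend port. A minimal HAProxy sketch (service names, ports, and addresses are hypothetical):

```
# Each TCP service needs its own dedicated frontend port;
# unlike HTTP, there is no Host header to multiplex on.
frontend redis_in
    bind *:6379
    mode tcp
    default_backend redis_nodes

frontend postgres_in
    bind *:5432
    mode tcp
    default_backend postgres_nodes

backend redis_nodes
    mode tcp
    server redis1 10.0.0.11:6379 check

backend postgres_nodes
    mode tcp
    server pg1 10.0.0.21:5432 check
```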
For specific needs you could run an instance of HAProxy / NGINX with a custom template as a workaround (see the sketch below). I share the desire to do TCP LB for things such as databases or Redis, but I also think that if someone is comfortable with TCP load balancing they should understand how to configure it. That being said, what if Interlock provided a way to specify …
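One way to read the "custom template" workaround: keep a hand-maintained TCP section alongside the generated HTTP config and load both files into the same HAProxy instance (file names and addresses here are assumptions, not anything Interlock generates):

```
# haproxy -f /etc/haproxy/generated.cfg -f /etc/haproxy/tcp.cfg
#
# tcp.cfg -- hand-maintained TCP load balancing for internal services
listen redis_lb
    bind *:6379
    mode tcp
    balance roundrobin
    server redis1 10.0.0.11:6379 check
    server redis2 10.0.0.12:6379 check
```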
We would also need a way to specify the backend port and tcp-check entries (my use case is providing failover awareness for Redis replication to applications that don't support Sentinel).
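For that Redis case, the usual HAProxy pattern is a tcp-check sequence that only marks the current master as healthy, so clients without Sentinel support always land on the master. A sketch with hypothetical node addresses:

```
listen redis_master
    bind *:6379
    mode tcp
    option tcp-check
    # Only the node reporting role:master passes the health check.
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server redis1 10.0.0.11:6379 check inter 1s
    server redis2 10.0.0.12:6379 check inter 1s
```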
With the stable release of Docker 1.12, with both open source and commercial support, is this feature still needed? This can be accomplished with a static proxy in front of a cluster of Docker Swarm 1.12 worker nodes.
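A sketch of that static-proxy approach, assuming a service published on port 6379 through the 1.12 routing mesh: the external proxy only needs the worker node IPs, since any node forwards published-port traffic to a task (addresses are hypothetical):

```
# Any worker node accepts traffic on the published port and the
# routing mesh forwards it to a service task.
listen swarm_tcp
    bind *:6379
    mode tcp
    balance roundrobin
    server worker1 192.168.1.10:6379 check
    server worker2 192.168.1.11:6379 check
    server worker3 192.168.1.12:6379 check
```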
@arhea If all you need is a way to round-robin the service, my guess is it isn't. However, if there's some kind of master/slave relationship happening in the background (and you would rather not wait for a container to be respawned, and not have to set up an HA volume driver), implementing that logic in HAProxy/Nginx spares the clients from having to know about it.
@je-al - you are correct, for more complex load balancing strategies HAProxy or Nginx is still useful. Is there any reason this couldn't be a static config in a proxy pointing back to different ports on the cluster? For example, PostgreSQL master on 5432, with HAProxy forwarding 5432 / 5433, both available via the mesh networking. That reduces the moving parts in the system.
I'm guessing that could work, but it does mean you'd need to set up a "service" per member of the set and reconfigure it whenever you want to scale. Think of having two entries: one that only gives you the master, and one that round-robins among the slaves to offload reads (sketched below).
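Roughly what that two-entry setup could look like, assuming the master is reachable on 5432 and the read replicas on 5433 via the mesh (ports, names, and addresses are illustrative only):

```
# Writes: single master entry.
listen pg_master
    bind *:5432
    mode tcp
    server master 10.0.0.2:5432 check

# Reads: round-robin across replicas to offload the master.
listen pg_replicas
    bind *:5433
    mode tcp
    balance roundrobin
    server replica1 10.0.0.3:5433 check
    server replica2 10.0.0.4:5433 check
```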
There is a use case for supporting TCP load balancing (e.g. a RabbitMQ cluster). Can it be added to Interlock, or is this a feature of the load balancer itself (HAProxy/NGINX, etc.)?
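For the RabbitMQ case the same pattern applies to AMQP's TCP port; a minimal sketch with hypothetical node addresses:

```
listen rabbitmq
    bind *:5672
    mode tcp
    balance leastconn
    server rabbit1 10.0.0.31:5672 check
    server rabbit2 10.0.0.32:5672 check
    server rabbit3 10.0.0.33:5672 check
```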