
Support TCP load balancing #151

Open

nicolaka opened this issue May 16, 2016 · 7 comments

@nicolaka
There is a use case for supporting TCP load balancing (e.g. a RabbitMQ cluster). Can it be added to Interlock, or is this a feature of the load balancer itself (HAProxy, NGINX, etc.)?

@ehazlett
Owner

At one time there was a branch that enabled TCP support. However, most people don't realize the implications of TCP load balancing. For example, you have to dedicate ports on specific machines because you cannot use HTTP host headers. It's doable, but it limits what Interlock can do as far as ease of use.

Typically you don't want to expose these to the end user. Is the intent here to expose externally or just load balancing over internal TCP services?

@arhea
Contributor

arhea commented May 20, 2016

For specific needs you could run an instance of HAProxy / NGINX with a custom template as a workaround. I do share the desire to do TCP LB for things such as databases or Redis, and I also think that if someone is comfortable with TCP load balancing they should understand how to configure it. That being said, what if Interlock provided a way to specify mode=tcp and external_port=5432, so that rather than matching on the hostname it uses the external_port as the aggregation point? Just as hostnames can either collide or load balance, someone would have to make sure they don't bind different services to the same external port. Thoughts?
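A minimal sketch of what such a generated HAProxy section might look like (the `mode=tcp` / `external_port` labels are a proposal, not an existing Interlock feature, and the container addresses below are placeholders):

```haproxy
# Hypothetical output for a service labeled mode=tcp, external_port=5432.
# The frontend aggregates on the bound port instead of an HTTP Host header,
# which is why the port must be dedicated to this one service.
frontend tcp_5432
    bind *:5432
    mode tcp
    default_backend tcp_5432_servers

backend tcp_5432_servers
    mode tcp
    balance roundrobin
    server container1 10.0.0.11:5432 check
    server container2 10.0.0.12:5432 check
```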

@je-al

je-al commented Nov 29, 2016

We would also need a way to specify the backend port, as well as tcp-check entries (my use case being failover awareness for Redis replication, for applications that don't support Sentinel).
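For reference, the kind of tcp-check entries this use case needs looks roughly like the following HAProxy backend, which routes clients only to the replica currently reporting `role:master` (server addresses are placeholders):

```haproxy
# Health-check each Redis node over its own protocol and only pass
# traffic to the node whose INFO replication output says role:master,
# so clients without Sentinel support still follow a failover.
backend redis_master
    mode tcp
    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server redis1 10.0.0.21:6379 check inter 1s
    server redis2 10.0.0.22:6379 check inter 1s
```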

@arhea
Contributor

arhea commented Nov 29, 2016

With the stable release of Docker 1.12 (open source and with commercial support), is this feature still needed? This can be accomplished with a static proxy in front of a cluster of Docker Swarm 1.12 worker nodes.

@je-al

je-al commented Nov 29, 2016

@arhea If all you need is a way to round-robin balance the service, my guess is, it isn't. If there's some kind of master / slave relationship happening in the background (and you would rather not wait for a container to be respawned AND have to setup a HA volume driver), however, it would spare the clients from having to have knowledge of it if that "logic" can be implemented in haproxy/nginx.

@arhea
Contributor

arhea commented Nov 29, 2016

@je-al - you are correct: for more complex load balancing strategies, HAProxy or NGINX is still useful. Any reason this couldn't be a static config within a proxy, pointing back to different ports on the cluster? For example:

PostgreSQL Master - 5432
PostgreSQL Slave - 5433

HAProxy -> 5432 / 5433, both available via the mesh networking?

It reduces the moving parts within the system.
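A sketch of that static setup, under the assumptions above (the node address is illustrative; Swarm's routing mesh makes the published ports reachable on any worker node):

```haproxy
# Static proxy in front of Docker Swarm 1.12 worker nodes.
# Port 5432 -> the service published as the PostgreSQL master,
# port 5433 -> the service published as the read-only slave.
listen pg_master
    bind *:5432
    mode tcp
    server swarm_node 10.0.0.10:5432 check

listen pg_slave
    bind *:5433
    mode tcp
    server swarm_node 10.0.0.10:5433 check
```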

@je-al

je-al commented Nov 29, 2016

I'm guessing that could work, but it does mean you'd need to set up a "service" per member of the set, and reconfigure it whenever you want to "scale". Think of having two entries: one that balances "out" to only give you the master, and one that round-robins among the slaves to "offload" reads.
