Conversation
Co-authored-by: Steven Jin <stevenjin8@gmail.com>
| "ingress-nginx only supports TCP-level timeouts; i2gw has made a best-effort translation to Gateway API timeouts.request." + | ||
| " Please verify that this meets your needs. See documentation: https://gateway-api.sigs.k8s.io/guides/http-timeouts/", |
So essentially you're taking the highest timeout value out of connect/send/read and multiplying by 10 to use that as the full HTTP request timeout?
yes, but definitely open to any other suggestions.
Since nginx doesn't natively support full HTTP request timeouts, there's really no great solution.
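To make the heuristic concrete: for the illustrative Ingress earlier in the thread, the highest of the three annotation values is 60s, so the translation described above would emit something like the following HTTPRoute rule. This is a sketch only; the field names come from the Gateway API v1 HTTPRoute spec, but the exact duration formatting i2gw emits is an assumption.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example                 # hypothetical, matching the Ingress sketch above
spec:
  rules:
    - backendRefs:
        - name: example
          port: 80
      timeouts:
        # max(5s, 30s, 60s) * 10 = 600s, per the heuristic discussed above
        request: "600s"
```

The x10 multiplier presumably keeps the translated request timeout looser than any of the underlying proxy timeouts, which is also why the translator attaches the "please verify" notification quoted above.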
@sjberman How did y'all implement the Gateway API timeouts in https://github.com/nginx/nginx-gateway-fabric? (Or have you not implemented timeouts due to this difficulty?)
We haven't implemented it yet.
Users can inject raw nginx config, so they can set the TCP-level timeouts (and we'll probably expose these soon in our own Policy), but there's no support for the Gateway API timeouts.
Reference material in https://gateway-api.sigs.k8s.io/geps/gep-1742/ and kubernetes-sigs/gateway-api#1741 might be helpful for suggesting anything we could do to get a closer approximation. @kflynn, any ideas from your research a while back?
If the ultimate goal in Gateway API is to define a timeout from the point at which a client's request hits the gateway to the time the gateway responds back to the client, that's just something nginx won't be able to support today. Now, we may be able to work with the core nginx team to see if it can be built, or write our own custom nginx module, but that has yet to be prioritized.
The existing nginx timeouts just won't suffice for a true HTTP request timeout:
- `proxy_connect_timeout` is the time an upstream has to accept a connection.
- `proxy_read_timeout` is the time for reading a response from the upstream. It applies only between two successive read operations, not to the transmission of the whole response, so if a response comes back in parts, we'll keep reading as long as each part arrives within that timeout.
- `proxy_send_timeout` is the time for transmitting a request to the upstream, with the same rules as the read timeout.
It's more likely that a client-side connection would time out first.
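For contrast, the Gateway API surface described in GEP-1742 puts the timeouts on the HTTPRoute rule itself. A rough sketch of the two fields (durations are illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example
spec:
  rules:
    - backendRefs:
        - name: example
          port: 80
      timeouts:
        # the whole client-facing request/response exchange at the gateway
        request: "10s"
        # a single gateway-to-backend request; closer to, but still not the
        # same as, nginx's per-I/O proxy_* timeouts described above
        backendRequest: "2s"
```

Neither field maps cleanly onto nginx's per-operation timeouts, which is the gap being discussed in this thread.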
Refs #270
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: kkk777-7, Stevenjin8. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
LGTM, thanks! /lgtm
/lgtm |
depends on #288
What type of PR is this?
/kind feature
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #
Does this PR introduce a user-facing change?: