* Added a new `set_connection_limit!` function for controlling the global connection limit that will be applied to all requests.
This is one way to resolve #1033. I added a deprecation warning when passing `connect_limit` to individual requests, so the usage is now
to call `HTTP.set_connection_limit!`; each call updates the global value (a usage sketch is included after this list).
* Added a try-finally in `keepalive!` around our global IO lock usage, just for good housekeeping (the pattern is sketched after this list).
* Refactored `try_with_timeout` to use a `Threads.Condition` and `Threads.@spawn` instead of the non-threaded versions; seems cleaner
and lets us avoid `@async` when it isn't needed. Note, however, that I included a change in StreamRequest.jl that wraps
all the actual write/read IO operations in a `fetch(@async dostuff())`, because `@async` currently prevents the code in that task from
migrating across threads, which matters for OpenSSL usage, where error handling is done per-thread (see the sketch after this list).
I don't love the solution, but it seems ok for now.
* I refactored a few of the stream IO functions so that we always know the number of bytes downloaded, whether the body is kept in memory or
written to an IO, so we can log the byte counts and use them in verbose logging for bit-rate calculations (a tiny example follows this list).
* Ok, the big one: I rewrote the internal implementation of ConnectionPool.ConnectionPools.Pod `acquire`/`release` functions; under really
heavy workloads, there was a ton of contention on the Pod lock. I also observed at least one "hang" where GDB backtraces seemed to indicate
that somehow a task failed/died/hung while trying to make a new connection _while holding the Pod lock_, which then meant that no other
requests could ever make progress. The new implementation includes a lock-free "fastpath" where an existing connection that can be re-used
doesn't require taking any lock. It uses a lock-free concurrent Stack implementation copied from JuliaConcurrent/ConcurrentCollections.jl
(that package doesn't seem actively maintained and it's not much code, so I just copied it). The rest of the `acquire`/`release` code is now modeled after
Base.Event in how releasing always acquires the lock and slow-path acquires also take the lock to ensure fairness and no deadlocks.
I've included some benchmark results on a variety of heavy workloads [here](https://everlasting-mahogany-a5f.notion.site/Issue-heavy-load-perf-degradation-1cd275c75037481a9cd6378b8303cfb3)
that show some great improvements, the bulk of which is attributable to reducing contention when acquiring/releasing connections during requests.
The other key change included in this rewrite is that we ensure we _do not_ hold any locks while _making new connections_ to avoid the
possibility of the lock ever getting "stuck", and because it's not necessary: the pod is in charge of just keeping track of numbers and
doesn't need to worry about whether the connection was actually made yet or not (if it fails, it will be immediately released back and retried).
Overall, the code is also _much_ simpler, which I think is a huge win, because the old code was always pretty scary to have to dig into.
(A simplified sketch of the new `acquire`/`release` scheme is included after this list.)
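
A minimal usage sketch of the new global connection limit (the value `16` is just illustrative):

```julia
using HTTP

# Set the global connection limit once; it applies to all subsequent requests.
HTTP.set_connection_limit!(16)

# No per-request `connect_limit` keyword needed (passing one now warns that it's deprecated).
resp = HTTP.get("https://example.com")
```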
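
The `keepalive!` change is just the usual try-finally discipline around a lock; a sketch, with illustrative names for the lock and the socket-option call:

```julia
# Always release the global IO lock, even if configuring TCP keepalive throws.
lock(iolock)
try
    configure_keepalive!(socket)   # hypothetical stand-in for the real socket-option calls
finally
    unlock(iolock)
end
```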
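
The `fetch(@async dostuff())` wrapper in StreamRequest.jl relies on plain `@async` tasks being sticky to the thread that created them (unlike `Threads.@spawn`); roughly, with hypothetical helper names:

```julia
# Keep all the write/read IO for this request on the current thread so that
# OpenSSL's per-thread error state stays consistent. `fetch` waits for the
# task and returns its result (or rethrows its exception).
response = fetch(@async begin
    writebody(stream, request)   # hypothetical helpers, for illustration only
    readbody(stream)
end)
```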
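
With the byte count always known, the verbose-log bit-rate is a trivial calculation (helper name is illustrative):

```julia
# Megabits per second, given bytes downloaded and elapsed seconds.
bitrate_mbps(nbytes, elapsed) = 8 * nbytes / elapsed / 1_000_000

bitrate_mbps(25_000_000, 2.0)   # 100.0 Mbit/s
```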
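
Here is a simplified, self-contained sketch of the new scheme. None of this is the actual `ConnectionPools` code: the type, field, and function names (`LockFreeStack`, `PodSketch`, `trypop!`, `release!`) are illustrative, and the Treiber-style stack merely stands in for the copied ConcurrentCollections.jl implementation.

```julia
# --- a Treiber-style lock-free stack, standing in for the copied
# --- ConcurrentCollections.jl implementation --------------------------------
mutable struct Node{T}
    value::T
    next::Union{Node{T},Nothing}
end

mutable struct LockFreeStack{T}
    @atomic head::Union{Node{T},Nothing}
end
LockFreeStack{T}() where {T} = LockFreeStack{T}(nothing)

function Base.push!(s::LockFreeStack{T}, v) where {T}
    node = Node{T}(v, nothing)
    while true
        old = @atomic s.head
        node.next = old
        (@atomicreplace s.head old => node).success && return s
    end
end

function trypop!(s::LockFreeStack)
    while true
        old = @atomic s.head
        old === nothing && return nothing
        (@atomicreplace s.head old => old.next).success && return old.value
    end
end

# --- the pod: lock-free fastpath, Base.Event-style slowpath -----------------
mutable struct PodSketch{T}
    cond::Threads.Condition   # guards `count`; all releases and slow-path acquires lock it
    idle::LockFreeStack{T}    # reusable connections, popped without any lock (the fastpath)
    limit::Int                # maximum number of live connections
    count::Int                # connections currently live (checked out + idle)
end
PodSketch{T}(limit) where {T} = PodSketch{T}(Threads.Condition(), LockFreeStack{T}(), limit, 0)

function acquire(pod::PodSketch, newconn)
    # fastpath: reuse an idle connection without taking any lock
    conn = trypop!(pod.idle)
    conn !== nothing && return conn
    # slowpath: the lock is only for accounting and waiting
    lock(pod.cond)
    try
        while true
            conn = trypop!(pod.idle)   # one may have been released while we waited
            conn !== nothing && return conn
            if pod.count < pod.limit
                pod.count += 1         # reserve a slot for a brand-new connection
                break
            end
            wait(pod.cond)             # a release! will wake us
        end
    finally
        unlock(pod.cond)
    end
    # crucially, the connection is made *outside* the lock, so a slow or failing
    # connect can never wedge the pod; on failure the caller frees the slot and retries
    return newconn()
end

function release!(pod::PodSketch, conn; reuse::Bool=true)
    lock(pod.cond)
    try
        if reuse
            push!(pod.idle, conn)      # available again to the lock-free fastpath
        else
            pod.count -= 1             # the connection is gone; free its slot
        end
        notify(pod.cond; all=false)    # wake a single waiter
    finally
        unlock(pod.cond)
    end
end
```

In the real implementation there is more bookkeeping (idle-connection reuse checks, timeouts, etc.), but the locking discipline above is the point: the reuse fastpath never takes a lock, and no lock is ever held while a connection is being established.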
One concrete excerpt from the diff, showing the improved error message when the response-stream IOBuffer can't grow:

```diff
     requested_buffer_capacity = (buf.append ? buf.size : (buf.ptr - 1)) + n
-    requested_buffer_capacity > length(buf.data) && throw(ArgumentError("Unable to grow response stream IOBuffer large enough for response body size"))
+    requested_buffer_capacity > length(buf.data) && throw(ArgumentError("Unable to grow response stream IOBuffer $(length(buf.data)) large enough for response body size: $requested_buffer_capacity"))
 end

 function Base.readbytes!(http::Stream, buf::Base.GenericIOBuffer, n=bytesavailable(http))
```