Work around a deadlock in hts_tpool_process_flush.
There is a very rare deadlock in hts_tpool_process_flush.
The test harness step can trigger this:
./test_view -@4 -o seqs_per_slice=100 -o no_ref=1 -o multi_seq_per_slice=-1 -S -C multi_ref.tmp.sam
[This occurred for me about 1 in 750 times on one host. Adding a sleep
in cram_io.c's cram_close function, just before the "if (fd->pool &&"
check, makes this trigger much more frequently.]
As an explanation: the tpool_worker threads have a check on
&& q->qsize - q->n_output > p->tsize - p->nwaiting
This was added in bd32ec6 and is designed to avoid CPU clock-scaling
issues when running many more threads than we can keep active. To
avoid exhausting the input queue and alternating between many dormant
threads and active threads, we only take a job off the input queue if
we have more in the queue than the workers currently executing
something.
However, when flushing, we need to leave a bit more leeway. It's
possible that we haven't yet consumed the results, so the output queue
is full, meaning the flush isn't something that we can call and just
wait on unless we have a separate thread that is able to drain the
queue. (This is valid for SAM and BAM, but not CRAM where the
result consumer is also the same thread that calls the flush).
Hts_tpool_process_flush attempts to fix this, albeit with an ironic
comment of "Shouldn't be possible to get here, but just in case". It
can, and it matters, but it's not sufficient. The condition was:
if (q->qsize < q->n_output + q->n_input + q->n_processing)
However, we can see from tpool_worker above that it checks
"p->tsize - p->nwaiting", so it's not just a case of qsize vs n_output.
Adding "p->tsize" to our qsize check above avoids this potential
deadlock, but the main cause is elsewhere (spotted by Rob).
The primary issue is that the initial check in tpool_worker didn't
account for the current worker thread not itself executing a job,
so "p->tsize - p->nwaiting - 1" is the correct assessment. This does
also cure the deadlock, but @daviesrob suggested "q->n_processing"
(which can be anywhere from 0 to p->tsize - p->nwaiting - 1, hence also
fixing the same issue) is a better fix and also now matches the
Hts_tpool_process_flush logic.
I've verified this with ~100,000 tests of the test_view command
above. It's harder to state with certainty that it doesn't alter the
initial aim of bd32ec6 as more modern systems appear to be
considerably less affected by frequency scaling issues, but the
effect is still measurable, albeit now very slight (~5% wall-clock
time differences). This change appears to not undo that improvement.
Co-authored-by: Rob Davies <[email protected]>