panic "entered unreachable code" triggered when iterating over mpsc Receiver #40156


Closed
spacejam opened this issue Feb 28, 2017 · 9 comments
Labels
C-bug Category: This is a bug. T-libs-api Relevant to the library API team, which will review and decide on the PR/issue.


spacejam commented Feb 28, 2017

'internal error: entered unreachable code', /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/sync/mpsc/mod.rs:884

Encountered while testing race conditions on a lock-free log store that I'm building:
https://github.com/spacejam/rsdb/tree/a709924150374c340aab0ec9bdfb79194a3191db

Triggered by running cargo test log -- --nocapture while simultaneously running a shell script that shuffles thread niceness, provoking different thread interleavings to tease out races:

#!/bin/sh
while true; do
  PID=`pgrep rsdb`
  TIDS=`ls /proc/$PID/task`
  TID=`echo $TIDS | tr " " "\n" | shuf -n1`
  NICE=$((`shuf -i 0-39 -n 1` - 20))
  echo "renicing $TID to $NICE"
  renice -n $NICE -p $TID
  sleep 0.1
done

spacejam commented Feb 28, 2017

rustc --version
rustc 1.15.1 (021bd294c 2017-02-08)

@steveklabnik steveklabnik added A-libs T-libs-api Relevant to the library API team, which will review and decide on the PR/issue. labels Mar 1, 2017
@alexcrichton (Member) commented:

My guess is that this is the same as #39364, but I wouldn't be certain.

@alexcrichton (Member) commented:

Thanks for the report @spacejam! Unfortunately I can't seem to reproduce locally... Given the similarity to #39364, though, I doubt this is related to unsafe code, and it seems most likely to be a bug in channels.

@alexcrichton (Member) commented:

Does this typically happen near the end of the tests? E.g. is it likely when one end of the channel is being dropped? Or do you think it's just happening in the middle? I looked at the code and it's just doing a vanilla send/recv so I'm surprised it's generating an error...

@Mark-Simulacrum Mark-Simulacrum added the C-bug Category: This is a bug. label Jul 27, 2017

tmiasko commented Aug 6, 2020

The Sender is !Sync, so the attached code is unsound: it introduces a data race by using a single sender from multiple threads without synchronization:

#[derive(Clone)]
pub struct Reservation {
    ...
    plunger: Arc<Sender<ResOrShutdown>>,
    ...
}

unsafe impl Send for Reservation {}


spacejam commented Aug 9, 2020

@tmiasko sled::Reservation is Send, as is std::sync::mpsc::Sender:

// The send port can be sent from place to place, so long as it
// is not used to send non-sendable things.
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl<T: Send> Send for Sender<T> {}

Reservation can't be used from multiple threads without synchronization because it is !Sync.


tmiasko commented Aug 9, 2020

@spacejam Arc<Sender> is !Send, because Sender is !Sync. It is true that Reservation itself is only sent between threads (not shared), but it does share the Sender by cloning the plunger Arc (plunger: self.plunger.clone()) and then sending the resulting Reservation around.


spacejam commented Aug 9, 2020

Mmm right, good catch. I would not be surprised if it's related.


tmiasko commented Aug 9, 2020

I don't think this example could be used to demonstrate any issues in mpsc:

  • It introduces a data race in Sender::send, when the channel flavour stored inside the sender has to be changed after oneshot is used.
  • It introduces a data race in a single producer single consumer queue, which is a channel flavour used afterwards, by using it from multiple producer threads.

I was looking at this because it was the first reproducer that used only send and recv, in contrast to Receiver::recv_timeout, which is already known to contain a data race as described in #39364 (comment), even if it doesn't say so explicitly.
