Bug: Ryuk fails to start due to port binding (colima, timing) #486
I can't reproduce this on my M1 environment, but after what I saw with IPv6 when working with compose, I have no doubt there is potential for an issue here.
Hi @sondr3! Sorry to hear that you are having problems. I am on an M3 setup myself, but haven't encountered the same problem that you have. Am I reading it right that you are using colima? Do you encounter the same problem if you run with a native arm64 backend without virtualization/colima? We'll follow up closely on this one, as Ryuk is important for us to run smoothly on all architectures.
@santi, correct. However, I can't run the image natively; I need to run the MSSQL Docker image for tests at $WORK, and it only has

```console
$ colima status
INFO[0000] colima is running using macOS Virtualization.Framework
INFO[0000] arch: aarch64
INFO[0000] runtime: docker
INFO[0000] mountType: virtiofs
INFO[0000] socket: unix:///Users/sondre/.colima/default/docker.sock
```
Ah, try using the

This doesn't really solve your problem, but worth a try:
Using that image works without emulation. Update: I've run the tests a bunch of times on our Ubuntu 22.04 CI machines and it works fine there, and on my colleague's Windows machine 🙈
Having dug further into this, I strongly believe port bindings are not to blame for this problem. The

If this behavior appears randomly in some cases and consistently when using
I experienced the same problem on a Mac. Downgrading testcontainers (4.3.2 -> 3.7.1) fixed the issue.
The same happened to me. Fixed by downgrading it to 3.7.1.
Sounds like ports become available later on colima, so we'd want to actually check the ports and not just wait on logs, if we want to be compatible with colima's differences from Docker.
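Waiting on the port rather than only the logs could look roughly like this (a hypothetical helper for illustration, not the library's actual API):

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 20.0) -> None:
    """Poll until a TCP connection to (host, port) succeeds, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            # Use a fresh socket per attempt; after a refused connect, the
            # old socket object should not be reused.
            with socket.create_connection((host, port), timeout=1.0):
                return
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} not connectable after {timeout}s")
            time.sleep(0.25)
```

A check like this would sidestep the case where the container logs its "ready" line before the runtime has actually published the host port.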
Does this tweak to retry for ~20 seconds help? `pip install git+https://github.com/testcontainers/testcontainers-python.git@issue486_explore_retry`
It doesn't, because an unhandled OSError gets thrown, and simply handling the OSError doesn't help either. I get these exceptions:
Something like this appears to resolve it, but I don't know enough about this library, Python, or sockets to know if it's the correct approach.
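The approach described here, catching the connection error and recreating the socket on each retry, could be sketched like this (hypothetical names, not the code that was eventually merged):

```python
import socket
import time


def connect_with_retry(host: str, port: int, retries: int = 40,
                       delay: float = 0.5) -> socket.socket:
    """Connect to (host, port), creating a brand-new socket for every attempt.

    Merely retrying connect() on the same socket is not enough: once a
    connect has been refused, that socket object can keep failing, so it
    is closed and replaced before the next attempt.
    """
    last_err = None
    for _ in range(retries):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.connect((host, port))
            return sock
        except OSError as exc:  # ConnectionRefusedError is a subclass of OSError
            sock.close()
            last_err = exc
            time.sleep(delay)
    raise last_err
```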
@pseidel-kcf thanks for testing, I've updated my branch. From the perspective of maintaining this library, the missing insight is into colima itself. Per the hypothesis that this is a colima timing bug (a bug in the sense that it doesn't match the behavior of Docker Engine), this approach could be the one to go with.
Thanks @alexanderankin. I didn't do a great job explaining, but I found that I needed to recreate the socket in addition to handling the exception type.
Looks like another instance of moby/moby#42442.
I had the same issue of ConnectionRefused on Linux via Rancher Desktop (colima based), and using the

Before I saw this thread, I investigated using a new

Note that the

Suggest polishing this timing/retry branch, and consider merging, if it proves a good compromise.
Alright, I'm going to merge the associated PR; this will close this issue. Please try the next release (4.4.0) when it's released in a couple of minutes, and comment on or re-open this issue (we'll reopen it) if needed.
🤖 I have created a release *beep* *boop*

---

## [4.4.0](testcontainers-v4.3.3...testcontainers-v4.4.0) (2024-04-17)

### Features

* **labels:** Add common testcontainers labels ([#519](#519)) ([e04b7ac](e04b7ac))
* **network:** Add network context manager ([#367](#367)) ([11964de](11964de))

### Bug Fixes

* **core:** [#486](#486) for colima delay for port avail for connect ([#543](#543)) ([90bb780](90bb780))
* **core:** add TESTCONTAINERS_HOST_OVERRIDE as alternative to TC_HOST ([#384](#384)) ([8073874](8073874))
* **dependencies:** remove usage of `sqlalchemy` in DB extras. Add default wait timeout for `wait_for_logs` ([#525](#525)) ([fefb9d0](fefb9d0))
* tests for Kafka container running on ARM64 CPU ([#536](#536)) ([29b5179](29b5179))

---

This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Describe the bug

I upgraded from 3.5.0 to 4.1.0, and the container itself fails to spawn because the Ryuk container setup fails. I've tried debugging the issue, and it looks like it is trying to bind the port exposed on IPv6 to the port on IPv4 (the `container_port` variable is correct for IPv4), which are for some reason different ports.

To Reproduce
Provide a self-contained code snippet that illustrates the bug or unexpected behavior. Ideally, send a pull request with a test that illustrates the problem.
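To make the IPv4/IPv6 mismatch from the description concrete, here is a sketch (with made-up port numbers) of a `docker inspect`-style `NetworkSettings.Ports` mapping where the two address families get different host ports, plus a helper that picks only the IPv4 binding:

```python
def ipv4_host_port(ports: dict, container_port: str) -> int:
    """Return the host port bound for IPv4, skipping IPv6 ("::") bindings.

    `ports` mimics the NetworkSettings.Ports section of `docker inspect`,
    where each container port maps to a list of {HostIp, HostPort} dicts.
    """
    for binding in ports.get(container_port) or []:
        if ":" not in binding.get("HostIp", ""):  # "::" and friends are IPv6
            return int(binding["HostPort"])
    raise KeyError(f"no IPv4 binding for {container_port}")


# Hypothetical inspect output where IPv6 and IPv4 were mapped to
# different host ports, as described in this bug report.
example = {
    "1433/tcp": [
        {"HostIp": "::", "HostPort": "49154"},
        {"HostIp": "0.0.0.0", "HostPort": "49153"},
    ]
}
```

Picking the binding naively (e.g. the first entry in the list) would return the IPv6 host port here, which is the failure mode the description hints at.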
Runtime environment
Provide a summary of your runtime environment. Which operating system, Python version, and Docker version are you using? What version of `testcontainers-python` are you using? You can run the following commands to get the relevant information.