[BUG] Docker containers no longer accessible externally #8
Once you use this script, Container Manager won't fully function, as it's expecting containers to log using the Synology custom logger…
I think I found an issue with the startup script not getting modified in some cases, which may be related to your issue.
I believe this is an artifact of issue #9.
@ozhound please re-pull all files and attempt the update again if you're bold enough. I believe this should be resolved now.
Closing this; it should be solved with #9.
Hello. This is still an issue. I created the Seafile stack using docker-compose, but it looks like all containers are completely cut off from the world for both inbound and outbound traffic.
@m33ts4k0z please:
The last item should get you running again; in the meantime, I'll take a look at your config and see what the problem might be.
Hello and thanks for the tips. I ended up running the already-EOL v24 offered as a beta by Synology. Can I still get you the info you need somehow? I still have the restore backup from the last attempt. I forgot to mention in my other post that I already tried fix_ipforward, but it reported that it was already applied. Here is the output of the iptables command. I guess it hasn't changed since the restore:
Thanks! Please also send the contents of the … It looks like this is a new issue versus the … Also make sure you have "Enable Multiple Gateways" disabled in Network > Advanced settings on the NAS.
Here you go:
Some more context: this is a 716+II running DSM 7.2.2-72806 Update 2.
I don't see anything that would create a problem in that file, and the update script should handle it well. Next time you try the update, try disabling the firewall temporarily and see if things become accessible at that point. If that doesn't fix things, please (before restoring) run … If it DOES fix things, please then re-enable the firewall, test that container access is indeed broken again, then run the same command and share the output here. Either way, let me know which scenario happened.
I had disabled the firewall in one of my attempts and unfortunately it didn't solve it. I will run the iptables command next time I update and let you know 👍
For others experiencing this issue, please do the following so I can better understand what's happening. When you experience the connectivity issues, BEFORE trying to fix it, please run the following and post the results here:
Once you've collected those, you should be able to resolve the issue until the next reboot by running:
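The exact commands from that comment weren't preserved in this copy of the thread. As a rough sketch only (my assumption, not the author's exact list), diagnostics along these lines cover what the rest of the thread ends up looking at, and the temporary fix is the FORWARD policy change quoted later in the thread:

```bash
# Diagnostics (sketch; the original list was not preserved):
sudo iptables -n -v -L FORWARD      # is the DEFAULT_FORWARD / DOCKER rule set present?
sudo iptables-save                  # full rule dump for comparison
cat /proc/sys/net/ipv4/ip_forward   # kernel forwarding flag, should print 1
sudo docker network ls              # which Docker networks exist

# Temporary workaround until the next reboot (quoted later in this thread):
sudo iptables -P FORWARD ACCEPT
```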
Working through this with a user, it appears that in some cases (still not sure why) the … section does not run at all. While I investigate further, I suggest users create a scheduled task on boot which runs as root and executes:
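The command itself is missing from this copy of the thread; based on the line quoted later in this issue, the scheduled task is most likely the one-liner below (the 60-second delay is the value quoted later; the wrapper script is my sketch):

```bash
#!/bin/bash
# Wait for the Container Manager / Docker startup to settle, then
# reset the FORWARD policy so container traffic is allowed again.
sleep 60 && iptables -P FORWARD ACCEPT
```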
@ozhound do you happen to be running VMM on your Synology?
@m33ts4k0z are you running VMM on your NAS?
No, I actually don't 🤔
Hello, I have observed similar behavior when creating a macvlan interface. Before the creation of the macvlan, the forwarding chain is present for Docker:

```
[user@nas: ~/docker/docker-hub/compose]$ sudo iptables -n -v -L FORWARD
Chain DEFAULT_FORWARD (1 references)
 pkts bytes target                    prot opt in      out      source     destination
    0     0 DOCKER-USER               all  --  *       *        0.0.0.0/0  0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *       *        0.0.0.0/0  0.0.0.0/0
    0     0 ACCEPT                    all  --  *       docker0  0.0.0.0/0  0.0.0.0/0   ctstate RELATED,ESTABLISHED
    0     0 DOCKER                    all  --  *       docker0  0.0.0.0/0  0.0.0.0/0
    0     0 ACCEPT                    all  --  docker0 !docker0 0.0.0.0/0  0.0.0.0/0
    0     0 ACCEPT                    all  --  docker0 docker0  0.0.0.0/0  0.0.0.0/0
```

However, after creating the macvlan, all these rules are removed. I verified that this issue occurs even without updating the Docker version. I have confirmed that this happens when removing and reinstalling Container Manager from the UI, but I have not yet checked whether this is sufficient to completely remove all changes made by this script. I will check another Linux host to determine whether this behavior is standard or specific to Synology Linux. @m33ts4k0z are you using a bridge network?
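For anyone wanting to reproduce the check described above, a minimal sketch (the subnet, gateway, parent interface eth0, and network name are placeholders, not values from this thread):

```bash
# Check the FORWARD chain before creating the macvlan
sudo iptables -n -v -L FORWARD

# Create a macvlan network (placeholder subnet/gateway/parent)
sudo docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 test_macvlan

# Check again; in the reported behavior the Docker rules are now gone
sudo iptables -n -v -L FORWARD
```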
I tried with a new network, bridged, and host. The result is the same.
Thanks for the help, folks!! For additional info for those helping:

```
sleep 60 && iptables -P FORWARD ACCEPT
```

Docker manages iptables rules as a matter of course, so typically when creating networks, macvlans, etc., Docker does all of the iptables work out of the box. However, for users in this situation, there appears to be no FORWARD rule that allows forward access to the Docker networks created. I've used … The file add_logging_to_start_script.sh was added, as well as some candidate … If there's any more context I can think of, I'll add it here. Feel free to ask questions if you get anywhere or think you've found something that might be useful.
It might be worth looking (for those where this is a problem) at the kernel ip_forward setting:
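The command the author ran here wasn't preserved; a common way to check it (my sketch, not necessarily the author's exact command):

```bash
# Either of these shows the kernel IPv4 forwarding flag; it should report 1
cat /proc/sys/net/ipv4/ip_forward
sysctl net.ipv4.ip_forward
```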
The value should be 1. I have a hard time troubleshooting this one because on my 218+ it's never been a problem, so I don't have a broken environment with which to spend time testing. I have to lean on the charity of others' time and work through it with them. I wish I did have an environment where I could spend more time troubleshooting this.
To view iptables rule order, run …
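The exact command is missing here; a standard way to list rules along with their order (an assumption on my part, not a quote from the author):

```bash
# --line-numbers prints each rule's position within its chain
sudo iptables -n -v -L --line-numbers
```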
I have the 1821+ and 220+ models. On the 1821+, I have an 802.3ad LACP bond interface (bond0) with two 10Gb interfaces and a defined VLAN (bond0.1440). In my Docker Compose setup, I am using macvlan with a separate VLAN ID (bond0.1450). In this case, I do not need IP forwarding, but I would like to understand why forwarding is removed after certain actions.
This is my iptables (some irrelevant or sensitive entries removed) for a working docker setup on a 218+
Here's the iptables output for a user where it's not working (they're also not using the firewall, so some firewall chains are missing; no issue there).
Note the …
I just made an update to the … Here's what the startup looks like for me (working setup):
So the insertion of iptables kernel modules apparently adds the modules correctly. I'm wondering what others' output would be if they're in the situation where they have this container networking issue. You can see the results of the startup, after adding the logging and rebooting, by running: …
Here is a [modified for privacy] outcome for a user who has the problem of containers not being accessible:
...As you can see, the FORWARD policy changes to DROP before completion (in this case, the ACCEPT policy was already in place).
If you're having this issue, run … Apparently a way to solve this would be by adding two rules:
...but someone with the issue would need to test that out and see if it solves the problem for them. ...In a working setup (it doesn't matter if FORWARD is DROP or ACCEPT):
...In a non-working setup (FORWARD has to be set to ACCEPT with this setup):
(notice no DEFAULT_FORWARD chain)
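A quick way to confirm which case you're in (a sketch; the chain name comes from the working output earlier in this thread):

```bash
# Lists the chain if it exists; fails with an error if the chain is missing
sudo iptables -n -v -L DEFAULT_FORWARD
```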
The failure of the Docker daemon to create the DEFAULT_FORWARD chain may also be caused by the permissions of the user that's running the Docker daemon.
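If you want to check that, one way (an assumption on my part, not something asked for in the thread) is to see which user the daemon is running as:

```bash
# On DSM the daemon would normally run as root; the bracket trick excludes grep itself
ps aux | grep '[d]ockerd'
```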
For those following the saga: adding the two rules on startup doesn't work.
So, after deleting all my containers (as I wasn't actually using any of them), I updated using this script successfully and the latest versions were being reported. However, when I created new containers using the CLI, they were created successfully and were running according to docker ps, but I couldn't connect to any of them using the normal ip:port combination.
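As an illustration of the kind of test being described (the image, port, and NAS_IP below are placeholders, not details from this report):

```bash
# Start a throwaway container with a published port
sudo docker run -d --name fwd-test -p 8080:80 nginx

# From another machine on the LAN; with this bug the request hangs or is refused
curl -I http://NAS_IP:8080
```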
Interestingly, any container I tried to add through the container manager in DSM failed with an API failure. I presume this is a symptom of upgrading docker with this method.
I managed to get everything running again by uninstalling and reinstalling Container Manager, but I'm on the old version now.