[BUG] Docker containers no longer accessible externally #8

Open
ozhound opened this issue Nov 25, 2024 · 29 comments
Labels: bug (Something isn't working), help wanted (Extra attention is needed)

Comments

@ozhound

ozhound commented Nov 25, 2024

So after deleting all my containers (I wasn't actually using any of them), I updated using this script successfully, and the latest versions were reported. However, when I created new containers using the CLI, they were created successfully and were running according to docker ps, but I couldn't connect to any of them using the normal ip:port combination.

Interestingly, any container I tried to add through Container Manager in DSM failed with an API failure. I presume this is a symptom of upgrading Docker with this method.

I managed to get everything running again by uninstalling and reinstalling Container Manager, but I'm on the old version now.

ozhound added the bug (Something isn't working) label Nov 25, 2024
@telnetdoogie
Owner

telnetdoogie commented Nov 25, 2024

Once you use this script, Container Manager won't fully function, as it expects containers to log using the Synology custom logger (db). I recommend using docker-compose instead. (Portainer is fine AFTER this update; however, updating a Portainer instance that was installed before this update isn't without its problems.)
If you have any remaining details about the containers you were trying to create that weren't accessible, please add them here. Also let me know what Synology model and DSM version you're using.
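
For reference, a minimal docker-compose workflow after the update might look something like this (the path, project name, image, and port below are placeholders, not anything from this repo; adjust for your setup):

mkdir -p /volume1/docker/whoami && cd /volume1/docker/whoami
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  whoami:
    image: traefik/whoami
    ports:
      - "8080:80"   # host:container - whoami listens on 80 inside the container
    restart: unless-stopped
EOF
sudo docker-compose up -d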

@telnetdoogie
Owner

I think I found an issue with the startup script not getting modified in some cases, which may be related to your issue.
I'll work on making that more robust and will ping here when it's modified.

@telnetdoogie
Owner

I believe this is an artifact of issue #9

@telnetdoogie
Owner

telnetdoogie commented Nov 25, 2024

@ozhound please re-pull all files and attempt the update again if you're bold enough. I believe this should be resolved now.
Just know... once you've updated, Container Manager is not a reliable way to manage containers any more.

@telnetdoogie
Owner

Closing this; it should be solved with #9

@m33ts4k0z

Hello.

This is still an issue. I created the Seafile stack using docker-compose, but it looks like all containers are completely cut off from the outside world, for both inbound and outbound traffic.

@telnetdoogie
Owner

@m33ts4k0z please:

  • share the contents of your /var/packages/ContainerManager/scripts/start-stop-status file here.
    • (by running sudo cat /var/packages/ContainerManager/scripts/start-stop-status )
  • copy and paste the output of the following command: sudo iptables-save | grep FORWARD
  • run the sudo ./fix_ipforward.sh script from this repo and capture and paste the output here.

The last item should get you running again; in the meantime, I'll take a look at your config and see what the problem might be.

@telnetdoogie telnetdoogie reopened this Feb 1, 2025
@m33ts4k0z

m33ts4k0z commented Feb 1, 2025

Hello and thanks for the tips.

I ended up running the already-EOLed v24 offered as a beta by Synology. Can I still get the info you need somehow? I still have the restore backup from the last attempt. I forgot to mention in my other post that I had already tried fix_ipforward, but it reported that the fix was already applied. Here is the output of the iptables command; I guess it hasn't changed since the restore:

root@DIMI-NAS:~# iptables-save | grep FORWARD
:FORWARD ACCEPT [0:0]
:DEFAULT_FORWARD - [0:0]
:FORWARD_FIREWALL - [0:0]
-A FORWARD -j FORWARD_FIREWALL
-A FORWARD -j DEFAULT_FORWARD
-A DEFAULT_FORWARD -j DOCKER-USER
-A DEFAULT_FORWARD -j DOCKER-ISOLATION-STAGE-1
-A DEFAULT_FORWARD -o docker-d6f8cd5b -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DEFAULT_FORWARD -o docker-d6f8cd5b -j DOCKER
-A DEFAULT_FORWARD -i docker-d6f8cd5b ! -o docker-d6f8cd5b -j ACCEPT
-A DEFAULT_FORWARD -i docker-d6f8cd5b -o docker-d6f8cd5b -j ACCEPT
-A DEFAULT_FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DEFAULT_FORWARD -o docker0 -j DOCKER
-A DEFAULT_FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A DEFAULT_FORWARD -i docker0 -o docker0 -j ACCEPT
-A DEFAULT_FORWARD -o br-8f6810bc9536 -j DOCKER
-A DEFAULT_FORWARD -o br-8f6810bc9536 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DEFAULT_FORWARD -i br-8f6810bc9536 ! -o br-8f6810bc9536 -j ACCEPT
-A DEFAULT_FORWARD -i br-8f6810bc9536 -o br-8f6810bc9536 -j ACCEPT
-A FORWARD_FIREWALL -i lo -j ACCEPT
-A FORWARD_FIREWALL -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD_FIREWALL -p tcp -m multiport --dports 26500:27000,65422,65434,65430,55536:55899 -j RETURN
-A FORWARD_FIREWALL -p udp -m multiport --dports 547,546 -j RETURN
-A FORWARD_FIREWALL -p tcp -m geoip --source-country GR,SE  -m multiport --dports 5000,5001,25,465,587,110,995 -j RETURN
-A FORWARD_FIREWALL -p tcp -m geoip --source-country GR,SE  -m multiport --dports 143,993,5080,5043,65430,55536:55899,65422 -j RETURN
-A FORWARD_FIREWALL -p tcp -m geoip --source-country GR,SE  -m multiport --dports 65434 -j RETURN
-A FORWARD_FIREWALL -p udp -m geoip --source-country GR,SE  -m multiport --dports 123,547,546 -j RETURN
-A FORWARD_FIREWALL -m iprange --src-range 192.168.1.1-192.168.1.254 -j RETURN
-A FORWARD_FIREWALL -s 10.0.0.0/8 -j RETURN
-A FORWARD_FIREWALL -s 172.21.0.0/16 -j RETURN
-A FORWARD_FIREWALL -s 172.20.0.0/16 -j RETURN
-A FORWARD_FIREWALL -j DROP

@telnetdoogie
Owner

telnetdoogie commented Feb 2, 2025

Thanks! Please also send the contents of the /var/packages/ContainerManager/scripts/start-stop-status file.

It looks like this is a new issue versus the iptables issue I've seen elsewhere, but the contents of that script will help a lot.

Also make sure you have "Enable Multiple Gateways" disabled in Network Advanced settings on the NAS.

@m33ts4k0z

m33ts4k0z commented Feb 2, 2025

Here you go:

#!/bin/sh
# Copyright (c) 2000-2015 Synology Inc. All rights reserved.
. /var/packages/ContainerManager/scripts/pkg_utils
## Get modules list
source /usr/syno/etc.defaults/iptables_modules_list
DockerModules="xt_addrtype.ko xt_conntrack.ko veth.ko macvlan.ko aufs.ko"
DockerBridgeModules="llc.ko stp.ko bridge.ko macvlan.ko"
if [ -f /lib/modules/br_netfilter.ko ]; then
    DockerBridgeModules="${DockerBridgeModules} br_netfilter.ko"
fi
DockerIngressModules="iptable_mangle.ko xt_mark.ko ip_vs.ko ip_vs_rr.ko xt_ipvs.ko"
DockerBinLink="/usr/local/bin/docker"
DockerdBinLink="/usr/local/bin/dockerd"
DockerComposeBinLink="/usr/local/bin/docker-compose"
ContainerdBinLink="/usr/local/bin/containerd"
ContainerdCtrBinLink="/usr/local/bin/ctr"
ContainerdShimBinLink="/usr/local/bin/containerd-shim"
ContainerdShimRuncV1BinLink="/usr/local/bin/containerd-shim-runc-v1"
ContainerdShimRuncV2BinLink="/usr/local/bin/containerd-shim-runc-v2"
ContainerdStressBinLink="/usr/local/bin/containerd-stress"
RuncBinLink="/usr/local/bin/runc"
DockerInitBinLink="/usr/local/bin/docker-init"
DockerProxyBinLink="/usr/local/bin/docker-proxy"
AuplinkBinLink="/usr/local/bin/auplink"
InsertModules="${KERNEL_MODULES_CORE} ${KERNEL_MODULES_COMMON} ${KERNEL_MODULES_NAT} ${IPV6_MODULES} ${DockerModules} ${DockerBridgeModules}"
if [ -f /lib/modules/ip_vs.ko -a -f /lib/modules/ip_vs_rr.ko -a -f /lib/modules/xt_ipvs.ko ]; then
    InsertModules="${InsertModules} ${DockerIngressModules}"
fi
DockerServName="docker"
RunningContainerList="/var/packages/ContainerManager/etc/LastRunningContainer"
Dockerd="pkg-ContainerManager-dockerd"
Termd="pkg-ContainerManager-termd"
DockerEventWatcherd="pkg-ContainerManager-event-watcherd"
TargetPath="/var/packages/ContainerManager/target"
DockerBin="$TargetPath/usr/bin/docker"
DockerdBin="$TargetPath/usr/bin/dockerd"
DockerComposeBin="$TargetPath/usr/bin/docker-compose"
ContainerdBin="$TargetPath/usr/bin/containerd"
ContainerdCtrBin="$TargetPath/usr/bin/ctr"
ContainerdShimBin="$TargetPath/usr/bin/containerd-shim"
ContainerdShimRuncV1Bin="$TargetPath/usr/bin/containerd-shim-runc-v1"
ContainerdShimRuncV2Bin="$TargetPath/usr/bin/containerd-shim-runc-v2"
ContainerdStressBin="$TargetPath/usr/bin/containerd-stress"
RuncBin="$TargetPath/usr/bin/runc"
DockerInitBin="$TargetPath/usr/bin/docker-init"
DockerProxyBin="$TargetPath/usr/bin/docker-proxy"
AuplinkBin="$TargetPath/usr/bin/auplink"
DockerUpdaterBin="$TargetPath/tool/updater"
ContainerDepBin="$TargetPath/tool/container_sort"
ShutdownDockerDaemonFlag="/tmp/shutdown_docker_daemon"
ContainerRunShareDir="/run/docker-share"
HookDir="/var/packages/ContainerManager/var/hook"
EventHookDir="$HookDir/event"
MountShareHelper="$TargetPath/tool/mount_share_helper"
DockerServicePortalBin="$TargetPath/tool/docker_service_portals"

get_install_volume_type() {
     local installed_volume="${SYNOPKG_PKGDEST_VOL}"
     local volume_type="$(synofstool --get-fs-type "${installed_volume}")"
     echo "${volume_type}"
}

wait_for_condition()
{
	local retryTimes=3
	local timeGap=1
	local i=0

	for ((i;i<retryTimes;i=i+1)); do
		if eval "$@" >&/dev/null; then
			return 0 # condition succeeds
		fi

		sleep "${timeGap}"
	done

	return 1 # error
}

argument_reverse() {
	local args="$1"
	local arg
	local ret=""

	for arg in ${args}; do
		ret="${arg} ${ret}"
	done

	echo "${ret}"
}

running_container_record() {
	autostart_containers=()
	for container_id in $($DockerBin ps -a --format '{{ .Names }}'); do
		result="$(timeout 10 $DockerBin inspect --format '{{ .State.Running }} {{ .HostConfig.RestartPolicy.Name }}' $container_id 2>/dev/null)"
		if [ $? -ne 0 ]; then
			continue
		fi
		state=($result)
		if [ "xtrue" = "x${state[0]}" ] || [ "xalways" = "x${state[1]}" ]; then
			autostart_containers+=("$container_id")
		fi
	done
	echo "${autostart_containers[@]}" > ${RunningContainerList}
}

running_container_oper() {
	local action=$1
	if [ "xstop" = "x${action}" ]; then
		running_container_record
	fi
	if [ -f ${RunningContainerList} ]; then
		list="$(cat "${RunningContainerList}")"
		if [ "x" != "x${list}" ]; then
			sort_list="$(${ContainerDepBin} ${list})"
			for container in $sort_list
			do
				/usr/syno/bin/synowebapi --exec api=SYNO.Docker.Container method="$action" version=1 'name="'${container}'"'
			done
		fi
		if [ "xstart" = "x${action}" ]; then
			/bin/rm -f "${RunningContainerList}"
		fi
	fi
}

iptables_clear()
{
	eval $(iptables-save -t nat | grep DOCKER | grep -v "^:"| sed -e 1d -e  's/^-A/iptables -t filter -D/' -e 's/DEFAULT_//' -e 's/$/;/')
	eval $(iptables-save -t filter | grep DOCKER | grep -v "^:"| sed -e 1d -e  's/^-A/iptables -t filter -D/' -e 's/DEFAULT_//' -e 's/$/;/')

	iptables -t nat -X DOCKER
	iptables -t filter -X DOCKER
}

clean_lock_files() {
	rm /var/lock/dockerImage.lock /var/lock/dockerMapping.lock /var/lock/dockerRemoteAPI.lock*
}

umount_aufs() {
	for i in $(grep aufs/mnt /proc/mounts | sed 's@.*aufs/mnt/\(\w*\)@\1@'); do
		umount "${TargetPath}"/docker/aufs/mnt/$i
	done

	if grep -q @docker/aufs /proc/mounts; then
		umount "${TargetPath}"/docker/aufs
	fi
}

start_docker_daemon() {
	local retryTimes=3
	local i=0

	echo "$(date): start_docker_daemon: try start docker daemon"

	for ((i;i<retryTimes;i=i+1)); do
		echo "$(date): start_docker_daemon: start daemon.."
		/usr/syno/bin/synosystemctl start "${Dockerd}"

		echo "$(date): start_docker_daemon: daemon started. start to wait for daemon ready"
		if wait_for_condition timeout 10m "${DockerBin}" version; then
			echo "$(date): start_docker_daemon: daemon is ready"
			return 0
		fi

		echo "$(date): start_docker_daemon: daemon didn't get ready till timeout. Stop daemon.."
		/usr/syno/bin/synosystemctl stop "${Dockerd}"
		echo "$(date): start_docker_daemon: daemon stopped."
	done

	echo "$(date): start_docker_daemon: failed to start docker daemon"
	return 1
}

clear_building_project_state() {
	find /var/packages/ContainerManager/etc/projects -name '*.config.json' -exec sed -i 's/BUILDING//g' {} \;
}
sync_service_portal() {
	if [ ! -f "/var/packages/WebStation/enabled" ]; then
		return
	fi
	$DockerServicePortalBin sync
}

case "$1" in
	start)
		[ -d /usr/local/bin ] || mkdir -p /usr/local/bin
		ln -sf "${DockerBin}" "${DockerBinLink}"
		ln -sf "${DockerdBin}" "${DockerdBinLink}"
		ln -sf "${DockerComposeBin}" "${DockerComposeBinLink}"
		ln -sf "${ContainerdBin}" "${ContainerdBinLink}"
		ln -sf "${ContainerdCtrBin}" "${ContainerdCtrBinLink}"
		ln -sf "${ContainerdShimBin}" "${ContainerdShimBinLink}"
		ln -sf "${ContainerdShimRuncV1Bin}" "${ContainerdShimRuncV1BinLink}"
		ln -sf "${ContainerdShimRuncV2Bin}" "${ContainerdShimRuncV2BinLink}"
		ln -sf "${ContainerdStressBin}" "${ContainerdStressBinLink}"
		ln -sf "${RuncBin}" "${RuncBinLink}"
		ln -sf "${DockerInitBin}" "${DockerInitBinLink}"
		ln -sf "${DockerProxyBin}" "${DockerProxyBinLink}"
		ln -sf "${AuplinkBin}" "${AuplinkBinLink}"

		[ -d "${ContainerRunShareDir}" ] || mkdir -p "${ContainerRunShareDir}"
		[ -d "${HookDir}" ] || mkdir -p "${HookDir}" && chmod 700 ${HookDir}
		[ -d "${EventHookDir}" ] || mkdir -p "${EventHookDir}" && chmod 700 ${EventHookDir}

		# install modules
		iptablestool --insmod "${DockerServName}" ${InsertModules}

		$DockerUpdaterBin postinst updatedockerdconf "$(get_install_volume_type)"

		$DockerUpdaterBin predaemonup
		# start docker event watcherd
		/usr/syno/bin/synosystemctl start "${DockerEventWatcherd}"

		# start docker
		if ! start_docker_daemon; then
			exit 1
		fi

		$DockerUpdaterBin postdaemonup
		clear_building_project_state
		sync_service_portal

		## Start running container
		running_container_oper start

		#start termd
		/usr/syno/bin/synosystemctl start ${Termd}

		CreateHelpAndString

		if [[ -f "/var/packages/Docker/INFO" ]]; then
			synonotify cm_remove_legacy_docker
		fi

		$MountShareHelper --mount-all

		exit 0
		;;

	stop)
		modules="$(argument_reverse "${InsertModules}")"

		rm "${DockerBinLink}"
		rm "${DockerComposeBinLink}"

		## Kill termd
		/usr/syno/bin/synosystemctl stop ${Termd}

		## Stop running container
		running_container_oper stop

		## touch flag to avoid container unexpected stopped false alarm
		touch $ShutdownDockerDaemonFlag
		/usr/syno/bin/synosystemctl stop ${Dockerd}
		## remove flag
		rm -f $ShutdownDockerDaemonFlag

		## stop docker event watcherd
		/usr/syno/bin/synosystemctl stop "${DockerEventWatcherd}"

		rm "${DockerdBinLink}"
		rm "${DockerContainerdBinLink}"
		rm "${DockerContainerdCtrBinLink}"
		rm "${DockerContainerdShimBinLink}"
		rm "${DockerContainerdShimRuncV1BinLink}"
		rm "${DockerContainerdShimRuncV2BinLink}"
		rm "${DockerContainerdStressBinLink}"
		rm "${DockerRuncBinLink}"
		rm "${DockerInitBinLink}"
		rm "${DockerProxyBinLink}"
		rm "${AuplinkBinLink}"

		umount_aufs
		clean_lock_files
		iptables_clear

		iptablestool --rmmod "${DockerServName}" ${modules}

		RemoveHelpAndString

		$MountShareHelper --umount-all

		exit 0
		;;

	status)
		if  [ "active" = "$(synosystemctl get-active-status ${Dockerd})" ]; then
			exit 0
		else
			exit 1
		fi
		;;

	*)
		exit 1
		;;
esac

Some more context: this is a 716+II running DSM 7.2.2-72806 Update 2.
Let me know if you need anything else.

@telnetdoogie
Owner

telnetdoogie commented Feb 2, 2025

I don't see anything that would create a problem in that file, and the update script should handle it well.
I'm wondering if you have some custom rules in your firewall that are preventing containers from being accessible. I do notice some rules that look like custom firewall rules and may be creating a problem.

Next time you try the update, try disabling the firewall temporarily and see if things become accessible at that point. If that doesn't fix things, please (before restoring) run sudo iptables -L -v -n and drop the output here.

If it DOES fix things, please then re-enable the firewall, test that container access is indeed broken again, then run the same command (sudo iptables -L -v -n) and share the output here.

When you do, let me know which scenario happened.
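
If it helps, a rough way to capture both scenarios for comparison (the file names below are just placeholders):

# with the firewall enabled (broken state):
sudo iptables -L -v -n > /tmp/iptables_fw_on.txt
# disable the DSM firewall in the UI, re-test container access, then:
sudo iptables -L -v -n > /tmp/iptables_fw_off.txt
# compare the two captures:
diff /tmp/iptables_fw_on.txt /tmp/iptables_fw_off.txt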

@m33ts4k0z

I had disabled the firewall in one of my attempts and unfortunately it didn't solve it. I will run the iptables command next time I update and let you know 👍

@telnetdoogie
Owner

For others experiencing this issue, please do the following and post the results here so I can better understand what's happening...

When you experience the connectivity issues, BEFORE trying to fix it, please run the following and post the results:

sudo journalctl | grep -i iptables
sudo iptables -L -v -n --line-numbers
sudo iptables-save

Once you've collected those, you should be able to resolve the issue until the next reboot by running:

sudo iptables -P FORWARD ACCEPT (but please capture the output of the three commands above first, to help me troubleshoot)

@telnetdoogie
Owner

I'm working through this with a user, and it appears that in some cases (still not sure why) the /var/packages/ContainerManager/scripts/start-stop-status start script does not execute (or does not fully execute) on Docker startup, so the:

    # Added by docker update
    iptables -P FORWARD ACCEPT

section does not run at all.

While I investigate further, I suggest users create a scheduled task on boot that runs as root and executes:

iptables -P FORWARD ACCEPT

@telnetdoogie
Owner

@ozhound do you happen to be running VMM on your Synology?

@telnetdoogie
Owner

@m33ts4k0z are you running VMM on your NAS?

@m33ts4k0z

No, I actually don't 🤔

telnetdoogie added the help wanted (Extra attention is needed) label Feb 21, 2025
@mrkhachaturov

mrkhachaturov commented Feb 23, 2025

Hello,

I have observed a similar behavior when creating a macvlan interface. Before the creation of the macvlan, the forwarding chain is present for Docker.

[user@nas: ~/docker/docker-hub/compose]$ sudo iptables -n -v -L FORWARD
Chain DEFAULT_FORWARD (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0  

However, after creating the macvlan, all these rules are removed. I verified that this issue occurs even without updating the Docker version. I have confirmed that this happens when removing and reinstalling ContainerManager from the UI, but I have not yet checked if this is sufficient to completely remove all changes made by this script.
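
Roughly what I mean by "creating the macvlan" (the subnet, gateway, parent interface, and network name below are placeholders, not my real values):

sudo docker network create -d macvlan \
  --subnet=192.168.10.0/24 --gateway=192.168.10.1 \
  -o parent=eth0 macvlan_test
# then re-check the FORWARD chain to see whether the docker0 rules are gone:
sudo iptables -n -v -L FORWARD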

I will check another Linux host to determine whether this behavior is standard or specific to Synology Linux.

@m33ts4k0z are you using a bridge network?

@m33ts4k0z

I tried with a new network, bridged, and host. The result is the same.

@telnetdoogie
Owner

telnetdoogie commented Feb 23, 2025

Thanks for the help folks!!

For additional info for those helping:

  • This problem only shows up on SOME users' installations - I haven't figured out what the differentiating factor is.
  • The solution I'd used successfully for some portion of those users was to add an iptables -P FORWARD ACCEPT into the /var/packages/ContainerManager/scripts/start-stop-status file (it works for many, yet some users still have issues).
  • For the users where there's still a problem, the command above runs during startup in that script, but the change is then undone, perhaps on Docker startup, or perhaps by another service.
  • Running iptables -P FORWARD ACCEPT typically solves the problem for users, but only until a reboot.
  • Running iptables -P FORWARD ACCEPT on startup as a scheduled task may not work, depending on which modules are loaded at scheduled-task run time. A recommendation for one user was to run the below script as a startup task; however, it'd be great to ensure that this just works with no additional scheduled tasks or janky hacks.
sleep 60 && iptables -P FORWARD ACCEPT

Docker manages iptables rules as a matter of course, so typically when creating networks, macvlans, etc., Docker does all of the iptables work out of the box; however, for users in this situation, there is apparently no FORWARD rule that allows forwarded traffic to reach the Docker networks that were created.

I've used systemd-analyze plot > ~/systemchain.svg and have started scraping through services which start after ContainerManager to look for potential services which might 'undo' the iptables rules, but have not yet found a candidate.

The file add_logging_to_start_script.sh was added, along with some candidate start-stop-status scripts in the test_file folder, to add more logging to the start script. If you want to do more troubleshooting, take a look at that script and at the start-stop-status.withlogging file, which is placed in the appropriate location and will log (whatever you want, if edited appropriately) to /var/log/messages on service startup.

If there's any more context I can think of, I'll add it here... Feel free to ask questions if you get anywhere or think you've found something that might be useful.

@telnetdoogie
Owner

It might be worth looking (for those where this is a problem) at the kernel ip_forward setting:

sudo sysctl net.ipv4.ip_forward

The value should be 1.
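
If it comes back as 0, a quick way to flip it temporarily for testing (this does not persist across reboots) is:

sudo sysctl -w net.ipv4.ip_forward=1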

I have a hard time troubleshooting this one because it has never been a problem on my 218+, so I don't have a broken environment to spend time testing with. I have to lean on the charity of others' time and troubleshoot through them.

I wish I did have an environment where I could spend more time troubleshooting this.

@telnetdoogie
Owner

To view iptables rule order, run sudo iptables -L -v -n --line-numbers

@mrkhachaturov

mrkhachaturov commented Feb 23, 2025

I have the 1821+ and 220+ models.

On the 1821+, I have an 802.3 LACP bond interface (bond0) with two 10Gb interfaces and a defined VLAN (bond0.1440).

In my Docker Compose setup, I am using macvlan with a separate VLAN ID (bond0.1450).

In this case, I do not need IP forwarding, but I would like to understand why forwarding is removed after certain actions.

@telnetdoogie
Owner

This is my iptables output (some irrelevant or sensitive entries removed) for a working Docker setup on a 218+:

Chain INPUT (policy ACCEPT 212K packets, 31M bytes)
num   pkts bytes target     prot opt in     out     source               destination
1     883K  751M INPUT_FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain FORWARD (policy ACCEPT 91463 packets, 7356K bytes)
num   pkts bytes target     prot opt in     out     source               destination
1    2045K 8492M FORWARD_FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 734K packets, 1122M bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain DOCKER (0 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.16.0.2           tcp dpt:53
2        0     0 ACCEPT     udp  --  !docker0 docker0  0.0.0.0/0            172.16.0.2           udp dpt:53
3        0     0 ACCEPT     tcp  --  !br-c9cfd0693ed4 br-c9cfd0693ed4  0.0.0.0/0            172.16.1.3           tcp dpt:8080
4        0     0 ACCEPT     tcp  --  !br-8669c2bf2967 br-8669c2bf2967  0.0.0.0/0            172.16.3.3           tcp dpt:80
5  ...
6  ... (bunch of entries omitted here)

Chain DOCKER-ISOLATION-STAGE-1 (0 references)
num   pkts bytes target     prot opt in     out     source               destination
1       34  2878 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (0 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-USER (0 references)
num   pkts bytes target     prot opt in     out     source               destination
1       67  5598 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain FORWARD_FIREWALL (1 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0
2    1954K 8485M ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
3     <REDACTED>
4      164  9840 RETURN     all  --  *      *       192.168.0.0/16       0.0.0.0/0
5    91299 7346K RETURN     all  --  *      *       172.16.0.0/12        0.0.0.0/0
6        0     0 RETURN     all  --  *      *       10.0.0.0/8           0.0.0.0/0
7        0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain INPUT_FIREWALL (1 references)
num   pkts bytes target     prot opt in     out     source               destination
1     393K   58M ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0
2     276K  662M ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
3     <REDACTED>
4     209K   30M RETURN     all  --  *      *       192.168.0.0/16       0.0.0.0/0
5     2437  238K RETURN     all  --  *      *       172.16.0.0/12        0.0.0.0/0
6        0     0 RETURN     all  --  *      *       10.0.0.0/8           0.0.0.0/0
7     1457 57509 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0

Here's the iptables output for a user where it's not working (they're also not using the firewall, so some firewall chains are missing; no issue there):

Chain INPUT (policy ACCEPT 5399 packets, 924K bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy DROP 645 packets, 82432 bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 4956 packets, 2644K bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain DOCKER (0 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.2           tcp dpt:9000
2        0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.3           tcp dpt:5000
3        0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.4           tcp dpt:5055
4        0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.5           tcp dpt:8989
5  ...
6  ... (bunch of entries omitted here)

Chain DOCKER-ISOLATION-STAGE-1 (0 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (0 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-USER (0 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Note the FORWARD policy of DROP in the 'broken' setup.

@telnetdoogie
Owner

telnetdoogie commented Feb 24, 2025

I just made an update to the with-logging script, which is now pushed to the main branch. It can be activated with sudo ./add_logging_to_start_script.sh after a git pull.

Here's what the startup looks like for me (working setup):

2025-02-24T09:11:02-06:00 TheBucket Synology-Docker[9859]: Start called in start-stop-status
2025-02-24T09:11:02-06:00 TheBucket Synology-Docker[9873]: Symobolic Links created in start-stop-status
2025-02-24T09:11:02-06:00 TheBucket Synology-Docker[9879]: iptables modules pre-iptablestool:
2025-02-24T09:11:02-06:00 TheBucket Synology-Docker[9880]: iptable_nat 2023 1 nf_nat_ipv4 4903 1 iptable_nat iptable_filter 1656 1 ip_tables 14342 2 iptable_filter,iptable_nat x_tables 17395 18 ip6table_filter,xt_iprange,xt_recent,ip_tables,xt_tcpudp,ipt_MASQUERADE,xt_geoip,xt_limit,xt_state,xt_conntrack,xt_LOG,xt_mac,xt_nat,xt_multiport,iptable_filter,xt_REDIRECT,ip6_tables,xt_addrtype
2025-02-24T09:11:03-06:00 TheBucket Synology-Docker[9945]: iptables modules added in start-stop-status
2025-02-24T09:11:03-06:00 TheBucket Synology-Docker[9949]: iptables modules post-iptablestool:
2025-02-24T09:11:03-06:00 TheBucket Synology-Docker[9950]: iptable_mangle 1720 0 iptable_nat 2023 1 nf_nat_ipv4 4903 1 iptable_nat iptable_filter 1656 1 ip_tables 14342 3 iptable_filter,iptable_mangle,iptable_nat x_tables 17395 21 ip6table_filter,xt_ipvs,xt_iprange,xt_mark,xt_recent,ip_tables,xt_tcpudp,ipt_MASQUERADE,xt_geoip,xt_limit,xt_state,xt_conntrack,xt_LOG,xt_mac,xt_nat,xt_multiport,iptable_filter,xt_REDIRECT,iptable_mangle,ip6_tables,xt_addrtype
2025-02-24T09:11:03-06:00 TheBucket Synology-Docker[9970]: FORWARD and DOCKER chains pre-FORWARD rule:
2025-02-24T09:11:03-06:00 TheBucket Synology-Docker[9971]: Chain FORWARD (policy ACCEPT) Chain DOCKER-ISOLATION-STAGE-1 (0 references) Chain DOCKER-ISOLATION-STAGE-2 (0 references) Chain DOCKER-USER (0 references) Chain FORWARD_FIREWALL (1 references)
2025-02-24T09:11:03-06:00 TheBucket Synology-Docker[9972]: about to add FORWARD ACCEPT rule in start-stop-status
2025-02-24T09:11:03-06:00 TheBucket Synology-Docker[9975]: FORWARD ACCEPT rule added in start-stop-status
2025-02-24T09:11:03-06:00 TheBucket Synology-Docker[9980]: FORWARD and DOCKER chains post-FORWARD rule:
2025-02-24T09:11:03-06:00 TheBucket Synology-Docker[9981]: Chain FORWARD (policy ACCEPT) Chain DOCKER-ISOLATION-STAGE-1 (0 references) Chain DOCKER-ISOLATION-STAGE-2 (0 references) Chain DOCKER-USER (0 references) Chain FORWARD_FIREWALL (1 references)
2025-02-24T09:11:23-06:00 TheBucket Synology-Docker[15917]: start_docker_daemon completed in start-stop-status
2025-02-24T09:14:10-06:00 TheBucket Synology-Docker[23423]: FORWARD and DOCKER chains after complete startup:
2025-02-24T09:14:10-06:00 TheBucket Synology-Docker[23424]: Chain FORWARD (policy ACCEPT) Chain DOCKER (0 references) Chain DOCKER-ISOLATION-STAGE-1 (0 references) Chain DOCKER-ISOLATION-STAGE-2 (0 references) Chain DOCKER-USER (0 references) Chain FORWARD_FIREWALL (1 references)

So the insertion of iptables kernel modules apparently adds the modules correctly.
My FORWARD policy is already ACCEPT before the policy is modified, so that step has no impact here.

I'm wondering what others' output would look like if they're in the situation where they have this container networking issue.

You can see the results of the startup after adding the logging and rebooting by running:
sudo cat /var/log/messages | grep Synology-Docker

@telnetdoogie
Owner

Here is a [modified for privacy] outcome for a user who has the problem of containers not being accessible:

2025-02-24T18:06:51+01:00 NAS_NAME Synology-Docker[17045]: Start called in start-stop-status
2025-02-24T18:06:51+01:00 NAS_NAME Synology-Docker[17059]: Symobolic Links created in start-stop-status
2025-02-24T18:06:51+01:00 NAS_NAME Synology-Docker[17086]: iptables modules pre-iptablestool:
2025-02-24T18:06:51+01:00 NAS_NAME Synology-Docker[17341]: iptables modules added in start-stop-status
2025-02-24T18:06:51+01:00 NAS_NAME Synology-Docker[17345]: iptables modules post-iptablestool:
2025-02-24T18:06:51+01:00 NAS_NAME Synology-Docker[17346]: iptable_mangle 1720 0 iptable_nat 2023 0 nf_nat_ipv4 4839 1 iptable_nat iptable_filter 1656 0 ip_tables 14150 3 iptable_filter,iptable_mangle,iptable_nat x_tables 17075 19 ip6table_filter,xt_ipvs,xt_iprange,xt_mark,xt_recent,ip_tables,xt_tcpudp,ipt_MASQUERADE,xt_limit,xt_state,xt_conntrack,xt_LOG,xt_nat,xt_multiport,iptable_filter,xt_REDIRECT,iptable_mangle,ip6_tables,xt_addrtype
2025-02-24T18:06:51+01:00 NAS_NAME Synology-Docker[17491]: FORWARD and DOCKER chains pre-FORWARD rule:
2025-02-24T18:06:51+01:00 NAS_NAME Synology-Docker[17492]: Chain FORWARD (policy ACCEPT)
2025-02-24T18:06:51+01:00 NAS_NAME Synology-Docker[17493]: about to add FORWARD ACCEPT rule in start-stop-status
2025-02-24T18:06:51+01:00 NAS_NAME Synology-Docker[17497]: FORWARD ACCEPT rule added in start-stop-status
2025-02-24T18:06:51+01:00 NAS_NAME Synology-Docker[17509]: FORWARD and DOCKER chains post-FORWARD rule:
2025-02-24T18:06:51+01:00 NAS_NAME Synology-Docker[17510]: Chain FORWARD (policy ACCEPT)
2025-02-24T18:06:57+01:00 NAS_NAME Synology-Docker[22220]: start_docker_daemon completed in start-stop-status
2025-02-24T18:07:06+01:00 NAS_NAME Synology-Docker[31620]: FORWARD and DOCKER chains after complete startup:
2025-02-24T18:07:06+01:00 NAS_NAME Synology-Docker[31621]: Chain FORWARD (policy DROP) Chain DOCKER (0 references) Chain DOCKER-ISOLATION-STAGE-1 (0 references) Chain DOCKER-ISOLATION-STAGE-2 (0 references) Chain DOCKER-USER (0 references)

...As you can see, the FORWARD policy changes to DROP before startup completes (in this case, the ACCEPT policy was already in place).
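
If you want to catch when the flip happens, a rough watch loop you could leave running as root in an SSH session across a package restart (not part of this repo, just a generic sketch) would be:

while true; do
    echo "$(date): $(iptables -S FORWARD | head -1)"
    sleep 5
done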

@telnetdoogie
Owner

telnetdoogie commented Feb 24, 2025

If you're having this issue, run sudo iptables-save and see if you have a DEFAULT_FORWARD chain in the *filter section.
Based on comparing iptables-save output between a working setup and a non-working setup, it appears that in the non-working setup, Docker does NOT create the DEFAULT_FORWARD chain.
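
A quick way to check just for that chain (no output means the chain is missing):

sudo iptables-save -t filter | grep '^:DEFAULT_FORWARD'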

Apparently a way to work around this would be to add two rules:

sudo iptables -I FORWARD -i docker0 -j ACCEPT
sudo iptables -I FORWARD -o docker0 -j ACCEPT

...but someone with the issue would need to test that out and see if it solves the problem for them.

...In a working setup (it doesn't matter whether FORWARD is DROP or ACCEPT):

...
...
*filter
:INPUT ACCEPT [278:53931]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [1108:208954]
:DEFAULT_FORWARD - [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
...
...

...In a non-working setup (FORWARD has to be set to ACCEPT with this setup):

...
...
*filter
:INPUT ACCEPT [12069:2865292]
:FORWARD ACCEPT [1426:701840]
:OUTPUT ACCEPT [9780:2006859]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
...
...

(notice no DEFAULT_FORWARD chain)

@telnetdoogie
Owner

The failure of the Docker daemon to create the DEFAULT_FORWARD chain may also be caused by the permissions of the user running the Docker daemon.
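
A quick sanity check here (just a generic command, nothing repo-specific) is to confirm which user dockerd is actually running as, since a non-root daemon can't manage iptables chains:

ps aux | grep '[d]ockerd'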

@telnetdoogie
Owner

For those following the saga, adding the two rules on startup:

sudo iptables -I FORWARD -i docker0 -j ACCEPT
sudo iptables -I FORWARD -o docker0 -j ACCEPT

Doesn't work.
