Define Linux Network Devices #1271
Conversation
/assign @samuelkarp

https://github.com/opencontainers/runtime-spec/blob/main/features.md should be updated too
Force-pushed from 51e5104 to 3a666eb
updated and addressed the comments

AI @aojea (document the cleanup and destroy of the network interfaces)

From the in-person discussion today:
config-linux.md (Outdated)

> This schema focuses solely on moving existing network devices identified by name into the container namespace. It does not cover the complexities of network device creation or network configuration, such as IP address assignment, routing, and DNS setup.
>
> **`netDevices`** (object, OPTIONAL) set of network devices that MUST be available in the container. The runtime is responsible for providing these devices; the underlying mechanism is implementation-defined.
This spec says "MUST", but I think a rootless container can't do it because it doesn't have CAP_NET_ADMIN, right?
I'm not sure we should take care of the rootless container.
Could be an error in the case of a rootless container, if the runtime is not able to satisfy the MUST condition.
> Could be an error in the case of a rootless container, if the runtime is not able to satisfy the MUST condition.

+1, but it'd be better to clarify it in the spec.
added more explanations about the runtime and network devices lifecycle and runtime checks, PTAL

Pushed a new commit addressing those comments; the changelog is:
My "border line" is about features and fields provided by kernel netlink api. If something is done with it, I don't see big difference between call "move interface xyz to namespace abc" vs. "set property foo to value bar on interface xyz in namespace abc". Or to what it worth, "put value 123 into cgroup foobar". |
To me, this sounds like a runtime should implement most of what `ip(8)` does. If that's the case, here's a crazy idea. Considering that ip(8) is a de facto standard on Linux, perhaps such configuration can be achieved by supplying a sequence of `ip` commands. Something like:

```json
"linux": {
  "netDevices": [
    {
      "name": "enp34s0u2u1u2",
      "ct_name": "eth0",
      "config": [
        "ip addr add 10.2.3.4/25 dev $IF",
        "ip link set $IF mtu 1350",
        "ip route add 10.10.0.0/16 via 10.2.3.99 dev $IF proto static metric 100",
        "ip route add default dev $IF"
      ]
    }
  ]
}
```

which will result in the runtime running the host's `ip` binary with the supplied arguments. The upsides are clear enough; the biggest downside, I guess, is that the scope is not well defined.
I don't like the idea of exec'ing commands, especially in Go implementations, where goroutines and namespaces are problematic. I still think that preventing failures, rather than detecting them at runtime, is desirable: you can argue that you should not add duplicates, but bugs exist, and if somebody accidentally adds a duplicate interface name it will cause a problem in production that we could have avoided simply by not allowing it to be defined.
I believe that this is a really bad idea, compared to having a limited set of config options implemented. To provide more insight into our use case, here is a snippet of the code that we want to get rid of, currently implemented as an OCI runtime hook (btw, including an already unreliable call to …).
Let me move the milestone from vNext (v1.2.1?) to vNextNext (v1.3.0?)
Do you have an approximate estimate of how long 1.3.0 can take? I have some dependencies on this feature and it would be nice to be able to account for that time.
@AkihiroSuda It looks like 1.2.1 got tagged yesterday: https://github.com/opencontainers/runtime-spec/releases/tag/v1.2.1. Is there anything blocking the merge of this PR into main?
I still don't think it's appropriate to expect the runtime to set up the network interfaces and believe we're opening a can of worms here, but not strongly enough to block it outright (especially with Kir approving it, and thus the implied maintenance of it in runc). ❤️
I don't see risk as long as we stick to network interfaces; the moment we leak networks into this, we start to create runtime dependencies, and I fully agree with you. That is also the reason why the last proposal removed entirely the IP initialization bits (#1271 (comment)). I have some ideas on how to solve this problem without modifying the spec, but my priority now is to solve the existing problem in the ecosystem.

The problem this solves is that today there is hardware that needs to use network interfaces, mainly GPUs (but there are other cases). Since there is no way to declaratively move these interfaces to the container, everyone solves the problem from a different angle, by hooking into the Pod/Container network namespace creation.

All these solutions are brittle, require high privileges that expose security problems, and are hard to troubleshoot, since they run in different places during container creation; they cause fragmentation in the ecosystem and a bad user experience. The goal here is that developers can just patch the OCI spec to say "move eth2 to this container with /dev/gpu0", which is simple to do with CDI and NRI and needs no extra privileges ... I personally don't feel that, as long as we stick to this definition, we will open that Pandora's box that is the network (which none of us want).
I'm just taking a look; I tried to go through all the 137 comments, so sorry if I'm missing something.

I guess most concerns are solved now that there is no reference to the IP address and friends? Is there any unaddressed concern still?

The idea LGTM. I've also quickly checked the runc implementation; it seemed clean and nothing caught my attention (like no weird magic with IPs or anything, that is just not touched at all).

@aojea I don't know about this, but how does this work in practice? Is it expected that the host network interface will be configured by the host (i.e. IP address, MTU, etc.) and then moved into the container, with all of the configuration preserved so it all "just works" when moved into the container? Or will CNI or some other entity need to do something?
config-linux.md (Outdated)

> The name of the network device is the entry key.
> Entry values are objects with the following properties:
>
> * **`name`** *(string, OPTIONAL)* - the name of the network device inside the container namespace. If not specified, the host name is used. The network device name is unique per network namespace; if a network device with the same name already exists, the rename operation will fail. The runtime MAY check that the name is unique before the rename operation.
Just curious, as I'm not very familiar with NRI and I don't know if this concern makes sense, so please let me know. How can NRI plugins using this decide on the container interface name to use? I mean, choose one that won't clash with the ones set by other plugins? Can they see what has been done so far by previous plugins? Or is this not an issue at all (in that case, can you explain briefly why? I'm curious :))
In Kubernetes, in both main runtimes, containerd and CRI-O, the name of the interface inside the container is always `eth0`, so for 95% of the cases in Kubernetes the problem is easy to solve.

There are cases where people add additional interfaces with out-of-band mechanisms, as in #1271 (comment). In that case, there are several options:

- add a randomly generated name with enough entropy (see the sketch below)
- inspect the network namespace and check for duplicates
- fail with a name-collision error
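For the first option, here is a minimal sketch of what such a name generator could look like; the helper name and prefix are assumptions for illustration, not part of the spec or of any existing API:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// randomIfaceName is a hypothetical helper: it builds an interface name from a
// short prefix plus random hex, giving enough entropy to make collisions very
// unlikely while staying within the kernel's 15-character interface name limit
// (IFNAMSIZ - 1).
func randomIfaceName(prefix string) (string, error) {
	b := make([]byte, 4) // 4 random bytes -> 8 hex characters
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	name := prefix + hex.EncodeToString(b)
	if len(name) > 15 {
		return "", fmt.Errorf("interface name %q exceeds 15 characters", name)
	}
	return name, nil
}

func main() {
	name, err := randomIfaceName("dev")
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // e.g. dev3fa1c2d9
}
```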
Exactly, but you can't inspect the netns because it hasn't been created yet. So, how can those tools, before choosing a name for the interface inside the container, check which names were used by others? E.g. if NRI has several plugins and more than one adds an interface, how can the second plugin know `eth1` is added and avoid using that name?

The randomly generated name would be an option, but it would be nice to understand if that is needed or if people can just choose names that avoid collisions.
In Kubernetes the network namespace is created by the runtime and there will be only an `eth0` interface. If there are more interfaces, it is because some component is adding them via an out-of-band process, which will have exactly the same problem. This works today because cluster administrators only set up one component to add additional interfaces.

This reinforces my point in #1271 (comment): using a well-defined specification will help multiple implementations synchronize, and we need this primitive to standardize these behaviors and to build higher-level APIs ... we are already doing it for Network Status (kubernetes/enhancements#4817), and we need to do it for configuration based on this.
I feel we are talking about different things. Let's assume this PR is in, implemented, etc. How does an NRI plugin choose a network interface name that doesn't collide with a network interface added by another plugin?
I'm not into the internal details of CDI or NRI, but I think those modify the OCI spec, so any plugin will be able to check in the OCI spec the transformations made by all the other plugins, including the interface names.
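As a rough illustration of that idea, a plugin could scan the `netDevices` entries already present in the spec it receives and pick a name that does not clash. The types and helper below are simplified assumptions for the sketch, not the actual runtime-spec Go bindings or the NRI API:

```go
package main

import "fmt"

// netDevice is a simplified stand-in for the proposed spec entry; only the
// optional in-container name matters for collision checking.
type netDevice struct {
	Name string `json:"name,omitempty"`
}

// pickName returns the first "<prefix>N" name not already requested by another
// netDevices entry (either as an explicit rename or as a kept host name).
func pickName(netDevices map[string]netDevice, prefix string) string {
	used := make(map[string]bool)
	for hostName, dev := range netDevices {
		name := dev.Name
		if name == "" {
			name = hostName // without a rename, the device keeps its host name
		}
		used[name] = true
	}
	for i := 0; ; i++ {
		candidate := fmt.Sprintf("%s%d", prefix, i)
		if !used[candidate] {
			return candidate
		}
	}
}

func main() {
	// Entries that earlier plugins may already have written into the spec.
	devs := map[string]netDevice{
		"enp3s0": {Name: "net0"},
		"enp4s0": {}, // keeps its host name
	}
	fmt.Println(pickName(devs, "net")) // prints "net1"
}
```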
> Right, it could work fine. However, if we don't allow interface renames we can just forget about these problems too.

The goal is to decouple the interface lifecycle and configuration from the OCI runtime; that is the part that SHOULD NOT be handled by the OCI runtimes or the actors of the pod/container creation. I think there are two scenarios:
Okay, we were talking over Slack and there are two things that we think we still need to answer:

I ask 2. for two reasons: a) to be clear that it is the concern of another part of the stack; b) to understand how this will be used and whether anything else is missing here (i.e. if CNI is expected to handle it, make sure it is not running too late and that it has all the info to realize there is an extra interface to configure; if another component is expected to handle it, see that it can, etc.)
Thanks @rata for your help today. I updated the PR to implement it in runc, with integration tests that show the behavior.

The interface configuration is preserved, so users can set the interface down in the host namespace, configure it (IP address, MTU, hardware address), and the runtime will move it to the network namespace while maintaining that configuration. This removes the need to include network configuration in the runtime and allows implementations to use the preparation of the device to configure it without risks (the Kubernetes use case). Users can still decide to use a process inside the container to configure the network, use DHCP, or some sort of bootstrap à la cloud-init.
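To make that workflow concrete, here is a minimal sketch of the host-side preparation step using the vishvananda/netlink package. The device name, MTU, MAC, and address are made-up values; this is not runc code, just an illustration of preconfiguring an interface that the runtime will later move:

```go
package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Look up the host device that will later be handed to the container.
	link, err := netlink.LinkByName("eth1")
	if err != nil {
		log.Fatal(err)
	}

	// Set the device down while it is being prepared.
	if err := netlink.LinkSetDown(link); err != nil {
		log.Fatal(err)
	}

	// Preconfigure link attributes: MTU and MAC address.
	if err := netlink.LinkSetMTU(link, 1400); err != nil {
		log.Fatal(err)
	}
	mac, err := net.ParseMAC("02:42:ac:11:00:02")
	if err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetHardwareAddr(link, mac); err != nil {
		log.Fatal(err)
	}

	// Preconfigure an IP address; the runtime is then expected to preserve it
	// when moving the device into the container's network namespace.
	addr, err := netlink.ParseAddr("10.2.3.4/24")
	if err != nil {
		log.Fatal(err)
	}
	if err := netlink.AddrAdd(link, addr); err != nil {
		log.Fatal(err)
	}
}
```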
rebased and added this last requirement to preserve the network config:

```diff
diff --git a/config-linux.md b/config-linux.md
index 6682e16..1f0e808 100644
--- a/config-linux.md
+++ b/config-linux.md
@@ -201,6 +201,8 @@ This schema focuses solely on moving existing network devices identified by name
 The runtime MUST check that it is possible to move the network interface to the container namespace and MUST [generate an error](runtime.md#errors) if the check fails.
+The runtime MUST preserve the existing network interface attributes, like MTU, MAC and IP addresses, enabling users to preconfigure the interfaces.
+
 The runtime MUST set the network device state to "up" after moving it to the network namespace to allow the container to send and receive network traffic through that device.
```
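For readers who want to see what these requirements translate to in practice, below is a hedged sketch of the check/move/rename/up sequence using the vishvananda/netlink and vishvananda/netns packages. It is not the runc implementation, the names and path are placeholders, and IP address preservation (which the new text requires and which runc handles by re-reading and re-applying addresses) is omitted for brevity:

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
	"github.com/vishvananda/netns"
)

// moveNetDevice moves a host network device into the network namespace at
// nsPath, optionally renames it, and brings it up, roughly mirroring the
// behavior described in the spec text above.
func moveNetDevice(hostName, ctrName, nsPath string) error {
	// The lookup doubles as a check that the device exists and can be moved.
	link, err := netlink.LinkByName(hostName)
	if err != nil {
		return err
	}

	nsHandle, err := netns.GetFromPath(nsPath)
	if err != nil {
		return err
	}
	defer nsHandle.Close()

	// Move the device; the kernel takes it down as part of the namespace change
	// but preserves link attributes such as MTU and MAC address.
	if err := netlink.LinkSetNsFd(link, int(nsHandle)); err != nil {
		return err
	}

	// Operate inside the container namespace through a scoped netlink handle.
	h, err := netlink.NewHandleAt(nsHandle)
	if err != nil {
		return err
	}
	defer h.Close()

	ctrLink, err := h.LinkByName(hostName)
	if err != nil {
		return err
	}
	if ctrName != "" && ctrName != hostName {
		if err := h.LinkSetName(ctrLink, ctrName); err != nil {
			return err
		}
		if ctrLink, err = h.LinkByName(ctrName); err != nil {
			return err
		}
	}

	// The spec requires the device to be set "up" after the move.
	return h.LinkSetUp(ctrLink)
}

func main() {
	if err := moveNetDevice("eth1", "ctr0", "/var/run/netns/container1"); err != nil {
		log.Fatal(err)
	}
}
```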
Link attributes are preserved, but not IP addresses or routes. As IP address configuration needs to happen after network namespace creation, doesn't that leave only runtime hooks, address configuration options in the spec, or running the IP configuration locally as options? Local IP configuration will have some timing issues, though.
@pfl which timing issues do you mean?
Just that the containerized IP configuration needs to happen before the workload runs, unless the workload properly waits for the interfaces to come online. I think the order is clear when there is a CNI setting up addresses, less so if the workload or another container needs to do the configuration more or less in parallel.
that is the
I think we are derailing the discussion: this is about network interfaces, not about IP configuration of containers; that already works today with CNI or libnetwork, and this is completely orthogonal to it. The applications that need additional network interfaces may fall into two categories, but let's keep in mind that the container already has an interface and an IP:
Let me share the big picture: https://docs.google.com/presentation/d/16eT_EYVbm75UvqKVg8L55VtRuGJ1_dr463OpbrRN2gg/edit?usp=sharing . CNI does one job and does it well, but it struggles to handle more complex scenarios that require a more declarative approach, one that will allow higher-level APIs to build up this complexity of network configuration.
Fair enough, PR #4538 indeed reads IP addresses and writes them to the interface in the new namespace, quite differently from iproute2.

With IP addresses copied to the new network interface, couldn't the addresses be just as easily expressed in the spec? They are, after all, set to a specific value as far as runc is concerned. When DRA and NRI are involved, as in the big-picture presentation, doesn't the spec at that point need IP address information for the NRI plugin to set some of them?
There is a reasonable concern about scope creep and this being a magnet for "network things".

With DRA and NRI or CDI, there is a "driver" entity. The driver may receive the network details via the Kubernetes APIs or decide to use its own; it will preprovision the interface (any sort of operation: if it is virtual, create it; if it is a VLAN or an SR-IOV VF, ... whatever), and in this part it can also apply the network configuration. The Kubernetes kubelet will stop at this preprovisioning hook, before sending the data to the container runtime. So at this point, we just need to modify the spec to attach the preprovisioned interface ... Bear in mind that there are also applications that can configure the IPs, routes, and all that stuff directly, or it can be done in CNI or NRI or OCI hooks; that is still another possibility. So this is a very good starting point that unblocks 90% of the problems we have today ...
I think this looks mostly fine; I added some comments, mostly on the interface rename. I think that is the part that needs a little bit of ironing out (or I need to understand better why it is not a problem, maybe I'm missing something :))
You mention the scenario of the runtime not participating in the container cleanup. In that case, I wonder what would happen in these cases:

Scenario A:
- The host has two interfaces: `rata` and `antonio`.
- A container is created and interface `rata` is moved to the container.
- The interface name inside the container is `antonio`.
- The container crashes and the kernel moves the interface back into the host network namespace. What will it be called?

Scenario B:
- The host has two interfaces: `rata` and `antonio`.
- A container is created and interface `rata` is moved to the container.
- The interface name inside the container is `eth2`.
- The container crashes and the kernel moves the interface back into the host network namespace. Now the host has: `antonio`, `eth2`.
- If the container is created again, how will it recognize that the interface it is interested in is now called `eth2`?

I think if we can find a way out for these cases, this looks good to go for me. Well, these scenarios and their combinations (like scenario A, and how a new container finds that the interface it is looking for is now called `antonioX`, if the kernel did that rename to avoid clashes, and not `rata`).

One option might be the alias that I suggested inline (and just requiring nodes not to have interfaces named `eth0` to be moved); another might be to not support renames at all. But if those scenarios are not good, there might be some other way out. Maybe I'm thinking something is a problem when it isn't.

@aojea Let me know what you think or what you can find out about what the kernel does for scenario A :)
config-linux.md (Outdated)

> The runtime MUST check that it is possible to move the network interface to the container namespace and MUST [generate an error](runtime.md#errors) if the check fails.
>
> The runtime MUST preserve the existing network interface attributes, like MTU, MAC and IP addresses, enabling users to preconfigure the interfaces.
I think the wording is a little vague. I'd have an exhaustive list of things the runtime must preserve, to avoid different runtimes doing something different by mistake.
Also, if the kernel preserves the MTU and MAC already, I'd just remove those things? What do others think?
The kernel has quite a strict policy against breaking user space, so I don't think this behavior will change; we can depend on it. It doesn't seem wrong to depend on it either, IMHO. And I don't think there is any way other than netlink to change the namespace of an iface, so it seems all runtimes will use that.
agree, need to change this
config-linux.md (Outdated)

> Entry values are objects with the following properties:
>
> * **`name`** *(string, OPTIONAL)* - the name of the network device inside the container namespace. If not specified, the host name is used. The network device name is unique per network namespace; if a network device with the same name already exists, the rename operation will fail. The runtime MAY check that the name is unique before the rename operation.
>   The runtime, when participating in the container termination, must restore the original name to guarantee the idempotence of operations, so a container that moves an interface and renames it can be created and destroyed multiple times with the same result.
Right, so if the container crashes this idempotency will be broken, right? Like if a container crashes and it is later created again on this node, it will fail (the node interface is not called what it is expected to be called). Right?

Would it be an issue if we don't allow interface renames, to avoid this problem?

If not supporting that is an issue, maybe we can add aliases? With ip we can do it like this: `ip link property add dev <device_name> altname rata`

Will an alias be enough? We would be leaking aliases in the worst case; I'm not sure if that can have some undesired issues, like learning who used this before from the iface name or something.

IMHO, if we are not sure we need to change the interface name, my preference would be to not support changing the name at all for now; we can add the alias or something else later if needed.

Things like the host and the container default interface having the same name are an issue, but we can easily change the host interface name. And for ages the default has not been `eth0`, so it seems this idea could fly.

What do you think?
the kernel implemented "move and rename" in the same operation because name conflicts are a known issue in containerized environments, especially with systemd rules and more complex setups; see the related discussion at https://lore.kernel.org/r/netdev/[email protected]/T/ . There are also comments about altnames being more problematic, since they increase the risk of collision; see https://gist.github.com/aojea/a5371456177ae85765714fd52db55fdf
config-linux.md (Outdated)

> The name of the network device is the entry key.
> Entry values are objects with the following properties:
>
> * **`name`** *(string, OPTIONAL)* - the name of the network device inside the container namespace. If not specified, the host name is used. The network device name is unique per network namespace; if a network device with the same name already exists, the rename operation will fail. The runtime MAY check that the name is unique before the rename operation.
Right, it could work fine. However, if we don't allow interface renames we can just forget about these problems too.
The proposed "netdevices" field provides a declarative way to specify which host network devices should be moved into a container's network namespace. This approach is similar than the existing "devices" field used for block devices but uses a dictionary keyed by the interface name instead. The proposed scheme is based on the existing representation of network device by the `struct net_device` https://docs.kernel.org/networking/netdevices.html. This proposal focuses solely on moving existing network devices into the container namespace. It does not cover the complexities of network configuration or network interface creation, emphasizing the separation of device management and network configuration. Signed-off-by: Antonio Ojea <[email protected]>
Pushed the last proposal. The most important change is to not require the runtime to handle the interface lifecycle and not to return the interface back during container deletion; this is based on two premises explained in the updated text.

I've added many more explanations of all the decisions in the text, plus suggestions on how to handle certain situations, like interface renames or interaction with systemd. You can find technical research that explains all the relations between namespaces and interfaces in https://gist.github.com/aojea/a5371456177ae85765714fd52db55fdf . I've also updated the … A list of real use cases that justify this proposal is included in the commit message below.
The proposed "netdevices" field provides a declarative way to specify which host network devices should be moved into a container's network namespace.
This approach is similar than the existing "devices" field used for block devices but uses a dictionary keyed by the interface name instead.
The proposed scheme is based on the existing representation of network device by the
struct net_device
https://docs.kernel.org/networking/netdevices.html.
This proposal focuses solely on moving existing network devices into the container namespace. It does not cover the complexities of network configuration or network interface creation, emphasizing the separation of device management and network configuration.
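For illustration, here is a minimal sketch of what the resulting configuration fragment could look like, using hypothetical Go types that mirror the described shape (a map keyed by the host device name with an optional in-container name); these are assumptions for the example, not the official runtime-spec Go bindings:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// LinuxNetDevice carries the optional name the device should have inside the
// container's network namespace (field names here are illustrative).
type LinuxNetDevice struct {
	Name string `json:"name,omitempty"`
}

// Linux holds only the field relevant to this proposal.
type Linux struct {
	NetDevices map[string]LinuxNetDevice `json:"netDevices,omitempty"`
}

func main() {
	cfg := map[string]Linux{
		"linux": {
			NetDevices: map[string]LinuxNetDevice{
				// Move host device "eth1" into the container and rename it to "ctr0".
				"eth1": {Name: "ctr0"},
				// Move "ens4" keeping its host name.
				"ens4": {},
			},
		},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```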
A list of real use cases that justify this proposal is:

- Pre-Configuring Physical Devices: a physical interface is configured on the host beforehand, and `netDevices` is used to move the pre-configured interface into the container.
- Creating and Moving Virtual Interfaces: a `macvlan` interface is created on the host, based on an existing physical interface, and `netDevices` is used to move the MACVLAN interface into the container.
- Network Function Containers: `netDevices` is used to move multiple physical or virtual interfaces into the container.

References
Fixes: #1239