# vrnetlab - VR Network Lab

This is a fork of the original [plajjan/vrnetlab](https://github.com/plajjan/vrnetlab)
project and was created specifically to make vrnetlab-based images runnable by
[containerlab](https://containerlab.srlinux.dev).

The documentation provided in this fork only explains the parts that have been
changed from the upstream project. To get a general overview of the vrnetlab
project itself, consider reading the [docs of the upstream repo](https://github.com/vrnetlab/vrnetlab/blob/master/README.md).

## What is this fork about?

At [containerlab](https://containerlab.srlinux.dev) we needed to have
[a way to run virtual routers](https://containerlab.dev/manual/vrnetlab/)
alongside the containerized Network Operating Systems.

Vrnetlab provides perfect machinery to package the most common routing VMs as
containers. What upstream vrnetlab doesn't do, though, is create datapaths
between the VMs in a "container-native" way.

Vrnetlab relies on a separate VM ([vr-xcon](https://github.com/vrnetlab/vrnetlab/tree/master/vr-xcon))
to stitch sockets exposed on each container and that doesn't play well with the
regular ways of interconnecting container workloads.

This fork adds the additional option `connection-mode` to the `launch.py` script
of supported VMs. The `connection-mode` option controls how vrnetlab creates
datapaths for launched VMs.

The `connection-mode` values make it possible to run vrnetlab containers with
networking that doesn't require a separate container and is native to tools such
as docker.
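
For illustration only, assuming the option is exposed as a conventional
`--connection-mode` argparse flag on the fork's `launch.py` (the flag spelling and
any other arguments here are assumptions, not taken from this README), selecting
the datapath inside a container could look like this:

```bash
# Hypothetical invocation -- check the launch.py of the specific VM for the
# real argument set; only the connection-mode option itself is described above.
/launch.py --connection-mode tc
```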

### Container-native networking?

Yes, the term is bloated. What it actually means is that this fork makes it
possible to add interfaces to a container hosting a qemu VM and vrnetlab will
recognize those interfaces and stitch them with the VM interfaces.

With this you can, for example, add veth pairs between containers as you would
normally and vrnetlab will make sure these ports get mapped to your routers'
ports. In essence, that allows you to work with your vrnetlab containers like a
normal container and get the datapath working in the same "native" way.
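
As a rough sketch of what that looks like with plain iproute2 (the container
names `vr1`/`vr2` and the temporary interface names are made up for
illustration), two vrnetlab containers could be wired together with a veth pair
like this:

```bash
# Create a veth pair and move one end into each container's network namespace,
# renaming the ends to eth1 so vrnetlab picks them up as dataplane ports.
pid1=$(docker inspect -f '{{.State.Pid}}' vr1)
pid2=$(docker inspect -f '{{.State.Pid}}' vr2)

ip link add vr1eth1 type veth peer name vr2eth1
ip link set vr1eth1 netns "$pid1"
ip link set vr2eth1 netns "$pid2"

nsenter -t "$pid1" -n ip link set vr1eth1 name eth1
nsenter -t "$pid1" -n ip link set eth1 up
nsenter -t "$pid2" -n ip link set vr2eth1 name eth1
nsenter -t "$pid2" -n ip link set eth1 up
```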

> [!IMPORTANT]
> Although the changes we made here are general purpose and you can run
> vrnetlab routers with docker CLI or any other container runtime, the purpose
> of this work was to couple vrnetlab with containerlab.
>
> With this being said, we recommend readers start their journey from
> this [documentation entry](https://containerlab.dev/manual/vrnetlab/)
> which will show you how easy it is to run routers in a containerized setting.

## Connection modes

As mentioned above, the major change this fork brings is the ability to run
vrnetlab containers without requiring [vr-xcon](https://github.com/vrnetlab/vrnetlab/tree/master/vr-xcon),
using container-native networking instead.

For containerlab the default connection mode value is `connection-mode=tc`.
With this particular mode we use **tc-mirred** redirects to stitch a container's
interfaces `eth1+` with the ports of the qemu VM running inside.

We scrambled through many alternatives, which I described in
[this post](https://netdevops.me/2021/transparently-redirecting-packets/frames-between-interfaces/),
but tc redirect (tc-mirred :star:) works best of all.
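
For reference, the kind of rules this boils down to is a pair of
ingress-to-egress mirred redirects between a container interface and the
corresponding VM tap device; the interface names below are illustrative:

```bash
# Redirect everything arriving on eth1 to tap1 (the qemu VM port) and back.
tc qdisc add dev eth1 clsact
tc qdisc add dev tap1 clsact
tc filter add dev eth1 ingress matchall action mirred egress redirect dev tap1
tc filter add dev tap1 ingress matchall action mirred egress redirect dev eth1
```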

### Mode List

Full list of connection mode values:

| Mode       | Status               | Description |
| ---------- | -------------------- | ----------- |
| tc-mirred  | :white_check_mark:   | Uses **tc-mirred** redirects to stitch the container's `eth1+` interfaces with the VM's `tap` interfaces. Cleanest solution for point-to-point links. |
| bridge     | :last_quarter_moon:  | No additional kernel modules and has native qemu/libvirt support (a rough by-hand equivalent is sketched below). Does not support passing STP. Requires restricting `MAC_PAUSE` frames in order to support LACP. |
| ovs-bridge | :white_check_mark:   | Same as a regular bridge, but uses OvS (Open vSwitch). |
| macvtap    | :x:                  | Requires mounting entire `/dev` to a container namespace. Needs file descriptor manipulation due to no native qemu support. |
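
As a rough illustration, the `bridge` mode amounts to something like the
following done by hand inside the container (interface and bridge names are
illustrative):

```bash
# Enslave the container's eth1 and the VM's tap1 to a linux bridge.
ip link add br1 type bridge
ip link set br1 up
ip link set eth1 master br1
ip link set tap1 master br1
```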

## Management interface

## Which vrnetlab routers are supported?

Since the changes we made in this fork are VM specific, we added a few popular
routing products: