
Commit 43e48eb

Merge branch 'master' into transparent-mgmt-intfs-dev
2 parents 75a64be + 4583deb

4 files changed: +107 -37 lines changed


README.md

Lines changed: 60 additions & 21 deletions
```diff
@@ -1,46 +1,84 @@
 # vrnetlab - VR Network Lab
 
-This is a fork of the original [plajjan/vrnetlab](https://github.com/plajjan/vrnetlab) project. The fork has been created specifically to make vrnetlab-based images to be runnable by [containerlab](https://containerlab.srlinux.dev).
+This is a fork of the original [plajjan/vrnetlab](https://github.com/plajjan/vrnetlab)
+project and was created specifically to make vrnetlab-based images runnable by
+[containerlab](https://containerlab.srlinux.dev).
 
-The documentation provided in this fork only explains the parts that have been changed in any way from the upstream project. To get a general overview of the vrnetlab project itself, consider reading the docs of the upstream repo.
+The documentation provided in this fork only explains the parts that have been
+changed from the upstream project. To get a general overview of the vrnetlab
+project itself, consider reading the [docs of the upstream repo](https://github.com/vrnetlab/vrnetlab/blob/master/README.md).
 
 ## What is this fork about?
 
-At [containerlab](https://containerlab.srlinux.dev) we needed to have [a way to run virtual routers](https://containerlab.srlinux.dev/manual/vrnetlab/) alongside the containerized Network Operating Systems.
+At [containerlab](https://containerlab.srlinux.dev) we needed to have
+[a way to run virtual routers](https://containerlab.dev/manual/vrnetlab/)
+alongside the containerized Network Operating Systems.
 
-Vrnetlab provides a perfect machinery to package most-common routing VMs in the container packaging. What upstream vrnetlab doesn't do, though, is creating datapath between the VMs in a "container-native" way.
-Vrnetlab relies on a separate VM (vr-xcon) to stich sockets exposed on each container and that doesn't play well with the regular ways of interconnecting container workloads.
+Vrnetlab provides perfect machinery to package most-common routing VMs in
+container packaging. What upstream vrnetlab doesn't do, though, is create
+datapaths between the VMs in a "container-native" way.
 
-This fork adds additional option for `launch.py` script of the supported VMs called `connection-mode`. This option allows to choose the way vrnetlab will create datapath for the launched VMs.
+Vrnetlab relies on a separate VM ([vr-xcon](https://github.com/vrnetlab/vrnetlab/tree/master/vr-xcon))
+to stitch sockets exposed on each container and that doesn't play well with the
+regular ways of interconnecting container workloads.
 
-By adding a few options a `connection-mode` value can be set to, we made it possible to run vrnetlab containers with the networking that doesn't require a separate container and is native to the tools like docker.
+This fork adds the additional option `connection-mode` to the `launch.py` script
+of supported VMs. The `connection-mode` option controls how vrnetlab creates
+datapaths for launched VMs.
+
+The `connection-mode` values make it possible to run vrnetlab containers with
+networking that doesn't require a separate container and is native to tools such
+as docker.
 
 ### Container-native networking?
 
-Yes, the term is bloated, what it actually means is that with the changes we made in this fork it is possible to add interfaces to a container that hosts a qemu VM and vrnetlab will recognize those interfaces and stitch them with the VM interfaces.
+Yes, the term is bloated. What it actually means is this fork makes it possible
+to add interfaces to a container hosting a qemu VM and vrnetlab will recognize
+those interfaces and stitch them with the VM interfaces.
 
-With this you can just add, say, veth pairs between the containers as you would do normally, and vrnetlab will make sure that these ports get mapped to your router' ports. In essence, that allows you to work with your vrnetlab containers like with a normal container and get the datapath working in the same "native" way.
+With this you can, for example, add veth pairs between containers as you would
+normally and vrnetlab will make sure these ports get mapped to your routers'
+ports. In essence, that allows you to work with your vrnetlab containers like a
+normal container and get the datapath working in the same "native" way.
 
-> Although the changes we made here are of a general purpose and you can run vrnetlab routers with docker CLI or any other container runtime, the purpose of this work was to couple vrnetlab with containerlab.
-> With this being said, we recommend the readers to start their journey from this [documentation entry](https://containerlab.srlinux.dev/manual/vrnetlab/) which will show you how easy it is to run routers in a containerized setting.
+> [!IMPORTANT]
+> Although the changes we made here are of a general purpose and you can run
+> vrnetlab routers with docker CLI or any other container runtime, the purpose
+> of this work was to couple vrnetlab with containerlab.
+>
+> With this being said, we recommend the readers start their journey from
+> this [documentation entry](https://containerlab.dev/manual/vrnetlab/)
+> which will show you how easy it is to run routers in a containerized setting.
 
 ## Connection modes
 
-As mentioned above, the major change this fork brings is the ability to run vrnetlab containers without requiring vr-xcon and by using container-native networking.
+As mentioned above, the major change this fork brings is the ability to run
+vrnetlab containers without requiring [vr-xcon](https://github.com/vrnetlab/vrnetlab/tree/master/vr-xcon)
+and instead uses container-native networking.
+
+For containerlab the default connection mode value is `connection-mode=tc`.
+With this particular mode we use **tc-mirred** redirects to stitch a container's
+interfaces `eth1+` with the ports of the qemu VM running inside.
 
-The default option that we use in containerlab for this setting is `connection-mode=tc`. With this particular mode we use tc-mirred redirects to stitch container's interfaces `eth1+` with the ports of the qemu VM running inside.
+![diagram showing network connections via tc redirects](https://gitlab.com/rdodin/pics/-/wikis/uploads/4d31c06e6258e70edc887b17e0e758e0/image.png)
 
-![tc](https://gitlab.com/rdodin/pics/-/wikis/uploads/4d31c06e6258e70edc887b17e0e758e0/image.png)
+Using tc redirection (tc-mirred) we get a transparent pipe between a container's
+interfaces and those of the VMs running within.
 
-Using tc redirection we get a transparent pipe between container's interfaces and VM's.
+We scrambled through many connection alternatives, which are described in
+[this post](https://netdevops.me/2021/transparently-redirecting-packetsframes-between-interfaces/),
+but tc redirect (tc-mirred :star:) works best of all.
 
-We scrambled through many alternatives, which I described in [this post](https://netdevops.me/2021/transparently-redirecting-packets/frames-between-interfaces/), but tc-redirect works best of them all.
+### Mode List
 
-Other connection mode values are:
+Full list of connection mode values:
 
-* bridge - creates a linux bridge and attaches `eth` and `tap` interfaces to it. Can't pass LACP traffic.
-* ovs-bridge - same as a regular bridge, but uses OvS. Can pass LACP traffic.
-* macvtap
+| Connection Mode | LACP Support | Description |
+| --------------- | :-----------------: | :---------- |
+| tc-mirred | :white_check_mark: | Creates a linux bridge and attaches `eth` and `tap` interfaces to it. Cleanest solution for point-to-point links.
+| bridge | :last_quarter_moon: | No additional kernel modules and has native qemu/libvirt support. Does not support passing STP. Requires restricting `MAC_PAUSE` frames in order to support LACP.
+| ovs-bridge | :white_check_mark: | Same as a regular bridge, but uses OvS (Open vSwitch).
+| macvtap | :x: | Requires mounting entire `/dev` to a container namespace. Needs file descriptor manipulation due to no native qemu support.
 
 ## Management interface
 
@@ -62,7 +100,8 @@ It is possible to change from the default management interface mode by setting t
 
 ## Which vrnetlab routers are supported?
 
-Since the changes we made in this fork are VM specific, we added a few popular routing products:
+Since the changes we made in this fork are VM specific, we added a few popular
+routing products:
 
 * Arista vEOS
 * Cisco XRv9k
```
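The tc-mirred stitching this README describes can be illustrated with plain `tc` commands. A minimal sketch, assuming a namespace that already has a container-side `eth1` and a qemu-side `tap1` (interface names are illustrative, and the commands need root privileges):

```shell
# attach classifier hooks to both interfaces
tc qdisc add dev eth1 clsact
tc qdisc add dev tap1 clsact

# redirect every ingress frame of one interface to the egress of the other,
# forming a transparent pipe between the container port and the VM port
tc filter add dev eth1 ingress matchall action mirred egress redirect dev tap1
tc filter add dev tap1 ingress matchall action mirred egress redirect dev eth1
```

Because the redirect happens below the bridge layer, frames such as LACP that a linux bridge would consume pass through untouched.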

ocnos/Makefile

Lines changed: 5 additions & 1 deletion

```diff
@@ -5,7 +5,11 @@ IMAGE_GLOB=*.qcow2
 
 # match versions like:
 # DEMO_VM-OcNOS-6.0.2.11-MPLS-x86-MR.qcow2
-VERSION=$(shell echo $(IMAGE) | sed -rn 's/DEMO_VM-OcNOS-(.+)-MPLS-.*.qcow/\1/p')
+#VERSION=$(shell echo $(IMAGE) | sed -rn 's/DEMO_VM-OcNOS-(.+)-MPLS-.*.qcow/\1/p')
+
+# match versions like:
+# OcNOS-SP-PLUS-x86-6.5.2-101-GA.qcow2
+VERSION=$(shell echo $(IMAGE) | sed -rn 's/OcNOS-SP-PLUS-x86-(.+)-GA.qcow2/\1/p')
 
 -include ../makefile-sanity.include
 -include ../makefile.include
```
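The new `VERSION` pattern can be checked quickly outside of make; piping the example filename through the same `sed` expression extracts the version string:

```shell
echo "OcNOS-SP-PLUS-x86-6.5.2-101-GA.qcow2" \
  | sed -rn 's/OcNOS-SP-PLUS-x86-(.+)-GA.qcow2/\1/p'
# → 6.5.2-101
```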

ocnos/README.md

Lines changed: 7 additions & 1 deletion

```diff
@@ -7,12 +7,18 @@ This is the vrnetlab docker image for IPInfusion OcNOS.
 Download the OcNOS-VM image from https://www.ipinfusion.com/products/ocnos-vm/
 Copy the qcow2 image into this folder, then run `make docker-image`.
 
-Tested booting and responding to SSH:
+Tested booting and responding to Telnet:
+
+- OcNOS-SP-PLUS-x86-6.5.2-101-GA.qcow2 MD5:796c121be77d43ffffbf6214a44f54eb
+
+Tested booting and responding to SSH:
+(The relevant parts of the Makefile for this version are commented out.)
 
 - DEMO_VM-OcNOS-6.0.2.11-MPLS-x86-MR.qcow2 MD5:08bbaf99347c33f75d15f552bda762e1
 
 ## Serial console issues
 
+(This issue did not occur in version 6.5.2-101.)
 The image of OcNOS version 6.0.2.11 distributed from the official website has a bug that prevents connection via serial console.
 This problem can be corrected by modifying /boot/grub/grub.cfg in the image.
 
```
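The MD5 sums listed in this README can be verified with `md5sum -c` before running `make docker-image`. A sketch using a stand-in file (with a real download, substitute the image filename and the MD5 from the list above):

```shell
# stand-in for the downloaded qcow2: an empty file, whose MD5 is the
# well-known d41d8cd98f00b204e9800998ecf8427e
printf '' > image.qcow2
echo "d41d8cd98f00b204e9800998ecf8427e  image.qcow2" > image.md5
md5sum -c image.md5
# → image.qcow2: OK
```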
ocnos/docker/launch.py

Lines changed: 35 additions & 14 deletions

```diff
@@ -100,23 +100,44 @@ def bootstrap_config(self):
         """Do the actual bootstrap config"""
         self.logger.info("applying bootstrap configuration")
         self.wait_write("", None)
-        self.wait_write("enable", ">")
-        self.wait_write("configure terminal")
-        self.wait_write(
-            "username %s role network-admin password %s"
-            % (self.username, self.password)
-        )
 
-        # configure mgmt interface
-        self.wait_write("interface eth0")
-        self.wait_write("ip address 10.0.0.15 255.255.255.0")
-        self.wait_write("exit")
+        if self.spins > 300:
+            # too many spins with no result -> give up
+            self.logger.info("To many spins with no result at logging in, restarting")
+            self.stop()
+            self.start()
+            return
 
-        self.wait_write(f"hostname {self.hostname}")
+        (ridx, match, res) = self.tn.expect([b"OcNOS> "], 1)
+        if match:  # got a match!
+            if ridx == 0:  # write config
+                self.logger.debug("matched logged in prompt")
+                self.wait_write("enable", None)
+                self.wait_write("configure terminal")
+                self.wait_write(
+                    "username %s role network-admin password %s"
+                    % (self.username, self.password)
+                )
+
+                # configure mgmt interface
+                self.wait_write("interface eth0")
+                self.wait_write("ip address 10.0.0.15 255.255.255.0")
+                self.wait_write("exit")
+
+                self.wait_write(f"hostname {self.hostname}")
+
+                self.wait_write("commit")
+                self.wait_write("exit")
+                self.wait_write("write memory")
 
-        self.wait_write("commit")
-        self.wait_write("exit")
-        self.wait_write("write memory")
+        # no match, if we saw executive mode from the router it's probably
+        # logging in or logging in failed, so let's give it some more time
+        if res != b"":
+            self.logger.trace("OUTPUT: %s" % res.decode())
+            # reset spins if we saw some output
+            self.spins = 0
 
+        self.spins += 1
 
     def startup_config(self):
         """Load additional config provided by user."""
```

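The control flow added to `bootstrap_config` above is a bounded-retry poll: expect the prompt for one second, reset the counter whenever any output arrives, and give up (restart the VM) after too many silent spins. A standalone sketch of that logic, with `expect` as a stand-in for `telnetlib`'s `Telnet.expect` (the function and parameter names here are illustrative, not part of vrnetlab):

```python
def wait_for_prompt(expect, max_spins=300):
    """Poll for the login prompt; True when seen, False after max_spins silent reads."""
    spins = 0
    while spins <= max_spins:
        # expect() mimics telnetlib's Telnet.expect(patterns, timeout):
        # returns (pattern index, match object or None, bytes read so far)
        ridx, match, res = expect([b"OcNOS> "], 1)
        if match:
            return True    # prompt seen -> safe to send the bootstrap config
        if res != b"":
            spins = 0      # some output arrived: the device is still booting
        spins += 1
    return False           # too many silent reads -> caller restarts the VM
```

In the real `launch.py` the counter lives on the VM object (`self.spins`) and the give-up path calls `self.stop()` and `self.start()` instead of returning a flag.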