"""
Node objects for Mininet.
Nodes provide a simple abstraction for interacting with hosts, switches
and controllers. Local nodes are simply one or more processes on the local
machine.
Node: superclass for all (primarily local) network nodes.
Host: a virtual host. By default, a host is simply a shell; commands
may be sent using Cmd (which waits for output), or using sendCmd(),
which returns immediately, allowing subsequent monitoring using
monitor(). Examples of how to run experiments using this
functionality are provided in the examples/ directory. By default,
hosts share the root file system, but they may also specify private
directories.
CPULimitedHost: a virtual host whose CPU bandwidth is limited by
RT or CFS bandwidth limiting.
Switch: superclass for switch nodes.
UserSwitch: a switch using the user-space switch from the OpenFlow
reference implementation.
OVSSwitch: a switch using the Open vSwitch OpenFlow-compatible switch
implementation (openvswitch.org).
OVSBridge: an Ethernet bridge implemented using Open vSwitch.
Supports STP.
IVSSwitch: OpenFlow switch using the Indigo Virtual Switch.
Controller: superclass for OpenFlow controllers. The default controller
is controller(8) from the reference implementation.
OVSController: The test controller from Open vSwitch.
NOXController: a controller node using NOX (noxrepo.org).
Ryu: The Ryu controller (https://osrg.github.io/ryu/)
RemoteController: a remote controller node, which may use any
arbitrary OpenFlow-compatible controller, and which is not
created or managed by Mininet.
Future enhancements:
- Possibly make Node, Switch and Controller more abstract so that
they can be used for both local and remote nodes
- Create proxy objects for remote nodes (Mininet: Cluster Edition)
"""
import errno
import os
import pty
import re
import signal
import select
import docker
import json
from distutils.version import StrictVersion
from re import findall
from subprocess import Popen, PIPE, check_output
from sys import exit # pylint: disable=redefined-builtin
from time import sleep
from mininet.log import info, error, warn, debug
from mininet.util import ( quietRun, errRun, errFail, moveIntf, isShellBuiltin,
numCores, retry, mountCgroups, BaseString, decode,
encode, getincrementaldecoder, Python3, which )
from mininet.moduledeps import moduleDeps, pathCheck, TUN
from mininet.link import Link, Intf, TCIntf, OVSIntf
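# Usage sketch (illustration only, never called from this module): the helper
# below assumes a Host/Node object and shows the two command styles described
# in the module docstring: blocking cmd() vs. non-blocking sendCmd() followed
# by monitor(). The helper name and the concrete commands are made up for the
# example.
def _exampleHostCommands( host ):
    "Hypothetical helper: run one blocking and one non-blocking command."
    # Blocking: cmd() waits for the shell's sentinel and returns the output
    info( host.cmd( 'ip addr' ) )
    # Non-blocking: sendCmd() returns immediately; poll output via monitor()
    host.sendCmd( 'ping -c 3 10.0.0.2' )
    while host.waiting:
        info( host.monitor( timeoutms=1000 ) )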
# pylint: disable=too-many-arguments
class Node( object ):
"""A virtual network node is simply a shell in a network namespace.
We communicate with it using pipes."""
portBase = 0 # Nodes always start with eth0/port0, even in OF 1.0
def __init__( self, name, inNamespace=True, **params ):
"""name: name of node
inNamespace: in network namespace?
privateDirs: list of private directory strings or tuples
params: Node parameters (see config() for details)"""
# Make sure class actually works
self.checkSetup()
self.name = params.get( 'name', name )
self.privateDirs = params.get( 'privateDirs', [] )
self.inNamespace = params.get( 'inNamespace', inNamespace )
# Python 3 complains if we don't wait for shell exit
self.waitExited = params.get( 'waitExited', Python3 )
# Stash configuration parameters for future reference
self.params = params
        # dict of port numbers to interfaces
self.intfs = {}
# dict of interfaces to port numbers
# todo: replace with Port objects, eventually ?
self.ports = {}
self.nameToIntf = {} # dict of interface names to Intfs
# Make pylint happy
( self.shell, self.execed, self.pid, self.stdin, self.stdout,
self.lastPid, self.lastCmd, self.pollOut ) = (
None, None, None, None, None, None, None, None )
self.waiting = False
self.readbuf = ''
# Incremental decoder for buffered reading
self.decoder = getincrementaldecoder()
# Start command interpreter shell
self.master, self.slave = None, None # pylint
self.startShell()
self.mountPrivateDirs()
# File descriptor to node mapping support
# Class variables and methods
inToNode = {} # mapping of input fds to nodes
outToNode = {} # mapping of output fds to nodes
@classmethod
def fdToNode( cls, fd ):
"""Return node corresponding to given file descriptor.
fd: file descriptor
returns: node"""
node = cls.outToNode.get( fd )
return node or cls.inToNode.get( fd )
# Command support via shell process in namespace
def startShell( self, mnopts=None ):
"Start a shell process for running commands"
if self.shell:
error( "%s: shell is already running\n" % self.name )
return
# mnexec: (c)lose descriptors, (d)etach from tty,
# (p)rint pid, and run in (n)amespace
opts = '-cd' if mnopts is None else mnopts
if self.inNamespace:
opts += 'n'
# bash -i: force interactive
# -s: pass $* to shell, and make process easy to find in ps
# prompt is set to sentinel chr( 127 )
cmd = [ 'mnexec', opts, 'env', 'PS1=' + chr( 127 ),
'bash', '--norc', '--noediting',
'-is', 'mininet:' + self.name ]
# Spawn a shell subprocess in a pseudo-tty, to disable buffering
# in the subprocess and insulate it from signals (e.g. SIGINT)
# received by the parent
self.master, self.slave = pty.openpty()
self.shell = self._popen( cmd, stdin=self.slave, stdout=self.slave,
stderr=self.slave, close_fds=False )
# XXX BL: This doesn't seem right, and we should also probably
# close our files when we exit...
self.stdin = os.fdopen( self.master, 'r' )
self.stdout = self.stdin
self.pid = self.shell.pid
self.pollOut = select.poll()
self.pollOut.register( self.stdout )
# Maintain mapping between file descriptors and nodes
# This is useful for monitoring multiple nodes
# using select.poll()
self.outToNode[ self.stdout.fileno() ] = self
self.inToNode[ self.stdin.fileno() ] = self
self.execed = False
self.lastCmd = None
self.lastPid = None
self.readbuf = ''
# Wait for prompt
while True:
data = self.read( 1024 )
if data[ -1 ] == chr( 127 ):
break
self.pollOut.poll()
self.waiting = False
# +m: disable job control notification
self.cmd( 'unset HISTFILE; stty -echo; set +m' )
def mountPrivateDirs( self ):
"mount private directories"
# Avoid expanding a string into a list of chars
assert not isinstance( self.privateDirs, BaseString )
for directory in self.privateDirs:
if isinstance( directory, tuple ):
# mount given private directory
privateDir = directory[ 1 ] % self.__dict__
mountPoint = directory[ 0 ]
self.cmd( 'mkdir -p %s' % privateDir )
self.cmd( 'mkdir -p %s' % mountPoint )
self.cmd( 'mount --bind %s %s' %
( privateDir, mountPoint ) )
else:
# mount temporary filesystem on directory
self.cmd( 'mkdir -p %s' % directory )
self.cmd( 'mount -n -t tmpfs tmpfs %s' % directory )
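    # Example (values are assumptions, for illustration only): privateDirs may
    # mix plain strings and ( mountPoint, sourceTemplate ) tuples, e.g.
    #     privateDirs=[ '/var/run',
    #                   ( '/var/log', '/tmp/%(name)s/var/log' ) ]
    # A plain string gets its own tmpfs; for a tuple, the second element is
    # expanded against the node's attributes (e.g. its name) and bind-mounted
    # onto the first.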
def unmountPrivateDirs( self ):
"mount private directories"
for directory in self.privateDirs:
if isinstance( directory, tuple ):
self.cmd( 'umount ', directory[ 0 ] )
else:
self.cmd( 'umount ', directory )
def _popen( self, cmd, **params ):
"""Internal method: spawn and return a process
cmd: command to run (list)
params: parameters to Popen()"""
        # Leave this as an instance method for now
assert self
popen = Popen( cmd, **params )
debug( '_popen', cmd, popen.pid )
return popen
def cleanup( self ):
"Help python collect its garbage."
# We used to do this, but it slows us down:
# Intfs may end up in root NS
# for intfName in self.intfNames():
# if self.name in intfName:
# quietRun( 'ip link del ' + intfName )
if self.shell:
# Close ptys
self.stdin.close()
# os.close(self.slave)
if self.waitExited:
debug( 'waiting for', self.pid, 'to terminate\n' )
self.shell.wait()
self.shell = None
if self.master:
self.stdin.close()
self.master = None
self.stdin = None
self.stdout = None
if self.slave:
os.close(self.slave)
self.slave = None
# Subshell I/O, commands and control
def read( self, size=1024 ):
"""Buffered read from node, potentially blocking.
size: maximum number of characters to return"""
count = len( self.readbuf )
if count < size:
data = os.read( self.stdout.fileno(), size - count )
self.readbuf += self.decoder.decode( data )
if size >= len( self.readbuf ):
result = self.readbuf
self.readbuf = ''
else:
result = self.readbuf[ :size ]
self.readbuf = self.readbuf[ size: ]
return result
def readline( self ):
"""Buffered readline from node, potentially blocking.
returns: line (minus newline) or None"""
self.readbuf += self.read( 1024 )
if '\n' not in self.readbuf:
return None
pos = self.readbuf.find( '\n' )
line = self.readbuf[ 0: pos ]
self.readbuf = self.readbuf[ pos + 1: ]
return line
def write( self, data ):
"""Write data to node.
data: string"""
os.write( self.stdin.fileno(), encode( data ) )
def terminate( self ):
"Send kill signal to Node and clean up after it."
self.unmountPrivateDirs()
if self.shell:
if self.shell.poll() is None:
os.killpg( self.shell.pid, signal.SIGHUP )
self.cleanup()
def stop( self, deleteIntfs=False ):
"""Stop node.
deleteIntfs: delete interfaces? (False)"""
if deleteIntfs:
self.deleteIntfs()
self.terminate()
def waitReadable( self, timeoutms=None ):
"""Wait until node's output is readable.
timeoutms: timeout in ms or None to wait indefinitely.
returns: result of poll()"""
if len( self.readbuf ) == 0:
return self.pollOut.poll( timeoutms )
return None
def sendCmd( self, *args, **kwargs ):
"""Send a command, followed by a command to echo a sentinel,
and return without waiting for the command to complete.
args: command and arguments, or string
printPid: print command's PID? (False)"""
        # Be a bit more relaxed here: allow waiting up to 120s for the shell
cnt = 0
while (self.waiting and cnt < 5 * 120):
debug("Waiting for shell to unblock...")
sleep(.2)
cnt += 1
if cnt > 0:
warn("Shell unblocked after {:.2f}s"
.format(float(cnt)/5))
assert self.shell and not self.waiting
printPid = kwargs.get( 'printPid', False )
# Allow sendCmd( [ list ] )
if len( args ) == 1 and isinstance( args[ 0 ], list ):
cmd = args[ 0 ]
# Allow sendCmd( cmd, arg1, arg2... )
elif len( args ) > 0:
cmd = args
# Convert to string
if not isinstance( cmd, str ):
cmd = ' '.join( [ str( c ) for c in cmd ] )
if not re.search( r'\w', cmd ):
# Replace empty commands with something harmless
cmd = 'echo -n'
self.lastCmd = cmd
# if a builtin command is backgrounded, it still yields a PID
if len( cmd ) > 0 and cmd[ -1 ] == '&':
# print ^A{pid}\n so monitor() can set lastPid
cmd += ' printf "\\001%d\\012" $! '
elif printPid and not isShellBuiltin( cmd ):
cmd = 'mnexec -p ' + cmd
#info('execute cmd: {0}'.format(cmd))
self.write( cmd + '\n' )
self.lastPid = None
self.waiting = True
def sendInt( self, intr=chr( 3 ) ):
"Interrupt running command."
debug( 'sendInt: writing chr(%d)\n' % ord( intr ) )
self.write( intr )
def monitor( self, timeoutms=None, findPid=True ):
"""Monitor and return the output of a command.
Set self.waiting to False if command has completed.
timeoutms: timeout in ms or None to wait indefinitely
findPid: look for PID from mnexec -p"""
ready = self.waitReadable( timeoutms )
if not ready:
return ''
data = self.read( 1024 )
pidre = r'\[\d+\] \d+\r\n'
# Look for PID
marker = chr( 1 ) + r'\d+\r\n'
if findPid and chr( 1 ) in data:
# suppress the job and PID of a backgrounded command
if re.findall( pidre, data ):
data = re.sub( pidre, '', data )
# Marker can be read in chunks; continue until all of it is read
while not re.findall( marker, data ):
data += self.read( 1024 )
markers = re.findall( marker, data )
if markers:
self.lastPid = int( markers[ 0 ][ 1: ] )
data = re.sub( marker, '', data )
# Look for sentinel/EOF
if len( data ) > 0 and data[ -1 ] == chr( 127 ):
self.waiting = False
data = data[ :-1 ]
elif chr( 127 ) in data:
self.waiting = False
data = data.replace( chr( 127 ), '' )
return data
def waitOutput( self, verbose=False, findPid=True ):
"""Wait for a command to complete.
Completion is signaled by a sentinel character, ASCII(127)
appearing in the output stream. Wait for the sentinel and return
the output, including trailing newline.
verbose: print output interactively"""
log = info if verbose else debug
output = ''
while self.waiting:
data = self.monitor( findPid=findPid )
output += data
log( data )
return output
def cmd( self, *args, **kwargs ):
"""Send a command, wait for output, and return it.
cmd: string"""
verbose = kwargs.get( 'verbose', False )
log = info if verbose else debug
log( '*** %s : %s\n' % ( self.name, args ) )
if self.shell:
self.shell.poll()
if self.shell.returncode is not None:
print("shell died on ", self.name)
self.shell = None
self.startShell()
self.sendCmd( *args, **kwargs )
return self.waitOutput( verbose )
else:
warn( '(%s exited - ignoring cmd%s)\n' % ( self, args ) )
return None
def cmdPrint( self, *args):
"""Call cmd and printing its output
cmd: string"""
return self.cmd( *args, **{ 'verbose': True } )
def popen( self, *args, **kwargs ):
"""Return a Popen() object in our namespace
args: Popen() args, single list, or string
kwargs: Popen() keyword args"""
defaults = { 'stdout': PIPE, 'stderr': PIPE,
'mncmd':
[ 'mnexec', '-da', str( self.pid ) ] }
defaults.update( kwargs )
shell = defaults.pop( 'shell', False )
if len( args ) == 1:
if isinstance( args[ 0 ], list ):
# popen([cmd, arg1, arg2...])
cmd = args[ 0 ]
elif isinstance( args[ 0 ], BaseString ):
# popen("cmd arg1 arg2...")
cmd = [ args[ 0 ] ] if shell else args[ 0 ].split()
else:
raise Exception( 'popen() requires a string or list' )
elif len( args ) > 0:
# popen( cmd, arg1, arg2... )
cmd = list( args )
if shell:
cmd = [ os.environ[ 'SHELL' ], '-c' ] + [ ' '.join( cmd ) ]
# Attach to our namespace using mnexec -a
cmd = defaults.pop( 'mncmd' ) + cmd
popen = self._popen( cmd, **defaults )
return popen
def pexec( self, *args, **kwargs ):
"""Execute a command using popen
returns: out, err, exitcode"""
popen = self.popen( *args, stdin=PIPE, stdout=PIPE, stderr=PIPE,
**kwargs )
# Warning: this can fail with large numbers of fds!
out, err = popen.communicate()
exitcode = popen.wait()
return decode( out ), decode( err ), exitcode
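    # Example (illustration only): unlike cmd(), which runs inside the node's
    # persistent shell, popen()/pexec() spawn a separate process attached to
    # the node's namespace via 'mnexec -a', e.g.
    #     out, err, exitcode = node.pexec( 'ip', 'link', 'show' )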
# Interface management, configuration, and routing
# BL notes: This might be a bit redundant or over-complicated.
# However, it does allow a bit of specialization, including
# changing the canonical interface names. It's also tricky since
# the real interfaces are created as veth pairs, so we can't
# make a single interface at a time.
def newPort( self ):
"Return the next port number to allocate."
if len( self.ports ) > 0:
return max( self.ports.values() ) + 1
return self.portBase
def addIntf( self, intf, port=None, moveIntfFn=moveIntf ):
"""Add an interface.
intf: interface
port: port number (optional, typically OpenFlow port number)
moveIntfFn: function to move interface (optional)"""
if port is None:
port = self.newPort()
self.intfs[ port ] = intf
self.ports[ intf ] = port
self.nameToIntf[ intf.name ] = intf
debug( '\n' )
debug( 'added intf %s (%d) to node %s\n' % (
intf, port, self.name ) )
if self.inNamespace:
debug( 'moving', intf, 'into namespace for', self.name, '\n' )
moveIntfFn( intf.name, self )
def delIntf( self, intf ):
"""Remove interface from Node's known interfaces
Note: to fully delete interface, call intf.delete() instead"""
port = self.ports.get( intf )
if port is not None:
del self.intfs[ port ]
del self.ports[ intf ]
del self.nameToIntf[ intf.name ]
def defaultIntf( self ):
"Return interface for lowest port"
ports = self.intfs.keys()
if ports:
return self.intfs[ min( ports ) ]
else:
warn( '*** defaultIntf: warning:', self.name,
'has no interfaces\n' )
return None
def intf( self, intf=None ):
"""Return our interface object with given string name,
           default intf if name is falsy (None, empty string, etc.),
           or the input intf arg itself.
Having this fcn return its arg for Intf objects makes it
easier to construct functions with flexible input args for
interfaces (those that accept both string names and Intf objects).
"""
if not intf:
return self.defaultIntf()
elif isinstance( intf, BaseString):
return self.nameToIntf[ intf ]
else:
return intf
def connectionsTo( self, node):
"Return [ intf1, intf2... ] for all intfs that connect self to node."
# We could optimize this if it is important
connections = []
for intf in self.intfList():
link = intf.link
if link:
node1, node2 = link.intf1.node, link.intf2.node
if node1 == self and node2 == node:
connections += [ ( intf, link.intf2 ) ]
elif node1 == node and node2 == self:
connections += [ ( intf, link.intf1 ) ]
return connections
def deleteIntfs( self, checkName=True ):
"""Delete all of our interfaces.
checkName: only delete interfaces that contain our name"""
# In theory the interfaces should go away after we shut down.
# However, this takes time, so we're better off removing them
# explicitly so that we won't get errors if we run before they
# have been removed by the kernel. Unfortunately this is very slow,
# at least with Linux kernels before 2.6.33
for intf in list( self.intfs.values() ):
# Protect against deleting hardware interfaces
if ( self.name in intf.name ) or ( not checkName ):
intf.delete()
info( '.' )
# Routing support
def setARP( self, ip, mac ):
"""Add an ARP entry.
ip: IP address as string
mac: MAC address as string"""
result = self.cmd( 'arp', '-s', ip, mac )
return result
def setHostRoute( self, ip, intf ):
"""Add route to host.
ip: IP address as dotted decimal
intf: string, interface name"""
return self.cmd( 'route add -host', ip, 'dev', intf )
def setDefaultRoute( self, intf=None ):
"""Set the default route to go through intf.
intf: Intf or {dev <intfname> via <gw-ip> ...}"""
# Note setParam won't call us if intf is none
if isinstance( intf, BaseString ) and ' ' in intf:
params = intf
else:
params = 'dev %s' % intf
# Do this in one line in case we're messing with the root namespace
self.cmd( 'ip route del default; ip route add default', params )
# Convenience and configuration methods
def setMAC( self, mac, intf=None ):
"""Set the MAC address for an interface.
intf: intf or intf name
mac: MAC address as string"""
return self.intf( intf ).setMAC( mac )
def setIP( self, ip, prefixLen=8, intf=None, **kwargs ):
"""Set the IP address for an interface.
intf: intf or intf name
ip: IP address as a string
prefixLen: prefix length, e.g. 8 for /8 or 16M addrs
kwargs: any additional arguments for intf.setIP"""
return self.intf( intf ).setIP( ip, prefixLen, **kwargs )
def IP( self, intf=None ):
"Return IP address of a node or specific interface."
return self.intf( intf ).IP()
def MAC( self, intf=None ):
"Return MAC address of a node or specific interface."
return self.intf( intf ).MAC()
def intfIsUp( self, intf=None ):
"Check if an interface is up."
return self.intf( intf ).isUp()
# The reason why we configure things in this way is so
    # that the parameters can be listed and documented in
# the config method.
# Dealing with subclasses and superclasses is slightly
# annoying, but at least the information is there!
def setParam( self, results, method, **param ):
"""Internal method: configure a *single* parameter
results: dict of results to update
method: config method name
param: arg=value (ignore if value=None)
value may also be list or dict"""
name, value = list( param.items() )[ 0 ]
if value is None:
return None
f = getattr( self, method, None )
if not f:
return None
if isinstance( value, list ):
result = f( *value )
elif isinstance( value, dict ):
result = f( **value )
else:
result = f( value )
results[ name ] = result
return result
def config( self, mac=None, ip=None,
defaultRoute=None, lo='up', **_params ):
"""Configure Node according to (optional) parameters:
mac: MAC address for default interface
ip: IP address for default interface
ifconfig: arbitrary interface configuration
Subclasses should override this method and call
the parent class's config(**params)"""
# If we were overriding this method, we would call
# the superclass config method here as follows:
# r = Parent.config( **_params )
r = {}
self.setParam( r, 'setMAC', mac=mac )
self.setParam( r, 'setIP', ip=ip )
self.setParam( r, 'setDefaultRoute', defaultRoute=defaultRoute )
# This should be examined
self.cmd( 'ifconfig lo ' + lo )
return r
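    # Example (addresses are assumptions, for illustration only): config()
    # dispatches its named parameters to the setter methods via setParam(),
    # so e.g.
    #     node.config( mac='00:00:00:00:00:01', ip='10.0.0.1/8',
    #                  defaultRoute='via 10.0.0.254' )
    # ends up calling setMAC(), setIP() and setDefaultRoute() respectively.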
def configDefault( self, **moreParams ):
"Configure with default parameters"
self.params.update( moreParams )
self.config( **self.params )
# This is here for backward compatibility
def linkTo( self, node, link=Link ):
"""(Deprecated) Link to another node
replace with Link( node1, node2)"""
return link( self, node )
# Other methods
def intfList( self ):
"List of our interfaces sorted by port number"
return [ self.intfs[ p ] for p in sorted( self.intfs.keys() ) ]
def intfNames( self ):
"The names of our interfaces sorted by port number"
return [ str( i ) for i in self.intfList() ]
def __repr__( self ):
"More informative string representation"
intfs = ( ','.join( [ '%s:%s' % ( i.name, i.IP() )
for i in self.intfList() ] ) )
return '<%s %s: %s pid=%s> ' % (
self.__class__.__name__, self.name, intfs, self.pid )
def __str__( self ):
"Abbreviated string representation"
return self.name
# Automatic class setup support
isSetup = False
@classmethod
def checkSetup( cls ):
"Make sure our class and superclasses are set up"
clas = cls
while clas and not getattr( clas, 'isSetup', True ):
clas.setup()
clas.isSetup = True
# Make pylint happy
clas = getattr( type( clas ), '__base__', None )
@classmethod
def setup( cls ):
"Make sure our class dependencies are available"
pathCheck( 'mnexec', 'ifconfig', moduleName='Mininet')
class Host( Node ):
"A host is simply a Node"
pass
class Docker ( Host ):
"""Node that represents a docker container.
This part is inspired by:
http://techandtrains.com/2014/08/21/docker-container-as-mininet-host/
We use the docker-py client library to control docker.
"""
def __init__(self, name, dimage=None, dcmd=None, build_params={},
**kwargs):
"""
Creates a Docker container as Mininet host.
Resource limitations based on CFS scheduler:
* cpu.cfs_quota_us: the total available run-time within a period (in microseconds)
* cpu.cfs_period_us: the length of a period (in microseconds)
(https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt)
Default Docker resource limitations:
* cpu_shares: Relative amount of max. avail CPU for container
(not a hard limit, e.g. if only one container is busy and the rest idle)
e.g. usage: d1=4 d2=6 <=> 40% 60% CPU
* cpuset_cpus: Bind containers to CPU 0 = cpu_1 ... n-1 = cpu_n (string: '0,2')
* mem_limit: Memory limit (format: <number>[<unit>], where unit = b, k, m or g)
* memswap_limit: Total limit = memory + swap
All resource limits can be updated at runtime! Use:
* updateCpuLimits(...)
* updateMemoryLimits(...)
"""
self.dimage = dimage
self.dnameprefix = "mn"
self.dcmd = dcmd if dcmd is not None else "/bin/bash"
self.dc = None # pointer to the dict containing 'Id' and 'Warnings' keys of the container
self.dcinfo = None
self.did = None # Id of running container
# let's store our resource limits to have them available through the
# Mininet API later on
defaults = { 'cpu_quota': -1,
'cpu_period': None,
'cpu_shares': None,
'cpuset_cpus': None,
'mem_limit': None,
'memswap_limit': None,
'environment': {},
'volumes': [], # use ["/home/user1/:/mnt/vol2:rw"]
'tmpfs': [], # use ["/home/vol1/:size=3G,uid=1000"]
'network_mode': None,
'publish_all_ports': True,
'port_bindings': {},
'ports': [],
'dns': [],
'ipc_mode': None,
'devices': [],
'cap_add': ['net_admin'], # we need this to allow mininet network setup
'storage_opt': None,
'sysctls': {},
'runtime': None
}
defaults.update( kwargs )
if 'net_admin' not in defaults['cap_add']:
defaults['cap_add'] += ['net_admin'] # adding net_admin if it's cleared out to allow mininet network setup
# keep resource in a dict for easy update during container lifetime
self.resources = dict(
cpu_quota=defaults['cpu_quota'],
cpu_period=defaults['cpu_period'],
cpu_shares=defaults['cpu_shares'],
cpuset_cpus=defaults['cpuset_cpus'],
mem_limit=defaults['mem_limit'],
memswap_limit=defaults['memswap_limit']
)
self.volumes = defaults['volumes']
self.tmpfs = defaults['tmpfs']
self.environment = {} if defaults['environment'] is None else defaults['environment']
# setting PS1 at "docker run" may break the python docker api (update_container hangs...)
# self.environment.update({"PS1": chr(127)}) # CLI support
self.network_mode = defaults['network_mode']
self.publish_all_ports = defaults['publish_all_ports']
self.port_bindings = defaults['port_bindings']
self.dns = defaults['dns']
self.ipc_mode = defaults['ipc_mode']
self.devices = defaults['devices']
self.cap_add = defaults['cap_add']
self.sysctls = defaults['sysctls']
self.storage_opt = defaults['storage_opt']
self.runtime = defaults['runtime']
# setup docker client
# self.dcli = docker.APIClient(base_url='unix://var/run/docker.sock')
self.d_client = docker.from_env()
self.dcli = self.d_client.api
_id = None
if build_params.get("path", None):
if not build_params.get("tag", None):
if dimage:
build_params["tag"] = dimage
_id, output = self.build(**build_params)
dimage = _id
self.dimage = _id
info("Docker image built: id: {}, {}. Output:\n".format(
_id, build_params.get("tag", None)))
info(output)
# pull image if it does not exist
self._check_image_exists(dimage, True, _id=None)
# for DEBUG
debug("Created docker container object %s\n" % name)
debug("image: %s\n" % str(self.dimage))
debug("dcmd: %s\n" % str(self.dcmd))
info("%s: kwargs %s\n" % (name, str(kwargs)))
        # create the host config for the container
# see: https://docker-py.readthedocs.io/en/stable/api.html#docker.api.container.ContainerApiMixin.create_host_config
hc = self.dcli.create_host_config(
network_mode=self.network_mode,
privileged=False, # no longer need privileged, using net_admin capability instead
binds=self.volumes,
tmpfs=self.tmpfs,
publish_all_ports=self.publish_all_ports,
port_bindings=self.port_bindings,
mem_limit=self.resources.get('mem_limit'),
cpuset_cpus=self.resources.get('cpuset_cpus'),
dns=self.dns,
ipc_mode=self.ipc_mode, # string
devices=self.devices, # see docker-py docu
cap_add=self.cap_add, # see docker-py docu
sysctls=self.sysctls, # see docker-py docu
storage_opt=self.storage_opt,
# Assuming Docker uses the cgroupfs driver, we set the parent to safely
# access cgroups when modifying resource limits.
cgroup_parent='/docker',
runtime=self.runtime
)
if kwargs.get("rm", False):
container_list = self.dcli.containers(all=True)
for container in container_list:
for container_name in container.get("Names", []):
if "%s.%s" % (self.dnameprefix, name) in container_name:
self.dcli.remove_container(container="%s.%s" % (self.dnameprefix, name), force=True)
break
# create new docker container
self.dc = self.dcli.create_container(
name="%s.%s" % (self.dnameprefix, name),
image=self.dimage,
command=self.dcmd,
entrypoint=list(), # overwrite (will be executed manually at the end)
stdin_open=True, # keep container open
tty=True, # allocate pseudo tty
environment=self.environment,
#network_disabled=True, # docker stats breaks if we disable the default network
host_config=hc,
ports=defaults['ports'],
labels=['com.containernet'],
volumes=[self._get_volume_mount_name(v) for v in self.volumes if self._get_volume_mount_name(v) is not None],
hostname=name
)
# start the container
self.dcli.start(self.dc)
debug("Docker container %s started\n" % name)
# fetch information about new container
self.dcinfo = self.dcli.inspect_container(self.dc)
self.did = self.dcinfo.get("Id")
# call original Node.__init__
Host.__init__(self, name, **kwargs)
# let's initially set our resource limits
self.update_resources(**self.resources)
self.master = None
self.slave = None
def build(self, **kwargs):
image, output = self.d_client.images.build(**kwargs)
output_str = parse_build_output(output)
return image.id, output_str
def start(self):
# Containernet ignores the CMD field of the Dockerfile.
        # Let's try to load it here and manually execute it once the
# container is started and configured by Containernet:
cmd_field = self.get_cmd_field(self.dimage)
entryp_field = self.get_entrypoint_field(self.dimage)
if entryp_field is not None:
if cmd_field is None:
cmd_field = list()
# clean up cmd_field
try:
cmd_field.remove(u'/bin/sh')
cmd_field.remove(u'-c')
            except ValueError:
                # '/bin/sh -c' prefix not present; nothing to strip
                pass
# we just add the entryp. commands to the beginning:
cmd_field = entryp_field + cmd_field
if cmd_field is not None:
cmd_field.append("> /dev/pts/0 2>&1") # make output available to docker logs
cmd_field.append("&") # put to background (works, but not nice)
info("{}: running CMD: {}\n".format(self.name, cmd_field))
self.cmd(" ".join(cmd_field))
def get_cmd_field(self, imagename):
"""
Try to find the original CMD command of the Dockerfile
by inspecting the Docker image.
Returns list from CMD field if it is different from
a single /bin/bash command which Containernet executes
anyhow.
"""
try:
imgd = self.dcli.inspect_image(imagename)
cmd = imgd.get("Config", {}).get("Cmd")
assert isinstance(cmd, list)
# filter the default case: a single "/bin/bash"
if "/bin/bash" in cmd and len(cmd) == 1:
return None
return cmd
except BaseException as ex:
error("Error during image inspection of {}:{}"
.format(imagename, ex))
return None
def get_entrypoint_field(self, imagename):
"""
Try to find the original ENTRYPOINT command of the Dockerfile
by inspecting the Docker image.
Returns list or None.
"""
try:
imgd = self.dcli.inspect_image(imagename)
ep = imgd.get("Config", {}).get("Entrypoint")
if isinstance(ep, list) and len(ep) < 1:
return None
return ep
except BaseException as ex:
error("Error during image inspection of {}:{}"
.format(imagename, ex))
return None
# Command support via shell process in namespace
def startShell( self, *args, **kwargs ):
"Start a shell process for running commands"
if self.shell:
error( "%s: shell is already running\n" % self.name )
return
# mnexec: (c)lose descriptors, (d)etach from tty,
# (p)rint pid, and run in (n)amespace
# opts = '-cd' if mnopts is None else mnopts
# if self.inNamespace:
# opts += 'n'
# bash -i: force interactive
# -s: pass $* to shell, and make process easy to find in ps
# prompt is set to sentinel chr( 127 )
cmd = [ 'docker', 'exec', '-it', '%s.%s' % ( self.dnameprefix, self.name ), 'env', 'PS1=' + chr( 127 ),
'bash', '--norc', '-is', 'mininet:' + self.name ]
# Spawn a shell subprocess in a pseudo-tty, to disable buffering
# in the subprocess and insulate it from signals (e.g. SIGINT)
# received by the parent
self.master, self.slave = pty.openpty()
self.shell = self._popen( cmd, stdin=self.slave, stdout=self.slave, stderr=self.slave,
close_fds=False )
self.stdin = os.fdopen( self.master, 'r' )
self.stdout = self.stdin
self.pid = self._get_pid()
self.pollOut = select.poll()
self.pollOut.register( self.stdout )
# Maintain mapping between file descriptors and nodes
# This is useful for monitoring multiple nodes
# using select.poll()
self.outToNode[ self.stdout.fileno() ] = self
self.inToNode[ self.stdin.fileno() ] = self
self.execed = False
self.lastCmd = None
self.lastPid = None
self.readbuf = ''
# Wait for prompt
while True:
data = self.read( 1024 )
if data[ -1 ] == chr( 127 ):
break
self.pollOut.poll()
self.waiting = False
# +m: disable job control notification