@@ -454,6 +454,27 @@ ipt_NETFLOW linux 2.6.x-3.x kernel module by <[email protected]> -- 2008-2014.
actual binary loaded;
aggr mac vlan: tags to identify compile time options that are enabled.

+ > Protocol version 10 (ipfix), refresh-rate 20, timeout-rate 30, (templates 2, active 2). Timeouts: active 5, inactive 15. Maxflows 2000000
+
+ Protocol version currently in use. Refresh-rate and timeout-rate
+ for v9 and IPFIX. Total templates generated and currently active.
+ Timeout: active X: how many seconds to wait before exporting an active flow.
+ - same as the sysctl net.netflow.active_timeout variable.
+ inactive X: how many seconds to wait before exporting an inactive flow.
+ - same as the sysctl net.netflow.inactive_timeout variable.
+ Maxflows 2000000: the maxflows limit.
+ - all flows above the maxflows limit are dropped.
+ - you can control the maxflows limit with the sysctl net.netflow.maxflows variable.
+
+ > Promisc hack is disabled (observed 0 packets, discarded 0).
+
+ observed n: counter that lets you verify the promisc hack is really working.
+
+ > Natevents disabled, count start 0, stop 0.
+
+ - whether Natevents mode is disabled or enabled, and how many start and stop
+ events have been reported.
+
> Flows: active 5187 (peak 83905 reached 0d0h1m ago), mem 283K, worker delay 100/1000 (37 ms, 0 us, 4:0 0 [3]).

active X: currently active flows in memory cache.
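
The timeouts and the flow limit documented in the hunk above map directly onto
the sysctl variables it names, so they can be inspected and changed at runtime.
A minimal sketch (the values are purely illustrative, not recommendations):

    # Show the current settings:
    sysctl net.netflow.active_timeout net.netflow.inactive_timeout net.netflow.maxflows

    # Export long-lived flows every 300 s, idle flows after 15 s,
    # and cap the in-memory flow cache at 2000000 entries (run as root):
    sysctl -w net.netflow.active_timeout=300
    sysctl -w net.netflow.inactive_timeout=15
    sysctl -w net.netflow.maxflows=2000000
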
@@ -466,7 +487,7 @@ ipt_NETFLOW linux 2.6.x-3.x kernel module by <[email protected]> -- 2008-2014.
worker delay X/HZ: how frequently the exporter scans the flow table per second.
The rest is boring debug info.

- > Hash: size 8192 (mem 32K), metric 1.00, [1.00, 1.00, 1.00]. MemTraf: 1420 pkt, 364 K (pdu 0, 0).
+ > Hash: size 8192 (mem 32K), metric 1.00, [1.00, 1.00, 1.00]. InHash: 1420 pkt, 364 K, InPDU 28, 6716.

Hash: size X: current hash size/limit.
- you can control this by sysctl net.netflow.hashsize variable.
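
For example, the hash table can be grown through that same sysctl; 65536 below
is an arbitrary illustration, so pick a size close to your expected number of
active flows (see the metric advice in the next hunk):

    # Check the current size, then request a larger table (run as root):
    sysctl net.netflow.hashsize
    sysctl -w net.netflow.hashsize=65536
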
@@ -482,87 +503,68 @@ ipt_NETFLOW linux 2.6.x-3.x kernel module by <[email protected]> -- 2008-2014.
15 minutes. Sort of hash table load average. First value is instantaneous.
You can try to increase hashsize if the averages are more than 1 (increase
certainly if >= 2).
- MemTraf: X pkt, X K: how much traffic accounted for flows that are in memory.
- - these flows that are residing in internal hash table.
- pdu X, X: how much traffic in flows preparing to be exported.
- - it is included already in aforementioned MemTraf total.
-
- > Protocol version 10 (ipfix), refresh-rate 20, timeout-rate 30, (templates 2, active 2). Timeouts: active 5, inactive 15. Maxflows 2000000
-
- Protocol version currently in use. Refresh-rate and timeout-rate
- for v9 and IPFIX. Total templates generated and currently active.
- Timeout: active X: how much seconds to wait before exporting active flow.
- - same as sysctl net.netflow.active_timeout variable.
- inactive X: how much seconds to wait before exporting inactive flow.
- - same as sysctl net.netflow.inactive_timeout variable.
- Maxflows 2000000: maxflows limit.
- - all flows above maxflows limit must be dropped.
- - you can control maxflows limit by sysctl net.netflow.maxflows variable.
+ InHash: X pkt, X K: how much traffic is accounted for the flows currently in the hash table.
+ InPDU X, X: how much traffic is in flows that are being prepared for export.

> Rate: 202448 bits/sec, 83 packets/sec; 1 min: 668463 bps, 930 pps; 5 min: 329039 bps, 483 pps

- Module throughput values for 1 second, 1 minute, and 5 minutes.
- > cpu# stat: <search found new [metric], trunc frag alloc maxflows>, sock: <ok fail cberr, bytes >, traffic: <pkt, bytes>, drop: <pkt, bytes>
- > cpu0 stat: 980540 10473 180600 [1.03], 0 0 0 0, sock: 4983 928 0, 7124 K , traffic: 188765, 14 MB, drop: 27863, 1142 K
+ > cpu# pps; <search found new [metric], trunc frag alloc maxflows>, traffic: <pkt, bytes>, drop: <pkt, bytes>
+ > cpu0 123; 980540 10473 180600 [1.03], 0 0 0 0, traffic: 188765, 14 MB, drop: 27863, 1142 K

cpu#: these are the Total and per-CPU statistics for:
- stat: <search found new, trunc frag alloc maxflows>: internal stat for:
+ pps: packets per second on this CPU. It's useful to debug load imbalance.
+ <search found new, trunc frag alloc maxflows>: internal stat for:
search found new: hash table searched, found, and not found counters.
[metric]: one minute (EWMA) average hash metric per CPU.
trunc: how many truncated packets are ignored
- - these are that possible don't have valid IP header.
- - accounted in drop packets counter but not in drop bytes.
+ - for example, packets that don't have a valid IP header.
+ - they are also accounted in the drop packets counter, but not in drop bytes.
frag: how many fragmented packets have been seen.
- - kernel always defragments INPUT/OUTPUT chains for us.
+ - the kernel defragments the INPUT/OUTPUT chains for us if the nf_defrag_ipv[46]
+ module is loaded.
- these packets are not ignored, but not reassembled either, so:
- if there is not enough data in a fragment (e.g. tcp ports) it is considered
- zero.
+ to be zero.
alloc: how many cache memory allocations have failed.
- - packets ignored and accounted in drop stat.
+ - packets are ignored and accounted in the traffic drop stat.
- consider increasing system memory if this ever happens.
maxflows: how many packets were ignored because maxflows (maximum active flows) was reached.
- - packets ignored and accounted in drop stat.
+ - packets are ignored and accounted in the traffic drop stat.
- you can control the maxflows limit with the sysctl net.netflow.maxflows variable.

- sock: <ok fail cberr, bytes>: table of exporting stats for:
- ok: how much Netflow PDUs are exported (i.e. UDP packets sent by module).
- fail: how much socket errors (i.e. packets failed to be sent).
- - packets dropped and their internal statistics cumulatively accounted in
- drop stat.
- cberr: how much connection refused ICMP errors we got from export target.
- - probably you not launched collector software on destination,
- - or specified wrong destination address.
- - flows lost in this fashion is not possible to account in drop stat.
- - these are ICMP errors, and would look like this in tcpdump:
- 05:04:09.281247 IP alice.19440 > bob.2055: UDP, length 120
- 05:04:09.281405 IP bob > alice: ICMP bob udp port 2055 unreachable, length 156
- bytes: how much kilobytes of exporting data successfully sent by the module.
-
traffic: <pkt, bytes>: how much traffic is accounted.
pkt, bytes: sum of packets/megabytes accounted by the module.
- flows that failed to be exported (on socket error) are accounted here too.

drop: <pkt, bytes>: how much traffic is not accounted.
- pkt, bytes: sum of packets/kilobytes we are lost/ dropped.
- - reasons they are dropped and accounted here:
+ pkt, bytes: sum of packets/kilobytes that are dropped by the metering process.
+ - the reasons for the drops accounted here are:
truncated/fragmented packets,
packet is for new flow but failed to allocate memory for it,
- packet is for new flow but maxflows is already reached,
- all flows in export packets that got socket error.
+ packet is for new flow but maxflows is already reached.
+ Traffic lost due to socket errors is not accounted here; see below
+ for export and socket errors.

- > Natevents disabled, count start 0, stop 0.
+ > Export: Rate 0 bytes/s; Total 2 pkts, 0 MB, 18 flows; Errors 0 pkts; Traffic lost 0 pkts, 0 Kbytes, 0 flows.

- - Natevents mode disabled or enabled, and how much start or stop events
- are reported.
+ Rate X bytes/s: traffic rate generated by the exporter itself.
+ Total X pkts, X MB: total amount of traffic generated by the exporter.
+ X flows: how many data flows have been exported.
+ Errors X pkts: how many packets were not sent due to socket errors.
+ Traffic lost 0 pkts, 0 Kbytes, 0 flows: how much metered traffic is lost
+ due to socket errors.
+ Note that `cberr' errors are not accounted here due to their asynchronous
+ nature. Read below about `cberr' errors.

> sock0: 10.0.0.2:2055 unconnected (1 attempts).

If the socket is unconnected (for example, if the module was loaded before the
interfaces were up) it shows how many connection attempts have failed. It will
keep trying to connect until it succeeds.

- > sock0: 10.0.0.2:2055, sndbuf 106496, filled 0, peak 106848; err: sndbuf reached 928, connect 0, other 0
+ > sock0: 10.0.0.2:2055, sndbuf 106496, filled 0, peak 106848; err: sndbuf reached 928, connect 0, cberr 0, other 0

sockX: per-destination stats for:
X.X.X.X:Y: destination IP address and port.
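
All of the counters discussed above come from the module's statistics output,
so the easiest way to watch them move (per-CPU pps, drop, Export, per-socket
errors) is to poll that file. A sketch, assuming the statistics sit at their
usual location /proc/net/stat/ipt_netflow:

    # Refresh the full statistics once per second:
    watch -n1 cat /proc/net/stat/ipt_netflow

    # Or follow just the export and per-socket error lines:
    watch -n1 "grep -E 'Export|sock' /proc/net/stat/ipt_netflow"
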
@@ -579,6 +581,13 @@ ipt_NETFLOW linux 2.6.x-3.x kernel module by <[email protected]> -- 2008-2014.
sndbuf reached X: how many packets were dropped because sndbuf was too small
(error -11).
connect X: how many connection attempts have failed.
+ cberr X: how many connection refused ICMP errors we got from the export target.
+ - probably you have not launched collector software on the destination,
+ - or have specified the wrong destination address.
+ - flows lost in this fashion cannot be accounted in the drop stat.
+ - these are ICMP errors, and would look like this in tcpdump:
+ 05:04:09.281247 IP alice.19440 > bob.2055: UDP, length 120
+ 05:04:09.281405 IP bob > alice: ICMP bob udp port 2055 unreachable, length 156
other X: dropped due to other possible errors.

> aggr0: ...
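
If cberr or the other per-socket error counters keep growing, the quickest
cross-check is to look at the export traffic on the wire, as in the tcpdump
sample above. A sketch using the example destination 10.0.0.2:2055 (substitute
your collector's address and port):

    # Outgoing export datagrams should be visible; ICMP "port unreachable"
    # replies mean nothing is listening on the collector side:
    tcpdump -ni any 'host 10.0.0.2 and (udp port 2055 or icmp)'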