contiv-vpp pod-to-pod low performance #1437

Closed
itailev opened this issue Feb 26, 2019 · 5 comments

itailev commented Feb 26, 2019

Following the discussion in #944, and as described in #944 (comment), the pod-to-pod iperf3 test result is ~2 Gbit/s over a 100 Gbit/s Mellanox NIC.
Opening this item to document the low performance and to ask the following:

  • Has anyone run this test on a 25 Gbit/s or faster NIC?
  • Does anyone have expected performance numbers for this kind of test?
  • Does anyone have an idea how to overcome the VPP tap bottleneck? Perhaps a traffic-generator test container connected to the VPP vswitch over a DPDK-based interface (memif?)?
rastislavs (Collaborator) commented:

As mentioned in #944, VPP 18.04 will contain GSO (Generic Segmentation Offload) support, which should boost tap interface performance.

Until then, you may try tweaking the iperf3 test:

  • enlarge the window size (e.g. -w 1M)
  • use more parallel streams (e.g. -P 8)
  • switch to UDP traffic (-u)

and, as I said, run multiple parallel connections between the nodes (multiple client & server pods).
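For example (illustrative commands only; adjust the server address and stream count to your setup):

    # on the server pod
    iperf3 -s
    # on the client pod: larger TCP window and 8 parallel streams
    iperf3 -c <server-pod-ip> -w 1M -P 8
    # UDP instead of TCP; -b 0 removes iperf3's default UDP bandwidth cap
    iperf3 -c <server-pod-ip> -u -b 0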


itailev commented Feb 27, 2019

Thanks @rastislavszabo - I will try those.
Do you know of a test container that uses a high-performance interface to VPP?

rastislavs (Collaborator) commented:

There is a Go libmemif implementation that could be used to write an app that connects to VPP via memif and pushes a large amount of traffic into VPP through it. That could be packaged into a container and deployed as a pod, but the VPP side of the memif would need to be configured somehow, e.g. by writing its config into the Contiv ETCD. I'm not aware of any existing container that does this.
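For illustration only, the VPP side could in principle be set up by hand via vppctl inside the vswitch pod; this is just a sketch, the pod name and addressing are placeholders, and the proper Contiv way would be to program it via the ETCD config:

    # hypothetical manual setup - Contiv normally programs VPP itself from its ETCD config
    kubectl exec -n kube-system <contiv-vswitch-pod> -- vppctl create interface memif id 0 master
    kubectl exec -n kube-system <contiv-vswitch-pod> -- vppctl set interface state memif0/0 up
    kubectl exec -n kube-system <contiv-vswitch-pod> -- vppctl set interface ip address memif0/0 192.168.100.1/24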


itailev commented Feb 28, 2019

@rastislavszabo - just to record it:
none of the suggestions above changed the results; however, after increasing the tap MTU by setting mtuSize in contiv-vpp.yaml to 9000 and tuning the iperf parameters, I was able to get:

  • UDP: ~15 Gbit/s using iperf3 -c <server> -u -b 0
  • TCP: ~10 Gbit/s using iperf3 -c <server> -M 8900
  • parallel streams do not improve the results
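For the record, the MTU change was applied roughly as follows (the exact key location inside contiv-vpp.yaml and the pod label selector are assumptions and may differ between releases):

    # set the interface MTU in the Contiv agent config inside contiv-vpp.yaml
    # (key: mtuSize, value 9000 for jumbo frames), then redeploy:
    #   mtuSize: 9000
    kubectl apply -f contiv-vpp.yaml
    # restart the vswitch pods so the new MTU takes effect (label selector is an assumption)
    kubectl delete pod -n kube-system -l k8s-app=contiv-vswitch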

rastislavs (Collaborator) commented:

As of release 3.2.0, the GSO feature on VPP is enabled, which should increase pod-to-pod performance within the same node by about 3x. For better DPDK (inter-node) performance, multiple worker threads should be used for VPP; this is tracked in #1432 and should be resolved in the next release.
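For reference, VPP worker threads are configured through the cpu section of the VPP startup config. The sketch below uses standard startup.conf syntax, but the file path inside a contiv-vpp deployment and whether hand-tuning it works before #1432 is resolved are assumptions:

    # illustrative only - append a cpu stanza to the vswitch VPP startup config
    # (file path is an assumption for a contiv-vpp deployment)
    cat >> /etc/vpp/contiv-vswitch.conf <<'EOF'
    cpu {
      main-core 2
      corelist-workers 3-4
    }
    EOF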
