contiv-vpp pod-to-pod low performance #1437
Following the discussion in #944 and as described in #944 (comment), the pod-to-pod iperf3 test result is ~2 Gbits/sec over a 100 Gb Mellanox NIC.
Opening this item to document the low performance and to consult regarding the following:

Comments
As mentioned in #944, VPP 18.04 will contain GSO (Generic Segmentation Offload) support, which should boost the TAP interface performance. Until then, you may try tweaking the iperf3 test and, as I said, running multiple parallel connections between the nodes (multiple client & server PODs).
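For illustration, a minimal Go sketch of such a parallel test; the server pod IPs, stream count, and duration below are placeholders, not values from this issue:

```go
// parallel_iperf.go - run iperf3 clients against several server pods at once.
// The pod IPs below are placeholders; adjust -P (parallel streams) and -t
// (duration in seconds) to match your own test.
package main

import (
	"fmt"
	"os/exec"
	"sync"
)

func main() {
	servers := []string{"10.1.1.10", "10.1.2.10"} // placeholder iperf3 server pod IPs

	var wg sync.WaitGroup
	for _, ip := range servers {
		wg.Add(1)
		go func(ip string) {
			defer wg.Done()
			// 8 parallel TCP streams per server, 30-second run.
			out, err := exec.Command("iperf3", "-c", ip, "-P", "8", "-t", "30").CombinedOutput()
			if err != nil {
				fmt.Printf("iperf3 to %s failed: %v\n%s\n", ip, err, out)
				return
			}
			fmt.Printf("iperf3 results for %s:\n%s\n", ip, out)
		}(ip)
	}
	wg.Wait()
}
```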
Thanks @rastislavszabo - I will try those.
There is a Go libmemif implementation that could be used for writing an app that would connect to VPP via memif and send a large amount of data to VPP through it. That could be packaged into a container and deployed as a POD, but the VPP side of the memif would need to be configured somehow, e.g. by writing its config to the contiv ETCD. I'm not aware of any existing container.
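A rough outline of what such an app could look like, assuming an API shaped like the Go libmemif bindings shipped with GoVPP; the import path, type names, and field names here are assumptions to verify against the actual bindings, and the VPP-side memif interface still has to be configured separately (e.g. via the contiv ETCD as noted above):

```go
// memif_sender.go - push a stream of dummy frames into VPP over a memif
// interface. API names follow GoVPP's extras/libmemif bindings; treat the
// import path and exact identifiers as assumptions.
package main

import (
	"log"
	"time"

	"git.fd.io/govpp.git/extras/libmemif" // assumed import path
)

func main() {
	if err := libmemif.Init("memif-sender"); err != nil {
		log.Fatal(err)
	}
	defer libmemif.Cleanup()

	config := &libmemif.MemifConfig{
		MemifMeta: libmemif.MemifMeta{
			IfName:         "memif1",
			ConnID:         1,
			SocketFilename: "/var/run/memif.sock", // must match the VPP-side config
			IsMaster:       false,                 // VPP acts as the master
			Mode:           libmemif.IfModeEthernet,
		},
		MemifShmSpecs: libmemif.MemifShmSpecs{
			NumRxQueues:  1,
			NumTxQueues:  1,
			BufferSize:   2048,
			Log2RingSize: 10,
		},
	}

	connected := make(chan *libmemif.Memif, 1)
	callbacks := &libmemif.MemifCallbacks{
		OnConnect: func(memif *libmemif.Memif) error {
			connected <- memif
			return nil
		},
		OnDisconnect: func(memif *libmemif.Memif) error {
			log.Println("memif disconnected")
			return nil
		},
	}

	memif, err := libmemif.CreateInterface(config, callbacks)
	if err != nil {
		log.Fatal(err)
	}
	defer memif.Close()

	<-connected // wait until VPP brings up the other side

	// Blast bursts of dummy Ethernet frames into TX queue 0 for 30 seconds.
	frame := make(libmemif.RawPacketData, 1500)
	burst := make([]libmemif.RawPacketData, 64)
	for i := range burst {
		burst[i] = frame
	}
	for start := time.Now(); time.Since(start) < 30*time.Second; {
		if _, err := memif.TxBurst(0, burst); err != nil {
			log.Fatal(err)
		}
	}
}
```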
@rastislavszabo - just to record it:
As of release 3.2.0, the GSO feature on VPP is enabled, which should increase pod-to-pod performance within the same node by about 3x. For better DPDK (inter-node) performance, multiple worker threads should be used for VPP; this is tracked in #1432 and should be resolved in the next release.