UDP performance stuck around 10mbits unless using -A option #234
Two other pieces of output that might be useful:
We have seen some odd effects related to processor affinity, but they usually show up at much higher bitrates than what you're doing (i.e. 10 Gbit/s), and only on multi-CPU systems.
Hi Bruce, thanks for the quick response. In regard to the points raised: 1: Ouch, that's a nasty trap :-) I've re-run the test as requested and am sadly still getting less than 10 Mbit/s, so it's clear the earlier results came from the CLI-arguments bug rather than anything relating to affinity itself.
2: The short answer is that I'd expect at least 100 Mbit/s across the tunnel; I've confirmed it's capable of that with a TCP test over the tunnel:
The test done previously with iperf2 verified that the tunnel was capable of over 100 Mbit/s UDP as well, so based on iperf3's TCP results and iperf2's UDP results, I'm pretty comfortable that the tunnel itself is operating as expected. I've also run a UDP test between the servers' real IP addresses to demonstrate that UDP works correctly as long as it's not going via a Linux tunnel.
OK, thanks for the sample output. One thing you could try (on iperf3 3.0.10 or newer) is experimenting with the -w option, which sets the socket buffer sizes.
Tested with the latest version as of 20 Jan 2015 (6bd4e25) and found that whilst the -w option can impact the performance somewhat, I can never get it beyond the 10 Mbit/s limit.
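For context on the -w suggestion above: that option adjusts the socket buffer sizes, which map to the SO_SNDBUF/SO_RCVBUF socket options. A minimal Python sketch (not iperf3 code) of what that tuning does on Linux, where the kernel typically doubles the requested value and caps it at net.core.rmem_max:

```python
import socket

# Sketch of what a socket-buffer option like iperf3's -w does:
# request a larger receive buffer and see what the kernel grants.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
requested = 4 * 1024 * 1024  # 4 MB, an illustrative value
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)

# Linux doubles the request for bookkeeping overhead, but silently
# caps it at net.core.rmem_max, so the grant may be far smaller.
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(granted)
sock.close()
```

If the granted value comes back much smaller than requested, raising net.core.rmem_max (via sysctl) is needed before -w can have its full effect.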
I've come across a similar issue where iperf3 has huge packet loss while iperf2 has none. Interestingly, when reversing the test, there's no packet loss. I've also tried changing the UDP datagram size to match that of iperf2, but this had no effect. My setup:
If I force laptop1's ethernet to 100 Mbit/s there's no more packet loss. Is there any way we can help debug this problem?
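The pattern described above, where loss disappears when the link is forced down to 100 Mbit/s, is consistent with the receiver being overrun: UDP has no flow control, so once the receive buffer fills, the kernel silently discards further datagrams. A small self-contained Python sketch of that effect on loopback (illustrative only, not the reporter's setup):

```python
import socket

# Receiver with a deliberately tiny buffer that is never drained
# while the sender blasts datagrams at it.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
recv.bind(("127.0.0.1", 0))
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(1000):
    try:
        send.sendto(b"x" * 1400, addr)  # no pacing; receiver never reads
    except OSError:
        pass  # some platforms report ENOBUFS instead of dropping silently

# Drain whatever actually survived in the receive buffer.
recv.setblocking(False)
received = 0
try:
    while True:
        recv.recv(2048)
        received += 1
except BlockingIOError:
    pass
print(f"received {received} of 1000 datagrams")
send.close()
recv.close()
```

Only the handful of datagrams that fit in the buffer survive; everything else is dropped with no error visible to the sender, which is exactly what a UDP loss counter then reports.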
Hello ESnet. It's been over a year since this packet-loss problem was reported. Are there any plans to release a new version of iperf3 where UDP works reliably above 10 Mbit/s? While testing over 1 Gbit/s links:
Forward direction (very big delta in results): iperf3 -c 10.x.x.y -u -b950M -t120s -p 5202 = 75.18% packet loss
Reverse direction (not as bad as forward): iperf3 -c 10.x.x.y -u -b950M -t120s -p 5202 -R = 8.5566% packet loss
I am observing the same problem running UDP tests with different packet sizes and different numbers of streams. I am running over a link where TCP throughput is 50 Mbit/s, but UDP starts to show packet loss whenever the load is larger than 10 Mbit/s, irrespective of the number of streams and the packet size.
I switched to the older version 2.0.8 and have no problems now. Version 3 is not accurate.
Recommend you try increasing the buffer size with the -w option. Also, if you are not running over a path that is jumbo-frame clean, try setting a smaller UDP datagram size (e.g. with -l) so each write fits in a single frame.
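On why jumbo-frame cleanliness matters: a UDP write larger than the path MTU is fragmented at the IP layer, and losing any single fragment discards the whole datagram, which multiplies the effective loss rate. A back-of-the-envelope sketch (assuming IPv4, a minimal 20-byte IP header, and a 1500-byte MTU; the 8192-byte write size is illustrative, not necessarily what iperf3 used here):

```python
import math

def ip_fragments(payload_len: int, mtu: int = 1500) -> int:
    """Number of IPv4 fragments for one UDP datagram: 8-byte UDP header,
    20-byte IP header per fragment, fragment payloads in multiples of 8."""
    udp_len = payload_len + 8             # payload plus UDP header
    per_frag = (mtu - 20) // 8 * 8        # usable bytes per fragment
    return math.ceil(udp_len / per_frag)

# A large 8192-byte write splits into multiple fragments on a
# 1500-byte-MTU path; losing any one of them loses the datagram.
print(ip_fragments(8192))   # 6
print(ip_fragments(1400))   # 1 -- fits in a single frame
```

With 6 fragments per datagram, even a modest per-packet loss rate on the wire turns into a much larger per-datagram loss rate in the iperf3 report, which is why keeping each write under the MTU is worth trying.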
We're tracking UDP performance issues in bug #296, so resolving this one as a duplicate bug. |
I believe I've found a bug in iperf3 where the performance of UDP tests seems capped at around 10 Mbit/s when run over a Linux tunnel interface. TCP tests via the tunnel show no performance issue; likewise, UDP or TCP tests via the real (non-tunneled) interface also work fine.
What is particularly weird is that I can work around the performance issue by setting the -A parameter (CPU affinity); as soon as this is done, UDP performance is back to normal. CPU affinity should make no difference, given that the test machines have only a single CPU, so it's unclear why this is having any effect.
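For reference, the -A option pins the iperf3 process to a given CPU via the sched_setaffinity(2) syscall. A minimal Linux-only Python sketch of the same pinning (illustrative, not iperf3's actual code):

```python
import os

# Pin the current process to CPU 0, roughly what "iperf3 -A 0" requests.
# os.sched_setaffinity is Linux-only, so guard for other platforms.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})       # pid 0 means the calling process
    print(os.sched_getaffinity(0))     # the allowed-CPU set, now {0}
else:
    print("sched_setaffinity not available on this platform")
```

On a single-CPU machine this call should be a no-op as far as scheduling is concerned, which is why the reporter finds the workaround so puzzling.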
Reproduction:
Environment:
Example Of Fault:
The following illustrates a test via a tunnel, where a bandwidth of 100 Mbit/s has been requested, but the receiving server shows only ~10 Mbit/s received and a large number of lost packets.
To verify it's not a case of the tunnel itself, a test with iperf2 performs correctly:
The remote server shows the expected performance:
Re-doing the UDP test with iperf3, but this time with the CPU affinity option (-A) set, the test performs as expected: