UDP performance stuck around 10mbits unless using -A option #234

Closed
jethrocarr opened this issue Jan 1, 2015 · 10 comments
@jethrocarr

I believe I've found a bug in iperf3 where the performance of UDP tests seems capped at around 10 Mbit/s when run over a Linux tunnel interface. TCP tests via the tunnel show no performance issue, and UDP or TCP tests via the real (non-tunneled) interface also work fine.

What is particularly weird is that I can work around the performance issue by setting the -A parameter (CPU affinity); as soon as this is done, UDP performance is back to normal. CPU affinity should make no difference, given that the test machines have only a single CPU, so it's unclear why it has any effect.

Reproduction (the corresponding commands are sketched just after this list):

  1. Configure two Linux servers connected on a network.
  2. Perform an iperf3 UDP test via the real interfaces. Performance will be near wire speed as seen on the receiving server.
  3. Configure an OpenVPN tunnel between the two servers.
  4. Perform an iperf3 UDP test via the tunnel. Performance will be limited to roughly 10 Mbit/s as seen on the receiving server.
  5. Perform the test again, but this time with the -A option set. Performance will be much better as seen on the receiving server.
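
The client-side commands for steps 2, 4 and 5 look like the following (the server is simply started with iperf3 -s; the addresses are the ones appearing in the transcripts later in this report):

Step 2 (real interface):  # iperf3 -u -c 192.168.0.1 -b 100M --get-server-output
Step 4 (via the tunnel):  # iperf3 -u -c 10.8.1.1 -b 100M --get-server-output
Step 5 (with -A added):   # iperf3 -A -u -c 10.8.1.1 -b 100M --get-server-output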

Environment:

  • Iperf3 3.0.6 and also current git master as of 20150101
  • Kernel 3.10.0-123.13.2.el7.x86_64 (RHEL 7) and kernel 2.6.32-431.1.2.0.1.el6.x86_64 (RHEL 6)
  • Single-CPU KVM guest.
  • KVM host environment/kernel unknown, but in theory irrelevant.

Example Of Fault:

The following illustrates a test via the tunnel, where a bandwidth of 100 Mbit/s has been requested, but the receiving server only shows around 10 Mbit/s received and a large number of lost packets.

# iperf3 -u -c 10.8.1.1 -b 100M --get-server-output
Connecting to host 10.8.1.1, port 5201
[  4] local 10.8.1.6 port 37162 connected to 10.8.1.1 port 5201
[ ID] Interval           Transfer     Bandwidth       Total Datagrams
[  4]   0.00-1.00   sec  11.3 MBytes  94.7 Mbits/sec  1445
[  4]   1.00-2.00   sec  12.5 MBytes   105 Mbits/sec  1598
[  4]   2.00-3.00   sec  12.5 MBytes   105 Mbits/sec  1600
[  4]   3.00-4.00   sec  12.5 MBytes   105 Mbits/sec  1599
[  4]   4.00-5.00   sec  12.5 MBytes   105 Mbits/sec  1601
[  4]   5.00-6.00   sec  12.5 MBytes   105 Mbits/sec  1599
[  4]   6.00-7.00   sec  12.5 MBytes   105 Mbits/sec  1600
[  4]   7.00-8.00   sec  12.5 MBytes   105 Mbits/sec  1601
[  4]   8.00-9.00   sec  12.5 MBytes   105 Mbits/sec  1599
[  4]   9.00-10.00  sec  12.5 MBytes   105 Mbits/sec  1601
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec   124 MBytes   104 Mbits/sec  0.438 ms  14118/15699 (90%)
[  4] Sent 15699 datagrams

Server output:
Accepted connection from 10.8.1.6, port 37952
[  5] local 10.8.1.1 port 5201 connected to 10.8.1.6 port 37162
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec  1.16 MBytes  9.76 Mbits/sec  0.748 ms  1150/1299 (89%)
[  5]   1.00-2.00   sec  1.29 MBytes  10.8 Mbits/sec  0.388 ms  1435/1600 (90%)
[  5]   2.00-3.00   sec  1.25 MBytes  10.5 Mbits/sec  0.524 ms  1440/1600 (90%)
[  5]   3.00-4.00   sec  1.25 MBytes  10.5 Mbits/sec  0.480 ms  1440/1600 (90%)
[  5]   4.00-5.00   sec  1.22 MBytes  10.2 Mbits/sec  0.381 ms  1444/1600 (90%)
[  5]   5.00-6.00   sec  1.24 MBytes  10.4 Mbits/sec  0.702 ms  1441/1600 (90%)
[  5]   6.00-7.00   sec  1.25 MBytes  10.5 Mbits/sec  0.383 ms  1439/1599 (90%)
[  5]   7.00-8.00   sec  1.25 MBytes  10.5 Mbits/sec  0.567 ms  1443/1603 (90%)
[  5]   8.00-9.00   sec  1.22 MBytes  10.2 Mbits/sec  0.467 ms  1437/1593 (90%)
[  5]   9.00-10.00  sec  1.22 MBytes  10.2 Mbits/sec  0.438 ms  1449/1605 (90%)

To verify that the tunnel itself is not at fault, a test with iperf2 performs correctly:

# iperf -u -c 10.8.1.1 -b 100M
------------------------------------------------------------
Client connecting to 10.8.1.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  122 KByte (default)
------------------------------------------------------------
[  3] local 10.8.1.6 port 33252 connected with 10.8.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  3] Sent 85467 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec   120 MBytes   101 Mbits/sec   0.026 ms    0/85466 (0%)
[  3]  0.0-10.0 sec  1 datagrams received out-of-order

The remote server shows the expected performance:

[  4] local 10.8.1.1 port 5001 connected with 10.8.1.6 port 46247
[  4]  0.0-10.0 sec   120 MBytes   101 Mbits/sec   0.029 ms    0/85469 (0%)
[  4]  0.0-10.0 sec  1 datagrams received out-of-order

Re-doing the UDP test with iperf3, but this time with the CPU affinity option (-A) set, the test performs as expected:

# iperf3 -A -u -c 10.8.1.1 -b 100M --get-server-output
Connecting to host 10.8.1.1, port 5201
[  4] local 10.8.1.6 port 37963 connected to 10.8.1.1 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  11.6 MBytes  97.1 Mbits/sec   10    106 KBytes
[  4]   1.00-2.00   sec  13.1 MBytes   110 Mbits/sec    1    105 KBytes
[  4]   2.00-3.00   sec  12.7 MBytes   107 Mbits/sec    1    103 KBytes
[  4]   3.00-4.00   sec  12.4 MBytes   104 Mbits/sec    3   98.0 KBytes
[  4]   4.00-5.00   sec  12.5 MBytes   105 Mbits/sec    0    135 KBytes
[  4]   5.00-6.00   sec  11.5 MBytes  96.1 Mbits/sec    2    109 KBytes
[  4]   6.00-7.00   sec  13.2 MBytes   110 Mbits/sec    1    114 KBytes
[  4]   7.00-8.00   sec  12.9 MBytes   108 Mbits/sec    3    107 KBytes
[  4]   8.00-9.00   sec  12.3 MBytes   103 Mbits/sec    2    103 KBytes
[  4]   9.00-10.00  sec  12.7 MBytes   106 Mbits/sec   10    111 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec   125 MBytes   105 Mbits/sec   33             sender
[  4]   0.00-10.00  sec   125 MBytes   105 Mbits/sec                  receiver

Server output:
-----------------------------------------------------------
Accepted connection from 10.8.1.6, port 37962
[  5] local 10.8.1.1 port 5201 connected to 10.8.1.6 port 37963
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec  11.0 MBytes  92.5 Mbits/sec
[  5]   1.00-2.00   sec  12.9 MBytes   108 Mbits/sec
[  5]   2.00-3.00   sec  12.8 MBytes   107 Mbits/sec
[  5]   3.00-4.00   sec  12.3 MBytes   103 Mbits/sec
[  5]   4.00-5.00   sec  12.7 MBytes   106 Mbits/sec
[  5]   5.00-6.00   sec  12.1 MBytes   101 Mbits/sec
[  5]   6.00-7.00   sec  12.3 MBytes   103 Mbits/sec
[  5]   7.00-8.00   sec  13.1 MBytes   110 Mbits/sec
[  5]   8.00-9.00   sec  12.6 MBytes   106 Mbits/sec
[  5]   9.00-10.00  sec  12.3 MBytes   103 Mbits/sec
@bmah888 bmah888 self-assigned this Jan 2, 2015
@bmah888
Contributor

bmah888 commented Jan 2, 2015

Two other pieces of output might be useful:

  1. The test with the -A option is actually a TCP test. What happened is that due to some sloppy argument processing in iperf3, the -A option ate the '-u' argument (interpreting it as a zero), and then iperf3 selected TCP as a protocol by default. So that test isn't showing what you think it shows. Can you do a run with the client like this:
    iperf3 -A 0 -u -c 10.8.1.1 -b 100M --get-server-output
    (Basically what you had, but give 0 as a parameter to the -A flag, so that it's really a well-formed option.)
  2. Can you also do a run between the "real" interfaces (one that gave "as expected" performance)? That way we know what "as expected" really means.

We have seen some odd effects related to processor affinity, but they usually show up at much higher bitrates than what you're doing (e.g. 10 Gbps), and only on multi-CPU systems.

@jethrocarr
Author

Hi Bruce,

Thanks for the quick response. Regarding the points raised:

1: Ouch, that's a nasty trap :-) I've re-run the test as requested and am sadly still getting less than 10 Mbit/s, so it's clear the earlier results came from the CLI argument bug rather than anything relating to affinity itself.

# iperf3 -A 0 -u -c 10.8.1.1 -b 100M --get-server-output
Connecting to host 10.8.1.1, port 5201
[  4] local 10.8.1.6 port 57797 connected to 10.8.1.1 port 5201
[ ID] Interval           Transfer     Bandwidth       Total Datagrams
[  4]   0.00-1.00   sec  11.3 MBytes  94.7 Mbits/sec  1446
[  4]   1.00-2.00   sec  12.5 MBytes   105 Mbits/sec  1597
[  4]   2.00-3.00   sec  12.5 MBytes   105 Mbits/sec  1603
[  4]   3.00-4.00   sec  12.5 MBytes   105 Mbits/sec  1597
[  4]   4.00-5.00   sec  12.5 MBytes   105 Mbits/sec  1602
[  4]   5.00-6.00   sec  12.5 MBytes   105 Mbits/sec  1598
[  4]   6.00-7.00   sec  12.5 MBytes   105 Mbits/sec  1600
[  4]   7.00-8.00   sec  12.5 MBytes   105 Mbits/sec  1603
[  4]   8.00-9.00   sec  12.5 MBytes   105 Mbits/sec  1600
[  4]   9.00-10.00  sec  12.5 MBytes   105 Mbits/sec  1600
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec   124 MBytes   104 Mbits/sec  0.376 ms  14622/15699 (93%)
[  4] Sent 15699 datagrams

Server output:
Accepted connection from 10.8.1.6, port 35124
[  5] local 10.8.1.1 port 5201 connected to 10.8.1.6 port 57797
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec   672 KBytes  5.50 Mbits/sec  0.174 ms  1215/1299 (94%)
[  5]   1.00-2.00   sec  1.17 MBytes  9.83 Mbits/sec  0.270 ms  1451/1601 (91%)
[  5]   2.00-3.00   sec   856 KBytes  7.01 Mbits/sec  0.213 ms  1491/1598 (93%)
[  5]   3.00-4.00   sec   888 KBytes  7.27 Mbits/sec  0.222 ms  1489/1600 (93%)
[  5]   4.00-5.00   sec   672 KBytes  5.51 Mbits/sec  0.209 ms  1517/1601 (95%)
[  5]   5.00-6.00   sec   720 KBytes  5.90 Mbits/sec  0.557 ms  1501/1591 (94%)
[  5]   6.00-7.00   sec  1.12 MBytes  9.37 Mbits/sec  0.402 ms  1467/1610 (91%)
[  5]   7.00-8.00   sec   656 KBytes  5.37 Mbits/sec  0.521 ms  1506/1588 (95%)
[  5]   8.00-9.00   sec  1.01 MBytes  8.45 Mbits/sec  0.231 ms  1476/1605 (92%)
[  5]   9.00-10.00  sec   776 KBytes  6.36 Mbits/sec  0.376 ms  1509/1606 (94%)


iperf Done.

2: The short answer is that I'd expect at least 100 Mbit/s across the tunnel; I've confirmed that it's capable of this with a TCP test over the tunnel:

iperf3 -c 10.8.1.1 -b 100M --get-server-output
Connecting to host 10.8.1.1, port 5201
[  4] local 10.8.1.6 port 35126 connected to 10.8.1.1 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  12.5 MBytes   104 Mbits/sec    2   56.9 KBytes
[  4]   1.00-2.00   sec  12.2 MBytes   102 Mbits/sec   10   46.3 KBytes
[  4]   2.00-3.00   sec  12.0 MBytes   101 Mbits/sec    2   42.4 KBytes
[  4]   3.00-4.00   sec  13.2 MBytes   111 Mbits/sec    5   35.8 KBytes
[  4]   4.00-5.00   sec  12.2 MBytes   102 Mbits/sec    9   47.7 KBytes
[  4]   5.00-6.00   sec  12.8 MBytes   107 Mbits/sec   14   43.7 KBytes
[  4]   6.00-7.00   sec  11.8 MBytes  99.1 Mbits/sec    3   22.5 KBytes
[  4]   7.00-8.00   sec  12.5 MBytes   105 Mbits/sec    4   43.7 KBytes
[  4]   8.00-9.00   sec  13.0 MBytes   109 Mbits/sec    4   46.3 KBytes
[  4]   9.00-10.00  sec  11.7 MBytes  97.8 Mbits/sec    4   47.7 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec   124 MBytes   104 Mbits/sec   57             sender
[  4]   0.00-10.00  sec   124 MBytes   104 Mbits/sec                  receiver

Server output:
-----------------------------------------------------------
Accepted connection from 10.8.1.6, port 35125
[  5] local 10.8.1.1 port 5201 connected to 10.8.1.6 port 35126
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec  11.8 MBytes  98.8 Mbits/sec
[  5]   1.00-2.00   sec  12.0 MBytes   101 Mbits/sec
[  5]   2.00-3.00   sec  12.8 MBytes   108 Mbits/sec
[  5]   3.00-4.00   sec  12.4 MBytes   104 Mbits/sec
[  5]   4.00-5.00   sec  12.9 MBytes   108 Mbits/sec
[  5]   5.00-6.00   sec  12.3 MBytes   103 Mbits/sec
[  5]   6.00-7.00   sec  11.7 MBytes  98.5 Mbits/sec
[  5]   7.00-8.00   sec  12.6 MBytes   106 Mbits/sec
[  5]   8.00-9.00   sec  13.3 MBytes   112 Mbits/sec
[  5]   9.00-10.00  sec  12.0 MBytes   101 Mbits/sec

The test done previously with iperf2 verified that the tunnel is capable of over 100 Mbit/s of UDP as well, so based on iperf3's TCP results and iperf2's UDP results, I'm pretty comfortable that the tunnel itself is operating as expected.

I've also run a UDP test between the servers' real IP addresses to demonstrate that UDP works correctly as long as it's not going via a Linux tunnel.

iperf3 -u -c 192.168.0.1 -b 100M --get-server-output
Connecting to host 192.168.0.1, port 5201
[  4] local 192.168.0.2 port 51646 connected to 192.168.0.1 port 5201
[ ID] Interval           Transfer     Bandwidth       Total Datagrams
[  4]   0.00-1.00   sec  11.3 MBytes  94.7 Mbits/sec  1445
[  4]   1.00-2.00   sec  12.5 MBytes   105 Mbits/sec  1599
[  4]   2.00-3.00   sec  12.5 MBytes   105 Mbits/sec  1599
[  4]   3.00-4.00   sec  12.6 MBytes   105 Mbits/sec  1607
[  4]   4.00-5.00   sec  12.4 MBytes   104 Mbits/sec  1593
[  4]   5.00-6.00   sec  12.5 MBytes   105 Mbits/sec  1601
[  4]   6.00-7.00   sec  12.5 MBytes   105 Mbits/sec  1601
[  4]   7.00-8.00   sec  12.5 MBytes   105 Mbits/sec  1601
[  4]   8.00-9.00   sec  12.5 MBytes   105 Mbits/sec  1601
[  4]   9.00-10.00  sec  12.5 MBytes   105 Mbits/sec  1596
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec   124 MBytes   104 Mbits/sec  0.061 ms  137/15843 (0.86%)
[  4] Sent 15843 datagrams

Server output:
-----------------------------------------------------------
Accepted connection from 192.168.0.2, port 36555
[  5] local 192.168.0.1 port 5201 connected to 192.168.0.2 port 51646
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec  10.5 MBytes  88.1 Mbits/sec  0.061 ms  101/1445 (7%)
[  5]   1.00-2.00   sec  12.5 MBytes   105 Mbits/sec  0.066 ms  0/1599 (0%)
[  5]   2.00-3.00   sec  12.5 MBytes   105 Mbits/sec  0.064 ms  0/1599 (0%)
[  5]   3.00-4.00   sec  12.5 MBytes   105 Mbits/sec  0.065 ms  10/1607 (0.62%)
[  5]   4.00-5.00   sec  12.4 MBytes   104 Mbits/sec  0.067 ms  4/1593 (0.25%)
[  5]   5.00-6.00   sec  12.5 MBytes   105 Mbits/sec  0.071 ms  2/1601 (0.12%)
[  5]   6.00-7.00   sec  12.5 MBytes   105 Mbits/sec  0.070 ms  4/1601 (0.25%)
[  5]   7.00-8.00   sec  12.5 MBytes   105 Mbits/sec  0.067 ms  5/1601 (0.31%)
[  5]   8.00-9.00   sec  12.5 MBytes   105 Mbits/sec  0.067 ms  2/1601 (0.12%)
[  5]   9.00-10.00  sec  12.4 MBytes   104 Mbits/sec  0.061 ms  9/1596 (0.56%)

@bmah888
Contributor

bmah888 commented Jan 5, 2015

OK, thanks for the sample output. One thing you could try (on iperf3 3.0.10 or newer) is to experiment with the -w command-line flag to increase the socket buffer sizes. At ESnet we have (somewhat counterintuitively) found that setting larger socket buffers helps reduce losses in UDP tests, although usually we've seen that with multi-Gbps tests.
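
For example, something along these lines (the 4M buffer size is just an arbitrary starting point to experiment with, not a tuned recommendation):

# iperf3 -u -c 10.8.1.1 -b 100M -w 4M --get-server-output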

@jethrocarr
Author

Tested with the latest version as of 20 Jan 2015 (6bd4e25) and found that whilst the -w option can affect the performance somewhat, I can never get it beyond the 10 Mbit/s limit.

@rudolf

rudolf commented Feb 17, 2015

I've come across a similar issue where iperf3 has huge packet loss while iperf2 has none. What is interesting is that when reversing the test, there is no packet loss. I also tried changing the UDP datagram size to match that of iperf2, but this had no effect.

My setup:
laptop 1 <- 1Gbps Ethernet -> 1Gbps Switch <- 100Mbps Ethernet -> laptop 2

./src/iperf3 -c 10.1.0.101 -u -b 10m -V -l 1470
iperf 3.0.11
Linux rudolf-xps 3.13.0-45-generic #74-Ubuntu SMP Tue Jan 13 19:36:28 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Time: Tue, 17 Feb 2015 07:21:20 GMT
Connecting to host 10.1.0.101, port 5201
      Cookie: rudolf-xps.1424157680.448108.7ac9eef
[  4] local 10.1.0.171 port 53849 connected to 10.1.0.101 port 5201
Starting Test: protocol: UDP, 1 streams, 1470 byte blocks, omitting 0 seconds, 10 second test
[ ID] Interval           Transfer     Bandwidth       Total Datagrams
[  4]   0.00-1.00   sec  1.08 MBytes  9.02 Mbits/sec  767  
[  4]   1.00-2.00   sec  1.19 MBytes  10.0 Mbits/sec  850  
[  4]   2.00-3.00   sec  1.19 MBytes  10.0 Mbits/sec  851  
[  4]   3.00-4.00   sec  1.19 MBytes  10.0 Mbits/sec  850  
[  4]   4.00-5.00   sec  1.19 MBytes  10.0 Mbits/sec  850  
[  4]   5.00-6.00   sec  1.19 MBytes  10.0 Mbits/sec  851  
[  4]   6.00-7.00   sec  1.19 MBytes  10.0 Mbits/sec  850  
[  4]   7.00-8.00   sec  1.19 MBytes  10.0 Mbits/sec  850  
[  4]   8.00-9.00   sec  1.19 MBytes  10.0 Mbits/sec  850  
[  4]   9.00-10.00  sec  1.19 MBytes  10.0 Mbits/sec  851  
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  11.8 MBytes  9.90 Mbits/sec  0.155 ms  5020/8414 (60%)  
[  4] Sent 8414 datagrams
CPU Utilization: local/sender 1.5% (0.4%u/1.1%s), remote/receiver 0.2% (0.1%u/0.1%s)
./src/iperf3 -c 10.1.0.101 -u -b 10m -V -R
iperf 3.0.11
Linux rudolf-xps 3.13.0-45-generic #74-Ubuntu SMP Tue Jan 13 19:36:28 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Time: Tue, 17 Feb 2015 07:24:49 GMT
Connecting to host 10.1.0.101, port 5201
Reverse mode, remote host 10.1.0.101 is sending
      Cookie: rudolf-xps.1424157889.665121.5d58bd8
[  4] local 10.1.0.171 port 46537 connected to 10.1.0.101 port 5201
Starting Test: protocol: UDP, 1 streams, 8192 byte blocks, omitting 0 seconds, 10 second test
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-1.00   sec  1.20 MBytes  10.1 Mbits/sec  0.796 ms  0/154 (0%)  
[  4]   1.00-2.00   sec  1.20 MBytes  10.0 Mbits/sec  0.805 ms  0/153 (0%)  
[  4]   2.00-3.00   sec  1.19 MBytes  9.96 Mbits/sec  0.822 ms  0/152 (0%)  
[  4]   3.00-4.00   sec  1.20 MBytes  10.0 Mbits/sec  0.865 ms  0/153 (0%)  
[  4]   4.00-5.00   sec  1.19 MBytes  9.96 Mbits/sec  0.773 ms  0/152 (0%)  
[  4]   5.00-6.00   sec  1.20 MBytes  10.0 Mbits/sec  0.799 ms  0/153 (0%)  
[  4]   6.00-7.00   sec  1.20 MBytes  10.0 Mbits/sec  0.840 ms  0/153 (0%)  
[  4]   7.00-8.00   sec  1.19 MBytes  9.96 Mbits/sec  0.774 ms  0/152 (0%)  
[  4]   8.00-9.00   sec  1.20 MBytes  10.0 Mbits/sec  0.802 ms  0/153 (0%)  
[  4]   9.00-10.00  sec  1.19 MBytes  9.96 Mbits/sec  0.783 ms  0/152 (0%)  
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  11.9 MBytes  10.0 Mbits/sec  0.783 ms  0/1527 (0%)  
[  4] Sent 1527 datagrams
CPU Utilization: local/receiver 1.1% (0.2%u/0.9%s), remote/sender 1.1% (0.2%u/0.9%s)

If I force laptop 1's Ethernet to 100 Mbps, there is no more packet loss.
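
For reference, the link was forced to 100 Mbps with something along these lines (the interface name eth0 is just illustrative):

# ethtool -s eth0 speed 100 duplex full autoneg off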

Is there any way in which we can help debug this problem?

@cgoldman1

Hello ESnet,

It's been over a year since this packet loss problem was reported. Are there any plans to release a new version of iperf3 where UDP works reliably above 10 Mbit/s? Results from testing over 1 Gbps links:

-----forward direction----- (very big delta in results)
iperf3 -c 10.x.x.y -u -b50M -t120s -p 5202 = 46% packet loss
iperf -c 10.x.x.y -u -b50M -t120s -p 5001 = 0% packet loss

iperf3 -c 10.x.x.y -u -b950M -t120s -p 5202 = 75.18% packet loss
iperf -c 10.x.x.y -u -b950M -t120s -p 5001 = 0.19% packet loss

-----reverse direction----- (not as bad as Forward direction)
iperf3 -c 10.x.x.y -u -b50M -t120s -p 5202 -R = 0.0131% packet loss
iperf -c 10.x.x.y -u -b50M -t120s -p 5001 = 0.0055% packet loss

iperf3 -c 10.x.x.y -u -b950M -t120s -p 5202 -R = 8.5566% packet loss
iperf -c 10.x.x.y -u -b950M -t120s -p 5001 = 0.5625% packet loss

@petterwildhagen

I am observing the same problem when running UDP tests with different packet sizes and different numbers of streams. I am running over a link where TCP throughput is 50 Mbit/s, but UDP starts to show packet loss as soon as the offered load exceeds 10 Mbit/s, irrespective of the number of streams and the packet size.
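
The kind of sweep I've been running looks roughly like this (the address, rates, lengths and stream count here are placeholders, not my exact values):

SERVER=10.1.0.101          # placeholder server address
for bw in 5M 10M 20M 50M; do
  for len in 200 1400 8000; do
    iperf3 -u -c $SERVER -b $bw -l $len -P 2 -t 10
  done
done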

@cgoldman1

I switched to the older version 2.0.8 and have no problems now. Version 3 is not accurate.


@bmah888
Contributor

bmah888 commented Nov 23, 2016

I recommend you try increasing the socket buffer size with -w.

Also, if you are not running over a path that is jumbo-frame clean, try setting -l to something that will fit inside the path MTU.
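
For example, something like this (the 1400-byte length and 2M buffer are just starting points to experiment with on a standard 1500-byte MTU path, not tuned values):

iperf3 -u -c 10.x.x.y -b 50M -t 120 -p 5202 -l 1400 -w 2M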

@bmah888
Contributor

bmah888 commented Mar 30, 2017

We're tracking UDP performance issues in #296, so I'm resolving this one as a duplicate.

@bmah888 bmah888 closed this as completed Mar 30, 2017