This repository has been archived by the owner on Apr 18, 2024. It is now read-only.

Iperf doesn't increase bandwidth #175

Open
delinage opened this issue Apr 20, 2017 · 15 comments

@delinage

I have installed MPTCP on a client and a server (both Ubuntu 16.04), configured the routing tables (I have 2 Wi-Fi interfaces), and verified with ifstat that all interfaces are being used when I run an iperf connection.

My problem is that if I do:
iperf -s
iperf -c 10.0.0.3
I get better bandwidth when I'm using just one interface on the client and one on the server than when I use all of them. If the protocol is working, I should be getting higher bandwidth, but that is not the case. So I wonder: do I have to use a specific iperf configuration, or is something wrong?
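
For reference, the per-interface routing that MPTCP needs is usually set up with Linux policy routing along these lines. This is only a sketch: the interface names (wlan0/wlan1), the table numbers, and the addresses are assumptions based on the 10.0.0.x addresses used in this thread.

# one routing table per interface, selected by source address
# (wlan0 = 10.0.0.1 and wlan1 = 10.0.0.2 are placeholders; adjust to your setup)
ip rule add from 10.0.0.1 table 1
ip rule add from 10.0.0.2 table 2
ip route add 10.0.0.0/24 dev wlan0 src 10.0.0.1 scope link table 1
ip route add 10.0.0.0/24 dev wlan1 src 10.0.0.2 scope link table 2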

@GinesGarcia

GinesGarcia commented Apr 21, 2017 via email

@delinage

I have checked it, yes, and I'm using the default scheduler.
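
(Side note for anyone reproducing this: on the out-of-tree MPTCP kernel the active scheduler and path manager are exposed as sysctls; the names below are the usual ones but may differ between releases.)

sysctl net.mptcp.mptcp_enabled        # 1 = MPTCP enabled
sysctl net.mptcp.mptcp_scheduler      # "default" = lowest-RTT-first scheduler
sysctl net.mptcp.mptcp_path_manager   # e.g. "fullmesh"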

@GinesGarcia

GinesGarcia commented Apr 21, 2017 via email

@delinage

I have made Wireshark captures of them; is that what you mean?

@GinesGarcia

GinesGarcia commented Apr 21, 2017 via email

@delinage

I've compressed them so I could upload them.

This one:
iperf_MPTCP_test3.zip
It is a capture, taken on the server, of 2 iperf sessions using 2 subflows (10.0.0.1 and 10.0.0.2 are the client, while 10.0.0.3 and 10.0.0.4 are the server).

And this one:
iperf_1Wifi1Eth_Test2-3.zip
It is a capture with just one subflow.

@GinesGarcia

First test:
from 10.0.0.1:49035 -> 10.0.0.3:5001
from 10.0.0.1:33251 -> 10.0.0.4:5001
from 10.0.0.2:41297 -> 10.0.0.3:5001
from 10.0.0.2:59494 -> 10.0.0.4:5001

Second test:
from 10.0.0.1:51455 -> 10.0.0.3:5001

Have you compared one connection with two subflows (the first test with one of the iperf sessions removed) against the second test?
If so, what is (more or less) the RTT of each wireless link?

@delinage

Do you mean comparing this test:
iperf_1Wifi2Eth_Tests.zip
with the last one, right?

I still get better results with 1 subflow than with 2 subflows...

I don't know the RTT right now, but I can work it out from the Wireshark captures, right?

@delinage

delinage commented Apr 26, 2017

From 10.0.0.1 to 10.0.0.3 it is 0.000041777 s.
From 10.0.0.1 to 10.0.0.4 it is 0.000030349 s.

More or less, in that last capture.

@GinesGarcia

OK, the RTTs are more or less similar, so you are not suffering from head-of-line blocking.

Regarding the last capture (iperf_1Wifi2Eth_Tests.zip), are you using two subflows over the same physical interface (the one with 10.0.0.1) through 2 disjoint paths?

I'm a little bit lost about what you want to achieve. I suggest running a simple experiment. Configure MPTCP on your client with:

  • fullmesh path manager
  • OLIA congestion control
  • routing configured for the two physical interfaces (see the sketch below)

Then download something from the multipath-tcp.org FTP server (http://multipath-tcp.org/pmwiki.php/Users/UseMPTCP) and see whether your throughput increases.
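
For concreteness, a minimal sketch of that configuration; the sysctl and module names are the ones used by the v0.9x out-of-tree MPTCP kernel and should be treated as assumptions that may vary between releases:

sysctl -w net.mptcp.mptcp_path_manager=fullmesh
modprobe mptcp_olia                              # load the OLIA module if it is not built in
sysctl -w net.ipv4.tcp_congestion_control=olia
# plus the per-interface source routing shown earlier in the thread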

@delinage

Regarding the first part:
I've set up a server and a client on two different laptops and I'm connecting them through routers and switches to test whether MPTCP is useful throughput-wise. But my tests show that 1 subflow performs better than multiple subflows, which is not what I was expecting and not what the implementation's documentation says; the protocol is supposed to increase throughput under these conditions. That's what I'm trying to achieve, and that's why I am confused.

Then, in the first test (iperf_MPTCP_test3.zip) I enabled 2 IP addresses on the server and 2 on the client, expecting to get 2 subflows, but I got 4, because the fullmesh option apparently creates every possible combination.

That's why I brought one interface down in (iperf_1Wifi2Eth_Tests.zip), to get just 2 subflows, because I don't know how to force the implementation to create just 2 subflows with 2 IPs on the server and 2 IPs on the client.
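
(A possible alternative to bringing the interface down: the iproute2 shipped alongside the MPTCP kernel patches adds a per-link multipath flag, so an address can be kept out of the fullmesh without disabling the interface. This is only a sketch; the flag assumes the patched tooling, and eth1 is a placeholder name.)

ip link set dev eth1 multipath off      # never use this interface for MPTCP subflows
ip link set dev eth1 multipath backup   # or: use it only as a backup subflow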

Later, I compared these 2 tests with the 1-subflow test (iperf_1Wifi1Eth_Test2-3.zip) and saw that the results were not what I expected. So I came here to ask.

Regarding the second part:

:~$ ftp ftp.multipath-tcp.org
ftp: connect to address 130.104.230.45: Connection timed out
ftp: connect: Network is unreachable

It seems like the server is not working at the moment...

@delinage

delinage commented May 2, 2017

I've realized that when I use multiple subflows, iperf doesn't fill them up, so that might be the cause of the problem. Is there any way to make iperf generate (more) traffic faster?
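
(In case it helps: stock iperf can be pushed harder with parallel streams, a longer run, and a larger socket buffer; the exact values below are just examples.)

iperf -c 10.0.0.3 -t 30 -i 1 -P 4 -w 2M
# -t 30  run for 30 seconds          -i 1   report every second
# -P 4   four parallel TCP streams   -w 2M  larger socket buffer, so the sender is not window-limited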

@ghostli123

Hi, I have a similar issue when using MPTCP 0.90 on Ubuntu 14.04.

There are two subnets in total, one subnet per MPTCP subflow, so two MPTCP subflows are established for the transfer. iperf is used to generate the traffic at the application layer.

However, the overall throughput of MPTCP is not as good as that of single-path TCP. Moreover, the MPTCP throughput is not stable, ranging from 250 Mbps to 550 Mbps (average throughput over a 30-second iperf test).

Thus, I have two questions: 1) why is the MPTCP throughput not stable, and 2) in what cases is MPTCP throughput worse than regular TCP throughput? Thank you!
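
(One way to see where the variation comes from is to watch iperf's per-second report next to the per-interface rates; <server> is a placeholder, and the interface names passed to ifstat will differ on your hosts.)

iperf -c <server> -t 30 -i 1
ifstat -i eth0,eth1 1 30        # in a second terminal: 1-second samples for 30 seconds, one column per interface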

@yannisT

yannisT commented Jun 1, 2017

Hi all,

I am using the latest MPTCP version from git on a LAN testbed where Ethernet switches form disjoint paths between 1 multihomed server and 2 multihomed clients.

During some tests I witnessed contradictory throughput estimations from several (monitoring) tools such as iperf, cbm, /proc/net/dev, netstat, and even scp. Specifically, when running in isolation, MPTCP fully utilizes the network resources according to all monitoring tools. However, when MPTCP competes for bandwidth with regular TCP connections, iperf and cbm show MPTCP performing poorly compared to those connections, while /proc/net/dev and netstat report the expected superiority. I validated the bandwidth superiority of MPTCP in both cases by transferring actual data via scp, so I assume that iperf and cbm use some system calls that may not be completely compatible with the MPTCP implementation.
Could this be true?

Best Regards,
Yannis
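
(A way to cross-check the tools in the discrepancy described above is to read the kernel's own interface byte counters around a fixed window; eth0 and the 10-second window below are placeholders.)

RX1=$(cat /sys/class/net/eth0/statistics/rx_bytes); sleep 10
RX2=$(cat /sys/class/net/eth0/statistics/rx_bytes)
echo "$(( (RX2 - RX1) * 8 / 10 / 1000000 )) Mbit/s received on eth0 over the last 10 s"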

@yosephmaloche

yosephmaloche commented Jan 15, 2018

Hello, I am facing a similar issue. Please, is there anyone who can help?
As mentioned previously, I created two IP addresses for a single host (eth0 and eth1). I used 4 hosts and 4 switches in a ring topology: h1, h2, h3, h4 from left to right, circular. I use an SDN controller to control the flows on the links. When I check the controller's GUI, it shows 8 hosts.

When I send traffic from h1 to h2 and from h3 to h4, the MPTCP throughput is as expected, nearly double. However, when I send traffic from h1 to h3 and from h2 to h4, it is totally bad. I am confused. After adding the switches and hosts, here is how I created the links:
...........................................................................
info( '*** Add links\n')
linkProp = {'bw':2,'delay':'10ms'}
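# 2 Mbit/s, 10 ms links everywhere: the next four links form the switch ring,
# the remaining eight attach each host to its two adjacent switches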
net.addLink(s1, s2, cls=TCLink, **linkProp)
net.addLink(s2, s3, cls=TCLink, **linkProp)
net.addLink(s3, s4, cls=TCLink, **linkProp)
net.addLink(s4, s1, cls=TCLink, **linkProp)
net.addLink(h1, s1, cls=TCLink, **linkProp)
net.addLink(h1, s4, cls=TCLink, **linkProp)
net.addLink(h2, s1, cls=TCLink, **linkProp)
net.addLink(h2, s2, cls=TCLink, **linkProp)
net.addLink(h3, s2, cls=TCLink, **linkProp)
net.addLink(h3, s3, cls=TCLink, **linkProp)
net.addLink(h4, s3, cls=TCLink, **linkProp)
net.addLink(h4, s4, cls=TCLink, **linkProp)

info( '*** Starting network\n')
................................................................................
Please don't hesitate to comment; I don't know what I am missing.
I attached what it looks like in the ONOS controller:
screenshot from 2018-01-15 03-40-17
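
(For reference, the per-pair comparison can be reproduced straight from the Mininet CLI by running iperf on the hosts; <h1-ip> is a placeholder for whichever of h1's two addresses you target.)

mininet> h1 iperf -s &
mininet> h3 iperf -c <h1-ip> -t 30 -i 1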
