
Throughput downfall in using MPTCP for TCP VPN #361

Open
jeetu28 opened this issue Sep 13, 2019 · 8 comments

@jeetu28

jeetu28 commented Sep 13, 2019

Scenario 1 - To transport a TCP stream, we are using the MPTCP kernel + shadowsocks, and it is working fine.
Scenario 2 - To transport UDP traffic over multipath, we chose the following design
(see issue #337 for reference):

  • Use TCP OpenVPN to encapsulate the UDP datagrams into TCP, and then use shadowsocks to transport this TCP (VPN) stream over MPTCP via SOCKS (see the sketch below).
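For reference, this is roughly what the client side looks like in Scenario 2; the server address, ports and file name below are placeholders, not our exact settings:

    # Run OpenVPN in TCP client mode and point it at the local shadowsocks
    # SOCKS5 listener, so the UDP application traffic is carried as UDP-in-TCP-in-MPTCP
    cat > client.ovpn <<'EOF'
    dev tun
    proto tcp-client
    remote <openvpn-server> 1194
    socks-proxy 127.0.0.1 1080
    EOF
    openvpn --config client.ovpn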

Going from Scenario 1 to Scenario 2 there is a sharp decrease in throughput of up to 35%, which is too high even if we consider the overhead of the TCP OpenVPN tunnel.

We can see a lot of retransmissions, duplicate packets, etc. Please find the pcap attached.

Configuration of End-Points

Client (2 LTE links)
MPTCP version 0.94.6, kernel 4.14.133+
OS: Debian 8
Kernel Parameters
kernel.osrelease = 4.14.133mptcp+
net.mptcp.mptcp_checksum = 1
net.mptcp.mptcp_debug = 0
net.mptcp.mptcp_enabled = 1
net.mptcp.mptcp_path_manager = fullmesh
net.mptcp.mptcp_scheduler = default
net.mptcp.mptcp_syn_retries = 3
net.mptcp.mptcp_version = 0
/sys/module/mptcp_fullmesh/parameters/create_on_err 1
/proc/sys/net/ipv4/tcp_congestion_control cubic
/sys/module/mptcp_fullmesh/parameters/num_subflows 1

Server (1 link)
MPTCP version 0.94, kernel 4.14.79+
OS: Debian 9
Kernel Parameters
net.mptcp.mptcp_checksum = 1
net.mptcp.mptcp_debug = 0
net.mptcp.mptcp_enabled = 1
net.mptcp.mptcp_path_manager = fullmesh
net.mptcp.mptcp_scheduler = default
net.mptcp.mptcp_syn_retries = 3
net.mptcp.mptcp_version = 0
/sys/module/mptcp_fullmesh/parameters/create_on_err 1
/proc/sys/net/ipv4/tcp_congestion_control cubic
/sys/module/mptcp_fullmesh/parameters/num_subflows 1
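If anyone wants to check the subflow state on these kernels, something like this can be used (a sketch; the proc entries are the ones the out-of-tree multipath-tcp.org kernel exposes, as far as we know):

    # List active MPTCP connections and their subflows (out-of-tree kernel)
    cat /proc/net/mptcp_net/mptcp
    # MPTCP MIB counters (retransmissions, fallbacks, ...), if present in the build
    grep -i mptcp /proc/net/netstat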

Please advise.

@VikasBansal1510

Please find attached the pcap for the above issue:
mptcp_low_throughput.zip

@cpaasch
Member

cpaasch commented Sep 16, 2019

To understand your Scenario 2 correctly: Are you sending UDP over TCP over MPTCP?

@VikasBansal1510

Yes, right: UDP over TCP over MPTCP. We are converting the UDP stream into a TCP stream (now on the same level as Scenario 1) and then proxying that TCP stream via SOCKS to the other end.
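For illustration, with shadowsocks-libev the client side of that SOCKS step looks roughly like this (server address, port, password and cipher are placeholders, not our real values):

    # shadowsocks-libev client: expose a SOCKS5 proxy on 127.0.0.1:1080
    # that carries the OpenVPN TCP stream to the MPTCP-capable server
    ss-local -s <shadowsocks-server> -p 8388 -l 1080 -k <password> -m aes-256-gcm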

@cpaasch
Member

cpaasch commented Sep 17, 2019

And before that you did TCP over MPTCP, and that was fine?

I really doubt that UDP over TCP over MPTCP can give you good performance...

@jeetu28
Author

jeetu28 commented Sep 18, 2019

Yes... TCP over MPTCP was fine!!

@cpaasch
Member

cpaasch commented Sep 18, 2019

Looking at the traces, it does not seem like an MPTCP problem. Is the network link the same in both scenarios? There simply seems to be a huge latency between the client and the server (on the order of 300 ms). Recovering from the packet loss then takes quite some time.

Aren't you seeing this kind of latency and packet-loss in Scenario 1?
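You can quantify both from the attached capture with tshark, for example (the pcap file name is a placeholder for the one in your zip):

    # Retransmissions per second over the trace
    tshark -r capture.pcap -q -z io,stat,1,"COUNT(tcp.analysis.retransmission)tcp.analysis.retransmission"
    # Per-packet ACK round-trip times, to confirm the ~300 ms latency
    tshark -r capture.pcap -Y tcp.analysis.ack_rtt -T fields -e frame.time_relative -e tcp.analysis.ack_rtt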

@jeetu28
Author

jeetu28 commented Sep 19, 2019

Well... in Scenario 1 we are not seeing such latency. We are using the same link for both scenarios.
Do you think this latency could be due to fragmentation? We are using an MTU of 1500 for the TCP VPN tunnel.
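If it helps, the tunnel MTU and MSS clamping can be checked and adjusted roughly like this (tun0 and the values below are illustrative OpenVPN knobs, not our exact config):

    # Effective MTU of the VPN interface
    ip link show tun0
    # OpenVPN directives that affect this, e.g. in the client config:
    #   tun-mtu 1500
    #   mssfix 1400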

@cpaasch
Member

cpaasch commented Sep 19, 2019

I don't think that it's due to the MTU, because I guess you are using the same config for Scenario 1 and Scenario 2.

Can you share a packet-trace of Scenario 1 as well? That might give some insight.
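Capturing it the same way as the attached one is enough, e.g. (interface and server address are placeholders):

    # Capture Scenario 1 traffic toward the server during a test run
    tcpdump -i any -s 100 -w scenario1.pcap host <server-ip>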
