Throughput drop when using MPTCP for a TCP VPN #361
Comments
Please find attached the pcap for the above issue.
To understand your Scenario 2 correctly: Are you sending UDP over TCP over MPTCP?
Yes, right: UDP over TCP over MPTCP. We are converting the UDP stream into a TCP stream (now at the same level as Scenario 1) and then proxying that TCP stream via SOCKS to the other end.
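For reference, a minimal sketch of what such a client-side chain could look like, assuming shadowsocks-libev as the local SOCKS proxy and OpenVPN in TCP mode; the addresses, ports, cipher and password are placeholders, not values taken from this issue:

# Start a local SOCKS5 proxy; its outgoing TCP connection is what MPTCP splits into subflows
ss-local -s <proxy-server-ip> -p 8388 -l 1080 -k <password> -m aes-256-gcm &
# Run OpenVPN over TCP and route it through the local SOCKS proxy,
# so the application's UDP ends up as UDP-over-TCP-over-MPTCP
openvpn --client --dev tun --proto tcp-client \
        --remote <vpn-server-ip> 1194 \
        --socks-proxy 127.0.0.1 1080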
And before that you did TCP over MPTCP, and that was fine? I really doubt that UDP over TCP over MPTCP can give you good performance...
Yes... TCP over MPTCP was fine!!
Looking at the traces, it does not seem like an MPTCP problem. Is the network link the same in both scenarios? There simply seems to be a huge latency between the client and the server (on the order of 300 ms). The packet loss then takes quite some time to recover from. Aren't you seeing this kind of latency and packet loss in Scenario 1?
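One way to compare the two scenarios on the same link is to look at per-connection RTT and retransmission counters on the client while each tunnel is running. A sketch, where the server address is a placeholder and the exact socket filter is an assumption:

# Per-connection TCP info (RTT, retransmits, cwnd) for the subflows towards the server
ss -ti dst <server-ip>
# Kernel-wide retransmission counters, sampled before and after a test run
nstat -az | grep -i retrans
# Capture headers only, the same way in both scenarios, for a like-for-like comparison
tcpdump -i any -s 128 -w scenario.pcap host <server-ip>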
Well, in Scenario 1 we are not seeing such latency. We are using the same link for both scenarios.
I don't think that it's due to the MTU, because I guess you are using the same config for Scenario 1 and Scenario 2. Can you share a packet trace of Scenario 1 as well? That might give some insight.
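Even though MTU is probably not the cause here, it is cheap to rule out per LTE uplink. A sketch, with interface names and sizes as placeholders:

# Probe the path MTU with DF set; 1472 bytes of payload + 28 bytes of headers = 1500
ping -M do -s 1472 -I <lte-if-1> <server-ip>
# Interface MTU actually configured on each uplink
ip link show <lte-if-1> | grep mtu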
Scenario 1 - Transporting a TCP stream: we are using the MPTCP kernel + shadowsocks, and it works fine.
Scenario 2 - To transport UDP traffic over multipath, we chose the following design
(see issue #337 for reference).
Going from Scenario 1 to Scenario 2 there is a sharp decrease in throughput of up to 35%, which is too high even if we account for the overhead of the TCP OpenVPN tunnel.
We can see a lot of retransmissions, duplicate packets, etc. Please find the pcap attached.
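To put a number on that, the retransmissions and duplicate ACKs can be counted directly from the capture. A sketch, where the file name is a placeholder for the attached pcap:

# Retransmitted segments per second over the capture
tshark -r scenario2.pcap -q -z io,stat,1,"COUNT(tcp.analysis.retransmission)tcp.analysis.retransmission"
# Total duplicate ACKs flagged by the TCP analyser
tshark -r scenario2.pcap -Y "tcp.analysis.duplicate_ack" | wc -l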
Configuration of the end-points (a verification sketch follows the listing):
Client (2 LTE links)
MPTCP version 0.94.6 & kernel 4.14.133+
OS: Debian 8
Kernel Parameters
kernel.osrelease = 4.14.133mptcp+
net.mptcp.mptcp_checksum = 1
net.mptcp.mptcp_debug = 0
net.mptcp.mptcp_enabled = 1
net.mptcp.mptcp_path_manager = fullmesh
net.mptcp.mptcp_scheduler = default
net.mptcp.mptcp_syn_retries = 3
net.mptcp.mptcp_version = 0
/sys/module/mptcp_fullmesh/parameters/create_on_err = 1
/proc/sys/net/ipv4/tcp_congestion_control = cubic
/sys/module/mptcp_fullmesh/parameters/num_subflows = 1
Server (1 link)
MPTCP version 0.94 & kernel 4.14.79+
OS: Debian 9
Kernel Parameters
net.mptcp.mptcp_checksum = 1
net.mptcp.mptcp_debug = 0
net.mptcp.mptcp_enabled = 1
net.mptcp.mptcp_path_manager = fullmesh
net.mptcp.mptcp_scheduler = default
net.mptcp.mptcp_syn_retries = 3
net.mptcp.mptcp_version = 0
/sys/module/mptcp_fullmesh/parameters/create_on_err = 1
/proc/sys/net/ipv4/tcp_congestion_control = cubic
/sys/module/mptcp_fullmesh/parameters/num_subflows = 1
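A minimal sketch for confirming that both end-points are actually running the values listed above (same commands on client and server):

# Dump every MPTCP-related sysctl in one go
sysctl -a 2>/dev/null | grep mptcp
# Module parameters and congestion control, matching the paths listed above
cat /sys/module/mptcp_fullmesh/parameters/create_on_err
cat /sys/module/mptcp_fullmesh/parameters/num_subflows
cat /proc/sys/net/ipv4/tcp_congestion_control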
Please advise.