This repository has been archived by the owner on Apr 18, 2024. It is now read-only.

download large file(More than 5G bytes)failed using mptcp #373

Open
caesar123450 opened this issue Dec 16, 2019 · 9 comments

Comments


caesar123450 commented Dec 16, 2019

Hi,

  1. Downloading a large file (more than 5 GB) over MPTCP fails: at around 2 GB the download pauses, and after some time the TCP connection is disconnected.
  2. With net.mptcp.mptcp_enabled=0, downloading the same large file succeeds.

mptcp test version : https://github.com/multipath-tcp/mptcp/tree/v0.95
client: curl http://10.26.77.30:8000/5G.dat
server: nginx 1.16.1
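
The enabled/disabled toggle above can be sketched as a quick check. This is a hedged sketch, not part of the original report: the net.mptcp.mptcp_enabled sysctl only exists on the mptcp.org fork kernels (such as v0.95 here), so on other kernels the probe just reports it as unavailable.

```shell
# Probe the MPTCP state (mptcp.org fork sysctl; 1 = enabled).
# On a kernel without this key, fall back to "unavailable".
state=$(sysctl -n net.mptcp.mptcp_enabled 2>/dev/null || echo unavailable)
echo "mptcp_enabled: $state"

# With MPTCP on, the download reportedly stalls near 2 GB:
#   curl -o 5G.dat http://10.26.77.30:8000/5G.dat
# With MPTCP off (sysctl -w net.mptcp.mptcp_enabled=0), it completes.
```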

matttbe (Member) commented Dec 16, 2019

Ave,

Do you use v0.95 for both the client and the server? I think @cpaasch fixed a similar issue but I don't remember if it was on the receiver or sender side.

caesar123450 (Author)

Yes, both the client and the server use the same version, v0.95:
https://github.com/multipath-tcp/mptcp/tree/v0.95

caesar123450 (Author) commented Dec 17, 2019

Hi, excuse me, in which version or tag was that similar issue fixed?

cpaasch (Member) commented Dec 17, 2019

Can you try disabling MPTCP-checksum and report back?
sysctl -w net.mptcp.mptcp_checksum=0
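
A minimal sketch of applying this suggestion, assuming root access and the mptcp.org v0.95 kernel on both hosts; on a kernel without the key it just reports that it skipped:

```shell
# Disable the MPTCP DSS checksum on BOTH the client and the server.
# The key only exists on an MPTCP-capable kernel (e.g. mptcp.org v0.95).
result=$(sysctl -w net.mptcp.mptcp_checksum=0 2>/dev/null \
  || echo "skipped: no net.mptcp.mptcp_checksum on this kernel")
echo "$result"
```

To persist the setting across reboots, the usual approach is adding `net.mptcp.mptcp_checksum = 0` to /etc/sysctl.conf.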

caesar123450 (Author)

> Can you try disabling MPTCP-checksum and report back?
> sysctl -w net.mptcp.mptcp_checksum=0
hi @cpaasch

  1. After setting sysctl -w net.mptcp.mptcp_checksum=0 on both the client and the server, downloading the large file (more than 5 GB) over MPTCP succeeds.
  2. What is the root cause of this problem, and is there a plan to fix it?

cpaasch (Member) commented Dec 19, 2019

Ah! Thanks for confirming!

There is a problem with MPTCP-checksums. I will try to repro...

caesar123450 (Author)

> Ah! Thanks for confirming!
>
> There is a problem with MPTCP-checksums. I will try to repro...

Thanks!

cpaasch (Member) commented May 22, 2020

Hello, do you still see this issue? I have been trying to repro, but without success.

Do you have a packet-trace of this scenario? How many subflows did you create?

Thanks!
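
One way to gather what is being asked for, sketched here as a suggestion (it assumes tcpdump and ss are available on the client, and uses port 8000 from the original curl command):

```shell
# Capture the trace on the client while the transfer runs. A 150-byte
# snaplen keeps IP/TCP headers and the MPTCP options (including the DSS
# checksum field) without storing the full 5 GB payload:
#   tcpdump -i any -s 150 -w mptcp-stall.pcap 'tcp port 8000'
#
# Rough subflow count during the transfer: each MPTCP subflow appears
# as its own TCP connection to the server port.
subflows=$(ss -tn 2>/dev/null | grep -c ':8000')
echo "TCP connections to port 8000: ${subflows:-0}"
```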

matttbe pushed a commit that referenced this issue Sep 6, 2021
[ Upstream commit bb385be ]

If we get an error while looking up the inode item we'll simply bail
without cleaning up the delayed node.  This results in this style of
warning happening on commit:

  WARNING: CPU: 0 PID: 76403 at fs/btrfs/delayed-inode.c:1365 btrfs_assert_delayed_root_empty+0x5b/0x90
  CPU: 0 PID: 76403 Comm: fsstress Tainted: G        W         5.13.0-rc1+ #373
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-2.fc32 04/01/2014
  RIP: 0010:btrfs_assert_delayed_root_empty+0x5b/0x90
  RSP: 0018:ffffb8bb815a7e50 EFLAGS: 00010286
  RAX: 0000000000000000 RBX: ffff95d6d07e1888 RCX: ffff95d6c0fa3000
  RDX: 0000000000000002 RSI: 000000000029e91c RDI: ffff95d6c0fc8060
  RBP: ffff95d6c0fc8060 R08: 00008d6d701a2c1d R09: 0000000000000000
  R10: ffff95d6d1760ea0 R11: 0000000000000001 R12: ffff95d6c15a4d00
  R13: ffff95d6c0fa3000 R14: 0000000000000000 R15: ffffb8bb815a7e90
  FS:  00007f490e8dbb80(0000) GS:ffff95d73bc00000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 00007f6e75555cb0 CR3: 00000001101ce001 CR4: 0000000000370ef0
  Call Trace:
   btrfs_commit_transaction+0x43c/0xb00
   ? finish_wait+0x80/0x80
   ? vfs_fsync_range+0x90/0x90
   iterate_supers+0x8c/0x100
   ksys_sync+0x50/0x90
   __do_sys_sync+0xa/0x10
   do_syscall_64+0x3d/0x80
   entry_SYSCALL_64_after_hwframe+0x44/0xae

Because the iref isn't dropped and this leaves an elevated node->count,
so any release just re-queues it onto the delayed inodes list.  Fix this
by going to the out label to handle the proper cleanup of the delayed
node.

Signed-off-by: Josef Bacik <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
sskras commented Oct 26, 2021

@caesar123450, any news on the download failures?
