scheduler: implement an "opportunistic retransmission" #332

Open
Tracked by #350
matttbe opened this issue Dec 21, 2022 · 0 comments
Labels
enhancement sched packets scheduler

Comments


matttbe commented Dec 21, 2022

The goal of "opportunistic retransmission" is to quickly reinject packets when we notice that the window has just been closed on one path; see Section 4.2 of: https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org; see mptcp_rcv_buf_optimization().

With the current API in the upstream kernel, a new scheduler has no way to trigger a reinjection. Currently, there are only hooks to initiate a reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when the MPTCP cwnd closes or the seq num has increased
    (max allowed MPTCP-level seq num to be sent == last ack + (...))
    but also when there is an RTO at the subflow level: maybe linked to
    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never queues significantly more than what the cwnd of a subflow would accept: currently, the in-kernel scheduler only queues up to the MPTCP-level cwnd (a few more bytes due to round-up).

matttbe added the sched packets scheduler label Feb 1, 2023
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 12, 2024
scheduler: implement an "opportunistic retransmission"

The goal of "opportunistic retransmission" is to quickly reinject
packets when we notice that the window has just been closed on one path;
see Section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org; see mptcp_rcv_buf_optimization().

With the current API in the upstream kernel, a new scheduler has no way
to trigger a reinjection. Currently, there are only hooks to initiate a
reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when the MPTCP cwnd closes or the seq num has increased
    (max allowed MPTCP-level seq num to be sent == last ack + (...))
    but also when there is an RTO at the subflow level: maybe linked to
    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never queues significantly more than what
the cwnd of a subflow would accept: currently, the in-kernel scheduler
only queues up to the MPTCP-level cwnd (a few more bytes due to round-up).

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 13, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 13, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 13, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 15, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 16, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 16, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 16, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 16, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 16, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 19, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 20, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 20, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 21, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 21, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 22, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 22, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 22, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 22, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 23, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 24, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Mar 25, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 13, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 13, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 14, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 15, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 15, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 18, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 22, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 23, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 23, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 24, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 27, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 29, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 29, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 29, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 29, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 30, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 30, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue May 31, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Jun 4, 2024
matttbe pushed a commit that referenced this issue Jul 10, 2024
Add a test case which replaces an active ingress qdisc while keeping the
miniq in-tact during the transition period to the new clsact qdisc.

  # ./vmtest.sh -- ./test_progs -t tc_link
  [...]
  ./test_progs -t tc_link
  [    3.412871] bpf_testmod: loading out-of-tree module taints kernel.
  [    3.413343] bpf_testmod: module verification failed: signature and/or required key missing - tainting kernel
  #332     tc_links_after:OK
  #333     tc_links_append:OK
  #334     tc_links_basic:OK
  #335     tc_links_before:OK
  #336     tc_links_chain_classic:OK
  #337     tc_links_chain_mixed:OK
  #338     tc_links_dev_chain0:OK
  #339     tc_links_dev_cleanup:OK
  #340     tc_links_dev_mixed:OK
  #341     tc_links_ingress:OK
  #342     tc_links_invalid:OK
  #343     tc_links_prepend:OK
  #344     tc_links_replace:OK
  #345     tc_links_revision:OK
  Summary: 14/0 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Daniel Borkmann <[email protected]>
Cc: Martin KaFai Lau <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Martin KaFai Lau <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Oct 4, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Oct 8, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Oct 9, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Oct 10, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Oct 10, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Oct 10, 2024
scheduler: implement a "opportunistic retransmission"

The goal of the "opportunistic retransmission" is to quickly reinject
packets when we notice the window has just been closed on one path, see
the section 4.2 of:

https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final125.pdf

This is implemented in mptcp.org, see mptcp_rcv_buf_optimization().

With the current API in the Upstream kernel, a new scheduler doesn't have
the ability to trigger a reinjection. Currently there are only hooks to
initiate reinjection when the MPTCP RTO fires.

The packet scheduler should be able to get more info:

    not just when MPTCP cwnd close or the seq num has increased (max
allowed MPTCP level seq num to be sent == last ack + (...))
    but also when there is a RTO at subflow level: maybe linked to

    scheduler: react when subflow-level events pop up (ACK/RTO) multipath-tcp#343

Note that the packet scheduler never significantly queue more than what
the cwnd of a subflow would accept: currently, the in-kernel only accepts
to queue up to the MPTCP level cwnd (a few more bytes due to round-up)

Closes: multipath-tcp#332
Signed-off-by: Geliang Tang <[email protected]>
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Oct 10, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Oct 12, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Oct 12, 2024
geliangtang added a commit to geliangtang/mptcp_net-next that referenced this issue Oct 16, 2024