
Parallel taking of orders & retry after failures #654

Closed
offerm opened this issue Nov 9, 2018 · 10 comments
Labels: has PR (issues with an open PR), P1 (top priority), swaps

@offerm (Contributor) commented Nov 9, 2018

Consider the case where a taker order (market or limit) is matched with multiple maker orders. As of today, we execute the first match, wait for the result, execute the next match, and so on.

Why are these matches not executed in parallel?

This is not just a performance issue. When the first swap fails, the orderbook may be completely different from what it was when we planned the swaps. It might be that we can now swap the user's order against a peer order with a better price.

Maybe we should perform all swaps in parallel and have a separate retry for each swap that fails?
Maybe we should extend swap-request to include multiple swaps?
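A minimal sketch of this idea in TypeScript; `executeSwap`, `findReplacementMatch`, and the `Match` type are hypothetical stand-ins, not xud's actual API:

```ts
interface Match {
  peerOrderId: string;
  quantity: number;
}

// Hypothetical swap executor: resolves on success, rejects on failure.
declare function executeSwap(match: Match): Promise<void>;
// Hypothetical re-match of a failed quantity against the current orderbook.
declare function findReplacementMatch(quantity: number): Match | undefined;

// Start one swap and, if it fails, immediately look for a replacement match
// in the (possibly changed) orderbook and retry, with a bounded number of
// attempts so a thin book cannot cause an endless loop.
async function swapWithRetry(match: Match, attemptsLeft = 3): Promise<void> {
  try {
    await executeSwap(match);
  } catch {
    if (attemptsLeft <= 0) return;
    const replacement = findReplacementMatch(match.quantity);
    if (replacement) {
      await swapWithRetry(replacement, attemptsLeft - 1);
    }
  }
}

// Launch every matched swap at once instead of one at a time.
async function executeMatchesInParallel(matches: Match[]): Promise<void> {
  await Promise.all(matches.map((match) => swapWithRetry(match)));
}
```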

@offerm changed the title from "question : parallel taking of orders" to "question : parallel taking of orders & retry after failures" Nov 9, 2018
@sangaman (Collaborator) commented Nov 9, 2018

Good idea. I think the current method is simple but definitely not ideal, for the reasons you mentioned.

An easy but still not ideal solution would be to begin and await all the swaps from the first iteration in parallel, then start a second round of swaps after all of them finish. That would be a few lines of code and maybe a good intermediate step.

Best would be to start them all in parallel and retry with a new match immediately when any swap fails. I don't think we need to change the packets to allow multiple swaps per packet though, just send the swap requests individually but as soon as possible.
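The round-based intermediate step might look like this sketch, reusing the same hypothetical `executeSwap` and `findReplacementMatch` helpers from above:

```ts
// Round-based variant: run each round's swaps in parallel, collect the
// matches that failed, re-match their quantities against the current
// orderbook, and only then start the next round.
async function executeInRounds(matches: Match[], maxRounds = 3): Promise<void> {
  let round = matches;
  for (let i = 0; i < maxRounds && round.length > 0; i += 1) {
    // Map each swap to undefined on success or the failed match on rejection.
    const settled = await Promise.all(
      round.map((match) => executeSwap(match).then(() => undefined, () => match)),
    );
    round = settled
      .filter((m): m is Match => m !== undefined)
      .map((m) => findReplacementMatch(m.quantity))
      .filter((m): m is Match => m !== undefined);
  }
}
```

Compared to the per-swap retry above, this waits for the slowest swap in each round before retrying anything, which is simpler but slower.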

@kilrau (Contributor) commented Nov 10, 2018

> We should perform all swaps in parallel and have a separate retry for each swap that fails.

Agreed, and great that you bring this up!

Who's interested in taking this? @erkarl @offerm @Michael1101

@kilrau changed the title from "question : parallel taking of orders & retry after failures" to "Parallel taking of orders & retry after failures" Nov 10, 2018
@kilrau added this to the 1.0.0-alpha.5 milestone Nov 10, 2018
@kilrau added the swaps label Nov 10, 2018
@offerm (Contributor, Author) commented Nov 11, 2018

When implementing the swap module, we worked hard to remove one packet from the communication, remember?

Here, a taker order may be matched with multiple orders from the same maker (possibly even at the same price). Even if we execute all these trades in parallel, each swap involves 4 XUD packets and 16 LND packets.
If we combine all the orders from the same peer into one dealRequest and agree with the peer on the execution, we can then do the swap with just one LND swap request. This can be a major performance improvement when one of our own orders matches multiple orders from a single peer.

This looks to me like the right solution.
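A rough sketch of what such a combined request could carry; the type and every field name here are illustrative assumptions, not the actual xud packet schema:

```ts
// Illustrative shape only -- not the real xud packet. The idea is that all
// matched orders from one peer travel in a single request, so the peer can
// accept or reject the whole deal and both sides settle with one LND
// payment covering the combined amount.
interface CombinedDealRequest {
  pairId: string;          // trading pair, e.g. 'LTC/BTC'
  takerOrderId: string;    // the taker order being filled
  makerOrderIds: string[]; // every matched maker order from this peer
  totalQuantity: number;   // sum of the matched quantities
  rHash: string;           // single payment hash for the whole deal
}
```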

@kilrau (Contributor) commented Nov 11, 2018

Great add-on, and yup, this is how it should be done!

@ImmanuelSegol (Contributor) commented

@kilrau if no one takes this I will be happy to :)

@sangaman (Collaborator) commented

Batching swap requests can be done, but I still think it would be a minimal improvement, and it's not required for running swaps in parallel. Previously, when we were trying to remove a packet from the process entirely, that also removed a trip between the two nodes; this would just combine several smaller packets into one larger packet, and they'd still arrive at approximately the same time.

Making the swap requests in parallel would be a big improvement though.

@offerm (Contributor, Author) commented Nov 12, 2018 via email

@kilrau (Contributor) commented Nov 12, 2018

You got it @ImmanuelSegol 🥇

Scope:

  • making swap requests in parallel if an order matches more than 1 peerOrder
  • (preparation for Batching swaps #664) if more than one of the matched peerOrders belongs to the same peer (nodePubKey), combine their dealRequests into one (see the sketch after this comment)

Keep us posted about any issues along the way, especially with the last bullet point, as we should try to avoid breaking changes to the dealRequest packet.
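For the second bullet, the grouping step could be sketched like this; `PeerOrderMatch` is a hypothetical type, not xud's actual order representation:

```ts
// Bucket matched peer orders by the peer that owns them, so matches from
// the same peer (nodePubKey) can be combined into one dealRequest.
interface PeerOrderMatch {
  nodePubKey: string; // identifies the peer that owns the maker order
  orderId: string;
  quantity: number;
}

function groupByPeer(matches: PeerOrderMatch[]): Map<string, PeerOrderMatch[]> {
  const groups = new Map<string, PeerOrderMatch[]>();
  for (const match of matches) {
    const existing = groups.get(match.nodePubKey);
    if (existing) {
      existing.push(match);
    } else {
      groups.set(match.nodePubKey, [match]);
    }
  }
  return groups;
}
```

Each bucket with more than one entry would then be a candidate for a single combined dealRequest to that peer.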

@sangaman (Collaborator) commented Nov 12, 2018

I wasn't thinking about batching the actual LND payments behind the scenes (just the xud packets), but I'm going to move that to a separate issue because it's a separate concept from executing swaps in parallel.

@kilrau assigned ghost and unassigned ImmanuelSegol Nov 18, 2018
@kilrau added the P1 (top priority) label Nov 18, 2018
@kilrau assigned sangaman and unassigned ghost Nov 20, 2018
@sangaman (Collaborator) commented

I'll open a PR for this after #710 is merged, as there is some overlap and room for potential conflicts.

@kilrau modified the milestones: 1.0.0-alpha.5, 1.0.0-alpha.6 Dec 5, 2018
@kilrau added the has PR (issues with an open PR) label Dec 6, 2018
@kilrau modified the milestones: 1.0.0-alpha.6, 1.0.0-alpha.8 Jan 2, 2019
@kilrau modified the milestones: 1.0.0-alpha.7, 1.0.0-alpha.8 Jan 23, 2019
sangaman added a commit that referenced this issue Feb 3, 2019
This commit begins all swaps in parallel when placing/matching an order
and retries failed swaps immediately. Any unmatched quantity is tracked
and added to the orderbook (for limit orders) after matching and swaps
are complete. Previously, swaps would execute one at a time and any
failed quantity would wait to be retried until all first-attempt swaps
completed. This should result in significantly faster order execution
for orders that match with multiple peer orders and therefore require
multiple swaps.

Closes #654.
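A rough sketch of the remainder handling this commit message describes; `addToOrderBook` and the parameter names are hypothetical:

```ts
// Hypothetical: after all parallel swaps (and retries) settle, any quantity
// that could not be matched or swapped goes into the orderbook, but only
// for limit orders; for market orders it is simply left unfilled.
declare function addToOrderBook(orderId: string, quantity: number): void;

function handleRemainder(
  orderId: string,
  isLimitOrder: boolean,
  unmatchedQuantity: number,
): void {
  if (isLimitOrder && unmatchedQuantity > 0) {
    addToOrderBook(orderId, unmatchedQuantity);
  }
}
```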
sangaman added further commits that referenced this issue on Feb 3 and Feb 4, 2019, each with the same message.