Parallel taking of orders & retry after failures #654
Comments
Good idea. I think the current method is simple but definitely not ideal for the reasons you mentioned. An easy but still not ideal solution would be to begin and await all the swaps from the first iteration in parallel and then start a second round of swaps after all of them finish. That would be a few lines of code and maybe a good intermediate step. Best would be to start them all in parallel and retry for a new match immediately when any swap fails. I don't think we need to change the packets to allow multiple swaps per packet, though; just send the swap requests individually but as soon as possible.
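A minimal sketch of that intermediate step, using hypothetical `Match` and `startSwap` stand-ins rather than xud's actual swap API:

```typescript
// Hypothetical stand-ins for illustration; not xud's actual types or API.
type Match = { peerOrderId: string; quantity: number };
declare function startSwap(match: Match): Promise<void>;

// Intermediate approach: begin every swap in the round at once, wait for
// all of them to settle, and return the failed matches for a second round.
async function executeSwapRound(matches: Match[]): Promise<Match[]> {
  const results = await Promise.all(
    matches.map(async (match) => {
      try {
        await startSwap(match);
        return undefined; // swap succeeded, nothing to retry
      } catch {
        return match; // swap failed; its quantity needs rematching
      }
    }),
  );
  return results.filter((m): m is Match => m !== undefined);
}
```

Each round then only takes as long as its slowest swap, rather than the sum of all of them.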
Agreed, and great that you bring this up! Who's interested in taking this? @erkarl @offerm @Michael1101
When implementing the swap module, we worked hard to remove one packet from the communication, remember? Here, a taker order may be matched with multiple orders from the same maker (it can even be for the same price). Even if we do all these trades in parallel, it involves 4 XUD packets and 16 LND packets for each swap. This looks to me like the right solution.
Great add-on, and yup, this is how it should be done!
@kilrau if no one takes this, I will be happy to :)
Batching swap requests can be done, but I still think it would be a minimal improvement and not required for running swaps in parallel. Previously, when we were trying to remove a packet from the process entirely, that also removed a trip between the two nodes; this would just be combining several smaller packets into one larger packet - they'd still arrive at approximately the same time. Making the swap requests in parallel would be a big improvement though.
Daniel,
Consider that the taker order is matched with 3 different maker orders.
How many communications will we do between the peers? For each order:
- 2 packets on XUD (deal request, response)
- 4 packets on LNDBTC (set up the HTLC)
- 4 packets on LNDLTC (set up the HTLC)
- 4 packets on LNDLTC (resolve the HTLC)
- 4 packets on LNDBTC (resolve the HTLC)
- 1 packet on XUD (error/complete)
That is 19 packets for a swap operation. We have 3 of them, so we have a total of 57 packets.
If we do the aggregation, we will only need 19 packets, as we set up all 3 orders together and swap all the amounts together.
You got it @ImmanuelSegol 🥇 Scope:
Keep us posted about any issues on the way, especially with the last bullet point as we should try to avoid breaking changes in the
I wasn't thinking about batching the actual LND payments behind the scenes (just the xud packets), but I'm going to move that to a separate issue because it's a separate concept from executing swaps in parallel.
I'll open a PR for this after #710 is merged, as there is some overlap and room for potential conflicts.
This commit begins all swaps in parallel when placing/matching an order and retries failed swaps immediately. Any unmatched quantity is tracked and added to the orderbook (for limit orders) after matching and swaps are complete. Previously, swaps would execute one at a time and any failed quantity would wait to be retried until all first-attempt swaps completed. This should result in significantly faster order execution for orders that match with multiple peer orders and therefore require multiple swaps. Closes #654.
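A rough sketch of the flow that commit message describes, with hypothetical helpers (`matchAllAgainstOrderbook`, `matchAgainstOrderbook`, `startSwap`, `addToOrderbook`) standing in for xud's internals:

```typescript
// Hypothetical stand-ins for xud internals; names are illustrative only.
type Match = { peerOrderId: string; quantity: number };
declare function startSwap(match: Match): Promise<void>;
declare function matchAllAgainstOrderbook(quantity: number): Match[];
declare function matchAgainstOrderbook(quantity: number): Match | undefined;
declare function addToOrderbook(quantity: number): void;

async function placeOrder(quantity: number, isLimitOrder: boolean): Promise<void> {
  let unmatchedQuantity = 0;

  // Swap one match; on failure, rematch its quantity immediately instead
  // of waiting for the other first-attempt swaps to finish.
  const swapWithRetry = async (match: Match): Promise<void> => {
    try {
      await startSwap(match);
    } catch {
      const rematch = matchAgainstOrderbook(match.quantity);
      if (rematch) {
        await swapWithRetry(rematch);
      } else {
        unmatchedQuantity += match.quantity;
      }
    }
  };

  const matches = matchAllAgainstOrderbook(quantity);
  const matchedQuantity = matches.reduce((sum, m) => sum + m.quantity, 0);
  unmatchedQuantity += quantity - matchedQuantity;

  // Begin all swaps in parallel; each failed swap retries on its own.
  await Promise.all(matches.map((match) => swapWithRetry(match)));

  // Whatever could not be matched or swapped goes on the book (limit orders only).
  if (isLimitOrder && unmatchedQuantity > 0) {
    addToOrderbook(unmatchedQuantity);
  }
}
```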
Consider the case when a taker order (market or limit) is matched with multiple orders. As of today, we execute the first match, wait for the result, execute the next match, and so on.
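For contrast, that sequential behavior looks roughly like this (hypothetical names, not xud's actual code):

```typescript
// Hypothetical stand-ins for illustration only.
type Match = { peerOrderId: string; quantity: number };
declare function startSwap(match: Match): Promise<void>;

async function executeMatchesSequentially(matches: Match[]): Promise<void> {
  for (const match of matches) {
    // Each swap blocks the next: a slow or failing peer delays every later
    // match, and failed quantity is only retried after the whole loop ends.
    await startSwap(match);
  }
}
```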
Why are all these matches not executed in parallel?
This is not just a performance improvement issue. When the first swap fails, the orderbook may be completely different compared to what it was when we planned the swaps. It might be that we can swap the user's order with a peer order that has a better price.
Maybe we should perform all swaps in parallel and have a separate retry for each swap that fails?
Maybe we should extend swap-request to include multiple swaps?