chore: Improve time to create Auction #2831
Comments
I looked into this. Here are some numbers for the production database.

The two most costly filters in the OPEN_ORDERS query are the two conditions above.

If we start with a simplified OPEN_ORDERS query that takes all 3.2M orders and then add filtering by those conditions, it becomes clear that we have to eliminate these two conditions and cache their results somehow, so that we can work with a reduced list of ~20k orders. Possible solutions:
I think I would be in favor of (4) as the simplest and fastest solution.
I wonder if the btree index on the trades table is sufficient, or if, specifically for this join, we should add another order_uid index. I wouldn't persist any additional information but simply move to an incremental solvable-orders implementation which is managed in memory:

We might need some additional btree indices for efficient range queries, but I believe this should speed things up quite significantly.
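An in-memory incremental solvable-orders set, updated from indexed events rather than rebuilt from scratch, might look like this minimal sketch (the event variants and field names are assumptions for illustration, not the autopilot's actual types):

```rust
use std::collections::HashMap;

/// Illustrative events the autopilot could apply incrementally.
enum Event {
    Placed { uid: u64, sell_amount: u64 },
    Trade { uid: u64, filled: u64 },
    Invalidated { uid: u64 },
}

/// In-memory solvable-orders set: uid -> remaining sell amount.
#[derive(Default)]
struct SolvableOrders {
    remaining: HashMap<u64, u64>,
}

impl SolvableOrders {
    /// Apply a single event instead of recomputing the whole set.
    fn apply(&mut self, event: Event) {
        match event {
            Event::Placed { uid, sell_amount } => {
                self.remaining.insert(uid, sell_amount);
            }
            Event::Trade { uid, filled } => {
                // Reduce the remaining amount; drop fully filled orders.
                let fully_filled = match self.remaining.get_mut(&uid) {
                    Some(rest) => {
                        *rest = rest.saturating_sub(filled);
                        *rest == 0
                    }
                    None => false,
                };
                if fully_filled {
                    self.remaining.remove(&uid);
                }
            }
            Event::Invalidated { uid } => {
                self.remaining.remove(&uid);
            }
        }
    }
}
```

Each auction would then only apply the events observed since the last block, rather than re-joining the full orders and trades tables.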
Before submitting a PR, I just wanted to clarify the idea.

I assume only the solvable orders are implied here.
You could potentially even use the trade events that we index in the autopilot directly to update the cache (instead of first writing to and then reading from the database), but I'd be surprised if this is necessary. Generally I think your idea sounds good.
# Description

> Updating solvable orders (i.e. creating a new auction) currently takes >2s with some pretty heavy outliers ([logs](https://production-6de61f.kb.eu-central-1.aws.cloud.es.io/app/r/s/ALsEK))
>
> This makes it hard to bring CoW Protocol's auction rate down to one batch per block, as simply creating up-to-date state would take >15% of the total time we have at hand. We should at least be able to halve this time (if not get it down even more).

In order to relieve the situation, it was proposed to introduce an incremental solvable-orders cache update, which selects all the solvable orders using the old heavy query only at startup, stores the latest received order's creation timestamp in memory, and then makes much faster incremental bounded queries to the orders and additional tables that select less data and execute faster.

# Changes

Since incremental fetching retrieves orders created or cancelled after specific timestamps, it is now also required to fetch orders that have any onchain update, based on the last fetched block number. The data needs to be fetched within a single transaction, so there is no way to run all the queries in parallel.

1. If the current solvable-orders cache is empty, execute the original heavy SQL query to fetch all current solvable orders and store them in memory.
2. Otherwise, fetch full orders that were created or cancelled after the last stored timestamp, and also find UIDs of the orders that have any onchain data updated after the latest observed block number. This includes fetching data from the following tables: trades, ethflow_data, order_execution, invalidations, onchain_order_invalidations, onchain_placed_orders, presignature_events.
3. Fetch quotes for all the collected orders.
4. Add all the newly received orders to the cache.
5. Filter out all the orders that contain onchain errors or are expired, fulfilled, or invalidated.
6. Calculate the latest observed order creation timestamp.
7. Continue with the regular auction creation process.

As a result, we now have 3 SQL queries, each executing in ~50ms, instead of a single one taking ~2s.

## How to test

New DB tests. Existing e2e tests.

## Related Issues

Fixes #2831
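Steps 4-6 of the flow above (merge fetched orders, prune unsolvable ones, advance the stored timestamp and block number) can be sketched like this; the struct and field names are hypothetical stand-ins, and the transactional database queries are assumed to have already produced `fetched`:

```rust
use std::collections::HashMap;

/// Minimal stand-in for a row fetched from the orders table.
#[derive(Clone)]
struct DbOrder {
    uid: u64,
    created_at: u64,
    valid_to: u64,
    filled: bool,
    invalidated: bool,
}

/// Hypothetical solvable-orders cache with the incremental-query cursors.
#[derive(Default)]
struct Cache {
    orders: HashMap<u64, DbOrder>,
    /// Creation timestamp of the newest order seen so far (step 6).
    last_creation_timestamp: u64,
    /// Latest block whose onchain updates have been applied.
    last_block: u64,
}

impl Cache {
    /// Merge freshly fetched orders into the cache, drop unsolvable
    /// orders, and advance the cursors for the next incremental fetch.
    fn apply(&mut self, fetched: Vec<DbOrder>, now: u64, latest_block: u64) {
        for order in fetched {
            // Step 6: track the newest creation timestamp.
            self.last_creation_timestamp =
                self.last_creation_timestamp.max(order.created_at);
            // Step 4: insert/overwrite by UID.
            self.orders.insert(order.uid, order);
        }
        // Step 5: remove expired, fulfilled, or invalidated orders.
        self.orders
            .retain(|_, o| o.valid_to >= now && !o.filled && !o.invalidated);
        self.last_block = latest_block;
    }
}
```

The next incremental fetch would then use `last_creation_timestamp` and `last_block` as lower bounds, which is what keeps each of the three queries in the ~50ms range.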
Background
Updating solvable orders (i.e. creating a new auction) currently takes >2s with some pretty heavy outliers (logs).

This makes it hard to bring CoW Protocol's auction rate down to one batch per block, as simply creating up-to-date state would take >15% of the total time we have at hand. We should at least be able to halve this time (if not get it down even more).
Details
The first step should be to get a better understanding of which parts of the auction creation take the most time.

It's very likely that this is related to us always computing the auction "from scratch". We should see if we can switch to a more incremental approach, where only the necessary onchain state is queried after the block for the next auction has been processed, and all other things are kept ready and warm throughout the 12s (e.g. whatever we do when a new order is placed should happen as soon as we notice the new order).
Acceptance criteria
p95 time to create a new auction is <1s