Improve DB indexes for better worst case accounting time #760
Conversation
-- For account lookup
CREATE INDEX IF NOT EXISTS account_asset_by_addr ON account_asset ( addr );
-- For looking up existing assets by account
CREATE INDEX IF NOT EXISTS account_asset_by_addr_partial ON account_asset(addr) WHERE NOT deleted;
There is already a non-partial index on addr: the primary key.
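That redundancy holds as long as addr is the leading column of the primary key. A minimal sketch under that assumption; the schema and column types below are illustrative, not the indexer's actual DDL:

-- Illustrative schema: the composite primary key's leading column addr already
-- serves "WHERE addr = $1" lookups, so a separate plain index on addr adds nothing.
CREATE TABLE IF NOT EXISTS account_asset (
  addr    bytea   NOT NULL,
  assetid bigint  NOT NULL,
  deleted boolean NOT NULL,
  PRIMARY KEY (addr, assetid)
);

-- The partial index is the only new structure needed for live-row lookups.
CREATE INDEX IF NOT EXISTS account_asset_by_addr_partial
  ON account_asset (addr) WHERE NOT deleted;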
Codecov Report
@@           Coverage Diff            @@
##           develop     #760   +/-   ##
=========================================
  Coverage    55.20%   55.20%
=========================================
  Files           26       26
  Lines         3817     3817
=========================================
  Hits          2107     2107
  Misses        1435     1435
  Partials       275      275

Continue to review the full report at Codecov.
Question: Do we need to manage migration for existing deployments, or are we assuming no backwards compatibility with the upcoming release and simply deleting references to pre-existing indexes? cc @winder
I.e., since some of these indexes are being updated, it seems we would want runners to drop the old ones before applying the new ones, yet at first glance we provide no on/off ramp for doing this.
The coming "refactoring" release will have no backward compatibility with previous ones. Users will have to create a new database and catch up from round 0. This is simply because 2.6.* has accounting problems, and 2.7 will in all likelihood crash if we try using a 2.6 database. Hence, we should use the opportunity to make any changes that would otherwise require a migration if we were to support backward compatibility.
I'm approving, but have we run tests with these indexes at least locally to show the performance improvement?
We need to keep in mind that not everyone uses the same indexes, although these in particular should be universal. We'll need to work with release engineering to communicate the changes to existing indexes for people who pick and choose which ones they apply today.
Seems fine to me.
The performance is asymptotically better in the worst case (when the number of deleted items is much larger than the number of non-deleted items). I'm not sure testing performance is even necessary.
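To make that worst case concrete: with the partial index, a lookup of an account's live holdings only touches index entries for non-deleted rows, so its cost no longer grows with the (potentially much larger) count of deleted rows. A sketch of the query shape, with assumed column names; the indexer's actual queries may differ:

-- Served by account_asset_by_addr_partial: only non-deleted rows for addr are scanned.
SELECT assetid
  FROM account_asset
 WHERE addr = $1
   AND NOT deleted;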
Summary
Closes https://github.com/algorand/go-algorand-internal/issues/1718.
Test Plan
Ran block generator. No noticeable difference in performance.