refactor(sequencers): persist prepended batch #2907
Conversation
Summary of Changes

Hello @julienrbrt, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request refactors the batch sequencing mechanism to enhance data durability. Specifically, it modifies the Prepend operation so that a prepended batch is written to the database before it is added to the in-memory queue, allowing it to survive a restart.
Claude finished @julienrbrt's task

Comprehensive Code Review

I've performed a thorough review of PR #2907, which adds persistence for prepended batches. This is an important durability improvement for the force inclusion feature. Here's my detailed analysis:

1. Code Quality & Architecture ✅

Strengths:

Observations:

2. Error Handling & Resilience
Code Review
This pull request aims to make prepended batches persistent, which is a good improvement for durability. The implementation correctly persists the batch to the database before adding it to the in-memory queue, and a new test case verifies this persistence across restarts.
However, I've identified a couple of issues:

- There is significant code duplication between the Prepend and AddBatch functions for the database persistence logic. This should be refactored into a helper method to improve maintainability.
- A more critical issue is that the ordering of prepended batches is not guaranteed after a restart. The Load function reloads batches in an order determined by their hashes, not their insertion order. This breaks the expectation that a prepended item remains at the front of the queue.
- The new test case for persistence is currently making assumptions about this load order, which makes it brittle and hides the underlying conceptual issue.
I've left detailed comments on these points. The ordering issue is particularly important to address to ensure the correctness of the queue's behavior after a crash or restart.
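To make the duplication point concrete, here is a minimal sketch of the suggested helper-method refactor. All names here (the Batch and BatchQueue stand-in types, persistBatch, the db field, the key scheme) are assumptions for illustration; the actual types in sequencers/single/queue.go may differ.

```go
package single

import (
	"context"
	"crypto/sha256"
	"encoding/json"
	"fmt"
	"sync"

	ds "github.com/ipfs/go-datastore"
)

// Batch is a stand-in for the real batch type; the real one exposes Hash()
// and protobuf serialization, stubbed here for illustration.
type Batch struct {
	Transactions [][]byte
}

// Hash returns a digest over the batch contents (stand-in implementation).
func (b *Batch) Hash() ([]byte, error) {
	h := sha256.New()
	for _, tx := range b.Transactions {
		h.Write(tx)
	}
	return h.Sum(nil), nil
}

// Marshal serializes the batch (stand-in implementation).
func (b *Batch) Marshal() ([]byte, error) { return json.Marshal(b) }

// BatchQueue mirrors only the fields this sketch needs.
type BatchQueue struct {
	mu    sync.Mutex
	queue []Batch
	db    ds.Datastore
}

// persistBatch centralizes the store-by-hash logic that Prepend and AddBatch
// would otherwise duplicate: hash, marshal, write to the datastore.
func (bq *BatchQueue) persistBatch(ctx context.Context, batch Batch) error {
	hash, err := batch.Hash()
	if err != nil {
		return fmt.Errorf("failed to hash batch: %w", err)
	}
	data, err := batch.Marshal()
	if err != nil {
		return fmt.Errorf("failed to marshal batch: %w", err)
	}
	return bq.db.Put(ctx, ds.NewKey(fmt.Sprintf("%x", hash)), data)
}

// Prepend persists first, then puts the batch at the front of the in-memory
// queue, so a restart after the Put still recovers the batch via Load.
func (bq *BatchQueue) Prepend(ctx context.Context, batch Batch) error {
	bq.mu.Lock()
	defer bq.mu.Unlock()
	if err := bq.persistBatch(ctx, batch); err != nil {
		return err
	}
	bq.queue = append([]Batch{batch}, bq.queue...)
	return nil
}
```

Persisting before mutating the in-memory queue is the safe order here: a crash between the two steps loses only the in-memory state, which Load can rebuild from the datastore.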
sequencers/single/queue_test.go
Outdated
```go
// First should be prepended batch
nextBatch, err := queue2.Next(ctx)
require.NoError(t, err)
assert.Equal(t, 1, len(nextBatch.Transactions))
assert.Equal(t, []byte("prepended"), nextBatch.Transactions[0])

// Then tx2
nextBatch, err = queue2.Next(ctx)
require.NoError(t, err)
assert.Equal(t, 1, len(nextBatch.Transactions))
assert.Equal(t, []byte("tx2"), nextBatch.Transactions[0])
```
This part of the test assumes a specific order of batches after Load is called. However, Load retrieves batches from the datastore based on the lexicographical order of their keys (hashes), which doesn't correspond to the order they were added or prepended. The test might pass with the current data ("prepended" vs "tx2"), but it's brittle and could fail with different test data if the hash order changes.
This test's assumption of order masks a conceptual issue with Prepend's persistence: the prepended order is not guaranteed after a restart. If the ordering issue is addressed, this test will be valid. If not, the test should be changed to only verify the presence of batches, not their order, to avoid being flaky.
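If the ordering guarantee is dropped rather than fixed, an order-insensitive version of this check could look like the following sketch, reusing the queue2, ctx, and t from the test above together with testify's ElementsMatch:

```go
// Drain both reloaded batches and compare the set of transactions rather
// than their sequence, so the test no longer depends on hash load order.
var got [][]byte
for i := 0; i < 2; i++ {
	batch, err := queue2.Next(ctx)
	require.NoError(t, err)
	require.Len(t, batch.Transactions, 1)
	got = append(got, batch.Transactions[0])
}
assert.ElementsMatch(t, [][]byte{[]byte("prepended"), []byte("tx2")}, got)
```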
Good one! This makes sense. I think we do want to preserve ordering, especially for force-included transactions: the order must be deterministic across all based sequencers. However, this is the single sequencer, so it doesn't really matter here, which I suppose is why the queue implementation only keys by hash. I'll update the test.
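For what it's worth, if deterministic reload order were ever needed (e.g. for based sequencers), one common approach, not what this PR does, is to key entries by a monotonic insertion index so that the datastore's lexicographical order matches the logical queue order. A purely hypothetical helper:

```go
import (
	"fmt"

	ds "github.com/ipfs/go-datastore"
)

// queueKey is a hypothetical key scheme: a zero-padded insertion index
// followed by the batch hash. Lexicographical key order then equals the
// logical queue order, so Load can rebuild the queue deterministically.
// Prepends could take indexes below the current head, e.g. by starting
// the head counter in the middle of the uint64 range.
func queueKey(index uint64, hash []byte) ds.Key {
	return ds.NewKey(fmt.Sprintf("/%020d/%x", index, hash))
}
```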
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #2907      +/-   ##
==========================================
- Coverage   65.65%   65.54%   -0.12%
==========================================
  Files          87       87
  Lines        7926     7932       +6
==========================================
- Hits         5204     5199       -5
- Misses       2156     2165       +9
- Partials      566      568       +2
```
alpe left a comment
Nice!
Minor comments on docs and DRY.
sequencers/single/queue.go
Outdated
```go
bq.mu.Lock()
defer bq.mu.Unlock()

hash, err := batch.Hash()
```
Prepend does not check for max queue size. It makes sense to me to grow beyond the limit to get the high-priority TX included. Nevertheless, some documentation would be good to manage expectations.
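A possible doc comment along those lines, with the signature assumed from the diff above and the stand-in types from the earlier sketch, might read:

```go
// Prepend persists the given batch and inserts it at the front of the queue.
//
// Note: unlike AddBatch, Prepend intentionally does not enforce the maximum
// queue size. A prepended (e.g. force-included) batch is accepted even when
// the queue is full, so the queue may temporarily grow beyond its limit.
func (bq *BatchQueue) Prepend(ctx context.Context, batch Batch) error {
	// implementation unchanged; see queue.go
	return nil
}
```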
* main:
  - refactor(sequencers): persist prepended batch (#2907)
  - feat(evm): add force inclusion command (#2888)
  - feat: DA client, remove interface part 1: copy subset of types needed for the client using blob rpc. (#2905)
  - feat: forced inclusion (#2797)
  - fix: fix and cleanup metrics (sequencers + block) (#2904)
  - build(deps): Bump mdast-util-to-hast from 13.2.0 to 13.2.1 in /docs in the npm_and_yarn group across 1 directory (#2900)
  - refactor(block): centralize timeout in client (#2903)
  - build(deps): Bump the all-go group across 2 directories with 3 updates (#2898)
  - chore: bump default timeout (#2902)
  - fix: revert default db (#2897)
  - refactor: remove obsolete // +build tag (#2899)
  - fix: da visualiser namespace (#2895)
Overview
ref: #2906