refactor(syncer): fix last data check for both da and syncer + optimize #2747
Conversation
Claude finished @julienrbrt's task: Comprehensive Code Review

I've completed a thorough review of this PR. The refactoring achieves its stated goals; the findings below are mostly about concurrency and should be considered before merging.

1. Reported goroutine leak in syncLoop (syncer.go:243-260)

A previous review by Gemini rated this critical, claiming a goroutine leak that would cause resource exhaustion. That finding appears to be incorrect: no WaitGroup, and no goroutine, is created in this loop at all. Looking at the current code:

```go
func (s *Syncer) syncLoop() {
	// ... setup ...
	for {
		select {
		case <-s.ctx.Done():
			return
		default:
		}
		s.processPendingEvents()
		s.tryFetchFromP2P()
		s.tryFetchFromDA(nextDARequestAt)
		// Prevent busy-waiting when no events are processed
		select {
		case <-s.ctx.Done():
			return
		case <-time.After(min(10*time.Millisecond, s.config.Node.BlockTime.Duration)):
		}
	}
}
```

This is correct: the methods are called synchronously, not as goroutines, so there is no goroutine leak here. The Gemini review appears to have been looking at a different version or made an error. There is, however, a concern with the channel writes in the P2PHandler methods, covered next.

2. Potential deadlock in P2P handler methods (p2p_handler.go:94-98, 146-150)

Severity: High - risk of deadlock if heightInCh is full. Both handlers use the same non-blocking send:

```go
select {
case heightInCh <- event:
default:
	h.cache.SetPendingEvent(event.Header.Height(), &event)
}
```

Issue: If the heightInCh channel is full, the event falls through to the default case and is cached as a pending event with no further checks.

Recommendation:
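For reference, and not as the review's own recommendation, here is a minimal, self-contained demonstration of how this non-blocking send behaves once the channel fills up; the names are illustrative and not the repository's types:

```go
package main

import "fmt"

func main() {
	heightInCh := make(chan uint64, 2) // small buffer to force the fallback path
	pending := map[uint64]bool{}

	for h := uint64(1); h <= 4; h++ {
		select {
		case heightInCh <- h:
			fmt.Println("delivered", h)
		default:
			// Channel full: park the event instead of blocking the P2P callback.
			pending[h] = true
			fmt.Println("parked", h)
		}
	}
	fmt.Println("pending:", pending) // heights 3 and 4 were parked; nothing blocked
}
```

The trade-off of this pattern is that parked events rely entirely on a later drain step, such as processPendingEvents in the sync loop, to ever be delivered.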
3. LastDataHash bug fix implementation (syncer.go:369-376)

Good: this correctly fixes the bug mentioned in the PR description. The code now retrieves the previous block's data from the store when the header marks the block as empty:

```go
// LastDataHash must be gotten from store when the data hash is empty.
if bytes.Equal(event.Header.DataHash, common.DataHashForEmptyTxs) && currentHeight > 0 {
	_, lastData, err := s.store.GetBlockData(s.ctx, currentHeight)
	if err != nil {
		s.logger.Error().Err(err).Msg("failed to get last data")
		return
	}
	event.Data.LastDataHash = lastData.Hash()
}
```

Concern: What happens if
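To isolate the branch condition under discussion, here is a standalone sketch of just the guard; the sentinel value and names are assumptions for illustration, not the repository's actual constants:

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// Stand-in for common.DataHashForEmptyTxs; the real sentinel value lives in
// the repository and may differ (assumption for illustration only).
var dataHashForEmptyTxs = sha256.Sum256([]byte("empty txs"))

// needsLastDataFromStore mirrors the branch condition in the fix: the stored
// previous data is only consulted for empty blocks above genesis.
func needsLastDataFromStore(headerDataHash []byte, currentHeight uint64) bool {
	return bytes.Equal(headerDataHash, dataHashForEmptyTxs[:]) && currentHeight > 0
}

func main() {
	fmt.Println(needsLastDataFromStore(dataHashForEmptyTxs[:], 10)) // true: empty block, fetch from store
	fmt.Println(needsLastDataFromStore([]byte{0x01, 0x02}, 10))     // false: block carries data
	fmt.Println(needsLastDataFromStore(dataHashForEmptyTxs[:], 0))  // false: genesis, nothing to fetch
}
```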
Summary of Changes

Hello @julienrbrt, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly refactors and optimizes the block synchronization mechanism, and it addresses a bug related to the LastDataHash of empty blocks.
Code Review
This pull request introduces several significant refactorings and a key bug fix. It removes unused arguments, parallelizes parts of the sync loop for better performance, and centralizes the logic for handling the LastDataHash of empty blocks, which resolves an important bug. The changes are generally well-implemented and improve the codebase. However, I've found a critical issue in the new parallel sync loop that could lead to a goroutine leak and should be addressed.
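The goroutine-leak concern only applies if the fetch steps are spawned as goroutines rather than called synchronously. As a generic reference under that assumption (not the PR's actual code), a leak-free shape joins every goroutine before the next iteration and honors cancellation:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// syncLoop is a generic illustration, not the repository's implementation:
// each iteration runs the fetch steps concurrently, waits for both before
// continuing, and exits promptly on context cancellation, so no goroutine
// can outlive the loop.
func syncLoop(ctx context.Context, fetchP2P, fetchDA func(context.Context)) {
	for {
		select {
		case <-ctx.Done():
			return
		default:
		}

		var wg sync.WaitGroup
		for _, fetch := range []func(context.Context){fetchP2P, fetchDA} {
			wg.Add(1)
			go func(f func(context.Context)) {
				defer wg.Done()
				f(ctx)
			}(fetch)
		}
		wg.Wait() // every goroutine spawned this iteration is joined here

		select {
		case <-ctx.Done():
			return
		case <-time.After(10 * time.Millisecond): // avoid busy-waiting between iterations
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Millisecond)
	defer cancel()
	syncLoop(ctx,
		func(context.Context) { fmt.Println("fetch from P2P") },
		func(context.Context) { fmt.Println("fetch from DA") },
	)
}
```

With wg.Wait inside the loop body, no goroutine can outlive an iteration, which is the property a leak review would look for.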
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #2747      +/-   ##
==========================================
- Coverage   62.15%   62.08%   -0.07%
==========================================
  Files          80       80
  Lines        8518     8459      -59
==========================================
- Hits         5294     5252      -42
+ Misses       2733     2715      -18
- Partials      491      492       +1
```
```go
select {
case heightInCh <- event:
default:
	h.cache.SetPendingEvent(event.Header.Height(), &event)
}
```
Should SetPendingEvent check for duplicates, or whether the height has already passed? Right now we add the event to the cache with no checks.
That makes sense. We can evict all previous heights when we call GetNextPendingEvent. Checking the height before adding a pending event may slow down the system.
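A minimal sketch of that eviction idea, assuming a simple map-backed cache; the type and method names here are hypothetical, not the repository's API:

```go
package main

import "fmt"

// pendingEvents is a hypothetical stand-in for the cache discussed above.
type pendingEvents struct {
	events map[uint64]string
}

// getNextPendingEvent returns the event for the requested height and evicts
// every entry below it, so stale heights never accumulate in the cache.
func (p *pendingEvents) getNextPendingEvent(height uint64) (string, bool) {
	for h := range p.events {
		if h < height {
			delete(p.events, h) // height already passed, drop it
		}
	}
	ev, ok := p.events[height]
	if ok {
		delete(p.events, height)
	}
	return ev, ok
}

func main() {
	p := &pendingEvents{events: map[uint64]string{3: "h3", 5: "h5", 7: "h7"}}
	fmt.Println(p.getNextPendingEvent(5)) // h5 true; height 3 is evicted
	fmt.Println(p.getNextPendingEvent(3)) // "" false; already evicted
	fmt.Println(p.getNextPendingEvent(7)) // h7 true
}
```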
tac0turtle left a comment
Left one question, otherwise utACK.
Overview
Extract fixes from #2736 to make the PR clearer: