fix(router): do not reuse planners in the plan generator #2537
Conversation
Engine 253 introduced a breaking change: planners must no longer be reused. The engine made this change to simplify certain actions in the planner, since reusing planner state made the planner more complicated.
The plan generator was the last place where we reused planners, so this PR removes that reuse. It does not affect planner performance; the added benchmark shows that the internal overhead is minimal.
BenchmarkPlanGenerator-14 1364 2,582,604 ns/op 6,734,461 B/op 11,191 allocs/op
That's ~2.6ms per full plan generation cycle (parsing + normalizing + validating + planning the full.graphql query against 7 subgraphs).
Walkthrough
Planner acquisition moved from the goroutine bootstrap into the per-query loop; error returns were standardized with %w across I/O, parsing, planner, and marshal paths. A benchmark and a large GraphQL test query were added. Two module dependencies were bumped to github.com/wundergraph/graphql-go-tools/v2 v2.0.0-rc.253.
Changes
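As a sketch of the structural change described above (all names here are illustrative, not the router's actual API): a fresh planner is now acquired inside the per-query loop rather than once per worker goroutine, and errors are wrapped with %w:

```go
package main

import (
	"errors"
	"fmt"
)

// planner is a stand-in for the engine's query planner, which as of
// rc.253 must not be shared across queries.
type planner struct{ used bool }

func (p *planner) plan(query string) (string, error) {
	if p.used {
		return "", errors.New("planner reused")
	}
	p.used = true
	return "plan(" + query + ")", nil
}

// planQueries mirrors the new shape of the loop: a fresh planner per
// query (not one per worker goroutine), and errors wrapped with %w so
// callers can inspect them with errors.Is / errors.As.
func planQueries(queries []string) ([]string, error) {
	plans := make([]string, 0, len(queries))
	for _, q := range queries {
		p := &planner{} // acquired inside the per-query loop
		out, err := p.plan(q)
		if err != nil {
			return nil, fmt.Errorf("failed to plan query %q: %w", q, err)
		}
		plans = append(plans, out)
	}
	return plans, nil
}

func main() {
	plans, err := planQueries([]string{"q1", "q2"})
	if err != nil {
		panic(err)
	}
	fmt.Println(plans) // [plan(q1) plan(q2)]
}
```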
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Router image scan passed ✅ No security vulnerabilities found in image:
Codecov Report: ❌ Patch coverage is
Additional details and impacted files@@ Coverage Diff @@
## main #2537 +/- ##
==========================================
+ Coverage 61.68% 62.09% +0.41%
==========================================
Files 239 239
Lines 25423 25424 +1
==========================================
+ Hits 15681 15788 +107
+ Misses 8422 8293 -129
- Partials 1320 1343 +23
🧹 Nitpick comments (1)
router/pkg/plan_generator/plan_generator.go (1)
118-123: Use `sync.WaitGroup.Go` for consistency with codebase patterns
Lines 118–123 use manual `wg.Add(cfg.Concurrency)` followed by `go func() { defer wg.Done() }()`. The codebase already standardizes on `wg.Go(func() { ... })` throughout production code (e.g., `router/core/router.go`). Since the router targets Go 1.25+, refactor to use `wg.Go()` to align with established patterns and reduce error-prone lifecycle management.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@router/pkg/plan_generator/plan_generator.go` around lines 118 - 123, Replace the manual lifecycle management that does wg.Add(cfg.Concurrency) and go func(i int) { defer wg.Done(); ... }(i) with the codebase-standard wg.Go(...) usage: call wg.Go(func() { ... }) inside the for i := 0; i < cfg.Concurrency; i++ loop and move the worker body into that closure, ensuring the loop index is correctly captured (e.g., assign local := i before using it in the closure or accept i as parameter) and remove the explicit defer wg.Done() / wg.Add call; keep the same worker logic and references to cfg.Concurrency and wg.
🧹 Nitpick comments (1)
router/pkg/plan_generator/plan_generator.go (1)
121-127: Consider using `sync.WaitGroup.Go()` for cleaner concurrency. Go 1.25+ provides `wg.Go(func())`, which manages `Add`/`Done` automatically. This would simplify the goroutine spawning pattern.
♻️ Suggested refactor

```diff
-	wg.Add(cfg.Concurrency)
 	for i := 0; i < cfg.Concurrency; i++ {
-		go func(i int) {
-			defer wg.Done()
+		wg.Go(func() {
 			for {
 				select {
 				case <-ctxError.Done():
 					return
```

And at the end of the goroutine:

```diff
 			}
 		}
-		}(i)
+		})
 	}
```

Note: The loop variable `i` appears unused inside the goroutine, so the closure parameter can be removed entirely.
Based on learnings: "In Go code (Go 1.25+), prefer using sync.WaitGroup.Go(func()) to run a function in a new goroutine, letting the WaitGroup manage Add/Done automatically."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@router/pkg/plan_generator/plan_generator.go` around lines 121 - 127, Replace the manual Add/Done goroutine spawning for wg in generatePlan (the block creating wg := sync.WaitGroup{} and looping i := 0; i < cfg.Concurrency) with the Go 1.25+ pattern using wg.Go(func() { ... }) so the WaitGroup handles Add/Done automatically; remove the unused closure parameter i from the anonymous function since it isn't referenced inside the goroutine, and keep the existing select/for loop body intact inside the new wg.Go call.
benchstat:
Summary by CodeRabbit
Refactor
Tests
Chores