Conversation
📝 Walkthrough

The batch processing settings in the ClickHouse client were updated: the flush interval for all three batch processors was increased from 1 to 5 seconds, and the number of concurrent consumers for each processor was reduced, with the ratelimits batch seeing the largest reduction. No other logic or exported declarations were changed.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes
Thank you for following the naming conventions for pull request titles! 🙏
Actionable comments posted: 1
🔭 Outside diff range comments (1)
go/pkg/clickhouse/client.go (1)
178-184: Shutdown only closes one processor — leaks goroutines & data.

`c.keyVerifications` and `c.ratelimits` remain open, risking lost data and goroutine leaks on shutdown.

```diff
 func (c *clickhouse) Shutdown(ctx context.Context) error {
-	c.requests.Close()
+	c.requests.Close()
+	c.keyVerifications.Close()
+	c.ratelimits.Close()
 	err := c.conn.Close()
```
♻️ Duplicate comments (1)
go/pkg/clickhouse/client.go (1)
123-125: Same configurability concern for the key-verification batch.

If this batch backs up, API calls that rely on verification metrics may lag. Propagating the tunable approach here keeps behaviour consistent across processors.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
`go/pkg/clickhouse/client.go` (3 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: chronark
PR: unkeyed/unkey#3161
File: go/pkg/clickhouse/schema/databases/001_verifications/002_raw_key_verifications_v1.sql:31-33
Timestamp: 2025-04-22T14:40:51.459Z
Learning: The ClickHouse table schemas in the codebase mirror the production environment and cannot be modified directly in PRs without careful migration planning.
go/pkg/clickhouse/client.go (1)
Learnt from: Flo4604
PR: #3606
File: go/pkg/db/replica.go:8-11
Timestamp: 2025-07-16T15:38:53.491Z
Learning: For debugging database replica usage in go/pkg/db/replica.go, it's acceptable to mark QueryRowContext operations as "success" even though SQL errors only surface during row.Scan() calls. The timing metrics are the primary concern for debugging replica performance patterns.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Test Go API Local / Test
- GitHub Check: Test Agent Local / test_agent_local
- GitHub Check: Test API / API Test Local
- GitHub Check: Build / Build
- GitHub Check: Test Packages / Test
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (1)
go/pkg/clickhouse/client.go (1)
142-144: Largest down-scaling (8 → 2 consumers) – watch for backlog/drops.

`Drop: true` means overflowed events vanish silently. Rate-limit traffic often spikes; a ×4 reduction in workers plus a 5 s flush could increase the drop rate.

- Deploy with elevated metrics: queue depth / dropped events, flush latency.
- Be ready to revert the consumer count if drops rise.
Graphite Automations

- "Notify author when CI fails" took an action on this PR • (07/23/25): 1 teammate was notified, based on Andreas Thomas's automation.
- "Post a GIF when PR approved" took an action on this PR • (07/23/25): 1 gif was posted, based on Andreas Thomas's automation.


What does this PR do?
Adjusts the ClickHouse client configuration to optimize performance by:

- Increasing the flush interval for all three batch processors from 1 to 5 seconds
- Reducing the number of concurrent consumers for each processor

These changes aim to reduce the frequency of database writes and decrease resource consumption while maintaining adequate throughput.
Fixes #(issue)
Type of change
How should this be tested?
Checklist
Required
- pnpm build
- pnpm fmt
- console.logs
- git pull origin main

Summary by CodeRabbit