Conversation
Vercel: 1 Skipped Deployment
📝 Walkthrough
Migrated many ClickHouse queries and timeseries intervals from legacy schemas to default v2 tables; renamed the ratelimit log field colo to region.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Suggested reviewers
Pre-merge checks: ❌ Failed checks (2 warnings), ✅ Passed checks (1 passed)
Force-pushed 0e96cbc to 87c76b8
Force-pushed 87d55e5 to 051dd5e
Please disregard anything related to ratelimit latencies for now. It will be updated in the next iterations.
Thank you for following the naming conventions for pull request titles! 🙏 |
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
internal/clickhouse/src/keys/keys.ts (1)
270-279: Confirm v2 raw schema parity (time/outcome/tags).
filtered_keys now reads from default.key_verifications_raw_v2. Ensure the v2 table exposes time, key_id, request_id, tags, outcome with compatible types. If time is DateTime64, consider casting params (fromUnixTimestamp64Milli) for strict typing.
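A minimal sketch of the suggested cast, assuming `time` is stored as DateTime64(3) and the builder passes millisecond Unix timestamps; the table and parameter names follow the review, but the surrounding query is illustrative, not the actual contents of keys.ts:

```typescript
// Illustrative query string only; the real builder lives in keys.ts.
// Assumes time is DateTime64(3) and start/end params are Unix milliseconds.
const filteredKeysQuery = `
  SELECT time, key_id, request_id, tags, outcome
  FROM default.key_verifications_raw_v2
  WHERE workspace_id = {workspaceId: String}
    AND time BETWEEN fromUnixTimestamp64Milli({startTime: Int64})
                 AND fromUnixTimestamp64Milli({endTime: Int64})
  ORDER BY time DESC
`;

console.log(filteredKeysQuery);
```

The explicit `fromUnixTimestamp64Milli` cast keeps the comparison strictly typed instead of relying on ClickHouse's implicit UInt64-to-DateTime64 conversion.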
internal/clickhouse/src/logs.ts (1)
211-257: Replace metrics.*_v1 timeseries tables with default.*_v2.
INTERVALS in internal/clickhouse/src/logs.ts still points at metrics.api_requests_per_*_v1 while the repo contains default.api_requests_per_*_v2 rollups (migrations and other internal/clickhouse code already use default.*_v2). Update INTERVALS to the v2 tables to avoid broken charts.
File: internal/clickhouse/src/logs.ts (INTERVALS, ~lines 211–257)
```diff
-        table: "metrics.api_requests_per_minute_v1",
+        table: "default.api_requests_per_minute_v2",
@@
-        table: "metrics.api_requests_per_minute_v1",
+        table: "default.api_requests_per_minute_v2",
@@
-        table: "metrics.api_requests_per_minute_v1",
+        table: "default.api_requests_per_minute_v2",
@@
-        table: "metrics.api_requests_per_hour_v1",
+        table: "default.api_requests_per_hour_v2",
@@
-        table: "metrics.api_requests_per_hour_v1",
+        table: "default.api_requests_per_hour_v2",
@@
-        table: "metrics.api_requests_per_hour_v1",
+        table: "default.api_requests_per_hour_v2",
@@
-        table: "metrics.api_requests_per_hour_v1",
+        table: "default.api_requests_per_hour_v2",
@@
-        table: "metrics.api_requests_per_day_v1",
+        table: "default.api_requests_per_day_v2",
```

internal/clickhouse/src/verifications.ts (1)
181-184: Return shape bug: logs returns the wrapper, not rows.
You await the queries but return `logs: clickhouseResults` instead of `logs: clickhouseResults.val`. This is inconsistent with totalCount handling and likely breaks callers.
```diff
-  return {
-    logs: clickhouseResults,
-    totalCount: totalResults.val ? totalResults.val[0].total_count : 0,
-  };
+  return {
+    logs: clickhouseResults.val ?? [],
+    totalCount: totalResults.val ? totalResults.val[0].total_count : 0,
+  };
```

internal/clickhouse/src/ratelimits.ts (2)
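The wrapper pitfall can be sketched with a minimal Result-like type; the `{ val, err }` shape is assumed from the review's `.val` accesses, and all names here are illustrative:

```typescript
// Minimal stand-in for the codebase's query-result wrapper (assumed shape).
type Result<T> = { val?: T; err?: Error };
type LogRow = { request_id: string; time: number };

// Returning the wrapper itself hands callers { val, err } instead of rows;
// unwrapping with a fallback keeps the contract consistent with totalCount.
function unwrapLogs(results: Result<LogRow[]>): LogRow[] {
  return results.val ?? [];
}

const ok: Result<LogRow[]> = { val: [{ request_id: "req_1", time: 1 }] };
const failed: Result<LogRow[]> = { err: new Error("query failed") };

console.log(unwrapLogs(ok).length, unwrapLogs(failed).length);
```

With the fallback, a failed query yields an empty list rather than leaking the wrapper object to callers.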
226-234: Table switch for last-used looks good; verify multiSearchAny index coverage.
Confirm default.ratelimits_last_used_v2 has an index/projection supporting multiSearchAny(identifier, ...) to avoid full scans on large namespaces.
355-404: Data source migrations LGTM; add workspace filter to metrics join for safety.
The LEFT JOIN subquery already filters by workspace_id; keep it, but consider also AND m.workspace_id = fr.workspace_id in the ON to prevent accidental cross-tenant joins if request_id ever clashes.
```diff
-) m ON fr.request_id = m.request_id
+) m ON fr.request_id = m.request_id AND m.workspace_id = fr.workspace_id
```

Also applies to: 412-418
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (12)
- apps/dashboard/app/(app)/ratelimits/[namespaceId]/logs/components/table/logs-table.tsx (1 hunks)
- apps/dashboard/lib/trpc/routers/api/keys/query-api-keys/get-all-keys.ts (1 hunks)
- internal/clickhouse/src/active_keys.ts (0 hunks)
- internal/clickhouse/src/billing.ts (2 hunks)
- internal/clickhouse/src/keys/active_keys.ts (1 hunks)
- internal/clickhouse/src/keys/keys.ts (1 hunks)
- internal/clickhouse/src/latest_verifications.ts (1 hunks)
- internal/clickhouse/src/logs.ts (4 hunks)
- internal/clickhouse/src/ratelimits.ts (14 hunks)
- internal/clickhouse/src/requests.ts (1 hunks)
- internal/clickhouse/src/success.ts (1 hunks)
- internal/clickhouse/src/verifications.ts (5 hunks)
💤 Files with no reviewable changes (1)
- internal/clickhouse/src/active_keys.ts
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: chronark
PR: unkeyed/unkey#3161
File: go/pkg/clickhouse/schema/databases/001_verifications/002_raw_key_verifications_v1.sql:31-33
Timestamp: 2025-04-22T14:40:51.459Z
Learning: The ClickHouse table schemas in the codebase mirror the production environment and cannot be modified directly in PRs without careful migration planning.
📚 Learning: 2024-10-20T07:05:55.471Z
Learnt from: chronark
PR: unkeyed/unkey#2294
File: apps/api/src/pkg/keys/service.ts:268-271
Timestamp: 2024-10-20T07:05:55.471Z
Learning: In `apps/api/src/pkg/keys/service.ts`, `ratelimitAsync` is a table relation, not a column selection. When querying, ensure that table relations are included appropriately, not as columns.
Applied to files:
internal/clickhouse/src/ratelimits.ts
🧬 Code graph analysis (1)
apps/dashboard/app/(app)/ratelimits/[namespaceId]/logs/components/table/logs-table.tsx (1)
internal/clickhouse/src/logs.ts (1)
log(24-38)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Test Go API Local / Test
- GitHub Check: Build / Build
- GitHub Check: Test API / API Test Local
🔇 Additional comments (13)
internal/clickhouse/src/success.ts (1)
11-17: LGTM: migrated to v2 source.
Source table switch to default.active_workspaces_per_month_v2 looks consistent; ordering and schema mapping stay intact.
apps/dashboard/lib/trpc/routers/api/keys/query-api-keys/get-all-keys.ts (1)
113-118: LGTM: tag filter now reads from v2.
Switch to default.key_verifications_raw_v2 aligns with the broader migration. Please confirm the tags column exists unchanged in v2 for this query.
internal/clickhouse/src/latest_verifications.ts (1)
19-22: LGTM: verifications source updated to v2.
Region/tags selected from default.key_verifications_raw_v2; schema unchanged.
internal/clickhouse/src/logs.ts (2)
136-138: LGTM: total count now from v2 raw.
Counts sourced from default.api_requests_raw_v2; parameterization intact.

159-161: LGTM: logs page now reads from v2 raw.
Select list and filters unchanged; ordering by time DESC preserved.
internal/clickhouse/src/keys/active_keys.ts (1)
65-78: LGTM: ACTIVE_KEYS interval sources migrated to v2.
All per-minute/hour/day/month sources now use default.*_v2; the query builder and casts remain consistent.
Also applies to: 81-104, 107-137
internal/clickhouse/src/verifications.ts (4)
8-8: Insert target moved to default.key_verifications_raw_v2 — check column types match.
Double-check the raw v2 table column types (especially time and identity_id) align with the insert schema to avoid implicit casts.

69-71: Type alias change is fine.
Switching to a type alias with an index signature is appropriate for dynamic param extension.

264-336: Timeseries tables migrated to v2 — confirm retention and materialization windows.
Ensure the v2 per_minute/hour/day/month tables have identical retention/materialization to v1 to avoid gaps during charting.
149-150: Verification script:

```shell
#!/bin/bash
set -euo pipefail
FILE="internal/clickhouse/src/verifications.ts"

echo "PWD: $(pwd)"
echo
echo "---- list internal/clickhouse/src ----"
ls -la internal/clickhouse/src || true
echo

if [ -f "$FILE" ]; then
  echo "---- show $FILE (lines 1-400) ----"
  sed -n '1,400p' "$FILE" || true
  echo
  echo "---- show $FILE (lines 140-170) ----"
  sed -n '140,170p' "$FILE" || true
else
  echo "MISSING: $FILE"
fi

echo
echo "---- repo search for relevant patterns ----"
PATTERNS=(
  "key_verifications_raw_v2"
  "default.key_verifications_raw_v2"
  "baseConditions"
  "fromUnixTimestamp64Milli"
  "fromUnixTimestamp64"
  "fromUnixTimestamp"
  "DateTime64"
  "DateTime("
  "UInt64"
  "{startTime"
  "{endTime"
  "startTime:"
  "endTime:"
  "time BETWEEN"
  "workspace_id ="
  "key_space_id ="
  "key_id ="
)

if command -v rg >/dev/null 2>&1; then
  for p in "${PATTERNS[@]}"; do
    echo "---- rg for: $p ----"
    rg -n -S -C3 "$p" || true
  done
else
  for p in "${PATTERNS[@]}"; do
    echo "---- grep for: $p ----"
    grep -RIn --line-number -C3 --exclude-dir=.git "$p" . || true
  done
fi

echo
echo "---- done ----"
```

internal/clickhouse/src/ratelimits.ts (3)

290-290: Public contract change: colo → region.
Backend type updated; ensure all downstream consumers use region. The dashboard table was updated per PR notes; verify any API clients too.

618-643: Overview query looks correct; ordering guarded.
Allowed-columns map prevents ORDER BY injection; good. No action needed.
Also applies to: 652-656

702-746: Latency intervals moved to v2 — acknowledged per PR note.
Per the author's comment, skipping deep review of latency changes now. Surfaces look consistent with other v2 migrations.
apps/dashboard/app/(app)/ratelimits/[namespaceId]/logs/components/table/logs-table.tsx (resolved comment)
chronark
left a comment
the code looks good, I haven't run it in the dashboard yet to see if everything loads though
@chronark I couldn't test the billing properly, could you make sure for me? It acts weird for me locally

Yeah I’ll do a proper test on monday

I need to clean this up cause it does writes too

ok this is reverted, now it's actually just changing reads @ogzhanolguncu @perkinsjr please sanity check me when you have some time
Actionable comments posted: 0
🧹 Nitpick comments (1)
apps/api/src/pkg/testutil/harness.ts (1)
274-279: Same concern for user workspace slug.
Apply the same URL-safe pattern to avoid underscore/mixed-case issues and update any tests that relied on "user-workspace".

```diff
- slug: newId("test"),
+ // URL-safe, kebab-case; keeps human prefix for readability
+ slug: `user-${newId("test").split("_")[1]}`,
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
apps/api/src/pkg/testutil/harness.ts (2 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: chronark
PR: unkeyed/unkey#3958
File: internal/clickhouse/src/requests.ts:7-8
Timestamp: 2025-09-12T13:25:41.814Z
Learning: In the ClickHouse v2 table migration, the location field was renamed from `colo` to `region` across all tables including `default.api_requests_raw_v2`. The zod insert schemas must be updated accordingly to avoid runtime insert failures.
Learnt from: chronark
PR: unkeyed/unkey#3161
File: go/pkg/clickhouse/schema/databases/001_verifications/002_raw_key_verifications_v1.sql:31-33
Timestamp: 2025-04-22T14:40:51.459Z
Learning: The ClickHouse table schemas in the codebase mirror the production environment and cannot be modified directly in PRs without careful migration planning.
📚 Learning: 2025-08-21T12:37:40.996Z
Learnt from: Flo4604
PR: unkeyed/unkey#3821
File: apps/dashboard/lib/trpc/routers/key/updateRootKeyPermissions.ts:74-74
Timestamp: 2025-08-21T12:37:40.996Z
Learning: Root keys in Unkey have two workspace fields: `workspaceId` (always set to env().UNKEY_WORKSPACE_ID for the Unkey workspace that owns the key) and `forWorkspaceId` (set to ctx.workspace.id for the user's workspace that the key is for). When querying root keys, the system filters by forWorkspaceId to get keys for the current user's workspace, but the returned rootKey.workspaceId is always the Unkey workspace ID.
Applied to files:
apps/api/src/pkg/testutil/harness.ts
🧬 Code graph analysis (1)
apps/api/src/pkg/testutil/harness.ts (1)
internal/id/src/generate.ts (1)
newId(35-54)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Test Go API Local / Test
- GitHub Check: Test Packages / Test
- GitHub Check: Test API / API Test Local
- GitHub Check: Build / Build
🔇 Additional comments (1)
apps/api/src/pkg/testutil/harness.ts (1)
254-259: Dynamic slug is good for test isolation — verify slug format & consumers.
File: apps/api/src/pkg/testutil/harness.ts (lines 254-259)

```typescript
const unkeyWorkspace: Workspace = {
  id: newId("test"),
  name: "unkey",
  slug: newId("test"),
  orgId: newId("test"),
  plan: "enterprise",
```

Switching to newId("test") removes hard-coded collisions, but newId yields test_<b58> (contains an underscore and mixed case). If slug validators or URL routes assume kebab-case this can break; tests/fixtures referencing "unkey-workspace" will fail.

Consider emitting a URL-safe, kebab-case slug while keeping a human prefix:

```diff
- slug: newId("test"),
+ // URL-safe, kebab-case; keeps human prefix for readability
+ slug: `unkey-${newId("test").split("_")[1]}`,
```

Repo search attempt returned "No files were searched" (rg skipped files), so I couldn't confirm consumers or hard-coded fixtures — re-run locally with:

```shell
rg -n --hidden -uu --fixed-strings -S 'unkey-workspace|user-workspace'
rg -n -C2 -uu -S '\bslug\b.*(zod|regex|pattern|varchar|check|constraint|slugify)'
```
There was a problem hiding this comment.
Actionable comments posted: 0
♻️ Duplicate comments (2)
internal/clickhouse/src/ratelimits.ts (2)
702-746: Consistency: use singular units in LATENCY_INTERVALS too (optional).
Not a functional bug here because you map to singular before SQL, but normalizing reduces cognitive overhead and prevents regressions.

```diff
 fiveMinutes: {
   table: "default.ratelimits_identifier_latency_stats_per_minute_v2",
-  step: "MINUTES",
+  step: "MINUTE",
   stepSize: 5,
 },
 fifteenMinutes: {
   table: "default.ratelimits_identifier_latency_stats_per_minute_v2",
-  step: "MINUTES",
+  step: "MINUTE",
   stepSize: 15,
 },
 thirtyMinutes: {
   table: "default.ratelimits_identifier_latency_stats_per_minute_v2",
-  step: "MINUTES",
+  step: "MINUTE",
   stepSize: 30,
 },
 twoHours: {
   table: "default.ratelimits_identifier_latency_stats_per_hour_v2",
-  step: "HOURS",
+  step: "HOUR",
   stepSize: 2,
 },
 fourHours: {
   table: "default.ratelimits_identifier_latency_stats_per_hour_v2",
-  step: "HOURS",
+  step: "HOUR",
   stepSize: 4,
 },
 sixHours: {
   table: "default.ratelimits_identifier_latency_stats_per_hour_v2",
-  step: "HOURS",
+  step: "HOUR",
   stepSize: 6,
 },
```
51-71: Blocker: plural INTERVAL units will break createTimeseriesQuery.
createTimeseriesQuery injects interval.step verbatim into INTERVAL … ${interval.step}; ClickHouse expects singular units (MINUTE/HOUR). Entries using "MINUTES"/"HOURS" will error at runtime. Normalize to singular or map before SQL generation.

Apply:

```diff
 fiveMinutes: {
   table: "default.ratelimits_per_minute_v2",
-  step: "MINUTES",
+  step: "MINUTE",
   stepSize: 5,
 },
 fifteenMinutes: {
   table: "default.ratelimits_per_minute_v2",
-  step: "MINUTES",
+  step: "MINUTE",
   stepSize: 15,
 },
 thirtyMinutes: {
   table: "default.ratelimits_per_minute_v2",
-  step: "MINUTES",
+  step: "MINUTE",
   stepSize: 30,
 },
 twoHours: {
   table: "default.ratelimits_per_hour_v2",
-  step: "HOURS",
+  step: "HOUR",
   stepSize: 2,
 },
 fourHours: {
   table: "default.ratelimits_per_hour_v2",
-  step: "HOURS",
+  step: "HOUR",
   stepSize: 4,
 },
 sixHours: {
   table: "default.ratelimits_per_hour_v2",
-  step: "HOURS",
+  step: "HOUR",
   stepSize: 6,
 },
```

Also applies to: 73-101
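An alternative to editing every entry is to normalize the unit where the SQL is built. A sketch, assuming createTimeseriesQuery produces a toStartOfInterval-style expression; the helper names here are illustrative, not the codebase's actual functions:

```typescript
type StepUnit = "MINUTE" | "HOUR" | "DAY" | "MONTH";

// Accept legacy plural spellings and emit the singular unit ClickHouse expects.
function normalizeStep(step: string): StepUnit {
  const singular = step.toUpperCase().replace(/S$/, "");
  if (
    singular === "MINUTE" ||
    singular === "HOUR" ||
    singular === "DAY" ||
    singular === "MONTH"
  ) {
    return singular;
  }
  throw new Error(`Unsupported interval step: ${step}`);
}

// Illustrative stand-in for the SQL fragment the query builder interpolates.
function intervalExpr(column: string, stepSize: number, step: string): string {
  return `toStartOfInterval(${column}, INTERVAL ${stepSize} ${normalizeStep(step)})`;
}

console.log(intervalExpr("time", 5, "MINUTES"));
// toStartOfInterval(time, INTERVAL 5 MINUTE)
```

Centralizing the mapping means a future plural entry degrades to a normalized query instead of a runtime ClickHouse syntax error.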
🧹 Nitpick comments (1)
internal/clickhouse/src/ratelimits.ts (1)
355-355: Reads on v2 look good; tighten JOIN to prevent rare cross‑workspace collisions.
Tables migrated to v2 and region selected — good. To be defensive, join on both request_id and workspace_id (and select it in the subquery) to avoid any theoretical request_id collision.

```diff
 SELECT
   fr.request_id,
@@
-  m.user_agent,
-  m.region
+  m.user_agent,
+  m.region
 FROM filtered_ratelimits fr
 LEFT JOIN (
   SELECT
-    request_id,
+    request_id,
+    workspace_id,
     host,
     method,
     path,
@@
-    user_agent,
-    region
+    user_agent,
+    region
   FROM default.api_requests_raw_v2
   WHERE workspace_id = {workspaceId: String}
     AND time BETWEEN {startTime: UInt64} AND {endTime: UInt64}
 ) m ON fr.request_id = m.request_id
+  AND fr.workspace_id = m.workspace_id
```

Also applies to: 382-383, 385-399, 412-412
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
internal/clickhouse/src/ratelimits.ts (13 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: chronark
PR: unkeyed/unkey#3958
File: internal/clickhouse/src/requests.ts:7-8
Timestamp: 2025-09-12T13:25:41.814Z
Learning: In the ClickHouse v2 table migration, the location field was renamed from `colo` to `region` across all tables including `default.api_requests_raw_v2`. The zod insert schemas must be updated accordingly to avoid runtime insert failures.
Learnt from: chronark
PR: unkeyed/unkey#3161
File: go/pkg/clickhouse/schema/databases/001_verifications/002_raw_key_verifications_v1.sql:31-33
Timestamp: 2025-04-22T14:40:51.459Z
Learning: The ClickHouse table schemas in the codebase mirror the production environment and cannot be modified directly in PRs without careful migration planning.
📚 Learning: 2025-09-12T13:25:41.814Z
Learnt from: chronark
PR: unkeyed/unkey#3958
File: internal/clickhouse/src/requests.ts:7-8
Timestamp: 2025-09-12T13:25:41.814Z
Learning: In the ClickHouse v2 table migration, the location field was renamed from `colo` to `region` across all tables including `default.api_requests_raw_v2`. The zod insert schemas must be updated accordingly to avoid runtime insert failures.
Applied to files:
internal/clickhouse/src/ratelimits.ts
📚 Learning: 2024-10-20T07:05:55.471Z
Learnt from: chronark
PR: unkeyed/unkey#2294
File: apps/api/src/pkg/keys/service.ts:268-271
Timestamp: 2024-10-20T07:05:55.471Z
Learning: In `apps/api/src/pkg/keys/service.ts`, `ratelimitAsync` is a table relation, not a column selection. When querying, ensure that table relations are included appropriately, not as columns.
Applied to files:
internal/clickhouse/src/ratelimits.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Test Go API Local / Test
- GitHub Check: Test API / API Test Local
- GitHub Check: Test Packages / Test
- GitHub Check: Build / Build
🔇 Additional comments (3)
internal/clickhouse/src/ratelimits.ts (3)
226-226: LGTM: last-used reads migrated to v2.
Source table updated to default.ratelimits_last_used_v2 as expected.
289-290: LGTM: colo → region rename reflected in schema.
ratelimitLogs now exposes region: string(), matching v2 metrics.
Please confirm all downstream consumers (Dashboard/SDK) read region and no code still expects colo.
617-617: LGTM: overview queries source v2 and aggregate correctly.
Switched to default.ratelimits_raw_v2; grouping and ordering guardrails look solid.
Also applies to: 626-635, 652-652
Lemme test it properly from scratch

it works

oops wrong thread

I tested this across the dashboard and everything looks good, all data loaded and matched another branch that didn't use the v2 reads.
ogzhanolguncu
left a comment
@chronark /logs, /apis/, /ratelimits loads correctly. LGTM
@ogzhanolguncu I highly agree

FYI I have this in a branch that fixes all the destiny. I was waiting for the CH stuff to drop to finish the branch. https://app.cushion.so/unkey/post/post_GJeqCuQMqJnk00eOi7MFA
What does this PR do?
This PR updates table names to accommodate CH structure changes. Also removed unused CH queries.
Type of change
How should this be tested?
Checklist
Required
- pnpm build
- pnpm fmt
- console.logs
- git pull origin main

Appreciated
Summary by CodeRabbit
Bug Fixes
Chores