Important: Review skipped (draft detected).

📝 Walkthrough

Adds spent_credits tracking to key verification logs end-to-end: computes and logs spent credits on verification, persists them in ClickHouse raw and aggregated tables (minute/hour/day/month v2/v4), exposes totals/timeseries and logs via ClickHouse queries and tRPC, updates API payloads, dashboard charts/tables/controls, seeds, and tests. Removes legacy CH DDL.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client as Client
    participant API as API Keys Service
    participant Verif as KeyVerifier (Go)
    participant Analytics as Analytics.insertKeyVerification
    participant CH as ClickHouse
    Client->>API: verifyKey(request)
    API->>Verif: validate and compute usage
    alt usage valid
        Verif->>Verif: k.spentCredits = cost
    else exceeded/invalid
        Verif->>Verif: k.spentCredits = 0
    end
    API->>Analytics: {request_id,..., tags, spent_credits}
    Analytics->>CH: insert into verifications.raw_key_verifications_v1
    note over CH: MVs aggregate per minute/hour/day/month<br/>including sum(spent_credits)
```

```mermaid
sequenceDiagram
    autonumber
    participant User as Dashboard UI
    participant TRPC as tRPC Router
    participant CHSDK as ClickHouse SDK
    participant CH as ClickHouse
    User->>TRPC: api.keys.timeseries (creditSpendMode?)<br/>key.spentCredits(params)
    TRPC->>CHSDK: timeseries / spentCreditsTotal
    CHSDK->>CH: SELECT SUM(spent_credits) ... WHERE filters
    CH-->>CHSDK: aggregates (spent_credits)
    CHSDK-->>TRPC: timeseries/total with spent_credits
    TRPC-->>User: data
    User->>User: render Credits Spent charts/tables
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60–90 minutes
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 23
🔭 Outside diff range comments (2)
internal/clickhouse/schema/059_create_verifications.key_verifications_per_minute_v2.sql (1)
4-16: Fix required — canonicalize `tags` before using in ORDER BY / GROUP BY (verifications.key_verifications_per_minute_v2)

Short summary: the table's ordering key includes `tags` (Array(String)), but the MV that populates it inserts `tags` verbatim. I found no existing normalization (arraySort/arrayDistinct) in the repo — this will prevent proper SummingMergeTree merges when tag order/duplicates differ and will hurt pruning/locality.
Files to fix:
- internal/clickhouse/schema/059_create_verifications.key_verifications_per_minute_v2.sql — ORDER BY contains tags
- internal/clickhouse/schema/060_create_verifications.key_verifications_per_minute_mv_v2.sql — MV SELECT / GROUP BY uses tags directly (no normalization)
Suggested fix (make tags canonical before grouping/inserting). For example, change the MV SELECT / GROUP BY to use a sorted/distinct array: replace `tags` in SELECT with `arraySort(arrayDistinct(tags)) AS tags` and replace `tags` in GROUP BY with `arraySort(arrayDistinct(tags))`. Or, if you prefer a single-string key, use `arrayStringConcat(arraySort(arrayDistinct(tags)), ',') AS tag_set` and use `tag_set` in ORDER BY / GROUP BY instead.

If tags must remain part of the PK, ensure all feeders produce a canonical tag representation (sorted + deduped). Otherwise consider removing tags from the ORDER BY and modeling tags as a separate dimension.
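The merge-key problem is easy to demonstrate outside ClickHouse. Here is a minimal TypeScript sketch of the canonicalization the suggestion above performs; the helper name is hypothetical, with `arraySort(arrayDistinct(...))` being the ClickHouse equivalent:

```typescript
// Hypothetical helper mirroring ClickHouse's arraySort(arrayDistinct(tags)):
// dedupe, then sort, so every ordering/duplication of the same tag set
// produces one canonical key that SummingMergeTree can merge on.
function canonicalizeTags(tags: string[]): string[] {
  return [...new Set(tags)].sort();
}

// Without canonicalization these would be two distinct ORDER BY keys:
const a = canonicalizeTags(["beta", "alpha", "alpha"]);
const b = canonicalizeTags(["alpha", "beta"]);
console.log(JSON.stringify(a)); // ["alpha","beta"]
console.log(JSON.stringify(a) === JSON.stringify(b)); // true: the rows now merge
```

The same property must hold for every feeder that writes into the table, which is why the normalization belongs in the MV rather than in individual clients.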
internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql (1)
16-23: Critical: GROUP BY is using the raw timestamp, not the day-bucket alias — leads to duplicate/day-split rows.

Because `time` is also a column of the source table, `GROUP BY time` resolves to the raw millisecond column rather than the start-of-day alias that SELECT projects. This will emit multiple rows per day (one per distinct raw timestamp) instead of one daily aggregate.
Fix by grouping on the same expression used in SELECT:

```diff
 GROUP BY
   workspace_id,
   key_space_id,
   identity_id,
   key_id,
   outcome,
-  time,
+  toStartOfDay(fromUnixTimestamp64Milli(time)),
   tags
```

Alternatively (cleaner), use WITH to avoid repeating the expression:

```sql
WITH toStartOfDay(fromUnixTimestamp64Milli(time)) AS day
SELECT ..., day AS time, ...
FROM ...
GROUP BY ..., day, ...
```
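The failure mode above is independent of ClickHouse and can be sketched with plain timestamps. The names and sample data below are illustrative only; `startOfDay` is a rough UTC analogue of `toStartOfDay(fromUnixTimestamp64Milli(time))`:

```typescript
// Count distinct grouping keys, as a stand-in for the rows a GROUP BY emits.
function distinctKeys(times: number[], key: (t: number) => number): number {
  return new Set(times.map(key)).size;
}

const DAY_MS = 24 * 60 * 60 * 1000;
// Truncate a millisecond timestamp to the start of its UTC day.
const startOfDay = (ms: number) => Math.floor(ms / DAY_MS) * DAY_MS;

// Three verifications on the same day, distinct millisecond timestamps:
const times = [1_700_000_000_000, 1_700_000_060_000, 1_700_003_600_000];

console.log(distinctKeys(times, (t) => t)); // 3: grouping by the raw column splits the day
console.log(distinctKeys(times, startOfDay)); // 1: grouping by the bucket expression aggregates it
```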
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (38)
- apps/agent/pkg/clickhouse/schema/requests.go (1 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/bar-chart/hooks/use-fetch-timeseries.ts (1 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/index.tsx (1 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/hooks/use-spent-credits.ts (1 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/charts/bar-chart/hooks/use-fetch-timeseries.ts (1 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/controls/index.tsx (3 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts (1 hunks)
- apps/dashboard/lib/trpc/routers/api/keys/query-spent-credits/index.ts (1 hunks)
- apps/dashboard/lib/trpc/routers/index.ts (4 hunks)
- apps/dashboard/lib/trpc/routers/key/query-spent-credits/index.ts (1 hunks)
- go/internal/services/keys/validation.go (1 hunks)
- go/internal/services/keys/verifier.go (2 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/001_database.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/002_raw_key_verifications_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/003_key_verifications_per_hour_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/004_key_verifications_per_hour_mv_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/005_key_verifications_per_day_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/006_key_verifications_per_day_mv_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/007_key_verifications_per_month_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/008_key_verifications_per_month_mv_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/009_key_verifications_per_hour_v2.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/010_key_verifications_per_hour_mv_v2.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/011_key_verifications_per_day_v2.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/012_key_verifications_per_day_mv_v2.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/014_key_verifications_per_month_mv_v2.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/021_key_verifications_per_minute_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/requests.go (1 hunks)
- internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql (1 hunks)
- internal/clickhouse/schema/053_create_verifications.key_verifications_per_hour_v4.sql (2 hunks)
- internal/clickhouse/schema/054_create_verifications.key_verifications_per_hour_mv_v4.sql (3 hunks)
- internal/clickhouse/schema/055_create_verifications.key_verifications_per_day_v4.sql (2 hunks)
- internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql (3 hunks)
- internal/clickhouse/schema/057_create_verifications.key_verifications_per_month_v4.sql (2 hunks)
- internal/clickhouse/schema/058_create_verifications.key_verifications_per_month_mv_v4.sql (3 hunks)
- internal/clickhouse/schema/059_create_verifications.key_verifications_per_minute_v2.sql (2 hunks)
- internal/clickhouse/schema/060_create_verifications.key_verifications_per_minute_mv_v2.sql (3 hunks)
- internal/clickhouse/src/index.ts (2 hunks)
- internal/clickhouse/src/verifications.ts (5 hunks)
💤 Files with no reviewable changes (14)
- go/pkg/clickhouse/schema/databases/001_verifications/001_database.sql
- go/pkg/clickhouse/schema/databases/001_verifications/010_key_verifications_per_hour_mv_v2.sql
- go/pkg/clickhouse/schema/databases/001_verifications/005_key_verifications_per_day_v1.sql
- go/pkg/clickhouse/schema/databases/001_verifications/007_key_verifications_per_month_v1.sql
- go/pkg/clickhouse/schema/databases/001_verifications/004_key_verifications_per_hour_mv_v1.sql
- go/pkg/clickhouse/schema/databases/001_verifications/014_key_verifications_per_month_mv_v2.sql
- go/pkg/clickhouse/schema/databases/001_verifications/003_key_verifications_per_hour_v1.sql
- go/pkg/clickhouse/schema/databases/001_verifications/008_key_verifications_per_month_mv_v1.sql
- go/pkg/clickhouse/schema/databases/001_verifications/002_raw_key_verifications_v1.sql
- go/pkg/clickhouse/schema/databases/001_verifications/011_key_verifications_per_day_v2.sql
- go/pkg/clickhouse/schema/databases/001_verifications/009_key_verifications_per_hour_v2.sql
- go/pkg/clickhouse/schema/databases/001_verifications/012_key_verifications_per_day_mv_v2.sql
- go/pkg/clickhouse/schema/databases/001_verifications/021_key_verifications_per_minute_v1.sql
- go/pkg/clickhouse/schema/databases/001_verifications/006_key_verifications_per_day_mv_v1.sql
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-06-24T13:29:10.129Z
Learnt from: ogzhanolguncu
PR: unkeyed/unkey#3401
File: apps/dashboard/app/(app)/logs/filters.query-params.ts:10-0
Timestamp: 2025-06-24T13:29:10.129Z
Learning: The `buildQueryParams` function in `apps/dashboard/app/(app)/logs/filters.query-params.ts` calls `useFilters()` hook inside it, but this is valid because the function is only called from within other React hooks, maintaining the Rules of Hooks compliance.
Applied to files:
apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts
🧬 Code Graph Analysis (9)
apps/dashboard/lib/trpc/routers/api/keys/query-spent-credits/index.ts (4)
- apps/dashboard/lib/trpc/trpc.ts (4): t (8-8), requireUser (10-21), requireWorkspace (23-36), withRatelimit (113-129)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/bar-chart/query-timeseries.schema.ts (1): keysOverviewQueryTimeseriesPayload (6-59)
- apps/dashboard/lib/trpc/routers/api/keys/api-query.ts (2): getApi (388-410), queryApiKeys (86-312)
- tools/local/src/cmd/seed/utils.ts (1): clickhouse (17-17)

apps/dashboard/app/(app)/apis/[apiId]/_overview/hooks/use-spent-credits.ts (6)
- apps/dashboard/providers/query-time-provider.tsx (1): useQueryTime (74-80)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/bar-chart/query-timeseries.schema.ts (1): KeysOverviewQueryTimeseriesPayload (61-61)
- apps/dashboard/components/logs/constants.ts (1): HISTORICAL_DATA_WINDOW (2-2)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/filters.schema.ts (1): keysOverviewFilterFieldConfig (11-46)
- internal/clickhouse/src/keys/keys.ts (1): KEY_VERIFICATION_OUTCOMES (4-13)
- apps/dashboard/lib/trpc/server.ts (1): trpc (7-14)

apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts (5)
- apps/dashboard/providers/query-time-provider.tsx (1): useQueryTime (74-80)
- apps/dashboard/components/logs/constants.ts (1): HISTORICAL_DATA_WINDOW (2-2)
- apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/filters.schema.ts (1): keyDetailsFilterFieldConfig (15-38)
- internal/clickhouse/src/keys/keys.ts (1): KEY_VERIFICATION_OUTCOMES (4-13)
- apps/dashboard/lib/trpc/server.ts (1): trpc (7-14)

apps/dashboard/lib/trpc/routers/key/query-spent-credits/index.ts (4)
- apps/dashboard/lib/trpc/trpc.ts (4): t (8-8), requireUser (10-21), requireWorkspace (23-36), withRatelimit (113-129)
- apps/dashboard/lib/db.ts (1): db (5-26)
- packages/rbac/src/queries.ts (1): and (52-56)
- tools/local/src/cmd/seed/utils.ts (1): clickhouse (17-17)

apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/index.tsx (6)
- apps/dashboard/components/logs/controls-container.tsx (2): ControlsContainer (1-7), ControlsLeft (9-11)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/components/logs-filters/index.tsx (1): LogsFilters (12-173)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/components/logs-datetime/index.tsx (1): LogsDateTime (9-96)
- apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/table/components/status-badge.tsx (1): StatusBadge (11-26)
- apps/dashboard/app/(app)/apis/[apiId]/settings/components/status-badge.tsx (1): StatusBadge (28-35)
- internal/icons/src/icons/chart-usage.tsx (1): ChartUsage (15-51)

apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/controls/index.tsx (5)
- apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts (1): useSpentCredits (9-106)
- internal/ui/src/components/separator.tsx (1): Separator (26-26)
- apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/table/components/status-badge.tsx (1): StatusBadge (11-26)
- apps/dashboard/app/(app)/apis/[apiId]/settings/components/status-badge.tsx (1): StatusBadge (28-35)
- internal/icons/src/icons/chart-usage.tsx (1): ChartUsage (15-51)

apps/dashboard/lib/trpc/routers/index.ts (2)
- apps/dashboard/lib/trpc/routers/key/query-spent-credits/index.ts (1): queryKeySpentCredits (38-86)
- apps/dashboard/lib/trpc/routers/api/keys/query-spent-credits/index.ts (1): queryApiSpentCredits (8-61)

internal/clickhouse/src/verifications.ts (3)
- internal/clickhouse/src/keys/keys.ts (1): KEY_VERIFICATION_OUTCOMES (4-13)
- internal/clickhouse/src/client/interface.ts (1): Querier (4-17)
- internal/clickhouse/src/client/client.ts (1): query (25-62)

internal/clickhouse/src/index.ts (1)
- internal/clickhouse/src/verifications.ts (1): getSpentCreditsTotal (721-721)
🔇 Additional comments (33)
go/pkg/clickhouse/schema/requests.go (1)

91-95: LGTM: schema extended with `SpentCredits` aligned to CH column. Field name, type, and tags match the ClickHouse column. Placement before `Tags` keeps the shape tidy.

go/internal/services/keys/verifier.go (2)

44-47: LGTM: internal field `spentCredits` added to verifier state. Clear naming and type. This keeps the runtime state separate from logged payloads.

126-136: LGTM: `SpentCredits` included in logged verification payload. Payload now forwards the tracked credits; ordering and tags are consistent with schema.
internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql (1)
1-7: Migration adds `spent_credits` — most artifacts updated; please verify the writer/flush includes the column.

Short summary: migrations, MVs and query code already reference `spent_credits`, and agent/server types carry the field. I could not locate the concrete flush/INSERT implementation that writes rows to verifications.raw_key_verifications_v1, so please confirm the writer actually includes `spent_credits` (via ch tags or explicit column list) to avoid INSERT/SELECT mismatches.

Files to note (found by search):
- internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql — adds column.
- internal/clickhouse/schema/_create_verifications.sql (e.g. 053, 054, 055, 056, 057, 058, 059, 060) — many V4/V2 tables/MVs include `spent_credits` or use sum(spent_credits).
- internal/clickhouse/src/verifications.ts — zod schema + aggregations and queries use `spent_credits` (SUM and SELECT).
- apps/agent/pkg/clickhouse/schema/requests.go and go/pkg/clickhouse/schema/requests.go — KeyVerificationRequestV1 includes SpentCredits `ch:"spent_credits"`.
- go/internal/services/keys/verifier.go — buffers KeyVerification events with SpentCredits set.
- go/pkg/clickhouse/client.go and apps/agent/pkg/clickhouse/client.go — call flush(ctx, conn, table, rows) to write to verifications.raw_key_verifications_v1, but the actual flush function/implementation was not found in this repo.

Action requested:
- Confirm where the flush/INSERT implementation lives and ensure it writes the `spent_credits` column (either by honoring the `ch:"spent_credits"` struct tag or by including the column in the INSERT column list). If the flush function is external/generated, update that code or confirm compatibility.

apps/agent/pkg/clickhouse/schema/requests.go (1)
18-28: SpentCredits signedness: ClickHouse = Int64, Go = int64 — confirm desired semantics.

Confirmed: spent_credits is Int64 in ClickHouse migrations/MVs and SpentCredits is int64 in Go (and treated as integer in TypeScript). No mismatch found.

Key places checked:
- internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql — ADD COLUMN IF NOT EXISTS spent_credits Int64 DEFAULT 0
- internal/clickhouse/schema/_create_verifications.sql (e.g. 053, 055, 056, 057, 059, 058, 054) — spent_credits Int64 / SUM(spent_credits) in MVs
- internal/clickhouse/src/verifications.ts — z.number().int().default(0) and SUM(spent_credits) usage
- apps/agent/pkg/clickhouse/schema/requests.go — KeyVerificationRequestV1: SpentCredits int64 `ch:"spent_credits"`
- go/pkg/clickhouse/schema/requests.go — SpentCredits int64 `ch:"spent_credits" json:"spent_credits"`
- go/internal/services/keys/verifier.go — populates SpentCredits when buffering KeyVerificationRequestV1
- go/pkg/clickhouse/client.go — writes to verifications.raw_key_verifications_v1 via batch processor (no explicit SQL INSERT with column list found)
Recommendation:
- If credits are guaranteed non-negative, consider migrating to UInt64 (ClickHouse) and uint64 (Go) and update affected MVs/aggregations and consumer code.
- If negative adjustments (refunds) are possible, keep Int64 and add brief documentation on semantics.
If you want, I can open a follow-up PR to convert to unsigned types across migrations, Go structs, and related queries once you confirm non-negative semantics.
apps/dashboard/lib/trpc/routers/index.ts (4)

12-12: Import wiring looks correct. New spent credits API for APIs is registered in the correct namespace and matches usage in the UI hooks.

52-52: Import wiring looks correct. Key-level spent credits endpoint is properly exposed under the key router.

131-132: Good placement and naming consistency. Exposing spentCredits at the key router alongside logs/timeseries is consistent and discoverable.

187-188: Good placement and naming consistency. API-wide spentCredits under api.keys aligns with other keys analytics endpoints.
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/index.tsx (1)

32-81: UI nit: use nullish coalescing when formatting and consider surfacing the active time range.
- If formatNumber ever receives undefined, guard with `?? 0` (defensive).
- Optional: add a small subtitle/tooltip to reflect the current time filter (since users often ask “spent in what window?”).

```diff
- text={formatNumber(spentCredits)}
+ text={formatNumber(spentCredits ?? 0)}
```

If you want, I can wire a tooltip showing start–end or “since” derived from useFilters()/useQueryTime().

Likely an incorrect or invalid review comment.
internal/clickhouse/src/index.ts (2)

68-75: New spent credits querier is correctly wired into the verifications API surface. The import of getSpentCreditsTotal and its exposure through the verifications getter aligns with existing patterns (curried by querier). No concerns on this wiring.

111-116: Confirm the return shape of spentCreditsTotal matches downstream usage. Downstream code (e.g., the tRPC endpoint queryApiSpentCredits) accesses `result.val?.[0]?.spent_credits` from clickhouse.verifications.spentCreditsTotal(...). If getSpentCreditsTotal returns a scalar (e.g., a number) or a different record shape, this will break at runtime. Ensure the returned value's structure includes a top-level val array with a spent_credits field, or update the consumers accordingly.

If needed, I can scan and align the usage across callers to a consistent return shape.
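For reference, a sketch of the shape the consumer assumes. The types below are hypothetical stand-ins for the SDK's result wrapper, not the actual definitions:

```typescript
// Hypothetical Result wrapper mirroring the consumer's access pattern:
// spentCreditsTotal(...) must resolve to { val?: Array<{ spent_credits: number }> }
// for `result.val?.[0]?.spent_credits` to work.
type SpentCreditsRow = { spent_credits: number };
type Result = { val?: SpentCreditsRow[]; err?: Error };

function readSpentCredits(result: Result): number {
  // Defensive default: an empty result set (no matching rows) yields 0, not undefined.
  return result.val?.[0]?.spent_credits ?? 0;
}

console.log(readSpentCredits({ val: [{ spent_credits: 42 }] })); // 42
console.log(readSpentCredits({ val: [] })); // 0
```

A scalar return (plain number) or a differently named field would make the optional chaining silently produce `undefined`, which is exactly the runtime breakage flagged above.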
internal/clickhouse/schema/053_create_verifications.key_verifications_per_hour_v4.sql (2)

1-16: Schema v4 includes spent_credits and is suitable for SummingMergeTree. Including `spent_credits Int64` alongside `count` works well with SummingMergeTree's aggregation semantics over your ORDER BY key. The up/down goose markers look correct.

18-19: spent_credits present in all v4 tables/MVs — verified. All v4 tables and materialized views include spent_credits and the MVs project sum(spent_credits). The raw source column was added; there are no active INSERT/backfill statements targeting v4 tables.
Files checked:
- internal/clickhouse/schema/053_create_verifications.key_verifications_per_hour_v4.sql — table defines spent_credits Int64, ENGINE = SummingMergeTree()
- internal/clickhouse/schema/054_create_verifications.key_verifications_per_hour_mv_v4.sql — MV selects sum(spent_credits) as spent_credits FROM verifications.raw_key_verifications_v1
- internal/clickhouse/schema/055_create_verifications.key_verifications_per_day_v4.sql — table defines spent_credits Int64, ENGINE = SummingMergeTree()
- internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql — MV selects sum(spent_credits)
- internal/clickhouse/schema/057_create_verifications.key_verifications_per_month_v4.sql — table defines spent_credits Int64, ENGINE = SummingMergeTree()
- internal/clickhouse/schema/058_create_verifications.key_verifications_per_month_mv_v4.sql — MV selects sum(spent_credits)
- internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql — ALTER adds spent_credits Int64 DEFAULT 0 to raw_key_verifications_v1
- internal/clickhouse/src/verifications.ts — queries aggregate SUM(spent_credits) and map results; code references updated to v4 tables
Notes:
- No active "INSERT INTO verifications.key_verifications_per_*_v4" backfills were found; older migrations contain commented INSERTs for v1/v2/v3 only.
internal/clickhouse/schema/055_create_verifications.key_verifications_per_day_v4.sql (2)

1-16: Day-level v4 table includes spent_credits; matches hour-level design. The structure mirrors the per-hour v4 with spent_credits added and the same ORDER BY. Looks consistent.

18-19: Cross-check dependent MVs and readers/writers for v4 adoption. Confirm the day-level v4 MV includes spent_credits and all reads now target v4 where appropriate to avoid mixing v3/v4.

You can reuse the script from the per-hour comment; it enumerates all v4 schemas/MVs for spent_credits coverage.
apps/dashboard/lib/trpc/routers/api/keys/query-spent-credits/index.ts (1)
52-55: Only the first tag is used; confirm this is intended. transformVerificationFilters appears to normalize tags to an array; here you pick only the first tag. If multi-tag filtering is expected (e.g., OR/AND semantics), this will silently drop additional tag filters.

If multi-tag is desired, extend the CH query to accept an array and update this endpoint accordingly.
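A sketch of the difference, with hypothetical names (the real filter plumbing lives in transformVerificationFilters and the CH query builder):

```typescript
// Hypothetical filter shape; only tags[0] reaching the query is the lossy behavior flagged above.
type Filters = { tags?: string[] };

// Current (lossy) behavior: additional tags are silently dropped.
function firstTagOnly(f: Filters): string | undefined {
  return f.tags?.[0];
}

// Sketch of OR semantics: pass the whole array so the query can match on any of them
// (e.g., with a ClickHouse hasAny-style predicate on the tags column).
function allTags(f: Filters): string[] {
  return f.tags ?? [];
}

const filters: Filters = { tags: ["prod", "eu-west"] };
console.log(firstTagOnly(filters)); // "prod", and "eu-west" is lost
console.log(allTags(filters)); // ["prod", "eu-west"]
```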
apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/controls/index.tsx (2)

9-13: Spent-credits hook integration looks correct. Imports and the useSpentCredits(keyId, keyspaceId) usage align with the surrounding data-fetch pattern. Props are already available in this component, so no plumbing issues.

33-38: Good separation of loading/error state for spent credits. Using `hasSpentCreditsData = !spentCreditsLoading && !spentCreditsError` avoids flicker and hides the block if the query fails — consistent with the “Remaining Credits” handling.
internal/clickhouse/schema/057_create_verifications.key_verifications_per_month_v4.sql (2)

11-12: Addition of spent_credits to the monthly rollup is correct. Including `spent_credits Int64` alongside `count` matches SummingMergeTree semantics and aligns with the new MVs aggregating this metric.

4-12: Verify monthly time bucket type alignment (Date vs DateTime). If the MV feeding this table uses `toStartOfMonth(...)`, ClickHouse returns `Date`, while this table defines `time` as `DateTime`. Ensure the MV casts to `DateTime` (e.g., `toDateTime(toStartOfMonth(...))`) to avoid implicit casts or type mismatches.

Would you like me to scan the MV definitions to confirm the cast is present and propose a patch if needed?
internal/clickhouse/schema/054_create_verifications.key_verifications_per_hour_mv_v4.sql (2)

12-12: LGTM: aggregating spent_credits alongside count. `sum(spent_credits) as spent_credits` is the right aggregation and aligns with the v4 table expectations.

2-24: Confirm the target table schema includes spent_credits and matching keys. The MV outputs `spent_credits` and groups by (workspace_id, key_space_id, identity_id, key_id, outcome, time, tags). Ensure `verifications.key_verifications_per_hour_v4` defines the same dimension columns and includes `spent_credits Int64` so SummingMergeTree aligns.

If helpful, I can scan the schema files for `key_verifications_per_hour_v4` and verify column parity and ORDER BY keys.

apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts (1)
97-99: Refetch gating logic is sound. Conditionally enabling refetch when `endTime === timestamp` correctly toggles live updates for “now” views while avoiding churn for historical ranges.

internal/clickhouse/schema/058_create_verifications.key_verifications_per_month_mv_v4.sql (2)
1-24: LGTM! The materialized view correctly aggregates spent credits. It properly includes the new `spent_credits` column using `sum(spent_credits)` aggregation, consistent with the v4 table schema and other interval-based MVs in the PR.

26-27: Consider adding IF EXISTS to the DROP statement for safety. While the current `DROP VIEW` statement will work, adding `IF EXISTS` would make it more robust against edge cases where the view might not exist.

```diff
 -- +goose down
-DROP VIEW IF EXISTS verifications.key_verifications_per_month_mv_v4;
+DROP MATERIALIZED VIEW IF EXISTS verifications.key_verifications_per_month_mv_v4;
```

Likely an incorrect or invalid review comment.
apps/dashboard/app/(app)/apis/[apiId]/_overview/hooks/use-spent-credits.ts (1)

66-74: Potential issue with keyIds operator normalization. The code normalizes both "is" and "contains" operators to the same value, which might not be the intended behavior. The "contains" operator typically implies partial matching while "is" implies exact matching.

```diff
 case "keyIds": {
   if (typeof filter.value === "string" && filter.value.trim()) {
-    const keyIdOperator = operator === "is" || operator === "contains" ? operator : "is";
+    // Keep the original operator if it's valid, otherwise default to "is"
+    const keyIdOperator = ["is", "contains"].includes(operator) ? operator : "is";
     params.keyIds?.filters?.push({
       operator: keyIdOperator,
       value: filter.value,
     });
   }
   break;
 }
```

Likely an incorrect or invalid review comment.
internal/clickhouse/schema/060_create_verifications.key_verifications_per_minute_mv_v2.sql (1)

1-24: LGTM! The materialized view correctly implements per-minute aggregation with spent credits. The view properly aggregates the new `spent_credits` column and targets the correct v2 table, maintaining consistency with the overall schema versioning strategy.

internal/clickhouse/src/verifications.ts (3)
249-249: LGTM! Properly extends the data point schema with spent credits. The addition of `spent_credits` to the verification timeseries data point is consistent with the schema changes across the codebase.

374-375: Good implementation of spent credits aggregation in the timeseries query. The query correctly aggregates `spent_credits` using `SUM()` and includes it in the output map, maintaining consistency with other metric aggregations.

598-598: Correct handling of spent credits in merge logic. The merge function properly accumulates `spent_credits` when combining results from multiple batches, using the same null-safe pattern as other metrics.

internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql (2)
1-3: Destination table schema verified — spent_credits column present and Int64. Verified: verifications.key_verifications_per_day_v4 exists and declares spent_credits Int64; the materialized view selects sum(spent_credits) AS spent_credits — types align.
Files checked:
- internal/clickhouse/schema/055_create_verifications.key_verifications_per_day_v4.sql — spent_credits Int64
- internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql — sum(spent_credits) as spent_credits
12-12: No change required — spent_credits is non-nullable (DEFAULT 0). The raw column is defined as a non-Nullable Int64 with a default of 0, and the destination tables also use Int64, so SUM(spent_credits) cannot return NULL and coalescing is unnecessary.

Evidence:
- internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql — `ADD COLUMN IF NOT EXISTS spent_credits Int64 DEFAULT 0;`
- internal/clickhouse/schema/055_create_verifications.key_verifications_per_day_v4.sql — `spent_credits Int64`
- internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql — uses `sum(spent_credits) as spent_credits`
Additional inline comments (resolved or outdated) on:
- ...d/app/(app)/apis/[apiId]/_overview/components/charts/bar-chart/hooks/use-fetch-timeseries.ts
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/index.tsx (outdated)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/hooks/use-spent-credits.ts (outdated)
- ...s/[apiId]/keys/[keyAuthId]/[keyId]/components/charts/bar-chart/hooks/use-fetch-timeseries.ts
- internal/clickhouse/schema/059_create_verifications.key_verifications_per_minute_v2.sql (two threads)
internal/clickhouse/schema/060_create_verifications.key_verifications_per_minute_mv_v2.sql (lines 26-27):

```sql
-- +goose down
DROP VIEW IF EXISTS verifications.key_verifications_per_minute_mv_v2;
```

Use MATERIALIZED VIEW instead of VIEW in the DROP statement. The DROP statement should specify MATERIALIZED VIEW to match the creation statement.

```diff
 -- +goose down
-DROP VIEW IF EXISTS verifications.key_verifications_per_minute_mv_v2;
+DROP MATERIALIZED VIEW IF EXISTS verifications.key_verifications_per_minute_mv_v2;
```

🤖 Prompt for AI Agents: In internal/clickhouse/schema/060_create_verifications.key_verifications_per_minute_mv_v2.sql around lines 26-27, the DROP statement uses DROP VIEW but the object created is a MATERIALIZED VIEW; update the statement to drop the same object type by replacing the DROP VIEW line with DROP MATERIALIZED VIEW IF EXISTS verifications.key_verifications_per_minute_mv_v2; so the drop matches the creation and avoids mismatched object type errors.
…ey into eng-1849-track-spent-credits
This adds seeding data with credit spend so you can automatically see info about credit spend in the dashboard. It gives about 15% of the requests with remaining credits a +1 credit spend.

- Adds seeding
- Adds charts
- Adds logs
- Allows filtering
@coderabbitai full review
✅ Actions performed: Full review triggered.
Actionable comments posted: 63
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (14)
tools/local/src/cmd/seed/utils.ts (1)
256-261: Filter hard-coded specialDates to the last 30-day windowThe current implementation in
generateTimestamp()unconditionally returns timestamps from fixed dates (March 15 and April 1, 2025) 10% of the time—well outside the rolling 30-day range defined bythirtyDaysAgoandnow. As a result, seed data will unexpectedly spike on stale dates rather than within the recent window.Please update the “special dates” branch to first filter out any out-of-range entries, falling back to the weighted distribution when none remain. For example:
  // Special dates that might have meaning (you can customize these)
- const specialDates = [
-   new Date(2025, 2, 15), // March 15, 2025
-   new Date(2025, 3, 1), // April 1, 2025
- ];
+ const specialDates = [
+   new Date(2025, 2, 15), // March 15, 2025
+   new Date(2025, 3, 1), // April 1, 2025
+ ];
  // 10% chance to generate a timestamp on a special date
  if (Math.random() < 0.1 && specialDates.length > 0) {
-   const specialDate = specialDates[Math.floor(Math.random() * specialDates.length)];
+   const candidates = specialDates.filter((d) => d >= thirtyDaysAgo && d <= now);
+   if (candidates.length > 0) {
+     const specialDate =
+       candidates[Math.floor(Math.random() * candidates.length)];
      // Set to business hours for special dates
      specialDate.setHours(
        9 + Math.floor(Math.random() * 8),
        Math.floor(Math.random() * 60),
        Math.floor(Math.random() * 60),
        0,
      );
-     return Math.floor(specialDate.getTime());
+     return specialDate.getTime();
+   }
+   // No valid recent special dates → fall back to weighted distribution
  }

• File:
tools/local/src/cmd/seed/utils.ts
• Affects lines around 256–261 and the special-date branch at 303–309.

If you'd like to preserve the ability to seed truly out-of-range spikes, consider adding an explicit flag (e.g. allowOutOfRangeSpecialDates) and guarding this filter behind it.

apps/api/src/pkg/clickhouse-proxy.ts (1)
17-31: Validate spent_credits before sending to prevent corrupt analytics.
spent_credits is optional (good for back-compat), but when present it should be a non-negative safe integer to match ClickHouse Int64 semantics. Add a lightweight guard to fail fast on bad inputs instead of ingesting junk.

  async insertVerifications(
    events: Array<{
      request_id: string;
      time: number;
      workspace_id: string;
      key_space_id: string;
      key_id: string;
      region: string;
      outcome: string;
      identity_id?: string;
      tags?: string[];
      spent_credits?: number;
    }>,
  ): Promise<void> {
+   // Basic shape checks for new numeric field
+   for (const e of events) {
+     if (e.spent_credits != null) {
+       if (!Number.isSafeInteger(e.spent_credits) || e.spent_credits < 0) {
+         throw new Error(`Invalid spent_credits: expected a non-negative safe integer, got ${e.spent_credits}`);
+       }
+     }
+   }
    await this.sendEvents("/_internal/chproxy/verifications", events);
  }

Follow-up: consider extracting a shared VerificationEvent type (used here and in analytics.ts) to avoid drift.
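A minimal sketch of that shared type, plus a guard mirroring the check suggested in this comment, could live in one module imported by both call sites (the name VerificationEvent and the guard function are assumptions, not existing code):

```typescript
// Shared shape for verification events sent to ClickHouse (hypothetical name).
type VerificationEvent = {
  request_id: string;
  time: number;
  workspace_id: string;
  key_space_id: string;
  key_id: string;
  region: string;
  outcome: string;
  identity_id?: string;
  tags?: string[];
  spent_credits?: number;
};

// Fail fast on values that would violate ClickHouse Int64 semantics.
function assertValidSpentCredits(event: VerificationEvent): void {
  const v = event.spent_credits;
  if (v != null && (!Number.isSafeInteger(v) || v < 0)) {
    throw new Error(
      `Invalid spent_credits: expected a non-negative safe integer, got ${v}`,
    );
  }
}
```

Both insertVerifications and insertKeyVerification could then accept VerificationEvent values and call the guard before sending, so the two payload shapes cannot drift apart silently.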
tools/local/src/cmd/seed/event-generator.ts (1)
178-192: Seeded verification events now carry spent_credits; propagate into generated API response bodies

Great to initialize spent_credits here and finalize it in biasVerificationOutcome. To keep seed API-response logs consistent with the live API (which returns spentCredits/spent_credits in payloads), also include it in the generated response bodies.
Apply this additional change outside the current hunk to include spent credits in API responses generated by seeds:
@@ if (verificationEvent.outcome !== "VALID") { @@
- apiRequest.response_body = JSON.stringify({
-   valid: false,
-   code: String(apiRequest.response_status),
-   message: apiRequest.error,
- });
+ apiRequest.response_body = JSON.stringify({
+   valid: false,
+   code: String(apiRequest.response_status),
+   message: apiRequest.error,
+   spentCredits: 0,
+ });
  } else {
@@
  apiRequest.response_status = 200;
  apiRequest.response_body = JSON.stringify({
    valid: true,
    keyId: verificationEvent.key_id,
    ownerId: `user_${generateRandomString(16)}`,
    meta: Math.random() < 0.7 ? generateMetadata() : undefined,
+   spentCredits: verificationEvent.spent_credits,
  });

apps/api/src/pkg/keys/service.ts (1)
660-672: USAGE_EXCEEDED payload shape includes extra fields not in InvalidResponse

The object literal includes keyId, apiId, ownerId, expires, isRootKey, authorizedWorkspaceId, which aren't part of InvalidResponse. TS will accept this under the union, but it bloats the contract.
Either expand InvalidResponse to include these as optional (documented) or trim them here to keep the error shape minimal and predictable.
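If trimming is the chosen route, a small allow-list helper keeps the payload aligned with the contract (a sketch; the field list and the helper name are illustrative assumptions, not the actual InvalidResponse definition):

```typescript
// Fields assumed to belong to the minimal InvalidResponse contract (illustrative).
const INVALID_RESPONSE_KEYS = ["valid", "code", "message"] as const;

// Drop any extra properties so the error shape stays minimal and predictable.
function trimToInvalidResponse(
  payload: Record<string, unknown>,
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const key of INVALID_RESPONSE_KEYS) {
    if (key in payload) {
      out[key] = payload[key];
    }
  }
  return out;
}
```

Centralizing the allow-list means a future field added to the contract only needs one edit, and stray fields like keyId or apiId can never leak into the error shape.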
apps/api/src/routes/v1_keys_verifyKey.ts (1)
399-412: Normalize spent_credits to a non-negative integer before logging

We've confirmed that the analytics contract uses a snake_case spent_credits field of type Int64 (ClickHouse schema, Go proxy, and Zod definitions), so passing negative, fractional, or non-numeric values could violate the contract or lead to unexpected behavior. Clamp and truncate val.spentCredits to ensure it's always a finite, non-negative integer.

Apply the following diff at apps/api/src/routes/v1_keys_verifyKey.ts (around lines 398–412):

- c.executionCtx.waitUntil(
-   analytics
-     .insertKeyVerification({
+ // Normalize spent credits: ensure a finite, non-negative integer for ClickHouse Int64
+ const rawSpent = (typeof val.spentCredits === "number" && Number.isFinite(val.spentCredits))
+   ? val.spentCredits
+   : 0;
+ const spentCredits = rawSpent > 0 ? Math.trunc(rawSpent) : 0;
+
+ c.executionCtx.waitUntil(
+   analytics
+     .insertKeyVerification({
        request_id: c.get("requestId"),
        time: Date.now(),
        workspace_id: val.key.workspaceId,
        key_space_id: val.key.keyAuthId,
        key_id: val.key.id,
        // @ts-expect-error
        region: c.req.raw.cf.colo ?? "",
        outcome: val.code,
        identity_id: val.identity?.id,
        tags: req.tags ?? [],
-       spent_credits: val.spentCredits ?? 0,
+       spent_credits: spentCredits,
      })

This enforces the ClickHouse Int64 requirement and aligns with backend guidance on clamping credit values.
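The clamp-and-truncate rule can also live in a small pure helper shared by callers (a sketch; normalizeSpentCredits is a hypothetical name, not existing code):

```typescript
// Normalize an arbitrary input to a finite, non-negative integer suitable
// for a ClickHouse Int64 column: non-numbers, NaN/Infinity, and negatives
// become 0; fractions are truncated toward zero.
function normalizeSpentCredits(value: unknown): number {
  if (typeof value !== "number" || !Number.isFinite(value)) {
    return 0;
  }
  return value > 0 ? Math.trunc(value) : 0;
}
```

A single helper makes the normalization trivially unit-testable and keeps the route handler free of inline clamping logic.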
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/bar-chart/hooks/use-fetch-timeseries.ts (1)
129-139: Add explicit spent_credits to your Timeseries item types

TypeScript currently treats timeseries items as either the very generic TimeseriesData (which uses an index signature) or BaseTimeseriesData (which only defines originalTimestamp and total), so typos against spent_credits won't be caught. To lock down the mapped shape and enable accurate autocomplete, please update your types as follows:

• In apps/dashboard/components/logs/overview-charts/types.ts, change

export type TimeseriesData = {
  originalTimestamp: number;
  [key: string]: unknown;
};

to something like:

export type TimeseriesData = {
  originalTimestamp: number;
  spent_credits: number;
  [key: string]: unknown;
};

• In apps/dashboard/components/stats-card/components/chart/stats-chart.ts, extend BaseTimeseriesData to include spent_credits:

export type BaseTimeseriesData = {
  originalTimestamp: number;
  total: number;
  spent_credits?: number; // add this line
};

• In your bar-chart hook at apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/bar-chart/hooks/use-fetch-timeseries.ts, annotate the mapped return type so it includes spent_credits: number:

interface VerificationTimeseriesItem {
  displayX: string;
  originalTimestamp: number;
  valid: number;
  total: number;
  success: number;
  error: number;
  spent_credits: number;
  [outcome: string]: number | string;
}

const timeseries: VerificationTimeseriesItem[] = data.timeseries.map(ts => ({
  /* … */,
  spent_credits: ts.y.spent_credits ?? 0,
}));

These updates will prevent typos (e.g. spentCredits vs. spent_credits), keep charts in sync, and restore full editor support.

go/internal/services/keys/verifier.go (1)
124-148: Consider emitting a Prometheus counter for spent credits

If credits influence billing or quotas, a Prometheus Counter (e.g., key_verifications_spent_credits_total{key_type}) can complement CH for near-real-time alerting.
If desired, I can sketch a minimal metric wiring aligned with your metrics.KeyVerificationsTotal pattern.
apps/api/src/pkg/analytics.ts (1)
63-89: Normalize spent_credits in the proxy branch

We've confirmed that both the proxy client's insertVerifications (apps/api/src/pkg/clickhouse-proxy.ts) and the direct ClickHouse inserter (this.clickhouse.verifications.insert, backed by a Zod schema with default 0) accept an optional spent_credits field, and that all ClickHouse schemas/migrations define spent_credits with DEFAULT 0. To guard against undefined or negative upstream values, apply this minimal normalization in the proxy branch of apps/api/src/pkg/analytics.ts:

  public get insertKeyVerification() {
    if (this.proxyClient) {
      return async (event: {
        request_id: string;
        time: number;
        workspace_id: string;
        key_space_id: string;
        key_id: string;
        region: string;
        outcome: string;
        identity_id?: string;
        tags?: string[];
        spent_credits?: number;
      }) => {
-       return await wrap(
+       const normalizedEvent = {
+         ...event,
+         // default missing to 0 and clamp negative values
+         spent_credits: Math.max(0, event.spent_credits ?? 0),
+       };
+       return await wrap(
          // biome-ignore lint/style/noNonNullAssertion: proxyClient existence verified above
-         this.proxyClient!.insertVerifications([event]),
+         this.proxyClient!.insertVerifications([normalizedEvent]),
          (err) => new FetchError({ message: err.message, retry: true }),
        );
      };
    }
    return this.clickhouse.verifications.insert;
  }

apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/index.tsx (1)
54-56: Selection buffer uses active-keys granularity even when selecting in credit-spend chart

When the user selects on the left (credit-spend) chart, the buffer still derives from activeKeys granularity. If these differ, the range may be off.
Recommend passing the emitting chart’s granularity through onSelectionChange or having each chart call a granularity-aware handler.
Would you like me to draft a small change to OverviewBarChart/AreaChart to include granularity in the selection callback so the handler can compute the correct buffer?
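One minimal shape for that change: each chart passes its own granularity through the selection callback, and the handler derives the buffer from it (a sketch; the Granularity labels and the millisecond mappings are assumptions, not the dashboard's actual types):

```typescript
// Hypothetical granularity labels and their bucket sizes in milliseconds.
type Granularity = "perMinute" | "perHour" | "perDay";

const GRANULARITY_MS: Record<Granularity, number> = {
  perMinute: 60_000,
  perHour: 3_600_000,
  perDay: 86_400_000,
};

// Pad a selected range by one bucket on each side so edge buckets are included.
function bufferSelection(
  start: number,
  end: number,
  granularity: Granularity,
): { start: number; end: number } {
  const bucket = GRANULARITY_MS[granularity];
  return { start: start - bucket, end: end + bucket };
}

// Each chart would then emit its own granularity, e.g.:
// onSelectionChange={({ start, end }) =>
//   apply(bufferSelection(start, end, chartGranularity))}
```

Because the emitting chart supplies its own granularity, the credit-spend chart and the active-keys chart can no longer disagree about the buffer size.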
internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql (1)
2-15: Consider POPULATE/backfill strategy

The MV omits POPULATE. If you need historical data in v4, document the backfill steps (a manual INSERT … SELECT or a one-off job). If fresh-only is intended, add a comment noting that.
internal/clickhouse/src/keys/keys.ts (1)
14-21: Add schema-level time range validation

Mirror the TRPC refinement here so callers of this module (outside TRPC) also get guardrails.
-export const keysOverviewLogsParams = z.object({
+export const keysOverviewLogsParams = z
+  .object({
    workspaceId: z.string(),
    keyspaceId: z.string(),
    limit: z.number().int(),
    startTime: z.number().int(),
    endTime: z.number().int(),
    creditSpendMode: z.boolean().optional().default(false),
    outcomes: z
      .array(
        z.object({
          value: z.enum(KEY_VERIFICATION_OUTCOMES),
          operator: z.literal("is"),
        }),
      )
      .nullable(),
@@
    cursorTime: z.number().int().nullable(),
    sorts: z
      .array(
        z.object({
          column: z.enum(["time", "valid", "invalid", "spent_credits"]),
          direction: z.enum(["asc", "desc"]),
        }),
      )
      .nullable(),
-});
+  })
+  .refine((v) => v.endTime >= v.startTime, {
+    path: ["endTime"],
+    message: "endTime must be greater than or equal to startTime",
+  });

internal/clickhouse/schema/059_create_verifications.key_verifications_per_minute_v2.sql (1)
1-16: Optional: add TTL for retention

If you only need N months of minute-level data, add a TTL to prune old rows automatically (keeps storage/cost in check).
Example:
 ENGINE = SummingMergeTree()
 PARTITION BY toYYYYMM(time)
+TTL time + INTERVAL 6 MONTH
 ORDER BY (workspace_id, key_space_id, identity_id, key_id, time, tags, outcome)

Adjust the interval to your retention policy.
internal/clickhouse/schema/057_create_verifications.key_verifications_per_month_v4.sql (1)
5-12: Use LowCardinality for repeated string dimensions

Apply LowCardinality to workspace_id, key_space_id, identity_id, key_id to reduce storage and speed aggregations at monthly granularity as well.
- workspace_id String,
- key_space_id String,
- identity_id String,
- key_id String,
+ workspace_id LowCardinality(String),
+ key_space_id LowCardinality(String),
+ identity_id LowCardinality(String),
+ key_id LowCardinality(String),
6-23: Cast high-repeat dimensions to LowCardinality in the MV SELECT (if table DDL is updated)

If you update the target table to LowCardinality, cast here to avoid type mismatch and let CH store dictionary-encoded keys end-to-end.
-SELECT
-  workspace_id,
-  key_space_id,
-  identity_id,
-  key_id,
+SELECT
+  toLowCardinality(workspace_id) AS workspace_id,
+  toLowCardinality(key_space_id) AS key_space_id,
+  toLowCardinality(identity_id) AS identity_id,
+  toLowCardinality(key_id) AS key_id,
   outcome,
   count(*) as count,
   sum(spent_credits) as spent_credits,
   toStartOfMinute(fromUnixTimestamp64Milli(time)) AS time,
   tags
 FROM verifications.raw_key_verifications_v1
 GROUP BY
-  workspace_id,
-  key_space_id,
-  identity_id,
-  key_id,
+  toLowCardinality(workspace_id),
+  toLowCardinality(key_space_id),
+  toLowCardinality(identity_id),
+  toLowCardinality(key_id),
   outcome,
   time,
   tags
♻️ Duplicate comments (18)
apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/charts/bar-chart/hooks/use-fetch-timeseries.ts (1)
132-133: Correct fallback: use nullish coalescing for numeric spent_credits

ts.y.spent_credits ?? 0 avoids clobbering valid 0 values. This aligns with earlier guidance.
Confirm all other chart hooks use ?? for spent_credits:
#!/bin/bash
set -euo pipefail
rg -n -C2 --type=ts '\bspent_credits\s*\?\?\s*0\b' apps/dashboard || true
rg -n -C2 --type=ts '\bspent_credits\s*\|\|\s*0\b' apps/dashboard || true

go/internal/services/keys/validation.go (1)
45-52: Clamp negative cost before tracking spentCredits

If a negative cost slips through, this records negative credits. Clamp before assignment to avoid corrupt analytics and to stay consistent with usage accounting.
Apply:
 if !usage.Valid {
   k.setInvalid(StatusUsageExceeded, "Key usage limit exceeded.")
 } else {
-  // Only track spent credits if the usage was valid (credits were actually deducted)
-  k.spentCredits = int64(cost)
+  // Only track spent credits if the usage was valid (credits were actually deducted)
+  tracked := cost
+  if tracked < 0 {
+    tracked = 0
+  }
+  k.spentCredits = int64(tracked)
 }

Based on retrieved learnings (keys with unlimited usage bypass the limiter; non-idempotent credit decrements must not be retried; zero-cost requests should not alter remaining), verify behavior across edge cases with a quick grep:
#!/bin/bash
# Verify no retries wrap usage limiter updates
rg -nP -C2 'WithRetry|Retry|retry' go/internal/services/usagelimiter go/internal/services/keys
# Ensure unlimited keys short-circuit and don't set spentCredits
rg -nP -C3 'RemainingRequests\.Valid|spentCredits' go/internal/services/keys
# Ensure zero-cost requests do not change remaining and track 0 credits
rg -nP -C3 'cost.*0|Cost:\s*0' go/internal/services/usagelimiter go/internal/services/keys

internal/clickhouse/schema/053_create_verifications.key_verifications_per_hour_v4.sql (1)
11-12: Add DEFAULT 0 to numeric aggregates to harden inserts

Mirroring the earlier feedback: count and spent_credits should default to 0 to survive partial rollouts/omitted fields.
Apply this diff:
- count Int64,
- spent_credits Int64
+ count Int64 DEFAULT 0,
+ spent_credits Int64 DEFAULT 0

internal/clickhouse/schema/055_create_verifications.key_verifications_per_day_v4.sql (1)
11-12: Add DEFAULT 0 to count and spent_credits for SummingMergeTree safety and phased rollouts

Without DEFAULT 0, inserts can fail during mixed-deploy windows and SUM semantics may produce NULL-like surprises. Align with your hourly table convention.
- count Int64,
- spent_credits Int64
+ count Int64 DEFAULT 0,
+ spent_credits Int64 DEFAULT 0

If this migration already ran in any environment, follow up with:
ALTER TABLE verifications.key_verifications_per_day_v4
  MODIFY COLUMN count Int64 DEFAULT 0,
  MODIFY COLUMN spent_credits Int64 DEFAULT 0;

Do you want a separate migration file for that ALTER?
internal/clickhouse/schema/054_create_verifications.key_verifications_per_hour_mv_v4.sql (1)
14-15: Normalize tags in MV to prevent PK cardinality explosion

Group by raw tags will fragment rows due to non-deterministic ordering/duplicates. Canonicalize with arraySort(arrayDistinct(tags)) in SELECT (and rely on the alias in GROUP BY).
Apply this diff:
 SELECT
@@
-  tags
+  arraySort(arrayDistinct(tags)) AS tags
@@
-GROUP BY
+GROUP BY
   workspace_id,
   key_space_id,
   identity_id,
   key_id,
   outcome,
   time,
-  tags
+  tags

This keeps GROUP BY stable via the alias while ensuring consistent ordering and deduplication.
Also applies to: 17-24
internal/clickhouse/schema/058_create_verifications.key_verifications_per_month_mv_v4.sql (1)
14-15: Normalize tags in MV to prevent PK cardinality explosion (month-level)

Same as hour-level MV: normalize tags via arraySort(arrayDistinct(tags)) to stabilize GROUP BY/ORDER BY.
Apply this diff:
 SELECT
@@
-  tags
+  arraySort(arrayDistinct(tags)) AS tags
@@
-GROUP BY
+GROUP BY
   workspace_id,
   key_space_id,
   identity_id,
   key_id,
   outcome,
   time,
-  tags
+  tags
apps/dashboard/lib/trpc/routers/key/query-spent-credits/index.ts (3)
2-2: Fix isNull import to avoid relying on a re-export

Import isNull from drizzle-orm (or explicitly re-export it from "@/lib/db"). The current import can break typing/builds.
Apply this diff:
-import { db, isNull } from "@/lib/db";
+import { db } from "@/lib/db";
+import { isNull } from "drizzle-orm";
7-36: Validate time range at the schema level

Reject invalid ranges early so the resolver doesn't hit ClickHouse with bad inputs.
-const querySpentCreditsSchema = z.object({
-  keyId: z.string(),
-  keyspaceId: z.string(),
-  startTime: z.number().int(),
-  endTime: z.number().int(),
-  outcomes: z
-    .array(
-      z.object({
-        value: z.enum([
-          "VALID",
-          "RATE_LIMITED",
-          "INSUFFICIENT_PERMISSIONS",
-          "FORBIDDEN",
-          "DISABLED",
-          "EXPIRED",
-          "USAGE_EXCEEDED",
-        ]),
-        operator: z.literal("is"),
-      }),
-    )
-    .nullable()
-    .optional(),
-  tags: z
-    .object({
-      operator: z.enum(["is", "contains", "startsWith", "endsWith"]),
-      value: z.string(),
-    })
-    .nullable()
-    .optional(),
-});
+const querySpentCreditsSchema = z
+  .object({
+    keyId: z.string(),
+    keyspaceId: z.string(),
+    startTime: z.number().int(),
+    endTime: z.number().int(),
+    outcomes: z
+      .array(
+        z.object({
+          value: z.enum([
+            "VALID",
+            "RATE_LIMITED",
+            "INSUFFICIENT_PERMISSIONS",
+            "FORBIDDEN",
+            "DISABLED",
+            "EXPIRED",
+            "USAGE_EXCEEDED",
+          ]),
+          operator: z.literal("is"),
+        }),
+      )
+      .nullable()
+      .optional(),
+    tags: z
+      .object({
+        operator: z.enum(["is", "contains", "startsWith", "endsWith"]),
+        value: z.string(),
+      })
+      .nullable()
+      .optional(),
+  })
+  .refine((v) => v.endTime >= v.startTime, {
+    path: ["endTime"],
+    message: "endTime must be greater than or equal to startTime",
+  });
73-81: Harden ClickHouse call with explicit TRPCError on failure

Prevent leaking raw errors and provide a consistent client surface.
- const result = await clickhouse.verifications.spentCreditsTotal({
-   workspaceId: ctx.workspace.id,
-   keyspaceId: input.keyspaceId,
-   keyId: input.keyId,
-   startTime: input.startTime,
-   endTime: input.endTime,
-   outcomes: input.outcomes || null,
-   tags: input.tags || null,
- });
+ let result;
+ try {
+   result = await clickhouse.verifications.spentCreditsTotal({
+     workspaceId: ctx.workspace.id,
+     keyspaceId: input.keyspaceId,
+     keyId: input.keyId,
+     startTime: input.startTime,
+     endTime: input.endTime,
+     outcomes: input.outcomes || null,
+     tags: input.tags || null,
+   });
+ } catch (_err) {
+   throw new TRPCError({
+     code: "INTERNAL_SERVER_ERROR",
+     message:
+       "Failed to query spent credits. If this persists, contact support@unkey.dev with the time this occurred.",
+   });
+ }

internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql (1)
26-27: Action required: remove/deprecate v3 artifacts or migrate all consumers to v4

This v4 MV coexists with v3 artifacts referenced across code/tests/docs. Migrate consumers to v4 and drop v3, or clearly deprecate v3.
apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/controls/index.tsx (1)
127-198: Duplication with "Remaining Credits" block; extract a tiny component

The animated container + StatusBadge pattern is duplicated. Extract a MotionMetricBadge to DRY this up and centralize animation/styling.
apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts (1)
56-60: Trim tag values before sending to the backend

You already guard with .trim() in the condition, but you still pass the untrimmed value to params.tags. Normalize it to avoid backend mismatches and duplicate cache keys due to trailing spaces.
- params.tags = {
-   operator,
-   value: filter.value,
- };
+ params.tags = {
+   operator,
+   value: filter.value.trim(),
+ };

internal/clickhouse/schema/059_create_verifications.key_verifications_per_minute_v2.sql (2)
5-11: Use LowCardinality for high-repeat dimensions

workspace_id, key_space_id, identity_id, and key_id repeat heavily; LowCardinality reduces storage and speeds GROUP BY.
- workspace_id String,
- key_space_id String,
- identity_id String,
- key_id String,
+ workspace_id LowCardinality(String),
+ key_space_id LowCardinality(String),
+ identity_id LowCardinality(String),
+ key_id LowCardinality(String),
14-16: Add PARTITION BY to avoid unbounded single-partition growth

Without partitioning, the minute table will accumulate into one partition, hurting merges and retention. Partition monthly on time.
 ENGINE = SummingMergeTree()
+PARTITION BY toYYYYMM(time)
 ORDER BY (workspace_id, key_space_id, identity_id, key_id, time, tags, outcome)

internal/clickhouse/schema/057_create_verifications.key_verifications_per_month_v4.sql (1)
14-15: Partition the monthly aggregate table

Add PARTITION BY toYYYYMM(time) for predictable retention and pruning.
 ENGINE = SummingMergeTree()
+PARTITION BY toYYYYMM(time)
 ORDER BY (workspace_id, key_space_id, identity_id, key_id, time, tags, outcome)

internal/clickhouse/schema/060_create_verifications.key_verifications_per_minute_mv_v2.sql (3)
26-27: Drop the correct object type in down migration

The object created is a MATERIALIZED VIEW. Use the matching DROP statement to avoid goose rollback failures.
--- +goose down
-DROP VIEW IF EXISTS verifications.key_verifications_per_minute_mv_v2;
+-- +goose down
+DROP MATERIALIZED VIEW IF EXISTS verifications.key_verifications_per_minute_mv_v2;

internal/clickhouse/src/verifications.ts (2)
682-694: Fix non-"is" tag operator semantics (contains/startsWith/endsWith)

Using a generic LIKE is incorrect and misses proper anchoring. Reuse the pattern used in other queries.

- if (args.tags) {
-   const tagCondition =
-     args.tags.operator === "is"
-       ? "has(tags, {tagValue: String})"
-       : "arrayExists(tag -> tag LIKE {tagValue: String}, tags)";
-   conditions.push(tagCondition);
-
-   paramSchemaExtension = {
-     ...paramSchemaExtension,
-     tagValue: z.string(),
-   };
- }
+ if (args.tags) {
+   // Single tag object version
+   let tagCondition: string;
+   switch (args.tags.operator) {
+     case "is":
+       tagCondition = "has(tags, {tagValue: String})";
+       break;
+     case "contains":
+       tagCondition = "arrayExists(tag -> position(tag, {tagValue: String}) > 0, tags)";
+       break;
+     case "startsWith":
+       tagCondition = "arrayExists(tag -> startsWith(tag, {tagValue: String}), tags)";
+       break;
+     case "endsWith":
+       tagCondition = "arrayExists(tag -> endsWith(tag, {tagValue: String}), tags)";
+       break;
+     default:
+       tagCondition = "TRUE";
+   }
+   conditions.push(tagCondition);
+   paramSchemaExtension = { ...paramSchemaExtension, tagValue: z.string() };
+ }

If you adopt an array shape for tags, iterate and join with AND, similar to getVerificationTimeseriesWhereClause.
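If tags does become an array of filters, the condition building could be sketched as follows, mapping each filter to a parameterized ClickHouse predicate and joining with AND (the builder function and its parameter naming are assumptions, not existing code):

```typescript
type TagFilter = {
  operator: "is" | "contains" | "startsWith" | "endsWith";
  value: string;
};

// Build one parameterized predicate per filter; parameters are numbered so
// each tag value binds to its own placeholder.
function buildTagConditions(filters: TagFilter[]): {
  clause: string;
  params: Record<string, string>;
} {
  const parts: string[] = [];
  const params: Record<string, string> = {};
  filters.forEach((filter, i) => {
    const name = `tagValue${i}`;
    params[name] = filter.value;
    switch (filter.operator) {
      case "is":
        parts.push(`has(tags, {${name}: String})`);
        break;
      case "contains":
        parts.push(`arrayExists(tag -> position(tag, {${name}: String}) > 0, tags)`);
        break;
      case "startsWith":
        parts.push(`arrayExists(tag -> startsWith(tag, {${name}: String}), tags)`);
        break;
      case "endsWith":
        parts.push(`arrayExists(tag -> endsWith(tag, {${name}: String}), tags)`);
        break;
    }
  });
  return { clause: parts.join(" AND "), params };
}
```

Numbered placeholders keep every value bound separately, so two filters with different operators on the same tag prefix cannot collide.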
722-727: Normalize and validate spent_credits result (non-negative) before returning

Protects callers from negative totals and ensures null is returned as 0. Mirrors earlier guidance.

- return ch.query({
-   query,
-   params: spentCreditsParams.extend(paramSchemaExtension),
-   schema: spentCreditsResult,
- })(parameters);
+ const result = await ch.query({
+   query,
+   params: spentCreditsParams.extend(paramSchemaExtension),
+   schema: spentCreditsResult,
+ })(parameters);
+
+ // Defensive normalization
+ const val = result?.val?.[0]?.spent_credits ?? 0;
+ if (val < 0) {
+   console.error(
+     `[getSpentCreditsTotal] Negative spent_credits=${val} for workspace=${args.workspaceId}, keyspace=${args.keyspaceId}, keyId=${args.keyId ?? "<all>"}, range=[${args.startTime},${args.endTime}], outcomes=${(args.outcomes ?? []).map(o => o.value).join(",")}, tags=${args.tags ? `${args.tags.operator}:${args.tags.value}` : "<none>"}`
+   );
+   result.val[0].spent_credits = 0;
+ }
+ return result;
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (62)
- apps/agent/pkg/clickhouse/schema/requests.go (1 hunks)
- apps/api/src/pkg/analytics.ts (1 hunks)
- apps/api/src/pkg/auth/root_key.ts (1 hunks)
- apps/api/src/pkg/clickhouse-proxy.ts (1 hunks)
- apps/api/src/pkg/keys/service.ts (14 hunks)
- apps/api/src/routes/v1_keys_verifyKey.ts (2 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/bar-chart/hooks/use-fetch-timeseries.ts (1 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts (1 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/index.tsx (3 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/components/logs-metric-type.tsx (1 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/index.tsx (2 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/components/override-indicator.tsx (3 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/hooks/use-logs-query.ts (5 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/logs-table.tsx (6 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/query-logs.schema.ts (2 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/_overview/hooks/use-metric-type.ts (1 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/charts/bar-chart/hooks/use-fetch-timeseries.ts (1 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/charts/index.tsx (3 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/controls/components/logs-metric-type.tsx (1 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/controls/index.tsx (4 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/table/logs-table.tsx (4 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-metric-type.ts (1 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts (1 hunks)
- apps/dashboard/app/(app)/apis/[apiId]/page.tsx (1 hunks)
- apps/dashboard/components/logs/overview-charts/overview-bar-chart.tsx (6 hunks)
- apps/dashboard/lib/fmt.ts (1 hunks)
- apps/dashboard/lib/trpc/routers/api/keys/query-overview-logs/utils.ts (1 hunks)
- apps/dashboard/lib/trpc/routers/index.ts (4 hunks)
- apps/dashboard/lib/trpc/routers/key/query-spent-credits/index.ts (1 hunks)
- go/internal/services/keys/validation.go (1 hunks)
- go/internal/services/keys/verifier.go (2 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/001_database.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/002_raw_key_verifications_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/003_key_verifications_per_hour_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/004_key_verifications_per_hour_mv_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/005_key_verifications_per_day_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/006_key_verifications_per_day_mv_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/007_key_verifications_per_month_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/008_key_verifications_per_month_mv_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/009_key_verifications_per_hour_v2.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/010_key_verifications_per_hour_mv_v2.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/011_key_verifications_per_day_v2.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/012_key_verifications_per_day_mv_v2.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/014_key_verifications_per_month_mv_v2.sql (0 hunks)
- go/pkg/clickhouse/schema/databases/001_verifications/021_key_verifications_per_minute_v1.sql (0 hunks)
- go/pkg/clickhouse/schema/requests.go (1 hunks)
- internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql (1 hunks)
- internal/clickhouse/schema/053_create_verifications.key_verifications_per_hour_v4.sql (2 hunks)
- internal/clickhouse/schema/054_create_verifications.key_verifications_per_hour_mv_v4.sql (3 hunks)
- internal/clickhouse/schema/055_create_verifications.key_verifications_per_day_v4.sql (2 hunks)
- internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql (3 hunks)
- internal/clickhouse/schema/057_create_verifications.key_verifications_per_month_v4.sql (2 hunks)
- internal/clickhouse/schema/058_create_verifications.key_verifications_per_month_mv_v4.sql (3 hunks)
- internal/clickhouse/schema/059_create_verifications.key_verifications_per_minute_v2.sql (2 hunks)
- internal/clickhouse/schema/060_create_verifications.key_verifications_per_minute_mv_v2.sql (3 hunks)
- internal/clickhouse/src/index.ts (2 hunks)
- internal/clickhouse/src/keys/keys.test.ts (1 hunks)
- internal/clickhouse/src/keys/keys.ts (10 hunks)
- internal/clickhouse/src/verifications.ts (8 hunks)
- tools/local/src/cmd/seed/batch-operations.ts (4 hunks)
- tools/local/src/cmd/seed/event-generator.ts (6 hunks)
- tools/local/src/cmd/seed/utils.ts (1 hunks)
💤 Files with no reviewable changes (14)
- go/pkg/clickhouse/schema/databases/001_verifications/007_key_verifications_per_month_v1.sql
- go/pkg/clickhouse/schema/databases/001_verifications/001_database.sql
- go/pkg/clickhouse/schema/databases/001_verifications/014_key_verifications_per_month_mv_v2.sql
- go/pkg/clickhouse/schema/databases/001_verifications/004_key_verifications_per_hour_mv_v1.sql
- go/pkg/clickhouse/schema/databases/001_verifications/002_raw_key_verifications_v1.sql
- go/pkg/clickhouse/schema/databases/001_verifications/003_key_verifications_per_hour_v1.sql
- go/pkg/clickhouse/schema/databases/001_verifications/009_key_verifications_per_hour_v2.sql
- go/pkg/clickhouse/schema/databases/001_verifications/010_key_verifications_per_hour_mv_v2.sql
- go/pkg/clickhouse/schema/databases/001_verifications/008_key_verifications_per_month_mv_v1.sql
- go/pkg/clickhouse/schema/databases/001_verifications/012_key_verifications_per_day_mv_v2.sql
- go/pkg/clickhouse/schema/databases/001_verifications/011_key_verifications_per_day_v2.sql
- go/pkg/clickhouse/schema/databases/001_verifications/006_key_verifications_per_day_mv_v1.sql
- go/pkg/clickhouse/schema/databases/001_verifications/005_key_verifications_per_day_v1.sql
- go/pkg/clickhouse/schema/databases/001_verifications/021_key_verifications_per_minute_v1.sql
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-08-19T09:46:03.702Z
Learnt from: Flo4604
PR: unkeyed/unkey#3800
File: go/internal/services/usagelimiter/redis.go:176-186
Timestamp: 2025-08-19T09:46:03.702Z
Learning: Zero-cost requests in the usage limiter should not change the remaining credits but should still return the actual remaining credits for that key, not treat the key as unlimited. A key with 100 credits remaining should still report 100 credits remaining after a zero-cost request.
Applied to files:
apps/api/src/pkg/keys/service.ts
go/internal/services/keys/validation.go
📚 Learning: 2025-02-27T14:08:53.557Z
Learnt from: ogzhanolguncu
PR: unkeyed/unkey#2918
File: apps/dashboard/app/(app)/apis/_components/hooks/use-query-timeseries.ts:0-0
Timestamp: 2025-02-27T14:08:53.557Z
Learning: The useFetchVerificationTimeseries hook intentionally uses TIMESERIES_DATA_WINDOW * 24 to create a daily interval (24 hours) instead of the standard 1-hour interval used in other hooks.
Applied to files:
apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/charts/bar-chart/hooks/use-fetch-timeseries.ts
📚 Learning: 2025-06-24T13:29:10.129Z
Learnt from: ogzhanolguncu
PR: unkeyed/unkey#3401
File: apps/dashboard/app/(app)/logs/filters.query-params.ts:10-0
Timestamp: 2025-06-24T13:29:10.129Z
Learning: The `buildQueryParams` function in `apps/dashboard/app/(app)/logs/filters.query-params.ts` calls `useFilters()` hook inside it, but this is valid because the function is only called from within other React hooks, maintaining the Rules of Hooks compliance.
Applied to files:
apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/hooks/use-logs-query.ts
📚 Learning: 2025-08-19T09:42:40.919Z
Learnt from: Flo4604
PR: unkeyed/unkey#3800
File: go/internal/services/keys/validation.go:45-52
Timestamp: 2025-08-19T09:42:40.919Z
Learning: In go/internal/services/keys/validation.go, keys with unlimited usage (RemainingRequests.Valid = false) have an early return that bypasses the usage limiter entirely. The usage limiter is only called for keys with finite remaining request counts.
Applied to files:
go/internal/services/keys/validation.go
📚 Learning: 2025-08-21T15:54:45.183Z
Learnt from: chronark
PR: unkeyed/unkey#3825
File: go/internal/services/usagelimiter/limit.go:38-0
Timestamp: 2025-08-21T15:54:45.183Z
Learning: In go/internal/services/usagelimiter/limit.go, the UpdateKeyCreditsDecrement operation cannot be safely wrapped with db.WithRetry due to the lack of idempotency mechanisms in the current tech stack. Retrying this non-idempotent write operation risks double-charging users if the first attempt commits but the client sees a transient error.
Applied to files:
go/internal/services/keys/validation.go
📚 Learning: 2025-08-22T12:50:06.992Z
Learnt from: perkinsjr
PR: unkeyed/unkey#3775
File: apps/dashboard/lib/trpc/routers/api/keys/query-overview-timeseries/index.ts:1-2
Timestamp: 2025-08-22T12:50:06.992Z
Learning: The team at Unkey is comfortable accepting TRPC schema imports from app route folders in server-side TRPC routers as technical debt, since they're planning to migrate away from TRPC to calling their v2 API directly, making these imports temporary and not worth refactoring.
Applied to files:
apps/dashboard/lib/trpc/routers/index.ts
apps/dashboard/lib/trpc/routers/key/query-spent-credits/index.ts
🔇 Additional comments (34)
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/components/override-indicator.tsx (1)
121-121: LGTM: href now reflects current filters

Using `href={buildKeyDetailUrl()}` preserves `metricType`, `since`, `tags`, and `outcomes` across navigation, aligning the overview with key details. With the refactor above, you'll also avoid malformed paths and keep multi-valued filters intact.

Please confirm the key-details route shape is `/apis/[apiId]/keys/[keyAuthId]/[keyId]` and that the key-details page recognizes all preserved query keys (including the new `metricType` for spent credits).

apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/query-logs.schema.ts (1)
15-15: Default is good; propagate to the fetch layer.

Adding `creditSpendMode` with a default of `false` is sensible. Ensure downstream hooks/components (e.g., use-logs-query) pass this flag through when toggled in the UI so that server queries include credit-aware projections/sorts.

I can provide a quick repo scan to verify all call sites set this when in "Credits" mode.
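A minimal sketch of the defaulting behavior being reviewed (plain TypeScript standing in for the zod schema, so the payload shape is illustrative):

```typescript
// Mirrors `z.boolean().optional().default(false)`: an omitted field resolves
// to false, so older clients that never send the flag keep their behavior.
type QueryLogsPayload = { creditSpendMode?: boolean };

function normalizeCreditSpendMode(payload: QueryLogsPayload): boolean {
  return payload.creditSpendMode ?? false;
}
```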
tools/local/src/cmd/seed/event-generator.ts (2)
1-1: Good switch to node:crypto's createHash

Using a named import from `node:crypto` is correct for Node-only tooling and avoids bundler polyfills.
250-252: Correct: finalize spent_credits after outcome biasing

Setting `spent_credits` after the outcome is chosen ensures cost rules apply only to VALID results and usage-limited keys. Looks good.
apps/api/src/pkg/keys/service.ts (3)
401-403: NOT_FOUND response includes spentCredits = 0; good consistency

A consistent response shape helps clients. No concerns here.
442-443: All early-return error paths consistently set spentCredits: 0

Nice attention to detail; this spares the UI and analytics from extra conditional handling.
Also applies to: 456-457, 477-478, 494-495, 508-509, 553-554, 624-625
629-654: Validation complete: spentCredits mapping and semantics correct
- Credit deduction logic uses `limited.valid` appropriately; zero-cost requests report `spentCredits = 0` and leave `remaining` unchanged.
- Verified that camelCase `spentCredits` maps to snake_case `spent_credits` in the analytics/ClickHouse paths:
- apps/api/src/routes/v1_keys_verifyKey.ts
- apps/api/src/pkg/clickhouse-proxy.ts
- apps/api/src/pkg/analytics.ts
- apps/api/src/pkg/auth/root_key.ts
Approving these code changes.
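The invariant verified here can be sketched in miniature (response shapes are illustrative, not the service's actual types):

```typescript
// Every path reports spentCredits explicitly: error paths always 0, the valid
// path the deducted cost. Making the field required encodes this invariant.
type VerifyResult =
  | { valid: true; remaining: number | null; spentCredits: number }
  | { valid: false; code: string; spentCredits: number };

function invalidResult(code: string): VerifyResult {
  return { valid: false, code, spentCredits: 0 };
}

function validResult(cost: number, remaining: number | null): VerifyResult {
  return { valid: true, remaining, spentCredits: cost };
}
```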
apps/api/src/pkg/auth/root_key.ts (1)
66-67: Analytics logs spent_credits: 0 for root keys; correct

Root key verifications should not consume credits; an explicit 0 keeps the schema uniform.
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/index.tsx (1)
8-19: LGTM: metric-type control placement is correct

Importing and placing `LogsMetricType` between the filters and date-time controls aligns with the expected UX and doesn't introduce prop drilling or re-renders beyond the container boundary.
apps/dashboard/lib/trpc/routers/api/keys/query-overview-logs/utils.ts (1)
52-64: creditSpendMode schema and params are consistent; no action needed

I've verified that:

- The client Zod schema (`keysQueryOverviewLogsPayload`) defines `creditSpendMode` as `z.boolean().optional().default(false)`, yielding a boolean on input.
- The TRPC transform (`transformKeysFilters`) applies `params.creditSpendMode ?? false`, matching the client's default.
- The backend ClickHouse params (`keysOverviewLogsParams`) likewise declare `creditSpendMode` as `z.boolean().optional().default(false)` and use it to build the correct filter (`args.creditSpendMode ? "spent_credits > 0" : "TRUE"`).

All layers use the same field name, type, default, and semantics, so there's no risk of inadvertently disabling credit-spend mode.
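The filter construction described in this comment can be sketched as a small pure function (the `args` shape is assumed from the description):

```typescript
// Sketch of the credit-spend predicate: in credit-spend mode only rows that
// actually consumed credits are selected; otherwise the predicate is a no-op.
type OverviewLogsArgs = { creditSpendMode?: boolean };

function creditSpendFilter(args: OverviewLogsArgs): string {
  return (args.creditSpendMode ?? false) ? "spent_credits > 0" : "TRUE";
}
```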
apps/dashboard/app/(app)/apis/[apiId]/page.tsx (1)
2-5: Good: navbar label follows selected metric type

Pulling the active label from `METRIC_TYPE_LABELS[metricType]` keeps the header consistent with the controls. Hook usage in a client component is appropriate.
Also applies to: 11-11, 19-19
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/bar-chart/hooks/use-fetch-timeseries.ts (1)
159-163: Good addition: defaulting spent_credits with nullish coalescing

Using `ts.y.spent_credits ?? 0` avoids unintended coalescing of valid falsy numbers and matches the pattern we want here. LGTM.

apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/table/logs-table.tsx (1)
22-25: Integration with metric-type looks good

Importing and using `isCreditSpendMode` to both adjust the Tags column width and inject the Credits column is clean and non-invasive to the rest of the table definition.

Also applies to: 185-185, 309-310
apps/dashboard/lib/trpc/routers/index.ts (1)
52-53: The output will help confirm how `queryKeySpentCredits` is wired under both `router.key` and `router.api.keys` and what middleware each uses. This ensures we verify intent, ownership checks, and RBAC at both call sites.
9-10: Threading creditSpendMode through the query is correct

Including `creditSpendMode` in the payload and dependency array ensures proper cache-keying and a refetch on mode toggles. Looks solid.

Also applies to: 25-25, 43-44, 125-126
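The cache-keying point can be illustrated with a tiny sketch (key layout assumed, not the hook's actual key):

```typescript
// Including creditSpendMode in the query key means toggling the mode produces a
// different key, so the client fetches fresh data instead of reusing a cached
// response with the wrong shape.
function buildQueryKey(apiId: string, creditSpendMode: boolean) {
  return ["keys-overview-logs", apiId, { creditSpendMode }] as const;
}
```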
go/internal/services/keys/verifier.go (1)
76-122: Spent credits only tracked on successful deduction
I verified that the only write to `k.spentCredits` occurs in `withCredits`, and only inside the "valid usage" branch (validation.go:51). No other methods (`withIPWhitelist`, `withPermissions`, or `withRateLimits`) touch `spentCredits`, and any early-return paths (permission checks, rate-limit failures, whitelist errors) leave it at zero. No further changes needed.
3-6: Action required: wire in the credit-spend hook for key-level charts

This chart component currently only imports and calls `useFetchVerificationTimeseries`; to support the "Credits Spent" metric you'll need to:

- Import the credit-spend hook:

  import { useFetchCreditSpendTimeseries } from "./credit-spend-chart/hooks/use-fetch-timeseries";

- Retrieve the toggle state from `useMetricType()`:

  const { isCreditSpendMode } = useMetricType();

- Select the appropriate timeseries based on `isCreditSpendMode`. Note that hooks must be called unconditionally, so call both and pick the result rather than branching between hook calls (a conditional hook call would violate the Rules of Hooks):

  import { useFetchVerificationTimeseries } from "./bar-chart/hooks/use-fetch-timeseries";
  import { useFetchCreditSpendTimeseries } from "./credit-spend-chart/hooks/use-fetch-timeseries";

  const { isCreditSpendMode } = useMetricType();
  const verification = useFetchVerificationTimeseries(apiId, keyId);
  const creditSpend = useFetchCreditSpendTimeseries(apiId, keyId);
  const { data, loading } = isCreditSpendMode ? creditSpend : verification;
- Ensure the returned timeseries data uses the `spent_credits` field for the credit-spend mode.

This will align the key-level charts with the logs metric-type toggle and correctly surface credit-spend data.

⛔ Skipped due to learnings

Learnt from: unrenamed
PR: #2660
File: apps/play/app/page-bk.tsx:24-24
Timestamp: 2024-11-13T14:58:01.321Z
Learning: In React, refs created with useRef do not change between renders, so including them in the dependency array of useEffect hooks is unnecessary.

Learnt from: ogzhanolguncu
PR: #3401
File: apps/dashboard/app/(app)/logs/filters.query-params.ts:10-0
Timestamp: 2025-06-24T13:29:10.129Z
Learning: The `buildQueryParams` function in `apps/dashboard/app/(app)/logs/filters.query-params.ts` calls the `useFilters()` hook inside it, but this is valid because the function is only called from within other React hooks, maintaining Rules of Hooks compliance.

internal/clickhouse/src/index.ts (2)

68-75: New export: getSpentCreditsTotal wire-up looks correct

Import location and naming are consistent with the other verifications exports.

111-116: It looks like the `getSpentCreditsTotal` function signature and its TRPC usage weren't surfaced. Let's confirm:

- That `verifications.ts` exists under `internal/clickhouse/src` and contains an exported `getSpentCreditsTotal` returning a `Promise<number>`.
- That the TRPC router at `apps/dashboard/lib/trpc/routers/key/query-spent-credits/index.ts` returns the result as a number (and/or has an appropriate zod schema).

Once we have that, we can be confident that downstream callers expect a number and update types as needed. I'll review those outputs next.

tools/local/src/cmd/seed/batch-operations.ts (2)

126-130: Good: clearer multi-line error reporting

The reformat improves readability of error context without changing behavior.

118-121: Return shape change is safe; no call sites consume the returned object

I searched the entire repository for any usages of `insertVerificationEvents` that destructure or assign its return value:

- Only two references found:
  - The definition in `tools/local/src/cmd/seed/batch-operations.ts`
  - The call in `tools/local/src/cmd/seed/apis.ts` at line 390, where it's simply `await`ed with no assignment
- No destructuring patterns (e.g. `const { keyUsageStats } = await insertVerificationEvents(...)`) exist
- No variables capture its return value anywhere

Since no code reads from the returned object, changing it to `{ keyUsageStats, creditStats }` cannot break existing consumers. The original warning is no longer needed.

internal/clickhouse/schema/053_create_verifications.key_verifications_per_hour_v4.sql (1)

1-2: Migration markers and naming are consistent

goose up/down markers and v4 naming look correct.

apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-metric-type.ts (1)

13-24: LGTM: clean URL-state wiring with clear mode flags

Defaulting via `parseAsStringLiteral(...).withDefault("requests")` is spot-on. The returned booleans are concise and expressive for render conditions.

apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/index.tsx (1)

122-132: TooltipPrefix nullability confirmed

The `OverviewBarChartProps` type declares

```ts
tooltipPrefix?: string | null;
```

(line 45 in `apps/dashboard/components/logs/overview-charts/overview-bar-chart.tsx`), so it accepts `null` as a valid value. The default is "All" (line 61), but overriding with `null` will not cause a type error. No change needed here.

internal/clickhouse/schema/054_create_verifications.key_verifications_per_hour_mv_v4.sql (1)
12-12: No changes needed for sum(spent_credits) – column is non-nullable
The `spent_credits` column is added in migration 052 as `Int64 DEFAULT 0` (non-Nullable), so `sum(spent_credits)` cannot yield NULL and defaults to 0 when there are no rows. Switching to `sumOrZero` isn't necessary in this case.

- See internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql:3
- The MV uses `sum(spent_credits)` safely (e.g. internal/clickhouse/schema/054_create_verifications.key_verifications_per_hour_mv_v4.sql:12)

apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/logs-table.tsx (1)
27-31: No action needed: hook already handles credit-spend mode

The `useKeysOverviewLogsQuery` hook imports and uses `useMetricType` internally, then passes `creditSpendMode: isCreditSpendMode` into its query parameters, so the logs query automatically switches shape in credit-spend mode. You don't need to pass the mode from the table component.

apps/agent/pkg/clickhouse/schema/requests.go (1)
26-27: LGTM: Go struct fields align with ClickHouse DDL
- Verified that migration 052 (`052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql`) adds `spent_credits Int64 DEFAULT 0`, matching the Go `int64` zero value and `ch:"spent_credits"` tag.
- Verified that migration 027 (`027_add_tags_to_verifications.raw_key_verifications_v1.sql`) adds `tags Array(String) DEFAULT []`, matching the Go `[]string` and `ch:"tags"` tag.

No further changes required; fields are correctly typed and tagged.
apps/dashboard/app/(app)/apis/[apiId]/_overview/hooks/use-metric-type.ts (1)
13-25: Hook implementation looks solid

Correct use of nuqs' `parseAsStringLiteral` with a safe default and helpful booleans for consumers. No issues.
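For reference, the parsing behavior nuqs provides here can be mirrored by a standalone function. The `"credits"` literal is an assumption for illustration; the `"requests"` default comes from the hook's `withDefault("requests")`:

```typescript
// Standalone mirror of parseAsStringLiteral([...]).withDefault("requests"):
// unknown or missing values fall back to the default rather than throwing.
const METRIC_TYPES = ["requests", "credits"] as const;
type MetricType = (typeof METRIC_TYPES)[number];

function parseMetricType(raw: string | null): MetricType {
  return (METRIC_TYPES as readonly string[]).includes(raw ?? "")
    ? (raw as MetricType)
    : "requests";
}
```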
internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql (1)
2-15: Ensure the MV's time type aligns with the target table

- The target table `verifications.key_verifications_per_day_v4` defines the `time` column as `DateTime` (see 055_create_verifications.key_verifications_per_day_v4.sql, line 4).
- The MV's SELECT uses `toStartOfDay(fromUnixTimestamp64Milli(time))`; by default (with `enable_extended_results_for_datetime_functions = 0`) this returns a `DateTime`, but if that setting is enabled it will return `DateTime64(3)`.
- To prevent creation or insert failures, please verify your cluster's `enable_extended_results_for_datetime_functions` setting and either:
  - keep the default behavior (`DateTime`), or
  - explicitly cast the expression to `DateTime`, e.g. `toDateTime(toStartOfDay(fromUnixTimestamp64Milli(time))) AS time`, or
  - alter the target column to `DateTime64(3)` if sub-second precision is required.

internal/clickhouse/src/keys/keys.ts (3)
199-201: Credit-spend filter is clear and safe

Nice use of a guardable condition string; it keeps the query readable and avoids injection.

270-297: Cursor condition correctness

Using `Nullable(UInt64)` and guarding with `IS NULL` for the first page is sound. No changes needed.
312-315: spent_credits type and nullability confirmed

The `raw_key_verifications_v1` table adds `spent_credits` as a non-nullable `Int64` with `DEFAULT 0` (no `Nullable(...)` wrapper), and all downstream materialized-view schemas define `spent_credits` as `Int64`. Aggregating via `sum(spent_credits)` therefore returns an `Int64` and matches the target schema; no further changes needed.

apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts (1)
59-65: Confirm the backend expects a "since" string vs. explicit timestamps

In this hook you pass `since` as a string and let the server derive the window, while `useSpentCredits` computes `start`/`end` locally. Consider unifying this behavior across hooks.
Would you prefer both hooks to pass since and let the server derive time windows? I can align them either way.
internal/clickhouse/src/verifications.ts (1)
268-337: All referenced v2/v4 tables and MVs exist and include spent_credits

- The minute-v2 table (`key_verifications_per_minute_v2`) defines `spent_credits` (column confirmed) and is populated by its MV (`key_verifications_per_minute_mv_v2`) with `sum(spent_credits)`.
- The hour-v4, day-v4, and month-v4 tables each define `spent_credits`, and their MVs (`key_verifications_per_{hour,day,month}_mv_v4`) aggregate `sum(spent_credits)`.
- The legacy `day_v2` table lacks `spent_credits` but is not referenced by the updated code.

No missing DDLs or materialized views; ready to merge.
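The roll-up invariant these MVs rely on (the sum of bucket sums equals the raw sum of `spent_credits`) can be modeled and checked in miniature (types assumed):

```typescript
// Miniature model of what the materialized views do with sum(spent_credits):
// bucket raw rows by time window and sum credits per bucket. The grand total
// must be preserved by the roll-up.
type Bucket = { time: number; spent_credits: number };

function rollUp(raw: Bucket[], bucketMs: number): Bucket[] {
  const acc = new Map<number, number>();
  for (const row of raw) {
    const bucketStart = Math.floor(row.time / bucketMs) * bucketMs;
    acc.set(bucketStart, (acc.get(bucketStart) ?? 0) + row.spent_credits);
  }
  return [...acc.entries()].map(([time, spent_credits]) => ({ time, spent_credits }));
}
```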
| spentCredits?: number; | ||
| }; |
🧹 Nitpick (assertive)
Consider making spentCredits required in all API responses
You already populate spentCredits on every code path (0 or cost). Making it non-optional in the types tightens the contract and simplifies consumers.
Apply this diff:
- spentCredits?: number;
+ spentCredits: number;

Repeat for NotFoundResponse, InvalidResponse, and ValidResponse.
Also applies to: 88-89, 114-115
🤖 Prompt for AI Agents
In apps/api/src/pkg/keys/service.ts around lines 59-60 (and similarly at 88-89
and 114-115), the response types declare spentCredits as optional; since the
code always sets spentCredits (0 or cost) make spentCredits required in all
response types (replace spentCredits?: number with spentCredits: number) for the
main response and for NotFoundResponse, InvalidResponse, and ValidResponse so
the type contract matches runtime behavior; update any related type
imports/usages if needed to reflect the non-optional field.
| spentCredits, | ||
| }); |
🧹 Nitpick (assertive)
VALID response returns spentCredits; consider also emitting it in metrics

You already emit `metric.credits.spent` with cost/deducted earlier. Adding `spentCredits` to `metric.key.verification` (or another dedicated metric) could simplify downstream observability.
You already emit metric.credits.spent with cost/deducted earlier. Adding spentCredits to metric.key.verification (or another dedicated metric) could simplify downstream observability.
If you decide to do it, append spentCredits to the metrics.emit call near Lines 188-195.
🤖 Prompt for AI Agents
In apps/api/src/pkg/keys/service.ts around lines 691-692 and add to the
metrics.emit call near lines 188-195: append the spentCredits value to the
emitted metric payload (either add spentCredits to the existing
metric.key.verification payload or emit a dedicated metric that includes
spentCredits). Ensure you reference the same spentCredits variable in scope, add
the spentCredits field to the object passed to metrics.emit, and update any
metric payload type/interface if required so the new property is allowed.
| const creditSpendChartConfig = { | ||
| spent_credits: { | ||
| label: "Credits Spent", | ||
| color: "hsl(var(--success-9))", | ||
| }, | ||
| }; | ||
|
|
🧹 Nitpick (assertive)
Chart config consistency and theming
Config looks good. Minor nit: verify the token "hsl(var(--success-9))" aligns with design for “spent” (some systems use neutral or warning shades for consumption). If “success” green is intentional, ignore.
🤖 Prompt for AI Agents
In apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/index.tsx
around lines 95 to 101, the color token for spent_credits is set to
"hsl(var(--success-9))"; confirm this matches the design token for the “spent”
series and if the design expects a different semantic (e.g., neutral or warning)
replace the token with the correct one such as "hsl(var(--neutral-9))" or
"hsl(var(--warning-9))"; if the success (green) token is intentional, leave
as-is.
| // Call onMount for credit spend mode | ||
| useEffect(() => { | ||
| if (isCreditSpendMode && chartContainerRef.current) { | ||
| const rect = chartContainerRef.current.getBoundingClientRect(); | ||
| onMount(rect.top + window.scrollY); | ||
| } | ||
| }, [isCreditSpendMode, onMount]); | ||
|
|
🛠️ Refactor suggestion
Avoid double onMount firing in credit-spend mode
The useEffect calls onMount on entry to credit-spend mode and the OverviewBarChart also receives onMount. This can emit two mount events with potentially different offsets (container vs chart), causing jittery scroll/selection behaviors.
Consider centralizing this in one place: pass a wrapper onMount to the chart that measures the container, then remove the effect.
Apply this diff:
@@
- // Call onMount for credit spend mode
- useEffect(() => {
- if (isCreditSpendMode && chartContainerRef.current) {
- const rect = chartContainerRef.current.getBoundingClientRect();
- onMount(rect.top + window.scrollY);
- }
- }, [isCreditSpendMode, onMount]);
+ const handleBarChartMount = () => {
+ if (!chartContainerRef.current) return;
+ const rect = chartContainerRef.current.getBoundingClientRect();
+ onMount(rect.top + window.scrollY);
+ };
@@
- <OverviewBarChart
+ <OverviewBarChart
data={creditSpendTimeseries}
isLoading={creditSpendIsLoading}
isError={creditSpendIsError}
enableSelection
- onMount={onMount}
+ onMount={handleBarChartMount}
onSelectionChange={handleSelectionChange}
config={creditSpendChartConfig}

Also applies to: 114-121
🤖 Prompt for AI Agents
In apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/index.tsx
around lines 102-109 and 114-121, the component calls onMount twice when
entering credit-spend mode (once via this useEffect measuring the container and
again via OverviewBarChart’s onMount) causing duplicate mount events and jitter;
remove the useEffect calls and instead pass a single wrapper onMount prop down
to OverviewBarChart that measures the container (use
chartContainerRef.current.getBoundingClientRect() and window.scrollY inside that
wrapper) so only the chart reports the measured offset once.
| {activeKeysTimeseries.length > 0 && ( | ||
| <div className="w-full md:w-1/2 max-md:h-72"> | ||
| <OverviewAreaChart | ||
| data={activeKeysTimeseries} | ||
| isLoading={activeKeysIsLoading} | ||
| isError={activeKeysIsError} | ||
| enableSelection | ||
| onSelectionChange={handleSelectionChange} | ||
| config={keysChartConfig} | ||
| labels={keysChartLabels} | ||
| /> | ||
| </div> | ||
| )} |
🧹 Nitpick (assertive)
UX: Hiding the right chart collapses layout
Conditionally omitting the Active Keys chart can create asymmetric layouts and layout shifts. Consider rendering a skeleton/empty state to keep height and spacing consistent.
If desired, I can add a lightweight placeholder that preserves layout while activeKeysTimeseries is empty.
| // Track credit statistics | ||
| creditStats.totalCreditsSpent += verificationEvent.spent_credits; | ||
| if (verificationEvent.spent_credits === 0) { | ||
| creditStats.verificationsWith0Credits++; | ||
| } else if (verificationEvent.spent_credits === 1) { | ||
| creditStats.verificationsWith1Credit++; | ||
| } else { | ||
| creditStats.verificationsWithMultipleCredits++; | ||
| } | ||
|
|
🧹 Nitpick (assertive)
Defensive math for spent_credits; bucket boundaries explicit
Ensure we aggregate integers and handle unexpected values safely.
Apply this diff:
- creditStats.totalCreditsSpent += verificationEvent.spent_credits;
- if (verificationEvent.spent_credits === 0) {
+ const credits = Number(verificationEvent.spent_credits) || 0;
+ creditStats.totalCreditsSpent += credits;
+ if (credits <= 0) {
creditStats.verificationsWith0Credits++;
- } else if (verificationEvent.spent_credits === 1) {
+ } else if (credits === 1) {
creditStats.verificationsWith1Credit++;
} else {
creditStats.verificationsWithMultipleCredits++;
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Track credit statistics | |
| creditStats.totalCreditsSpent += verificationEvent.spent_credits; | |
| if (verificationEvent.spent_credits === 0) { | |
| creditStats.verificationsWith0Credits++; | |
| } else if (verificationEvent.spent_credits === 1) { | |
| creditStats.verificationsWith1Credit++; | |
| } else { | |
| creditStats.verificationsWithMultipleCredits++; | |
| } | |
| // Track credit statistics | |
| - creditStats.totalCreditsSpent += verificationEvent.spent_credits; | |
| const credits = Number(verificationEvent.spent_credits) || 0; | |
| creditStats.totalCreditsSpent += credits; | |
| if (credits <= 0) { | |
| creditStats.verificationsWith0Credits++; | |
| } else if (credits === 1) { | |
| creditStats.verificationsWith1Credit++; | |
| } else { | |
| creditStats.verificationsWithMultipleCredits++; | |
| } |
| spent_credits: number; | ||
| }; |
💡 Verification agent
🧩 Analysis chain
Type surface extended with spent_credits: ensure all producers/consumers are aligned
Adding spent_credits to VerificationEvent looks right given the broader PR. Please double-check all seed consumers and ClickHouse ingestion paths read this field.
Run to verify end-to-end usage:
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Searching seeders and analytics for spent_credits..."
rg -n -C2 --type=ts --type=sql '\bspent_credits\b' tools internal apps | sed -n '1,200p'

Length of output: 15536
Add spent_credits to the seed event generator
The VerificationEvent type surface was extended with spent_credits, and all ClickHouse ingestion paths already handle this field. However, I don’t see any code in the local seed generator that actually populates it—seeded events will end up missing spent_credits. Please update the generator to include a value (e.g. default to 0 or accept it via CLI) when constructing events.
• File: tools/local/src/cmd/seed/event-generator.ts
– Locate where the VerificationEvent payload is built and add a spent_credits property.
– Ensure you choose a sensible default or wiring so that every seeded event includes this field.
🤖 Prompt for AI Agents
In tools/local/src/cmd/seed/event-generator.ts around lines 34–35, the
VerificationEvent payload construction is missing the newly added spent_credits
field; update the place where the event object is built to include spent_credits
(default to 0 or thread through a CLI flag if available). Add spent_credits: 0
to the generated event object (or read a parsed CLI option and set that value)
so every seeded event includes the field, and ensure TypeScript types remain
satisfied.
| export function generateKeyHash(keyContent: string): string { | ||
| return crypto.createHash("sha256").update(keyContent).digest("hex"); | ||
| return createHash("sha256").update(keyContent).digest("hex"); | ||
| } |
🧹 Nitpick (assertive)
Nit: make string encoding explicit in hash update
Minor clarity/readability: pass "utf8" explicitly to avoid any ambiguity and to match typical Node patterns.
Apply this diff:
-export function generateKeyHash(keyContent: string): string {
- return createHash("sha256").update(keyContent).digest("hex");
-}
+export function generateKeyHash(keyContent: string): string {
+ return createHash("sha256").update(keyContent, "utf8").digest("hex");
+}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| export function generateKeyHash(keyContent: string): string { | |
| return crypto.createHash("sha256").update(keyContent).digest("hex"); | |
| return createHash("sha256").update(keyContent).digest("hex"); | |
| } | |
| export function generateKeyHash(keyContent: string): string { | |
| return createHash("sha256").update(keyContent, "utf8").digest("hex"); | |
| } |
🤖 Prompt for AI Agents
In tools/local/src/cmd/seed/event-generator.ts around lines 81 to 83, the call
to createHash(...).update(keyContent) omits an explicit string encoding; change
the update call to pass "utf8" explicitly (i.e., update(keyContent, "utf8")) so
the hash uses a defined encoding and follows Node conventions.
| /** | ||
| * Calculates credit cost for a verification event | ||
| */ | ||
| function calculateCreditCost(key: KeyInfo, outcome: string): number { | ||
| // Rule 3: If request is rejected, cost should be 0 | ||
| if (outcome !== "VALID") { | ||
| return 0; | ||
| } | ||
|
|
||
| // Only calculate cost if key has usage limits (remaining enabled) | ||
| if (!key.hasUsageLimit) { | ||
| return 0; | ||
| } | ||
|
|
||
| // Rule 1: Default cost is 1 credit | ||
| // Rule 2: 15% of VALID verifications should have cost > 1 (2-5 credits) | ||
| // Since we only get here for VALID outcomes, we can apply the 15% rule directly | ||
| if (Math.random() < 0.15) { | ||
| return 2 + Math.floor(Math.random() * 4); // Random between 2-5 | ||
| } | ||
|
|
||
| return 1; | ||
| } | ||
|
|
🧹 Nitpick (assertive)
Make credit-cost randomness injectable for deterministic tests and repeatable seeds
Using Math.random() is fine, but inject a RNG to enable reproducible runs and unit testing of distributions.
Apply this diff:
-function calculateCreditCost(key: KeyInfo, outcome: string): number {
+function calculateCreditCost(
+ key: KeyInfo,
+ outcome: string,
+ rng: () => number = Math.random,
+): number {
@@
- if (Math.random() < 0.15) {
- return 2 + Math.floor(Math.random() * 4); // Random between 2-5
+ if (rng() < 0.15) {
+ return 2 + Math.floor(rng() * 4); // 2-5
}And pass the RNG from the caller when needed (default remains unchanged).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```typescript
/**
 * Calculates credit cost for a verification event
 */
function calculateCreditCost(key: KeyInfo, outcome: string): number {
  // Rule 3: If request is rejected, cost should be 0
  if (outcome !== "VALID") {
    return 0;
  }
  // Only calculate cost if key has usage limits (remaining enabled)
  if (!key.hasUsageLimit) {
    return 0;
  }
  // Rule 1: Default cost is 1 credit
  // Rule 2: 15% of VALID verifications should have cost > 1 (2-5 credits)
  // Since we only get here for VALID outcomes, we can apply the 15% rule directly
  if (Math.random() < 0.15) {
    return 2 + Math.floor(Math.random() * 4); // Random between 2-5
  }
  return 1;
}
```

After:

```typescript
/**
 * Calculates credit cost for a verification event
 */
function calculateCreditCost(
  key: KeyInfo,
  outcome: string,
  rng: () => number = Math.random,
): number {
  // Rule 3: If request is rejected, cost should be 0
  if (outcome !== "VALID") {
    return 0;
  }
  // Only calculate cost if key has usage limits (remaining enabled)
  if (!key.hasUsageLimit) {
    return 0;
  }
  // Rule 1: Default cost is 1 credit
  // Rule 2: 15% of VALID verifications should have cost > 1 (2-5 credits)
  // Since we only get here for VALID outcomes, we can apply the 15% rule directly
  if (rng() < 0.15) {
    return 2 + Math.floor(rng() * 4); // 2-5
  }
  return 1;
}
```
🤖 Prompt for AI Agents
In tools/local/src/cmd/seed/event-generator.ts around lines 195 to 218, the
calculateCreditCost function uses Math.random() directly which prevents
deterministic tests and repeatable seeds; change the function signature to
accept an optional rng: () => number parameter defaulting to Math.random,
replace all Math.random() calls inside with rng(), keep the existing logic and
ranges (15% branch and 2-5 credits calculation), and update all call sites to
pass the RNG when deterministic behavior is required (while leaving callers that
don’t pass it to use the default Math.random).
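For repeatable seeds, a small seeded PRNG can be passed as the `rng` argument. A minimal sketch using the well-known mulberry32 generator — the seed values and wiring below are illustrative, not part of this PR:

```typescript
// mulberry32: a tiny seeded PRNG returning floats in [0, 1).
// Same seed => same sequence, which makes seeded event generation repeatable.
function mulberry32(seed: number): () => number {
  let state = seed >>> 0;
  return () => {
    state = (state + 0x6d2b79f5) >>> 0;
    let t = state;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Two generators with the same seed produce identical values:
const a = mulberry32(42);
const b = mulberry32(42);
console.log(a() === b() && a() === b()); // true
```

A caller that needs determinism would then invoke `calculateCreditCost(key, outcome, mulberry32(seed))`, while existing call sites keep the `Math.random` default.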
```diff
 export function generateUuid() {
-  return crypto.randomUUID();
+  return globalThis.crypto.randomUUID();
 }
```
🛠️ Refactor suggestion
Make UUID generation robust across runtimes (add Node fallback).
globalThis.crypto is not guaranteed in all Node environments (or when DOM lib types are absent). Prefer Web Crypto when present, but fall back to Node’s crypto.randomUUID() to avoid runtime errors.
Apply this diff in the function:
```diff
 export function generateUuid() {
-  return globalThis.crypto.randomUUID();
+  const webCrypto = (globalThis as any).crypto;
+  return typeof webCrypto?.randomUUID === "function"
+    ? webCrypto.randomUUID()
+    : nodeRandomUUID();
 }
```

And add this import at the top of the file (outside the selected range):

```typescript
import { randomUUID as nodeRandomUUID } from "node:crypto";
```

🤖 Prompt for AI Agents
import { randomUUID as nodeRandomUUID } from "node:crypto";🤖 Prompt for AI Agents
In tools/local/src/cmd/seed/utils.ts around lines 6 to 8, the generateUuid
function assumes globalThis.crypto exists and calls randomUUID there; add a
robust fallback to Node’s crypto by importing nodeRandomUUID from "node:crypto"
at the top of the file and change generateUuid to return
globalThis.crypto?.randomUUID?.() when available, otherwise call
nodeRandomUUID(); ensure the import line is added outside the selected range and
the function uses the Web Crypto path first and Node fallback to avoid runtime
errors in environments without globalThis.crypto.
```go
	RequestID    string   `ch:"request_id"`
	Time         int64    `ch:"time"`
	WorkspaceID  string   `ch:"workspace_id"`
	KeySpaceID   string   `ch:"key_space_id"`
	KeyID        string   `ch:"key_id"`
	Region       string   `ch:"region"`
	Outcome      string   `ch:"outcome"`
	IdentityID   string   `ch:"identity_id"`
	SpentCredits int64    `ch:"spent_credits"`
	Tags         []string `ch:"tags"`
}
```
🧹 Nitpick (assertive)
🛠️ Refactor suggestion
Add upstream tag normalization before insertion
Adding Tags []string is good, but duplicates and order variance will fragment aggregated tables. Normalize before writing: unique + stable sort.
Example helper:

```go
// CanonicalizeTags returns a sorted, de-duplicated copy.
func CanonicalizeTags(in []string) []string {
	if len(in) == 0 {
		return nil
	}
	seen := make(map[string]struct{}, len(in))
	out := make([]string, 0, len(in))
	for _, t := range in {
		if _, ok := seen[t]; ok {
			continue
		}
		seen[t] = struct{}{}
		out = append(out, t)
	}
	sort.Strings(out)
	return out
}
```

Ensure the writer populating KeyVerificationRequestV1 uses `Tags: CanonicalizeTags(tags)`. This complements the DB-side MV normalization and keeps raw writes tidy.
Would you like me to locate the writer(s) and open a follow-up patch?
🤖 Prompt for AI Agents
In apps/agent/pkg/clickhouse/schema/requests.go around lines 18-28, the Tags
[]string field is stored raw which allows duplicates and order variance to
fragment aggregations; add an upstream canonicalization step that deduplicates
and stably sorts tag slices before insertion. Implement a helper (e.g.,
CanonicalizeTags) that returns nil for empty input, removes duplicates, and
returns a sorted slice, then ensure every writer that constructs
KeyVerificationRequestV1 (and any other structs written to ClickHouse with Tags)
calls that helper and assigns Tags: CanonicalizeTags(tags) before persisting.
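The same dedupe-and-sort normalization can also be applied on the TypeScript side (e.g. in the seed tooling) before tags reach an insert payload. A hedged sketch — the helper name is illustrative, not an existing function in the codebase:

```typescript
// canonicalizeTags: de-duplicate and stably sort a tag list so that
// equivalent tag sets always serialize identically before insertion.
// Illustrative TypeScript mirror of the Go helper above.
function canonicalizeTags(tags: string[]): string[] {
  return [...new Set(tags)].sort();
}

console.log(canonicalizeTags(["b", "a", "b", "c"])); // returns ["a", "b", "c"]
```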
```typescript
case "keyIds": {
  if (typeof filter.value === "string" && filter.value.trim()) {
    const keyIdOperator = operator === "is" || operator === "contains" ? operator : "is";

    params.keyIds?.filters?.push({
      operator: keyIdOperator,
      value: filter.value,
    });
  }
  break;
```
🧹 Nitpick (assertive)
Normalize keyIds by trimming whitespace
Prevents mismatched filters and duplicate cache keys from accidental spaces.
```diff
-  params.keyIds?.filters?.push({
+  params.keyIds?.filters?.push({
     operator: keyIdOperator,
-    value: filter.value,
+    value: filter.value.trim(),
   });
```
Before:

```typescript
case "keyIds": {
  if (typeof filter.value === "string" && filter.value.trim()) {
    const keyIdOperator = operator === "is" || operator === "contains" ? operator : "is";
    params.keyIds?.filters?.push({
      operator: keyIdOperator,
      value: filter.value,
    });
  }
  break;
```

After:

```typescript
case "keyIds": {
  if (typeof filter.value === "string" && filter.value.trim()) {
    const keyIdOperator = operator === "is" || operator === "contains" ? operator : "is";
    params.keyIds?.filters?.push({
      operator: keyIdOperator,
      value: filter.value.trim(),
    });
  }
  break;
```
🤖 Prompt for AI Agents
In
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts
around lines 66 to 75, the code checks filter.value.trim() but still pushes the
original filter.value which can include leading/trailing spaces and cause
mismatched filters/duplicate cache keys; fix by computing a trimmed value (e.g.,
const trimmed = filter.value.trim()), verify it is non-empty, and push trimmed
as the value into params.keyIds.filters (use trimmed for operator/value),
replacing uses of the untrimmed filter.value.
```typescript
case "names":
case "identities": {
  if (typeof filter.value === "string" && filter.value.trim()) {
    params[filter.field]?.filters?.push({
      operator,
      value: filter.value,
    });
  }
  break;
```
🧹 Nitpick (assertive)
Normalize names and identities by trimming
Same rationale as keyIds — normalize inputs early.
```diff
-  params[filter.field]?.filters?.push({
+  params[filter.field]?.filters?.push({
     operator,
-    value: filter.value,
+    value: filter.value.trim(),
   });
```
Before:

```typescript
case "names":
case "identities": {
  if (typeof filter.value === "string" && filter.value.trim()) {
    params[filter.field]?.filters?.push({
      operator,
      value: filter.value,
    });
  }
  break;
```

After:

```typescript
case "names":
case "identities": {
  if (typeof filter.value === "string" && filter.value.trim()) {
    params[filter.field]?.filters?.push({
      operator,
      value: filter.value.trim(),
    });
  }
  break;
```
🤖 Prompt for AI Agents
In
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts
around lines 78-86, the case branches for "names" and "identities" use
filter.value directly and only check typeof and .trim() in the condition;
normalize inputs by trimming before pushing: compute a trimmedValue = (typeof
filter.value === "string") ? filter.value.trim() : filter.value, check
trimmedValue is non-empty for strings, and push the trimmedValue into
params[filter.field].filters (i.e., use trimmedValue in the value field) so
names and identities are normalized the same way keyIds are.
```typescript
case "outcomes": {
  // For credit spend, we might want to include all outcomes to show credit consumption patterns
  if (typeof filter.value === "string") {
    params.outcomes?.filters?.push({
      operator: "is",
      value: filter.value as
        | "VALID"
        | "INSUFFICIENT_PERMISSIONS"
        | "RATE_LIMITED"
        | "FORBIDDEN"
        | "DISABLED"
        | "EXPIRED"
        | "USAGE_EXCEEDED"
        | "",
    });
  }
  break;
```
💡 Verification agent
🧩 Analysis chain
Validate outcomes against the allowed set to avoid invalid server filters
Currently any string is pushed (including ""), which can create invalid or no‑op filters. Align with the allowed outcome set (same constant used elsewhere).
```diff
+import { KEY_VERIFICATION_OUTCOMES } from "@unkey/clickhouse/src/keys/keys";
 ...
-      case "outcomes": {
-        // For credit spend, we might want to include all outcomes to show credit consumption patterns
-        if (typeof filter.value === "string") {
-          params.outcomes?.filters?.push({
-            operator: "is",
-            value: filter.value as
-              | "VALID"
-              | "INSUFFICIENT_PERMISSIONS"
-              | "RATE_LIMITED"
-              | "FORBIDDEN"
-              | "DISABLED"
-              | "EXPIRED"
-              | "USAGE_EXCEEDED"
-              | "",
-          });
-        }
-        break;
-      }
+      case "outcomes": {
+        if (typeof filter.value === "string" && filter.value) {
+          type ValidOutcome = (typeof KEY_VERIFICATION_OUTCOMES)[number];
+          if (KEY_VERIFICATION_OUTCOMES.includes(filter.value as ValidOutcome)) {
+            params.outcomes?.filters?.push({
+              operator: "is",
+              value: filter.value as ValidOutcome,
+            });
+          }
+        }
+        break;
+      }
```

Please confirm that the import path for KEY_VERIFICATION_OUTCOMES is correct in this package. If not, point me to the shared constant for outcomes and I'll update the patch.
🏁 Script executed:

```bash
#!/bin/bash
# Search for the declaration or usages of KEY_VERIFICATION_OUTCOMES in the repository
rg -n "KEY_VERIFICATION_OUTCOMES"
```

Length of output: 8391
🏁 Script executed:

```bash
#!/bin/bash
# Display the top lines of the target file to inspect existing imports
head -n 20 apps/dashboard/app/\(app\)/apis/\[apiId\]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts
```

Length of output: 1093
Validate outcomes against the canonical set
The import path for KEY_VERIFICATION_OUTCOMES is correct—other parts of this package (and related hooks) already import it from @unkey/clickhouse/src/keys/keys. Please apply the refactor below to guard against empty or invalid outcome values:
• Add the import at the top of `apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts`:

```typescript
import { KEY_VERIFICATION_OUTCOMES } from "@unkey/clickhouse/src/keys/keys";
```

• Replace the existing `case "outcomes":` block (lines 89–105) with:

```diff
       case "outcomes": {
-        // For credit spend, we might want to include all outcomes to show credit consumption patterns
-        if (typeof filter.value === "string") {
-          params.outcomes?.filters?.push({
-            operator: "is",
-            value: filter.value as
-              | "VALID"
-              | "INSUFFICIENT_PERMISSIONS"
-              | "RATE_LIMITED"
-              | "FORBIDDEN"
-              | "DISABLED"
-              | "EXPIRED"
-              | "USAGE_EXCEEDED"
-              | "",
-          });
-        }
+        if (typeof filter.value === "string" && filter.value) {
+          type ValidOutcome = (typeof KEY_VERIFICATION_OUTCOMES)[number];
+          if (KEY_VERIFICATION_OUTCOMES.includes(filter.value as ValidOutcome)) {
+            params.outcomes?.filters?.push({
+              operator: "is",
+              value: filter.value as ValidOutcome,
+            });
+          }
+        }
         break;
       }
```

This ensures only legitimate outcomes (and no empty strings) are sent to the server.
🤖 Prompt for AI Agents
In
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts
around lines 89–105, the current outcomes branch can push empty or invalid
outcome strings; import KEY_VERIFICATION_OUTCOMES from
"@unkey/clickhouse/src/keys/keys" at the top of the file and replace the case
"outcomes": block to validate filter.value: first ensure typeof filter.value ===
"string" and filter.value is non-empty, then check
KEY_VERIFICATION_OUTCOMES.includes(filter.value) before pushing to
params.outcomes.filters; only push when both checks pass so no empty or
non-canonical outcomes are sent to the server.
```typescript
case "tags": {
  if (typeof filter.value === "string" && filter.value.trim()) {
    params.tags = {
      operator,
      value: filter.value,
    };
  }
```
🧹 Nitpick (assertive)
Trim tag values to avoid backend mismatches
Keep consistent with other filters; send trimmed values only.
```diff
-  params.tags = {
+  params.tags = {
     operator,
-    value: filter.value,
+    value: filter.value.trim(),
   };
```
Before:

```typescript
case "tags": {
  if (typeof filter.value === "string" && filter.value.trim()) {
    params.tags = {
      operator,
      value: filter.value,
    };
  }
```

After:

```typescript
case "tags": {
  if (typeof filter.value === "string" && filter.value.trim()) {
    params.tags = {
      operator,
      value: filter.value.trim(),
    };
  }
```
🤖 Prompt for AI Agents
In
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts
around lines 108 to 114, the "tags" branch currently forwards filter.value
untrimmed; change it to trim the string before sending to the backend and only
set params.tags when the trimmed value is non-empty (e.g. const val =
filter.value.trim(); if (val) params.tags = { operator, value: val }). Ensure
you keep the existing typeof string check and do not send whitespace-only
values.
```typescript
  "INSUFFICIENT_PERMISSIONS",
]),
identity_id: z.string().optional().default(""),
spent_credits: z.number().int().optional().default(0),
```
💡 Verification agent
🧩 Analysis chain
Enforce non-negative spent_credits at ingestion.
Unless refunds are modeled as negative credits, disallow negatives at the schema edge to prevent corrupt rows.
```diff
-  spent_credits: z.number().int().optional().default(0),
+  spent_credits: z.number().int().nonnegative().optional().default(0),
```

If negatives are a valid business case (e.g., adjustments/refunds), skip this and enforce validation downstream instead.
🏁 Script executed:

```bash
#!/bin/bash
set -e
# Print the first 200 lines of the ClickHouse verifications schema to review spent_credits context
sed -n '1,200p' internal/clickhouse/src/verifications.ts
```

Length of output: 5806
Enforce non-negative spent_credits at ingestion
Ensure that any negative spent_credits values are rejected as early as possible to prevent corrupt rows in verifications.raw_key_verifications_v1. If your domain permits negative adjustments/refunds, handle those explicitly downstream instead of allowing them here.
Locations to update:
• `internal/clickhouse/src/verifications.ts`, in the `insertVerification` schema

Suggested change:

```diff
-  spent_credits: z.number().int().optional().default(0),
+  spent_credits: z.number().int().nonnegative().optional().default(0),
```
```diff
-spent_credits: z.number().int().optional().default(0),
+// in internal/clickhouse/src/verifications.ts, inside the insertVerification schema
+spent_credits: z.number().int().nonnegative().optional().default(0),
```
🤖 Prompt for AI Agents
In internal/clickhouse/src/verifications.ts around line 27, the
insertVerification schema currently allows negative spent_credits
(spent_credits: z.number().int().optional().default(0)); change the schema to
reject negatives by using z.number().int().min(0).optional().default(0) for
spent_credits so any negative values are validated out at ingestion; update any
related type imports or usages if necessary to reflect the tightened schema and
run tests to ensure validation errors are surfaced where insertVerification is
called.
```sql
  tags,
  spent_credits
FROM verifications.raw_key_verifications_v1
```
🛠️ Refactor suggestion
COALESCE spent_credits in SELECT to avoid Zod parse failures.
Older rows may have NULL in spent_credits. z.number() will fail on NULL; add COALESCE in the projection.
```diff
-  tags,
-  spent_credits
+  tags,
+  COALESCE(spent_credits, 0) AS spent_credits
```
Before:

```sql
  tags,
  spent_credits
FROM verifications.raw_key_verifications_v1
```

After:

```sql
  tags,
  COALESCE(spent_credits, 0) AS spent_credits
FROM verifications.raw_key_verifications_v1
```
🤖 Prompt for AI Agents
In internal/clickhouse/src/verifications.ts around lines 166 to 168, the SELECT
projects spent_credits which can be NULL for older rows and causes z.number() to
fail; modify the projection to wrap spent_credits with COALESCE (e.g.,
COALESCE(spent_credits, 0)) so the query always returns a numeric value and the
Zod parse succeeds.
```typescript
// Schema for spent credits total query
const spentCreditsParams = z.object({
  workspaceId: z.string(),
  keyspaceId: z.string(),
  keyId: z.string().optional(),
  startTime: z.number().int(),
  endTime: z.number().int(),
  outcomes: z
    .array(
      z.object({
        value: z.enum(KEY_VERIFICATION_OUTCOMES),
        operator: z.literal("is"),
      }),
    )
    .nullable(),
  tags: z
    .object({
      operator: z.enum(["is", "contains", "startsWith", "endsWith"]),
      value: z.string(),
    })
    .nullable(),
});
```
🧹 Nitpick (assertive)
Unify tag filter shape with other endpoints (array vs single).
Here tags is a single object; elsewhere (keyDetailsLogs, timeseries) it’s an array. This inconsistency complicates clients and filter composition.
Consider aligning with the array-of-filters approach:
```diff
 const spentCreditsParams = z.object({
   workspaceId: z.string(),
   keyspaceId: z.string(),
   keyId: z.string().optional(),
   startTime: z.number().int(),
   endTime: z.number().int(),
   outcomes: z
     .array(
       z.object({
         value: z.enum(KEY_VERIFICATION_OUTCOMES),
         operator: z.literal("is"),
       }),
     )
     .nullable(),
-  tags: z
-    .object({
-      operator: z.enum(["is", "contains", "startsWith", "endsWith"]),
-      value: z.string(),
-    })
-    .nullable(),
+  tags: z
+    .array(
+      z.object({
+        operator: z.enum(["is", "contains", "startsWith", "endsWith"]),
+        value: z.string(),
+      }),
+    )
+    .nullable(),
 });
```

And update the predicate builder accordingly (see next comments).
Before:

```typescript
// Schema for spent credits total query
const spentCreditsParams = z.object({
  workspaceId: z.string(),
  keyspaceId: z.string(),
  keyId: z.string().optional(),
  startTime: z.number().int(),
  endTime: z.number().int(),
  outcomes: z
    .array(
      z.object({
        value: z.enum(KEY_VERIFICATION_OUTCOMES),
        operator: z.literal("is"),
      }),
    )
    .nullable(),
  tags: z
    .object({
      operator: z.enum(["is", "contains", "startsWith", "endsWith"]),
      value: z.string(),
    })
    .nullable(),
});
```

After:

```typescript
// Schema for spent credits total query
const spentCreditsParams = z.object({
  workspaceId: z.string(),
  keyspaceId: z.string(),
  keyId: z.string().optional(),
  startTime: z.number().int(),
  endTime: z.number().int(),
  outcomes: z
    .array(
      z.object({
        value: z.enum(KEY_VERIFICATION_OUTCOMES),
        operator: z.literal("is"),
      }),
    )
    .nullable(),
  tags: z
    .array(
      z.object({
        operator: z.enum(["is", "contains", "startsWith", "endsWith"]),
        value: z.string(),
      }),
    )
    .nullable(),
});
```
🤖 Prompt for AI Agents
In internal/clickhouse/src/verifications.ts around lines 614 to 635, the
spentCreditsParams schema defines tags as a single object while other endpoints
expect an array of tag filters; change tags to be nullable array of the same
tag-filter object shape (operator: enum
["is","contains","startsWith","endsWith"], value: string) to match
keyDetailsLogs/timeseries schemas, and then update any predicate builder code
that consumes spentCreditsParams.tags to iterate the array, combine filters
appropriately, and preserve null handling so existing behavior remains when tags
is absent.
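If `tags` becomes an array of filters, the predicate builder can fold each entry into a parameterized condition. A hedged sketch of that folding — the parameter naming and the `has()`/`arrayExists()` mapping are assumptions about the eventual SQL, not the repository's actual builder:

```typescript
type TagFilter = {
  operator: "is" | "contains" | "startsWith" | "endsWith";
  value: string;
};

// Builds parameterized ClickHouse predicates for an array of tag filters.
// Values go into a params map (bound server-side), never into the SQL string.
function buildTagConditions(filters: TagFilter[] | null): {
  conditions: string[];
  params: Record<string, string>;
} {
  const conditions: string[] = [];
  const params: Record<string, string> = {};
  (filters ?? []).forEach((f, i) => {
    const name = `tagValue${i}`;
    if (f.operator === "is") {
      // Exact membership: has(tags, value)
      conditions.push(`has(tags, {${name}: String})`);
      params[name] = f.value;
    } else {
      // Pattern match on any element via arrayExists + LIKE
      conditions.push(`arrayExists(t -> t LIKE {${name}: String}, tags)`);
      params[name] =
        f.operator === "contains"
          ? `%${f.value}%`
          : f.operator === "startsWith"
            ? `${f.value}%`
            : `%${f.value}`;
    }
  });
  return { conditions, params };
}
```

Null input yields no conditions, preserving the current "no tag filter" behavior; multiple entries can then be joined with `AND` alongside the other predicates.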
```typescript
const conditions = [
  "workspace_id = {workspaceId: String}",
  "key_space_id = {keyspaceId: String}",
  "time >= {startTime: Int64}",
  "time <= {endTime: Int64}",
];
```
💡 Verification agent
🧩 Analysis chain
Use consistent time types with raw table (UInt64) and BETWEEN.
Other queries on raw_key_verifications_v1 use UInt64 and BETWEEN. Mixing Int64 may cause implicit casts and hinder plan caching.
```diff
-  "time >= {startTime: Int64}",
-  "time <= {endTime: Int64}",
+  "time BETWEEN {startTime: UInt64} AND {endTime: UInt64}",
```

Please confirm the raw table time column is UInt64 epoch millis. If it's DateTime64(3), then use fromUnixTimestamp64Milli(...) as in timeseries queries.
🏁 Script executed:

```bash
#!/bin/bash
# Search for references to the raw_key_verifications_v1 table (including DDL or column usage)
rg -n 'raw_key_verifications_v1' -C3
```

Length of output: 28071
🏁 Script executed:

```bash
#!/bin/bash
# Display the first 50 lines of the DDL to inspect the 'time' column type
sed -n '1,50p' internal/clickhouse/schema/003_create_verifications_raw_key_verifications_v1.sql
```

Length of output: 896
Align time parameter types and use BETWEEN for plan caching
Verified that verifications.raw_key_verifications_v1.time is defined as Int64 (epoch milliseconds) in the DDL (internal/clickhouse/schema/003_create_verifications_raw_key_verifications_v1.sql, lines 2–6) and not UInt64. To preserve type consistency and improve plan caching, combine the two Int64 comparisons into a single BETWEEN clause:
• File: internal/clickhouse/src/verifications.ts
Replace lines 649–654:

```diff
-  "time >= {startTime: Int64}",
-  "time <= {endTime: Int64}",
+  "time BETWEEN {startTime: Int64} AND {endTime: Int64}",
```

• Review any other queries (e.g. in src/keys/keys.ts) that currently use {startTime: UInt64} / {endTime: UInt64} and update them to Int64 to match the raw table's column type.
Before:

```typescript
const conditions = [
  "workspace_id = {workspaceId: String}",
  "key_space_id = {keyspaceId: String}",
  "time >= {startTime: Int64}",
  "time <= {endTime: Int64}",
];
```

After:

```typescript
const conditions = [
  "workspace_id = {workspaceId: String}",
  "key_space_id = {keyspaceId: String}",
  "time BETWEEN {startTime: Int64} AND {endTime: Int64}",
];
```
🤖 Prompt for AI Agents
In internal/clickhouse/src/verifications.ts around lines 649–654, the time
comparisons use two separate Int64 comparisons and should be combined into a
single BETWEEN clause and the parameter types must be Int64 to match the DDL;
change the conditions array to use "time BETWEEN {startTime: Int64} AND
{endTime: Int64}" instead of separate >=/<= comparisons, and audit other query
files (e.g. src/keys/keys.ts) to change any {startTime: UInt64}/{endTime:
UInt64} usages to {startTime: Int64}/{endTime: Int64} so types align with the
raw table and improve plan caching.
```typescript
  "time <= {endTime: Int64}",
];

let paramSchemaExtension = {};
```
🛠️ Refactor suggestion
Type paramSchemaExtension to zod types to satisfy .extend() at compile-time.
Current {} may not satisfy ZodRawShape and can break type inference.
```diff
-let paramSchemaExtension = {};
+let paramSchemaExtension: Record<string, z.ZodType> = {};
```
🤖 Prompt for AI Agents
In internal/clickhouse/src/verifications.ts around line 656, the variable
paramSchemaExtension is declared as an untyped {} which can fail TypeScript/Zod
compile-time checks when used with .extend(); change its declaration to a Zod
type shape (e.g., import and use ZodRawShape or Record<string, ZodTypeAny>) and
initialize it with that type (for example: const paramSchemaExtension:
ZodRawShape = {}), so .extend() will have the correct compile-time type
information.
```tsx
<DropdownMenuItem
  key={type}
  onClick={() => setMetricType(type)}
  className={cn("cursor-pointer", metricType === type && "bg-gray-3")}
```
you really love pushing the limits of what JavaScript does with && operators, don't you?
```typescript
});

return {
  spentCredits: result.val?.[0]?.spent_credits ?? 0,
```
doesn't this throw if result.val is an empty array?
use `result.val?.at(0)?.spent_credits ?? 0` instead
why do you delete all of our schemas?
this and, I suspect, the following ones should be done differently.
Put these where you deleted the other ones (pkg/clickhouse/schema)
the tldr is:
internal/clickhouse is the old system that fell flat due to the mess of migrations.
pkg/clickhouse/schema is where all our new tables should go to.
If you want I can revert this and clean up
chronark
left a comment
The clickhouse schemas are not where I want them, but we can fix them
@ogzhanolguncu can you take a look at the frontend code as well
FYI none of this needs reviewing. It's in a draft for a reason. The frontend for sure is nowhere near being ready for review. I have to go through it: I "made it work", now I will "make it ready", then it's reviewable. Outside the schemas of course
What does this PR do?
Fixes # (issue)
If there is not an issue for this, please create one first. This is used for tracking purposes and also helps us understand why this PR exists
Type of change
How should this be tested?
Checklist
Required
- `pnpm build`
- `pnpm fmt`
- `console.logs`
- `git pull origin main`

Appreciated
Summary by CodeRabbit
New Features
Enhancements
API