
feat: Eng 1849 track spent credits #3798

Closed
perkinsjr wants to merge 23 commits into main from
eng-1849-track-spent-credits

Conversation

@perkinsjr
Member

@perkinsjr perkinsjr commented Aug 15, 2025

What does this PR do?

Fixes # (issue)

If there is not an issue for this, please create one first. This is used for tracking purposes and also helps us understand why this PR exists.

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • Chore (refactoring code, technical debt, workflow improvements)
  • Enhancement (small improvements)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How should this be tested?

  • Test A
  • Test B

Checklist

Required

  • Filled out the "How to test" section in this PR
  • Read Contributing Guide
  • Self-reviewed my own code
  • Commented on my code in hard-to-understand areas
  • Ran pnpm build
  • Ran pnpm fmt
  • Checked for warnings, there are none
  • Removed all console.logs
  • Merged the latest changes from main onto my branch with git pull origin main
  • My changes don't cause any responsiveness issues

Appreciated

  • If a UI change was made: Added a screen recording or screenshots to this PR
  • Updated the Unkey Docs if changes were necessary

Summary by CodeRabbit

  • New Features

    • Introduced Credits Spent metric across the dashboard: new charts, timeseries, totals, and table column for API overview and key details.
    • Added a metric type switcher to toggle between Requests and Credit Spend (state preserved in URL).
    • Displayed Credits Spent badge in key details controls; improved raw number formatting.
  • Enhancements

    • Logs support filtering and sorting by Credits Spent; drill-down preserves current filters and metric type.
  • API

    • Verification responses now include spentCredits; analytics and logs capture and surface spent credits.

@linear

linear bot commented Aug 15, 2025

@changeset-bot

changeset-bot bot commented Aug 15, 2025

⚠️ No Changeset found

Latest commit: b89664f

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types


@vercel

vercel bot commented Aug 15, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project | Deployment | Preview | Comments | Updated (UTC)
dashboard | Ready | Preview | Comment | Sep 16, 2025 1:13pm
engineering | Ready | Preview | Comment | Sep 16, 2025 1:13pm

@coderabbitai
Contributor

coderabbitai bot commented Aug 15, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

Walkthrough

Adds spent_credits tracking to key verification logs end-to-end: computes and logs on verification, persists in ClickHouse raw and aggregated tables (minute/hour/day/month v2/v4), exposes totals/timeseries and logs via ClickHouse queries and tRPC, updates API payloads, dashboard charts/tables/controls, seeds, and tests. Removes legacy CH DDL.

Changes

Cohort / File(s) Summary
Go agent schema
apps/agent/pkg/clickhouse/schema/requests.go
Extends KeyVerificationRequestV1 with SpentCredits int64 and Tags []string.
Go services: key verification
go/internal/services/keys/verifier.go, go/internal/services/keys/validation.go
Track spent credits in KeyVerifier; set spentCredits on valid usage; include SpentCredits in log payload.
Go ClickHouse schema removals (legacy DDL)
go/pkg/clickhouse/schema/databases/001_verifications/*
Deletes database/table/MV DDL files for verifications (raw and aggregate v1/v2).
Go ClickHouse request schema
go/pkg/clickhouse/schema/requests.go
Adds SpentCredits int64 with ch/json tags to KeyVerificationRequestV1.
ClickHouse migrations (Goose, up/down)
internal/clickhouse/schema/052_...raw_key_verifications_v1.sql, 053_...per_hour_v4.sql, 054_...per_hour_mv_v4.sql, 055_...per_day_v4.sql, 056_...per_day_mv_v4.sql, 057_...per_month_v4.sql, 058_...per_month_mv_v4.sql, 059_...per_minute_v2.sql, 060_...per_minute_mv_v2.sql
Adds spent_credits to raw; creates v4/v2 agg tables and MVs with sum(spent_credits); provides down migrations; replaces prior v3/v1/v2 objects.
ClickHouse Node SDK: verifications
internal/clickhouse/src/index.ts, internal/clickhouse/src/verifications.ts
Adds spent_credits to insert/log/timeseries schemas; updates queries to include/aggregate spent_credits; exports getSpentCreditsTotal.
ClickHouse Node SDK: keys overview logs
internal/clickhouse/src/keys/keys.ts, internal/clickhouse/src/keys/keys.test.ts
Adds creditSpendMode param, spent_credits field and sort, filters spent_credits > 0 when enabled; tests for param parsing and SQL construction.
API server: analytics and proxy
apps/api/src/pkg/analytics.ts, apps/api/src/pkg/clickhouse-proxy.ts, apps/api/src/pkg/auth/root_key.ts, apps/api/src/routes/v1_keys_verifyKey.ts
Adds optional spent_credits to insertKeyVerification and proxy payload; root key sets spent_credits: 0; route includes spent_credits in analytics event.
API server: key verification service
apps/api/src/pkg/keys/service.ts
Adds spentCredits to Valid/Invalid/NotFound responses; computes spentCredits on valid path; sets 0 on error paths.
Dashboard TRPC routers
apps/dashboard/lib/trpc/routers/index.ts, apps/dashboard/lib/trpc/routers/key/query-spent-credits/index.ts
Adds key.spentCredits and api.keys.spentCredits endpoints; validates input; checks key ownership; queries ClickHouse spentCreditsTotal; returns {spentCredits}.
Dashboard overview: charts and hooks
.../_overview/components/charts/index.tsx, .../credit-spend-chart/hooks/use-fetch-timeseries.ts, .../bar-chart/hooks/use-fetch-timeseries.ts
Adds credit spend mode branch; new hook to fetch credit-spend timeseries; extends verification timeseries items with spent_credits.
Dashboard overview: controls and table
.../_overview/components/controls/index.tsx, .../controls/components/logs-metric-type.tsx, .../table/logs-table.tsx, .../table/hooks/use-logs-query.ts, .../table/query-logs.schema.ts, .../_overview/hooks/use-metric-type.ts, .../_overview/components/table/components/override-indicator.tsx
Adds metric type switcher; propagates metricType in URLs; adds creditSpendMode to queries/schema; table adapts columns and sorting (spent_credits); preserves query params on navigation.
Dashboard key details: charts, controls, hooks, table
.../[keyId]/components/charts/index.tsx, .../[keyId]/components/charts/bar-chart/hooks/use-fetch-timeseries.ts, .../[keyId]/components/controls/index.tsx, .../[keyId]/components/controls/components/logs-metric-type.tsx, .../[keyId]/hooks/use-metric-type.ts, .../[keyId]/hooks/use-spent-credits.ts, .../[keyId]/components/table/logs-table.tsx
Adds credit spend mode and chart; extends timeseries with spent_credits; shows Credits Spent badge via useSpentCredits (new TRPC query); adds metric type switcher; table adds Credits Spent column.
Dashboard shared components/util
apps/dashboard/components/logs/overview-charts/overview-bar-chart.tsx, apps/dashboard/lib/fmt.ts, apps/dashboard/app/(app)/apis/[apiId]/page.tsx
OverviewBarChart gains props and raw-number formatting for credit charts; adds formatRawNumber; navbar label reflects metricType.
Dashboard TRPC utils
apps/dashboard/lib/trpc/routers/api/keys/query-overview-logs/utils.ts
transformKeysFilters now returns creditSpendMode.
Seeding tools
tools/local/src/cmd/seed/event-generator.ts, .../batch-operations.ts, .../utils.ts
Adds spent_credits to generated events and creditStats aggregation; uses createHash import; updates UUID source.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Client as Client
  participant API as API Keys Service
  participant Verif as KeyVerifier (Go)
  participant Analytics as Analytics.insertKeyVerification
  participant CH as ClickHouse

  Client->>API: verifyKey(request)
  API->>Verif: validate and compute usage
  alt usage valid
    Verif->>Verif: k.spentCredits = cost
  else exceeded/invalid
    Verif->>Verif: k.spentCredits = 0
  end
  API->>Analytics: {request_id,..., tags, spent_credits}
  Analytics->>CH: insert into verifications.raw_key_verifications_v1
  note over CH: MVs aggregate per minute/hour/day/month<br/>including sum(spent_credits)
sequenceDiagram
  autonumber
  participant User as Dashboard UI
  participant TRPC as tRPC Router
  participant CHSDK as ClickHouse SDK
  participant CH as ClickHouse

  User->>TRPC: api.keys.timeseries (creditSpendMode?)<br/>key.spentCredits(params)
  TRPC->>CHSDK: timeseries / spentCreditsTotal
  CHSDK->>CH: SELECT SUM(spent_credits) ... WHERE filters
  CH-->>CHSDK: aggregates (spent_credits)
  CHSDK-->>TRPC: timeseries/total with spent_credits
  TRPC-->>User: data
  User->>User: render Credits Spent charts/tables

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60–90 minutes

Possibly related PRs

✨ Finishing touches
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch eng-1849-track-spent-credits

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@perkinsjr
Member Author

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Aug 15, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 23

🔭 Outside diff range comments (2)
internal/clickhouse/schema/059_create_verifications.key_verifications_per_minute_v2.sql (1)

4-16: Fix required — canonicalize tags before using in ORDER BY / GROUP BY (verifications.key_verifications_per_minute_v2)

Short summary: the table's ORDER BY includes tags (Array(String)), but the MV that populates it inserts tags verbatim. I found no existing normalization (arraySort/arrayDistinct) in the repo — this will prevent proper SummingMergeTree merges when tag order/duplicates differ and will hurt pruning/locality.

Files to fix:

  • internal/clickhouse/schema/059_create_verifications.key_verifications_per_minute_v2.sql — ORDER BY contains tags
  • internal/clickhouse/schema/060_create_verifications.key_verifications_per_minute_mv_v2.sql — MV SELECT / GROUP BY uses tags directly (no normalization)

Suggested fix (make tags canonical before grouping/inserting). For example, change the MV SELECT / GROUP BY to use a sorted/distinct array:

  • In SELECT, replace tags with arraySort(arrayDistinct(tags)) AS tags
  • In GROUP BY, replace tags with arraySort(arrayDistinct(tags))

Or, if you prefer a single-string key, use arrayStringConcat(arraySort(arrayDistinct(tags)), ',') AS tag_set and use tag_set in ORDER BY / GROUP BY instead.

If tags must remain part of the PK, ensure all feeders produce a canonical tag representation (sorted+deduped). Otherwise consider removing tags from the ORDER BY and modeling tags as a separate dimension.
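Putting the suggestion together, a canonicalized minute-level MV SELECT could look like the sketch below. This is illustrative only: the column list and table names are taken from the surrounding comments, not from the actual migration files.

```sql
-- Sketch only: canonicalize tags so SummingMergeTree keys merge deterministically.
SELECT
  workspace_id,
  key_space_id,
  identity_id,
  key_id,
  outcome,
  toStartOfMinute(fromUnixTimestamp64Milli(time)) AS time,
  arraySort(arrayDistinct(tags)) AS tags,  -- canonical form: sorted and deduped
  count(*) AS count,
  sum(spent_credits) AS spent_credits
FROM verifications.raw_key_verifications_v1
GROUP BY
  workspace_id,
  key_space_id,
  identity_id,
  key_id,
  outcome,
  toStartOfMinute(fromUnixTimestamp64Milli(time)),
  arraySort(arrayDistinct(tags));
```

With this in place, rows that differ only in tag order or duplicates collapse into the same ORDER BY key and merge correctly.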

internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql (1)

16-23: Critical: GROUP BY is using the raw timestamp, not the day-bucket alias — leads to duplicate/day-split rows.

ClickHouse doesn’t allow SELECT aliases in GROUP BY. The current GROUP BY time actually groups by the raw millisecond timestamp, while SELECT projects the start-of-day alias. This will emit multiple rows per day (one per distinct raw timestamp) instead of one daily aggregate.

Fix by grouping on the same expression used in SELECT:

 GROUP BY
   workspace_id,
   key_space_id,
   identity_id,
   key_id,
   outcome,
-  time,
+  toStartOfDay(fromUnixTimestamp64Milli(time)),
   tags

Alternatively (cleaner), use WITH to avoid repeating the expression:

WITH toStartOfDay(fromUnixTimestamp64Milli(time)) AS day
SELECT
  ...,
  day AS time,
  ...
FROM ...
GROUP BY
  ...,
  day,
  ...
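Fleshing out the WITH variant with the column set used elsewhere in this review, a complete day-level MV SELECT might read as follows (a sketch under the assumption that these are the table's dimension columns, not the actual migration):

```sql
-- Sketch: compute the day bucket once and reuse it in SELECT and GROUP BY,
-- so both project the same expression.
WITH toStartOfDay(fromUnixTimestamp64Milli(time)) AS day
SELECT
  workspace_id,
  key_space_id,
  identity_id,
  key_id,
  outcome,
  day AS time,
  tags,
  count(*) AS count,
  sum(spent_credits) AS spent_credits
FROM verifications.raw_key_verifications_v1
GROUP BY
  workspace_id,
  key_space_id,
  identity_id,
  key_id,
  outcome,
  day,
  tags;
```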
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 6024e32 and fb3edeb.

📒 Files selected for processing (38)
  • apps/agent/pkg/clickhouse/schema/requests.go (1 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/bar-chart/hooks/use-fetch-timeseries.ts (1 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/index.tsx (1 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/_overview/hooks/use-spent-credits.ts (1 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/charts/bar-chart/hooks/use-fetch-timeseries.ts (1 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/controls/index.tsx (3 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts (1 hunks)
  • apps/dashboard/lib/trpc/routers/api/keys/query-spent-credits/index.ts (1 hunks)
  • apps/dashboard/lib/trpc/routers/index.ts (4 hunks)
  • apps/dashboard/lib/trpc/routers/key/query-spent-credits/index.ts (1 hunks)
  • go/internal/services/keys/validation.go (1 hunks)
  • go/internal/services/keys/verifier.go (2 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/001_database.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/002_raw_key_verifications_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/003_key_verifications_per_hour_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/004_key_verifications_per_hour_mv_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/005_key_verifications_per_day_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/006_key_verifications_per_day_mv_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/007_key_verifications_per_month_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/008_key_verifications_per_month_mv_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/009_key_verifications_per_hour_v2.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/010_key_verifications_per_hour_mv_v2.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/011_key_verifications_per_day_v2.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/012_key_verifications_per_day_mv_v2.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/014_key_verifications_per_month_mv_v2.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/021_key_verifications_per_minute_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/requests.go (1 hunks)
  • internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql (1 hunks)
  • internal/clickhouse/schema/053_create_verifications.key_verifications_per_hour_v4.sql (2 hunks)
  • internal/clickhouse/schema/054_create_verifications.key_verifications_per_hour_mv_v4.sql (3 hunks)
  • internal/clickhouse/schema/055_create_verifications.key_verifications_per_day_v4.sql (2 hunks)
  • internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql (3 hunks)
  • internal/clickhouse/schema/057_create_verifications.key_verifications_per_month_v4.sql (2 hunks)
  • internal/clickhouse/schema/058_create_verifications.key_verifications_per_month_mv_v4.sql (3 hunks)
  • internal/clickhouse/schema/059_create_verifications.key_verifications_per_minute_v2.sql (2 hunks)
  • internal/clickhouse/schema/060_create_verifications.key_verifications_per_minute_mv_v2.sql (3 hunks)
  • internal/clickhouse/src/index.ts (2 hunks)
  • internal/clickhouse/src/verifications.ts (5 hunks)
💤 Files with no reviewable changes (14)
  • go/pkg/clickhouse/schema/databases/001_verifications/001_database.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/010_key_verifications_per_hour_mv_v2.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/005_key_verifications_per_day_v1.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/007_key_verifications_per_month_v1.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/004_key_verifications_per_hour_mv_v1.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/014_key_verifications_per_month_mv_v2.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/003_key_verifications_per_hour_v1.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/008_key_verifications_per_month_mv_v1.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/002_raw_key_verifications_v1.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/011_key_verifications_per_day_v2.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/009_key_verifications_per_hour_v2.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/012_key_verifications_per_day_mv_v2.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/021_key_verifications_per_minute_v1.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/006_key_verifications_per_day_mv_v1.sql
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-06-24T13:29:10.129Z
Learnt from: ogzhanolguncu
PR: unkeyed/unkey#3401
File: apps/dashboard/app/(app)/logs/filters.query-params.ts:10-0
Timestamp: 2025-06-24T13:29:10.129Z
Learning: The `buildQueryParams` function in `apps/dashboard/app/(app)/logs/filters.query-params.ts` calls `useFilters()` hook inside it, but this is valid because the function is only called from within other React hooks, maintaining the Rules of Hooks compliance.

Applied to files:

  • apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts
🧬 Code Graph Analysis (9)
apps/dashboard/lib/trpc/routers/api/keys/query-spent-credits/index.ts (4)
apps/dashboard/lib/trpc/trpc.ts (4)
  • t (8-8)
  • requireUser (10-21)
  • requireWorkspace (23-36)
  • withRatelimit (113-129)
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/bar-chart/query-timeseries.schema.ts (1)
  • keysOverviewQueryTimeseriesPayload (6-59)
apps/dashboard/lib/trpc/routers/api/keys/api-query.ts (2)
  • getApi (388-410)
  • queryApiKeys (86-312)
tools/local/src/cmd/seed/utils.ts (1)
  • clickhouse (17-17)
apps/dashboard/app/(app)/apis/[apiId]/_overview/hooks/use-spent-credits.ts (6)
apps/dashboard/providers/query-time-provider.tsx (1)
  • useQueryTime (74-80)
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/bar-chart/query-timeseries.schema.ts (1)
  • KeysOverviewQueryTimeseriesPayload (61-61)
apps/dashboard/components/logs/constants.ts (1)
  • HISTORICAL_DATA_WINDOW (2-2)
apps/dashboard/app/(app)/apis/[apiId]/_overview/filters.schema.ts (1)
  • keysOverviewFilterFieldConfig (11-46)
internal/clickhouse/src/keys/keys.ts (1)
  • KEY_VERIFICATION_OUTCOMES (4-13)
apps/dashboard/lib/trpc/server.ts (1)
  • trpc (7-14)
apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts (5)
apps/dashboard/providers/query-time-provider.tsx (1)
  • useQueryTime (74-80)
apps/dashboard/components/logs/constants.ts (1)
  • HISTORICAL_DATA_WINDOW (2-2)
apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/filters.schema.ts (1)
  • keyDetailsFilterFieldConfig (15-38)
internal/clickhouse/src/keys/keys.ts (1)
  • KEY_VERIFICATION_OUTCOMES (4-13)
apps/dashboard/lib/trpc/server.ts (1)
  • trpc (7-14)
apps/dashboard/lib/trpc/routers/key/query-spent-credits/index.ts (4)
apps/dashboard/lib/trpc/trpc.ts (4)
  • t (8-8)
  • requireUser (10-21)
  • requireWorkspace (23-36)
  • withRatelimit (113-129)
apps/dashboard/lib/db.ts (1)
  • db (5-26)
packages/rbac/src/queries.ts (1)
  • and (52-56)
tools/local/src/cmd/seed/utils.ts (1)
  • clickhouse (17-17)
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/index.tsx (6)
apps/dashboard/components/logs/controls-container.tsx (2)
  • ControlsContainer (1-7)
  • ControlsLeft (9-11)
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/components/logs-filters/index.tsx (1)
  • LogsFilters (12-173)
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/components/logs-datetime/index.tsx (1)
  • LogsDateTime (9-96)
apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/table/components/status-badge.tsx (1)
  • StatusBadge (11-26)
apps/dashboard/app/(app)/apis/[apiId]/settings/components/status-badge.tsx (1)
  • StatusBadge (28-35)
internal/icons/src/icons/chart-usage.tsx (1)
  • ChartUsage (15-51)
apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/controls/index.tsx (5)
apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts (1)
  • useSpentCredits (9-106)
internal/ui/src/components/separator.tsx (1)
  • Separator (26-26)
apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/table/components/status-badge.tsx (1)
  • StatusBadge (11-26)
apps/dashboard/app/(app)/apis/[apiId]/settings/components/status-badge.tsx (1)
  • StatusBadge (28-35)
internal/icons/src/icons/chart-usage.tsx (1)
  • ChartUsage (15-51)
apps/dashboard/lib/trpc/routers/index.ts (2)
apps/dashboard/lib/trpc/routers/key/query-spent-credits/index.ts (1)
  • queryKeySpentCredits (38-86)
apps/dashboard/lib/trpc/routers/api/keys/query-spent-credits/index.ts (1)
  • queryApiSpentCredits (8-61)
internal/clickhouse/src/verifications.ts (3)
internal/clickhouse/src/keys/keys.ts (1)
  • KEY_VERIFICATION_OUTCOMES (4-13)
internal/clickhouse/src/client/interface.ts (1)
  • Querier (4-17)
internal/clickhouse/src/client/client.ts (1)
  • query (25-62)
internal/clickhouse/src/index.ts (1)
internal/clickhouse/src/verifications.ts (1)
  • getSpentCreditsTotal (721-721)
🔇 Additional comments (33)
go/pkg/clickhouse/schema/requests.go (1)

91-95: LGTM: schema extended with SpentCredits aligned to CH column

Field name, type, and tags match the ClickHouse column. Placement before Tags keeps the shape tidy.

go/internal/services/keys/verifier.go (2)

44-47: LGTM: internal field spentCredits added to verifier state

Clear naming and type. This keeps the runtime state separate from logged payloads.


126-136: LGTM: SpentCredits included in logged verification payload

Payload now forwards the tracked credits; ordering and tags are consistent with schema.

internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql (1)

1-7: Migration adds spent_credits — most artifacts updated; please verify the writer/flush includes the column

Short summary: migrations, MVs and query code already reference spent_credits and agent/server types carry the field. I could not locate the concrete flush/INSERT implementation that writes rows to verifications.raw_key_verifications_v1, so please confirm the writer actually includes spent_credits (via ch tags or explicit column list) to avoid INSERT/SELECT mismatches.

Files to note (found by search)

  • internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql — adds column.
  • internal/clickhouse/schema/_create_verifications.sql (e.g. 053,054,055,056,057,058,059,060) — many V4/V2 tables/MVs include spent_credits or use sum(spent_credits).
  • internal/clickhouse/src/verifications.ts — zod schema + aggregations and queries use spent_credits (SUM and SELECT).
  • apps/agent/pkg/clickhouse/schema/requests.go and go/pkg/clickhouse/schema/requests.go — KeyVerificationRequestV1 includes SpentCredits ch:"spent_credits".
  • go/internal/services/keys/verifier.go — buffers KeyVerification events with SpentCredits set.
  • go/pkg/clickhouse/client.go and apps/agent/pkg/clickhouse/client.go — call flush(ctx, conn, table, rows) to write to verifications.raw_key_verifications_v1, but the actual flush function/implementation was not found in this repo.

Action requested

  • Confirm where the flush/INSERT implementation lives and ensure it writes the spent_credits column (either by honoring the ch:"spent_credits" struct tag or by including the column in the INSERT column list). If the flush function is external/generated, update that code or confirm compatibility.
apps/agent/pkg/clickhouse/schema/requests.go (1)

18-28: SpentCredits signedness: ClickHouse = Int64, Go = int64 — confirm desired semantics

Confirmed: spent_credits is Int64 in ClickHouse migrations/MVs and SpentCredits is int64 in Go (and treated as integer in TypeScript). No mismatch found.

Key places checked:

  • internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql — ADD COLUMN IF NOT EXISTS spent_credits Int64 DEFAULT 0
  • internal/clickhouse/schema/_create_verifications.sql (e.g. 053, 055, 056, 057, 059, 058, 054) — spent_credits Int64 / SUM(spent_credits) in MVs
  • internal/clickhouse/src/verifications.ts — z.number().int().default(0) and SUM(spent_credits) usage
  • apps/agent/pkg/clickhouse/schema/requests.go — KeyVerificationRequestV1: SpentCredits int64 ch:"spent_credits"
  • go/pkg/clickhouse/schema/requests.go — SpentCredits int64 ch:"spent_credits" json:"spent_credits"
  • go/internal/services/keys/verifier.go — populates SpentCredits when buffering KeyVerificationRequestV1
  • go/pkg/clickhouse/client.go — writes to verifications.raw_key_verifications_v1 via batch processor (no explicit SQL INSERT with column list found)

Recommendation:

  • If credits are guaranteed non-negative, consider migrating to UInt64 (ClickHouse) and uint64 (Go) and update affected MVs/aggregations and consumer code.
  • If negative adjustments (refunds) are possible, keep Int64 and add brief documentation on semantics.

If you want, I can open a follow-up PR to convert to unsigned types across migrations, Go structs, and related queries once you confirm non-negative semantics.

apps/dashboard/lib/trpc/routers/index.ts (4)

12-12: Import wiring looks correct.

New spent credits API for APIs is registered in the correct namespace and matches usage in the UI hooks.


52-52: Import wiring looks correct.

Key-level spent credits endpoint is properly exposed under key router.


131-132: Good placement and naming consistency.

Exposing spentCredits at the key router alongside logs/timeseries is consistent and discoverable.


187-188: Good placement and naming consistency.

API-wide spentCredits under api.keys aligns with other keys analytics endpoints.

apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/index.tsx (1)

32-81: UI nit: use nullish coalescing when formatting and consider surfacing the active time range.

  • If formatNumber ever receives undefined, guard with ?? 0 (defensive).
  • Optional: add a small subtitle/tooltip to reflect the current time filter (since users often ask “spent in what window?”).
-                    text={formatNumber(spentCredits)}
+                    text={formatNumber(spentCredits ?? 0)}

If you want, I can wire a tooltip showing start–end or “since” derived from useFilters()/useQueryTime().

Likely an incorrect or invalid review comment.

internal/clickhouse/src/index.ts (2)

68-75: New spent credits querier is correctly wired into verifications API surface.

The import of getSpentCreditsTotal and its exposure through the verifications getter aligns with existing patterns (curried by querier). No concerns on this wiring.


111-116: Confirm return shape of spentCreditsTotal to match downstream usage.

Downstream code (e.g., TRPC endpoint queryApiSpentCredits) accesses result.val?.[0]?.spent_credits from clickhouse.verifications.spentCreditsTotal(...). If getSpentCreditsTotal returns a scalar (e.g., number) or a different record shape, this will break at runtime. Ensure the returned value’s structure includes a top-level val array with a spent_credits field, or update the consumers accordingly.

If needed, I can scan and align the usage across callers to a consistent return shape.
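For reference, a total query matching the shape the consumers expect would return a single row with a spent_credits column, so result.val?.[0]?.spent_credits resolves. The table and parameter names below are assumptions for illustration:

```sql
-- Sketch: one row, one column named spent_credits.
SELECT sum(spent_credits) AS spent_credits
FROM verifications.key_verifications_per_hour_v4
WHERE workspace_id = {workspaceId: String}
  AND key_space_id = {keySpaceId: String}
  AND time >= fromUnixTimestamp64Milli({startTime: Int64})
  AND time <= fromUnixTimestamp64Milli({endTime: Int64});
```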

internal/clickhouse/schema/053_create_verifications.key_verifications_per_hour_v4.sql (2)

1-16: Schema v4 includes spent_credits and is suitable for SummingMergeTree.

Including spent_credits Int64 alongside count works well with SummingMergeTree’s aggregation semantics over your ORDER BY key. The up/down goose markers look correct.
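As an illustration of why this works: SummingMergeTree sums every numeric column outside the ORDER BY key at merge time, so a table shaped roughly like the sketch below (columns inferred from the MVs quoted in this review; the real DDL lives in migration 053) accumulates both count and spent_credits per key:

```sql
-- Sketch of the hour-level rollup, not the actual migration.
CREATE TABLE verifications.key_verifications_per_hour_v4 (
  time          DateTime,
  workspace_id  String,
  key_space_id  String,
  identity_id   String,
  key_id        String,
  outcome       LowCardinality(String),
  tags          Array(String),
  count         Int64,         -- summed on merge
  spent_credits Int64          -- summed on merge
)
ENGINE = SummingMergeTree()
ORDER BY (workspace_id, key_space_id, time, identity_id, key_id, outcome, tags);
```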


spent_credits present in all v4 tables/MVs — verified

All v4 tables and materialized views include spent_credits and the MVs project sum(spent_credits). The raw source column was added; there are no active INSERT/backfill statements targeting v4 tables.

Files checked:

  • internal/clickhouse/schema/053_create_verifications.key_verifications_per_hour_v4.sql — table defines spent_credits Int64, ENGINE = SummingMergeTree()
  • internal/clickhouse/schema/054_create_verifications.key_verifications_per_hour_mv_v4.sql — MV selects sum(spent_credits) as spent_credits FROM verifications.raw_key_verifications_v1
  • internal/clickhouse/schema/055_create_verifications.key_verifications_per_day_v4.sql — table defines spent_credits Int64, ENGINE = SummingMergeTree()
  • internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql — MV selects sum(spent_credits)
  • internal/clickhouse/schema/057_create_verifications.key_verifications_per_month_v4.sql — table defines spent_credits Int64, ENGINE = SummingMergeTree()
  • internal/clickhouse/schema/058_create_verifications.key_verifications_per_month_mv_v4.sql — MV selects sum(spent_credits)
  • internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql — ALTER adds spent_credits Int64 DEFAULT 0 to raw_key_verifications_v1
  • internal/clickhouse/src/verifications.ts — queries aggregate SUM(spent_credits) and map results; code references updated to v4 tables

Notes:

  • No active "INSERT INTO verifications.key_verifications_per_*_v4" backfills were found; older migrations contain commented INSERTs for v1/v2/v3 only.
internal/clickhouse/schema/055_create_verifications.key_verifications_per_day_v4.sql (2)

1-16: Day-level v4 table includes spent_credits; matches hour-level design.

The structure mirrors the per-hour v4 with spent_credits added and the same ORDER BY. Looks consistent.


18-19: Cross-check dependent MVs and readers/writers for v4 adoption.

Confirm the day-level v4 MV includes spent_credits and all reads now target v4 where appropriate to avoid mixing v3/v4.

You can reuse the script from the per-hour comment; it enumerates all v4 schemas/MVs for spent_credits coverage.

apps/dashboard/lib/trpc/routers/api/keys/query-spent-credits/index.ts (1)

52-55: Only the first tag is used; confirm that this is intended.

transformVerificationFilters appears to normalize tags to an array; here you pick only the first tag. If multi-tag filtering is expected (e.g., OR/AND semantics), this will silently drop additional tag filters.

If multi-tag is desired, extend the CH query to accept an array and update this endpoint accordingly.
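If multi-tag filtering is added, the condition can be built over the whole array with OR semantics; a minimal sketch in TypeScript (function and parameter names here are hypothetical, not the existing transformVerificationFilters API):

```typescript
type TagFilter = { operator: "is" | "contains"; value: string };

// Build a ClickHouse WHERE fragment plus bound params for any number of
// tag filters. OR semantics: a row matches if any tag filter matches.
function buildTagConditions(filters: TagFilter[]): {
  clause: string;
  params: Record<string, string>;
} {
  if (filters.length === 0) {
    return { clause: "1 = 1", params: {} };
  }
  const params: Record<string, string> = {};
  const parts = filters.map((f, i) => {
    const name = `tag_${i}`;
    if (f.operator === "is") {
      // Exact element match against the tags array
      params[name] = f.value;
      return `has(tags, {${name}: String})`;
    }
    // "contains" → substring match over each tag element
    params[name] = `%${f.value}%`;
    return `arrayExists(t -> t LIKE {${name}: String}, tags)`;
  });
  return { clause: `(${parts.join(" OR ")})`, params };
}
```

The CH query would then interpolate `clause` into its WHERE and merge `params` into the bound parameters, rather than reading only `filters[0]`.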

apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/controls/index.tsx (2)

9-13: Spent-credits hook integration looks correct.

Imports and the useSpentCredits(keyId, keyspaceId) usage align with the surrounding data-fetch pattern. Props are already available in this component, so no plumbing issues.


33-38: Good separation of loading/error state for spent credits.

Using hasSpentCreditsData = !spentCreditsLoading && !spentCreditsError avoids flicker and hides the block if the query fails—consistent with the “Remaining Credits” handling.

internal/clickhouse/schema/057_create_verifications.key_verifications_per_month_v4.sql (2)

11-12: Addition of spent_credits to monthly rollup is correct

Including spent_credits Int64 alongside count matches SummingMergeTree semantics and aligns with the new MVs aggregating this metric.


4-12: Verify monthly time bucket type alignment (Date vs DateTime)

If the MV feeding this table uses toStartOfMonth(...), ClickHouse returns Date, while this table defines time as DateTime. Ensure the MV casts to DateTime (e.g., toDateTime(toStartOfMonth(...))) to avoid implicit casts or type mismatches.

Would you like me to scan the MV definitions to confirm the cast is present and propose a patch if needed?

internal/clickhouse/schema/054_create_verifications.key_verifications_per_hour_mv_v4.sql (2)

12-12: LGTM: Aggregating spent_credits alongside count

sum(spent_credits) as spent_credits is the right aggregation and aligns with the v4 table expectations.


2-24: Confirm target table schema includes spent_credits and matching keys

MV outputs spent_credits and groups by (workspace_id, key_space_id, identity_id, key_id, outcome, time, tags). Ensure verifications.key_verifications_per_hour_v4 defines the same dimension columns and includes spent_credits Int64 so SummingMergeTree aligns.

If helpful, I can scan the schema files for key_verifications_per_hour_v4 and verify column parity and ORDER BY keys.

apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts (1)

97-99: Refetch gating logic is sound

Conditionally enabling refetch when endTime === timestamp correctly toggles live updates for “now” views while avoiding churn for historical ranges.
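The gating reduces to a pure helper that is trivial to unit test (the name and the 10s interval are illustrative, not the hook's actual API):

```typescript
// Decide whether a timeseries query should poll for live updates.
// Only a range ending at "now" (endTime === the live timestamp) polls;
// historical ranges return false so the query never refetches.
function refetchIntervalFor(
  endTime: number,
  nowTimestamp: number,
  intervalMs = 10_000,
): number | false {
  return endTime === nowTimestamp ? intervalMs : false;
}
```

The returned value plugs directly into a react-query style `refetchInterval` option, which accepts either a number of milliseconds or `false`.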

internal/clickhouse/schema/058_create_verifications.key_verifications_per_month_mv_v4.sql (2)

1-24: LGTM! The materialized view correctly aggregates spent credits.

The materialized view properly includes the new spent_credits column using sum(spent_credits) aggregation, consistent with the v4 table schema and other interval-based MVs in the PR.


26-27: Use DROP MATERIALIZED VIEW in the down migration for consistency.

The statement already includes IF EXISTS; the remaining inconsistency is that it drops a plain VIEW while the object was created as a MATERIALIZED VIEW. ClickHouse generally accepts DROP VIEW for materialized views, so matching the object type is a consistency improvement rather than a correctness fix.


-- +goose down
-DROP VIEW IF EXISTS verifications.key_verifications_per_month_mv_v4;
+DROP MATERIALIZED VIEW IF EXISTS verifications.key_verifications_per_month_mv_v4;

Likely an incorrect or invalid review comment.

apps/dashboard/app/(app)/apis/[apiId]/_overview/hooks/use-spent-credits.ts (1)

66-74: Potential issue with keyIds operator normalization.

The code normalizes both "is" and "contains" operators to the same value, which might not be the intended behavior. The "contains" operator typically implies partial matching while "is" implies exact matching.

case "keyIds": {
  if (typeof filter.value === "string" && filter.value.trim()) {
-    const keyIdOperator = operator === "is" || operator === "contains" ? operator : "is";
+    // Keep the original operator if it's valid, otherwise default to "is"
+    const keyIdOperator = ["is", "contains"].includes(operator) ? operator : "is";

    params.keyIds?.filters?.push({
      operator: keyIdOperator,
      value: filter.value,
    });
  }
  break;
}

Likely an incorrect or invalid review comment.

internal/clickhouse/schema/060_create_verifications.key_verifications_per_minute_mv_v2.sql (1)

1-24: LGTM! The materialized view correctly implements per-minute aggregation with spent credits.

The view properly aggregates the new spent_credits column and targets the correct v2 table, maintaining consistency with the overall schema versioning strategy.

internal/clickhouse/src/verifications.ts (3)

249-249: LGTM! Properly extends the data point schema with spent credits.

The addition of spent_credits to the verification timeseries data point is consistent with the schema changes across the codebase.


374-375: Good implementation of spent credits aggregation in timeseries query.

The query correctly aggregates spent_credits using SUM() and includes it in the output map, maintaining consistency with other metric aggregations.


598-598: Correct handling of spent credits in merge logic.

The merge function properly accumulates spent_credits when combining results from multiple batches, using the same null-safe pattern as other metrics.

internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql (2)

1-3: Destination table schema verified — spent_credits column present and Int64

Verified: verifications.key_verifications_per_day_v4 exists and declares spent_credits Int64; the materialized view selects sum(spent_credits) AS spent_credits — types align.

Files checked:

  • internal/clickhouse/schema/055_create_verifications.key_verifications_per_day_v4.sql — spent_credits Int64
  • internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql — sum(spent_credits) as spent_credits

12-12: No change required — spent_credits is non-nullable (DEFAULT 0)

The raw column is defined as a non-Nullable Int64 with a default of 0, and the destination tables also use Int64, so SUM(spent_credits) cannot return NULL and coalescing is unnecessary.

Evidence:

  • internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql — ADD COLUMN IF NOT EXISTS spent_credits Int64 DEFAULT 0;
  • internal/clickhouse/schema/055_create_verifications.key_verifications_per_day_v4.sql — spent_credits Int64
  • internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql — uses sum(spent_credits) as spent_credits

Comment on lines +26 to +27
-- +goose down
DROP VIEW IF EXISTS verifications.key_verifications_per_minute_mv_v2;

⚠️ Potential issue

Use MATERIALIZED VIEW instead of VIEW in the DROP statement.

The DROP statement should specify MATERIALIZED VIEW to match the creation statement.

-- +goose down
-DROP VIEW IF EXISTS verifications.key_verifications_per_minute_mv_v2;
+DROP MATERIALIZED VIEW IF EXISTS verifications.key_verifications_per_minute_mv_v2;

This adds seed data with credit spend so credit-spend information
automatically shows up in the dashboard. Roughly 15% of remaining
requests are given a +1 credit spend.

  • Adds seeding
  • Adds charts
  • Adds logs
  • Allows filtering
@perkinsjr
Member Author

@coderabbitai full review

@coderabbitai

coderabbitai bot commented Aug 24, 2025

✅ Actions performed

Full review triggered.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 63

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (14)
tools/local/src/cmd/seed/utils.ts (1)

256-261: Filter hard-coded specialDates to the last 30-day window

The current implementation in generateTimestamp() unconditionally returns timestamps from fixed dates (March 15 and April 1, 2025) 10% of the time—well outside the rolling 30-day range defined by thirtyDaysAgo and now. As a result, seed data will unexpectedly spike on stale dates rather than within the recent window.

Please update the “special dates” branch to first filter out any out-of-range entries, falling back to the weighted distribution when none remain. For example:

   // Special dates that might have meaning (you can customize these)
-  const specialDates = [
-    new Date(2025, 2, 15), // March 15, 2025
-    new Date(2025, 3, 1),  // April 1, 2025
-  ];

+  const specialDates = [
+    new Date(2025, 2, 15), // March 15, 2025
+    new Date(2025, 3, 1),  // April 1, 2025
+  ];

   // 10% chance to generate a timestamp on a special date
   if (Math.random() < 0.1 && specialDates.length > 0) {
-    const specialDate = specialDates[Math.floor(Math.random() * specialDates.length)];
+    const candidates = specialDates.filter((d) => d >= thirtyDaysAgo && d <= now);
+    if (candidates.length > 0) {
+      const specialDate =
+        candidates[Math.floor(Math.random() * candidates.length)];

       // Set to business hours for special dates
       specialDate.setHours(
         9 + Math.floor(Math.random() * 8),
         Math.floor(Math.random() * 60),
         Math.floor(Math.random() * 60),
         0,
       );

-    return Math.floor(specialDate.getTime());
+      return specialDate.getTime();
+    }
     // No valid recent special dates → fall back to weighted distribution
   }

• File: tools/local/src/cmd/seed/utils.ts
• Affects lines around 256–261 and the special‐date branch at 303–309.

If you’d like to preserve the ability to seed truly out-of-range spikes, consider adding an explicit flag (e.g. allowOutOfRangeSpecialDates) and guarding this filter behind it.

apps/api/src/pkg/clickhouse-proxy.ts (1)

17-31: Validate spent_credits before sending to prevent corrupt analytics.

spent_credits is optional (good for back-compat), but when present it should be a non‑negative safe integer to match ClickHouse Int64 semantics. Add a lightweight guard to fail fast on bad inputs instead of ingesting junk.

   async insertVerifications(
     events: Array<{
       request_id: string;
       time: number;
       workspace_id: string;
       key_space_id: string;
       key_id: string;
       region: string;
       outcome: string;
       identity_id?: string;
       tags?: string[];
       spent_credits?: number;
     }>,
   ): Promise<void> {
+    // Basic shape checks for new numeric field
+    for (const e of events) {
+      if (e.spent_credits != null) {
+        if (!Number.isSafeInteger(e.spent_credits) || e.spent_credits < 0) {
+          throw new Error(`Invalid spent_credits: expected a non-negative safe integer, got ${e.spent_credits}`);
+        }
+      }
+    }
     await this.sendEvents("/_internal/chproxy/verifications", events);
   }

Follow-up: consider extracting a shared VerificationEvent type (used here and in analytics.ts) to avoid drift.
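A shared type could look like the following sketch (the field list is copied from the event shape above; the guard mirrors the validation suggested earlier, and both names are hypothetical):

```typescript
// Shared shape for key-verification analytics events, usable by both
// clickhouse-proxy.ts and analytics.ts so the two inline definitions
// cannot drift apart.
type VerificationEvent = {
  request_id: string;
  time: number;
  workspace_id: string;
  key_space_id: string;
  key_id: string;
  region: string;
  outcome: string;
  identity_id?: string;
  tags?: string[];
  spent_credits?: number;
};

// Runtime guard matching the shape check proposed above: when present,
// spent_credits must be a non-negative safe integer (ClickHouse Int64).
function assertValidSpentCredits(e: VerificationEvent): void {
  if (e.spent_credits != null) {
    if (!Number.isSafeInteger(e.spent_credits) || e.spent_credits < 0) {
      throw new Error(
        `Invalid spent_credits: expected a non-negative safe integer, got ${e.spent_credits}`,
      );
    }
  }
}
```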

tools/local/src/cmd/seed/event-generator.ts (1)

178-192: Seeded verification events now carry spent_credits; propagate into generated API response bodies

Great to initialize spent_credits here and finalize it in biasVerificationOutcome. To keep seed API-response logs consistent with the live API (which returns spentCredits/spent_credits in payloads), also include it in the generated response bodies.

Apply this additional change outside the current hunk to include spent credits in API responses generated by seeds:

@@
   if (verificationEvent.outcome !== "VALID") {
@@
-    apiRequest.response_body = JSON.stringify({
-      valid: false,
-      code: String(apiRequest.response_status),
-      message: apiRequest.error,
-    });
+    apiRequest.response_body = JSON.stringify({
+      valid: false,
+      code: String(apiRequest.response_status),
+      message: apiRequest.error,
+      spentCredits: 0,
+    });
   } else {
@@
     apiRequest.response_status = 200;
     apiRequest.response_body = JSON.stringify({
       valid: true,
       keyId: verificationEvent.key_id,
       ownerId: `user_${generateRandomString(16)}`,
       meta: Math.random() < 0.7 ? generateMetadata() : undefined,
+      spentCredits: verificationEvent.spent_credits,
     });
apps/api/src/pkg/keys/service.ts (1)

660-672: USAGE_EXCEEDED payload shape includes extra fields not in InvalidResponse

The object literal includes keyId, apiId, ownerId, expires, isRootKey, authorizedWorkspaceId, which aren’t part of InvalidResponse. TS will accept this under the union, but it bloats the contract.

Either expand InvalidResponse to include these as optional (documented) or trim them here to keep the error shape minimal and predictable.

apps/api/src/routes/v1_keys_verifyKey.ts (1)

399-412: Normalize spent_credits to a non-negative integer before logging

We’ve confirmed that the analytics contract uses a snake_case spent_credits field of type Int64 (ClickHouse schema, Go proxy, and Zod definitions), so passing negative, fractional, or non-numeric values could violate the contract or lead to unexpected behavior. Clamp and truncate val.spentCredits to ensure it’s always a finite, non-negative integer.

Apply the following diff at apps/api/src/routes/v1_keys_verifyKey.ts (around lines 398–412):

-    c.executionCtx.waitUntil(
-      analytics
-        .insertKeyVerification({
+    // Normalize spent credits: ensure a finite, non-negative integer for ClickHouse Int64
+    const rawSpent = (typeof val.spentCredits === "number" && Number.isFinite(val.spentCredits))
+      ? val.spentCredits
+      : 0;
+    const spentCredits = rawSpent > 0 ? Math.trunc(rawSpent) : 0;
+
+    c.executionCtx.waitUntil(
+      analytics
+        .insertKeyVerification({
           request_id: c.get("requestId"),
           time: Date.now(),
           workspace_id: val.key.workspaceId,
           key_space_id: val.key.keyAuthId,
           key_id: val.key.id,
           // @ts-expect-error
           region: c.req.raw.cf.colo ?? "",
           outcome: val.code,
           identity_id: val.identity?.id,
           tags: req.tags ?? [],
-          spent_credits: val.spentCredits ?? 0,
+          spent_credits: spentCredits,
         })

This enforces the ClickHouse Int64 requirement and aligns with backend guidance on clamping credit values.
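The same normalization can live in a small, unit-testable helper rather than inline at the call site (the function name is hypothetical):

```typescript
// Coerce an arbitrary spentCredits value into a ClickHouse-safe Int64:
// non-numbers and non-finite values become 0, fractions are truncated,
// and negatives are clamped to 0.
function normalizeSpentCredits(value: unknown): number {
  const n = typeof value === "number" && Number.isFinite(value) ? value : 0;
  return n > 0 ? Math.trunc(n) : 0;
}
```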

apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/bar-chart/hooks/use-fetch-timeseries.ts (1)

129-139: Add explicit spent_credits to your Timeseries item types

TypeScript currently treats timeseries items as either the very generic TimeseriesData (which uses an index signature) or BaseTimeseriesData (only defines originalTimestamp and total), so typos against spent_credits won’t be caught. To lock down the mapped shape and enable accurate autocomplete, please update your types as follows:

• In apps/dashboard/components/logs/overview-charts/types.ts, change

export type TimeseriesData = {
  originalTimestamp: number;
  [key: string]: unknown;
};

to something like:

export type TimeseriesData = {
  originalTimestamp: number;
  spent_credits: number;
  [key: string]: unknown;
};

• In apps/dashboard/components/stats-card/components/chart/stats-chart.ts, extend BaseTimeseriesData to include spent_credits:

export type BaseTimeseriesData = {
  originalTimestamp: number;
  total: number;
  spent_credits?: number;  // add this line
};

• In your bar-chart hook at
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/bar-chart/hooks/use-fetch-timeseries.ts, annotate the mapped return type so it includes spent_credits: number:

interface VerificationTimeseriesItem {
  displayX: string;
  originalTimestamp: number;
  valid: number;
  total: number;
  success: number;
  error: number;
  spent_credits: number;
  [outcome: string]: number | string;
}

const timeseries: VerificationTimeseriesItem[] = data.timeseries.map(ts => ({
  /* … */,
  spent_credits: ts.y.spent_credits ?? 0,
}));

These updates will prevent typos (e.g. spentCredits vs. spent_credits), keep charts in sync, and restore full editor support.

go/internal/services/keys/verifier.go (1)

124-148: Consider emitting a Prometheus counter for spent credits

If credits influence billing or quotas, a Prometheus Counter (e.g., key_verifications_spent_credits_total{key_type}) can complement CH for near-real-time alerting.

If desired, I can sketch a minimal metric wiring aligned with your metrics.KeyVerificationsTotal pattern.

apps/api/src/pkg/analytics.ts (1)

63-89: Normalize spent_credits in the proxy branch

We’ve confirmed that both the proxy client’s insertVerifications (apps/api/src/pkg/clickhouse-proxy.ts) and the direct ClickHouse inserter (this.clickhouse.verifications.insert, backed by a Zod schema with default 0) accept an optional spent_credits field, and that all ClickHouse schemas/migrations define spent_credits with DEFAULT 0. To guard against undefined or negative upstream values, apply this minimal normalization in the proxy branch of apps/api/src/pkg/analytics.ts:

 public get insertKeyVerification() {
   if (this.proxyClient) {
     return async (event: {
       request_id: string;
       time: number;
       workspace_id: string;
       key_space_id: string;
       key_id: string;
       region: string;
       outcome: string;
       identity_id?: string;
       tags?: string[];
-      spent_credits?: number;
+      spent_credits?: number;
     }) => {
-      return await wrap(
+      const normalizedEvent = {
+        ...event,
+        // default missing to 0 and clamp negative values
+        spent_credits: Math.max(0, event.spent_credits ?? 0),
+      };
+      return await wrap(
         // biome-ignore lint/style/noNonNullAssertion: proxyClient existence verified above
-        this.proxyClient!.insertVerifications([event]),
+        this.proxyClient!.insertVerifications([normalizedEvent]),
         (err) =>
           new FetchError({ message: err.message, retry: true }),
       );
     };
   }
   return this.clickhouse.verifications.insert;
 }
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/index.tsx (1)

54-56: Selection buffer uses active-keys granularity even when selecting in credit-spend chart

When the user selects on the left (credit-spend) chart, the buffer still derives from activeKeys granularity. If these differ, the range may be off.

Recommend passing the emitting chart’s granularity through onSelectionChange or having each chart call a granularity-aware handler.

Would you like me to draft a small change to OverviewBarChart/AreaChart to include granularity in the selection callback so the handler can compute the correct buffer?
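As a sketch of that approach (type names and buffer values here are hypothetical, not the actual OverviewBarChart API), the emitting chart would pass its own granularity so the handler computes the correct buffer:

```typescript
type Granularity = "perMinute" | "perHour" | "perDay";

// Buffer added on each side of a selection, per chart granularity.
const BUFFER_MS: Record<Granularity, number> = {
  perMinute: 60_000,
  perHour: 3_600_000,
  perDay: 86_400_000,
};

// Selection handler that derives the range buffer from the chart that
// emitted the selection, instead of always using active-keys granularity.
function selectionRange(
  start: number,
  end: number,
  granularity: Granularity,
): { startTime: number; endTime: number } {
  const buffer = BUFFER_MS[granularity];
  return { startTime: start - buffer, endTime: end + buffer };
}
```

Each chart would then call `onSelectionChange(start, end, ownGranularity)` so the shared handler never mixes granularities.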

internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql (1)

2-15: Document a POPULATE/backfill strategy

The MV omits POPULATE, so it only aggregates rows inserted after its creation. If historical data is needed in v4, document the backfill steps (a manual INSERT … SELECT or a one-off job); if fresh-only data is intended, add a comment in the migration saying so.

internal/clickhouse/src/keys/keys.ts (1)

14-21: Add schema-level time range validation

Mirror the TRPC refinement here so callers of this module (outside TRPC) also get guardrails.

-export const keysOverviewLogsParams = z.object({
+export const keysOverviewLogsParams = z
+  .object({
     workspaceId: z.string(),
     keyspaceId: z.string(),
     limit: z.number().int(),
     startTime: z.number().int(),
     endTime: z.number().int(),
     creditSpendMode: z.boolean().optional().default(false),
     outcomes: z
       .array(
         z.object({
           value: z.enum(KEY_VERIFICATION_OUTCOMES),
           operator: z.literal("is"),
         }),
       )
       .nullable(),
@@
     cursorTime: z.number().int().nullable(),
     sorts: z
       .array(
         z.object({
           column: z.enum(["time", "valid", "invalid", "spent_credits"]),
           direction: z.enum(["asc", "desc"]),
         }),
       )
       .nullable(),
-});
+  })
+  .refine((v) => v.endTime >= v.startTime, {
+    path: ["endTime"],
+    message: "endTime must be greater than or equal to startTime",
+  });
internal/clickhouse/schema/059_create_verifications.key_verifications_per_minute_v2.sql (1)

1-16: Optional: add TTL for retention

If you only need N months of minute‑level data, add a TTL to prune old rows automatically (keeps storage/cost in check).

Example:

 ENGINE = SummingMergeTree()
 PARTITION BY toYYYYMM(time)
+TTL time + INTERVAL 6 MONTH
 ORDER BY (workspace_id, key_space_id, identity_id, key_id, time, tags, outcome)

Adjust the interval to your retention policy.

internal/clickhouse/schema/057_create_verifications.key_verifications_per_month_v4.sql (1)

5-12: Use LowCardinality for repeated string dimensions

Apply LowCardinality to workspace_id, key_space_id, identity_id, key_id to reduce storage and speed aggregations at monthly granularity as well.

-  workspace_id  String,
-  key_space_id  String,
-  identity_id   String,
-  key_id        String,
+  workspace_id  LowCardinality(String),
+  key_space_id  LowCardinality(String),
+  identity_id   LowCardinality(String),
+  key_id        LowCardinality(String),
internal/clickhouse/schema/060_create_verifications.key_verifications_per_minute_mv_v2.sql (1)

6-23: Cast high‑repeat dimensions to LowCardinality in the MV SELECT (if table DDL is updated)

If you update the target table to LowCardinality, cast here to avoid type mismatch and let CH store dictionary‑encoded keys end‑to‑end.

-SELECT
-  workspace_id,
-  key_space_id,
-  identity_id,
-  key_id,
+SELECT
+  toLowCardinality(workspace_id) AS workspace_id,
+  toLowCardinality(key_space_id) AS key_space_id,
+  toLowCardinality(identity_id)  AS identity_id,
+  toLowCardinality(key_id)       AS key_id,
   outcome,
   count(*) as count,
   sum(spent_credits) as spent_credits,
   toStartOfMinute(fromUnixTimestamp64Milli(time)) AS time,
   tags
 FROM verifications.raw_key_verifications_v1
 GROUP BY
-  workspace_id,
-  key_space_id,
-  identity_id,
-  key_id,
+  toLowCardinality(workspace_id),
+  toLowCardinality(key_space_id),
+  toLowCardinality(identity_id),
+  toLowCardinality(key_id),
   outcome,
   time,
   tags
♻️ Duplicate comments (18)
apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/charts/bar-chart/hooks/use-fetch-timeseries.ts (1)

132-133: Correct fallback: use nullish coalescing for numeric spent_credits

ts.y.spent_credits ?? 0 avoids clobbering valid 0 values. This aligns with earlier guidance.

Confirm all other chart hooks use ?? for spent_credits:

#!/bin/bash
set -euo pipefail

rg -n -C2 --type=ts '\bspent_credits\s*\?\?\s*0\b' apps/dashboard || true
rg -n -C2 --type=ts '\bspent_credits\s*\|\|\s*0\b' apps/dashboard || true
go/internal/services/keys/validation.go (1)

45-52: Clamp negative cost before tracking spentCredits

If a negative cost slips through, this records negative credits. Clamp before assignment to avoid corrupt analytics and to stay consistent with usage accounting.

Apply:

 	if !usage.Valid {
 		k.setInvalid(StatusUsageExceeded, "Key usage limit exceeded.")
 	} else {
-		// Only track spent credits if the usage was valid (credits were actually deducted)
-		k.spentCredits = int64(cost)
+		// Only track spent credits if the usage was valid (credits were actually deducted)
+		tracked := cost
+		if tracked < 0 {
+			tracked = 0
+		}
+		k.spentCredits = int64(tracked)
 	}

Based on retrieved learnings (keys with unlimited usage bypass the limiter; non-idempotent credit decrements must not be retried; zero-cost requests should not alter remaining), verify behavior across edge cases with a quick grep:

#!/bin/bash
# Verify no retries wrap usage limiter updates
rg -nP -C2 'WithRetry|Retry|retry' go/internal/services/usagelimiter go/internal/services/keys

# Ensure unlimited keys short-circuit and don't set spentCredits
rg -nP -C3 'RemainingRequests\.Valid|spentCredits' go/internal/services/keys

# Ensure zero-cost requests do not change remaining and track 0 credits
rg -nP -C3 'cost.*0|Cost:\s*0' go/internal/services/usagelimiter go/internal/services/keys
internal/clickhouse/schema/053_create_verifications.key_verifications_per_hour_v4.sql (1)

11-12: Add DEFAULT 0 to numeric aggregates to harden inserts

Mirroring the earlier feedback: count and spent_credits should default to 0 to survive partial rollouts/omitted fields.

Apply this diff:

-  count         Int64,
-  spent_credits Int64
+  count         Int64 DEFAULT 0,
+  spent_credits Int64 DEFAULT 0
internal/clickhouse/schema/055_create_verifications.key_verifications_per_day_v4.sql (1)

11-12: Add DEFAULT 0 to count and spent_credits for SummingMergeTree safety and phased rollouts

Without DEFAULT 0, inserts can fail during mixed-deploy windows and SUM semantics may produce NULL-like surprises. Align with your hourly table convention.

-  count         Int64,
-  spent_credits Int64
+  count         Int64 DEFAULT 0,
+  spent_credits Int64 DEFAULT 0

If this migration already ran in any environment, follow up with:

ALTER TABLE verifications.key_verifications_per_day_v4
  MODIFY COLUMN count Int64 DEFAULT 0,
  MODIFY COLUMN spent_credits Int64 DEFAULT 0;

Do you want a separate migration file for that ALTER?

internal/clickhouse/schema/054_create_verifications.key_verifications_per_hour_mv_v4.sql (1)

14-15: Normalize tags in MV to prevent PK cardinality explosion

Group by raw tags will fragment rows due to non-deterministic ordering/duplicates. Canonicalize with arraySort(arrayDistinct(tags)) in SELECT (and rely on alias in GROUP BY).

Apply this diff:

 SELECT
@@
-  tags
+  arraySort(arrayDistinct(tags)) AS tags
@@
-GROUP BY
+GROUP BY
   workspace_id,
   key_space_id,
   identity_id,
   key_id,
   outcome,
   time,
-  tags
+  tags

This keeps GROUP BY stable via the alias while ensuring consistent ordering and deduplication.

Also applies to: 17-24

internal/clickhouse/schema/058_create_verifications.key_verifications_per_month_mv_v4.sql (1)

14-15: Normalize tags in MV to prevent PK cardinality explosion (month-level)

Same as hour-level MV: normalize tags via arraySort(arrayDistinct(tags)) to stabilize GROUP BY/ORDER BY.

Apply this diff:

 SELECT
@@
-  tags
+  arraySort(arrayDistinct(tags)) AS tags
@@
-GROUP BY
+GROUP BY
   workspace_id,
   key_space_id,
   identity_id,
   key_id,
   outcome,
   time,
-  tags
+  tags

Also applies to: 17-24

apps/dashboard/lib/trpc/routers/key/query-spent-credits/index.ts (3)

2-2: Fix isNull import to avoid relying on a re-export

Import isNull from drizzle-orm (or explicitly re-export it from "@/lib/db"). Current import can break typing/builds.

Apply this diff:

-import { db, isNull } from "@/lib/db";
+import { db } from "@/lib/db";
+import { isNull } from "drizzle-orm";

7-36: Validate time range at the schema level

Reject invalid ranges early so the resolver doesn’t hit ClickHouse with bad inputs.

-const querySpentCreditsSchema = z.object({
-  keyId: z.string(),
-  keyspaceId: z.string(),
-  startTime: z.number().int(),
-  endTime: z.number().int(),
-  outcomes: z
-    .array(
-      z.object({
-        value: z.enum([
-          "VALID",
-          "RATE_LIMITED",
-          "INSUFFICIENT_PERMISSIONS",
-          "FORBIDDEN",
-          "DISABLED",
-          "EXPIRED",
-          "USAGE_EXCEEDED",
-        ]),
-        operator: z.literal("is"),
-      }),
-    )
-    .nullable()
-    .optional(),
-  tags: z
-    .object({
-      operator: z.enum(["is", "contains", "startsWith", "endsWith"]),
-      value: z.string(),
-    })
-    .nullable()
-    .optional(),
-});
+const querySpentCreditsSchema = z
+  .object({
+    keyId: z.string(),
+    keyspaceId: z.string(),
+    startTime: z.number().int(),
+    endTime: z.number().int(),
+    outcomes: z
+      .array(
+        z.object({
+          value: z.enum([
+            "VALID",
+            "RATE_LIMITED",
+            "INSUFFICIENT_PERMISSIONS",
+            "FORBIDDEN",
+            "DISABLED",
+            "EXPIRED",
+            "USAGE_EXCEEDED",
+          ]),
+          operator: z.literal("is"),
+        }),
+      )
+      .nullable()
+      .optional(),
+    tags: z
+      .object({
+        operator: z.enum(["is", "contains", "startsWith", "endsWith"]),
+        value: z.string(),
+      })
+      .nullable()
+      .optional(),
+  })
+  .refine((v) => v.endTime >= v.startTime, {
+    path: ["endTime"],
+    message: "endTime must be greater than or equal to startTime",
+  });

73-81: Harden ClickHouse call with explicit TRPCError on failure

Prevent leaking raw errors and provide a consistent client surface.

-    const result = await clickhouse.verifications.spentCreditsTotal({
-      workspaceId: ctx.workspace.id,
-      keyspaceId: input.keyspaceId,
-      keyId: input.keyId,
-      startTime: input.startTime,
-      endTime: input.endTime,
-      outcomes: input.outcomes || null,
-      tags: input.tags || null,
-    });
+    let result;
+    try {
+      result = await clickhouse.verifications.spentCreditsTotal({
+        workspaceId: ctx.workspace.id,
+        keyspaceId: input.keyspaceId,
+        keyId: input.keyId,
+        startTime: input.startTime,
+        endTime: input.endTime,
+        outcomes: input.outcomes || null,
+        tags: input.tags || null,
+      });
+    } catch (_err) {
+      throw new TRPCError({
+        code: "INTERNAL_SERVER_ERROR",
+        message:
+          "Failed to query spent credits. If this persists, contact support@unkey.dev with the time this occurred.",
+      });
+    }
internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql (1)

26-27: Action required — remove/deprecate v3 artifacts or migrate all consumers to v4

This v4 MV coexists with v3 artifacts referenced across code/tests/docs. Migrate consumers to v4 and drop v3 or clearly deprecate v3.

apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/controls/index.tsx (1)

127-198: Duplication with “Remaining Credits” block; extract a tiny component

The animated container + StatusBadge pattern is duplicated. Extract a MotionMetricBadge to DRY this up and centralize animation/styling.

apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts (1)

56-60: Trim tag values before sending to the backend

You already guard with .trim() in the condition, but you still pass the untrimmed value to params.tags. Normalize it to avoid backend mismatches and duplicate cache keys due to trailing spaces.

-            params.tags = {
-              operator,
-              value: filter.value,
-            };
+            params.tags = {
+              operator,
+              value: filter.value.trim(),
+            };
internal/clickhouse/schema/059_create_verifications.key_verifications_per_minute_v2.sql (2)

5-11: Use LowCardinality for high‑repeat dimensions

workspace_id, key_space_id, identity_id, and key_id repeat heavily; LowCardinality reduces storage and speeds GROUP BY.

-  workspace_id  String,
-  key_space_id  String,
-  identity_id   String,
-  key_id        String,
+  workspace_id  LowCardinality(String),
+  key_space_id  LowCardinality(String),
+  identity_id   LowCardinality(String),
+  key_id        LowCardinality(String),

14-16: Add PARTITION BY to avoid unbounded single‑partition growth

Without partitioning, the minute table will accumulate into a single partition, hurting merges and retention. Partition monthly on time.

 ENGINE = SummingMergeTree()
+PARTITION BY toYYYYMM(time)
 ORDER BY (workspace_id, key_space_id, identity_id, key_id, time, tags, outcome)
internal/clickhouse/schema/057_create_verifications.key_verifications_per_month_v4.sql (1)

14-15: Partition the monthly aggregate table

Add PARTITION BY toYYYYMM(time) for predictable retention and pruning.

 ENGINE = SummingMergeTree()
+PARTITION BY toYYYYMM(time)
 ORDER BY (workspace_id, key_space_id, identity_id, key_id, time, tags, outcome)
internal/clickhouse/schema/060_create_verifications.key_verifications_per_minute_mv_v2.sql (1)

26-27: Drop the correct object type in down migration

The object created is a MATERIALIZED VIEW. Use the matching DROP statement to avoid goose rollback failures.

--- +goose down
-DROP VIEW IF EXISTS verifications.key_verifications_per_minute_mv_v2;
+-- +goose down
+DROP MATERIALIZED VIEW IF EXISTS verifications.key_verifications_per_minute_mv_v2;
internal/clickhouse/src/verifications.ts (2)

682-694: Fix non-"is" tag operator semantics (contains/startsWith/endsWith).

Using a generic LIKE is incorrect and misses proper anchoring. Reuse the pattern used in other queries.

-    if (args.tags) {
-      const tagCondition =
-        args.tags.operator === "is"
-          ? "has(tags, {tagValue: String})"
-          : "arrayExists(tag -> tag LIKE {tagValue: String}, tags)";
-      conditions.push(tagCondition);
-
-      paramSchemaExtension = {
-        ...paramSchemaExtension,
-        tagValue: z.string(),
-      };
-    }
+    if (args.tags) {
+      // Single tag object version
+      let tagCondition: string;
+      switch (args.tags.operator) {
+        case "is":
+          tagCondition = "has(tags, {tagValue: String})";
+          break;
+        case "contains":
+          tagCondition = "arrayExists(tag -> position(tag, {tagValue: String}) > 0, tags)";
+          break;
+        case "startsWith":
+          tagCondition = "arrayExists(tag -> startsWith(tag, {tagValue: String}), tags)";
+          break;
+        case "endsWith":
+          tagCondition = "arrayExists(tag -> endsWith(tag, {tagValue: String}), tags)";
+          break;
+        default:
+          tagCondition = "TRUE";
+      }
+      conditions.push(tagCondition);
+      paramSchemaExtension = { ...paramSchemaExtension, tagValue: z.string() };
+    }

If you adopt an array shape for tags, iterate and join with AND similar to getVerificationTimeseriesWhereClause.


722-727: Normalize and validate spent_credits result (non-negative) before returning.

Protects callers from negative totals and ensures null is returned as 0. Mirrors earlier guidance.

-    return ch.query({
-      query,
-      params: spentCreditsParams.extend(paramSchemaExtension),
-      schema: spentCreditsResult,
-    })(parameters);
+    const result = await ch.query({
+      query,
+      params: spentCreditsParams.extend(paramSchemaExtension),
+      schema: spentCreditsResult,
+    })(parameters);
+
+    // Defensive normalization
+    const val = result?.val?.[0]?.spent_credits ?? 0;
+    if (val < 0) {
+      console.error(
+        `[getSpentCreditsTotal] Negative spent_credits=${val} for workspace=${args.workspaceId}, keyspace=${args.keyspaceId}, keyId=${args.keyId ?? "<all>"}, range=[${args.startTime},${args.endTime}], outcomes=${(args.outcomes ?? []).map(o => o.value).join(",")}, tags=${args.tags ? `${args.tags.operator}:${args.tags.value}` : "<none>"}`
+      );
+      result.val[0].spent_credits = 0;
+    }
+    return result;
📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 4e56a1b and c65ded2.

📒 Files selected for processing (62)
  • apps/agent/pkg/clickhouse/schema/requests.go (1 hunks)
  • apps/api/src/pkg/analytics.ts (1 hunks)
  • apps/api/src/pkg/auth/root_key.ts (1 hunks)
  • apps/api/src/pkg/clickhouse-proxy.ts (1 hunks)
  • apps/api/src/pkg/keys/service.ts (14 hunks)
  • apps/api/src/routes/v1_keys_verifyKey.ts (2 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/bar-chart/hooks/use-fetch-timeseries.ts (1 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts (1 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/index.tsx (3 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/components/logs-metric-type.tsx (1 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/index.tsx (2 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/components/override-indicator.tsx (3 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/hooks/use-logs-query.ts (5 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/logs-table.tsx (6 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/query-logs.schema.ts (2 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/_overview/hooks/use-metric-type.ts (1 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/charts/bar-chart/hooks/use-fetch-timeseries.ts (1 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/charts/index.tsx (3 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/controls/components/logs-metric-type.tsx (1 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/controls/index.tsx (4 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/table/logs-table.tsx (4 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-metric-type.ts (1 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts (1 hunks)
  • apps/dashboard/app/(app)/apis/[apiId]/page.tsx (1 hunks)
  • apps/dashboard/components/logs/overview-charts/overview-bar-chart.tsx (6 hunks)
  • apps/dashboard/lib/fmt.ts (1 hunks)
  • apps/dashboard/lib/trpc/routers/api/keys/query-overview-logs/utils.ts (1 hunks)
  • apps/dashboard/lib/trpc/routers/index.ts (4 hunks)
  • apps/dashboard/lib/trpc/routers/key/query-spent-credits/index.ts (1 hunks)
  • go/internal/services/keys/validation.go (1 hunks)
  • go/internal/services/keys/verifier.go (2 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/001_database.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/002_raw_key_verifications_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/003_key_verifications_per_hour_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/004_key_verifications_per_hour_mv_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/005_key_verifications_per_day_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/006_key_verifications_per_day_mv_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/007_key_verifications_per_month_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/008_key_verifications_per_month_mv_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/009_key_verifications_per_hour_v2.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/010_key_verifications_per_hour_mv_v2.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/011_key_verifications_per_day_v2.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/012_key_verifications_per_day_mv_v2.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/014_key_verifications_per_month_mv_v2.sql (0 hunks)
  • go/pkg/clickhouse/schema/databases/001_verifications/021_key_verifications_per_minute_v1.sql (0 hunks)
  • go/pkg/clickhouse/schema/requests.go (1 hunks)
  • internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql (1 hunks)
  • internal/clickhouse/schema/053_create_verifications.key_verifications_per_hour_v4.sql (2 hunks)
  • internal/clickhouse/schema/054_create_verifications.key_verifications_per_hour_mv_v4.sql (3 hunks)
  • internal/clickhouse/schema/055_create_verifications.key_verifications_per_day_v4.sql (2 hunks)
  • internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql (3 hunks)
  • internal/clickhouse/schema/057_create_verifications.key_verifications_per_month_v4.sql (2 hunks)
  • internal/clickhouse/schema/058_create_verifications.key_verifications_per_month_mv_v4.sql (3 hunks)
  • internal/clickhouse/schema/059_create_verifications.key_verifications_per_minute_v2.sql (2 hunks)
  • internal/clickhouse/schema/060_create_verifications.key_verifications_per_minute_mv_v2.sql (3 hunks)
  • internal/clickhouse/src/index.ts (2 hunks)
  • internal/clickhouse/src/keys/keys.test.ts (1 hunks)
  • internal/clickhouse/src/keys/keys.ts (10 hunks)
  • internal/clickhouse/src/verifications.ts (8 hunks)
  • tools/local/src/cmd/seed/batch-operations.ts (4 hunks)
  • tools/local/src/cmd/seed/event-generator.ts (6 hunks)
  • tools/local/src/cmd/seed/utils.ts (1 hunks)
💤 Files with no reviewable changes (14)
  • go/pkg/clickhouse/schema/databases/001_verifications/007_key_verifications_per_month_v1.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/001_database.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/014_key_verifications_per_month_mv_v2.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/004_key_verifications_per_hour_mv_v1.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/002_raw_key_verifications_v1.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/003_key_verifications_per_hour_v1.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/009_key_verifications_per_hour_v2.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/010_key_verifications_per_hour_mv_v2.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/008_key_verifications_per_month_mv_v1.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/012_key_verifications_per_day_mv_v2.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/011_key_verifications_per_day_v2.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/006_key_verifications_per_day_mv_v1.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/005_key_verifications_per_day_v1.sql
  • go/pkg/clickhouse/schema/databases/001_verifications/021_key_verifications_per_minute_v1.sql
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-08-19T09:46:03.702Z
Learnt from: Flo4604
PR: unkeyed/unkey#3800
File: go/internal/services/usagelimiter/redis.go:176-186
Timestamp: 2025-08-19T09:46:03.702Z
Learning: Zero-cost requests in the usage limiter should not change the remaining credits but should still return the actual remaining credits for that key, not treat the key as unlimited. A key with 100 credits remaining should still report 100 credits remaining after a zero-cost request.

Applied to files:

  • apps/api/src/pkg/keys/service.ts
  • go/internal/services/keys/validation.go
📚 Learning: 2025-02-27T14:08:53.557Z
Learnt from: ogzhanolguncu
PR: unkeyed/unkey#2918
File: apps/dashboard/app/(app)/apis/_components/hooks/use-query-timeseries.ts:0-0
Timestamp: 2025-02-27T14:08:53.557Z
Learning: The useFetchVerificationTimeseries hook intentionally uses TIMESERIES_DATA_WINDOW * 24 to create a daily interval (24 hours) instead of the standard 1-hour interval used in other hooks.

Applied to files:

  • apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/charts/bar-chart/hooks/use-fetch-timeseries.ts
📚 Learning: 2025-06-24T13:29:10.129Z
Learnt from: ogzhanolguncu
PR: unkeyed/unkey#3401
File: apps/dashboard/app/(app)/logs/filters.query-params.ts:10-0
Timestamp: 2025-06-24T13:29:10.129Z
Learning: The `buildQueryParams` function in `apps/dashboard/app/(app)/logs/filters.query-params.ts` calls `useFilters()` hook inside it, but this is valid because the function is only called from within other React hooks, maintaining the Rules of Hooks compliance.

Applied to files:

  • apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-spent-credits.ts
  • apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/hooks/use-logs-query.ts
📚 Learning: 2025-08-19T09:42:40.919Z
Learnt from: Flo4604
PR: unkeyed/unkey#3800
File: go/internal/services/keys/validation.go:45-52
Timestamp: 2025-08-19T09:42:40.919Z
Learning: In go/internal/services/keys/validation.go, keys with unlimited usage (RemainingRequests.Valid = false) have an early return that bypasses the usage limiter entirely. The usage limiter is only called for keys with finite remaining request counts.

Applied to files:

  • go/internal/services/keys/validation.go
📚 Learning: 2025-08-21T15:54:45.183Z
Learnt from: chronark
PR: unkeyed/unkey#3825
File: go/internal/services/usagelimiter/limit.go:38-0
Timestamp: 2025-08-21T15:54:45.183Z
Learning: In go/internal/services/usagelimiter/limit.go, the UpdateKeyCreditsDecrement operation cannot be safely wrapped with db.WithRetry due to the lack of idempotency mechanisms in the current tech stack. Retrying this non-idempotent write operation risks double-charging users if the first attempt commits but the client sees a transient error.

Applied to files:

  • go/internal/services/keys/validation.go
📚 Learning: 2025-08-22T12:50:06.992Z
Learnt from: perkinsjr
PR: unkeyed/unkey#3775
File: apps/dashboard/lib/trpc/routers/api/keys/query-overview-timeseries/index.ts:1-2
Timestamp: 2025-08-22T12:50:06.992Z
Learning: The team at Unkey is comfortable accepting TRPC schema imports from app route folders in server-side TRPC routers as technical debt, since they're planning to migrate away from TRPC to calling their v2 API directly, making these imports temporary and not worth refactoring.

Applied to files:

  • apps/dashboard/lib/trpc/routers/index.ts
  • apps/dashboard/lib/trpc/routers/key/query-spent-credits/index.ts
🔇 Additional comments (34)
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/components/override-indicator.tsx (1)

121-121: LGTM: href now reflects current filters

Using href={buildKeyDetailUrl()} preserves metricType, since, tags, and outcomes across navigation, aligning the overview with key details. With the refactor above, you’ll also avoid malformed paths and keep multi-valued filters intact.

Please confirm the key-details route shape is /apis/[apiId]/keys/[keyAuthId]/[keyId] and that the key-details page recognizes all preserved query keys (including the new metricType for spent credits).
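For reference, the filter-preserving behavior being confirmed here reduces to a small pure helper. The sketch below is hypothetical (the real component's signature may differ); it only illustrates carrying metricType, since, tags, and outcomes into the key-details route shape named above:

```typescript
// Hypothetical sketch of a filter-preserving key-detail URL builder.
// The route shape and preserved query keys come from this review comment;
// the function name and signature are illustrative, not the actual code.
function buildKeyDetailUrl(
  apiId: string,
  keyAuthId: string,
  keyId: string,
  currentParams: Record<string, string | undefined>,
): string {
  const preservedKeys = ["metricType", "since", "tags", "outcomes"];
  const query = new URLSearchParams();
  for (const key of preservedKeys) {
    const value = currentParams[key];
    if (value !== undefined && value !== "") {
      query.set(key, value);
    }
  }
  const qs = query.toString();
  return `/apis/${apiId}/keys/${keyAuthId}/${keyId}${qs ? `?${qs}` : ""}`;
}
```

Centralizing the path construction like this is also what avoids the malformed-path risk mentioned above, since the route segments and query string are assembled in one place.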

apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/query-logs.schema.ts (1)

15-15: Default is good—propagate to fetch layer.

Adding creditSpendMode with a default false is sensible. Ensure downstream hooks/components (e.g., use-logs-query) pass through this flag when toggled in the UI so that server queries include credit-aware projections/sorts.

I can provide a quick repo scan to verify all call sites set this when in “Credits” mode.

tools/local/src/cmd/seed/event-generator.ts (2)

1-1: Good switch to node:crypto’s createHash

Using a named import from node:crypto is correct for Node-only tooling and avoids bundler polyfills.


250-252: Correct: finalize spent_credits after outcome biasing

Setting spent_credits after the outcome is chosen ensures cost rules apply only to VALID results and usage-limited keys. Looks good.

apps/api/src/pkg/keys/service.ts (3)

401-403: NOT_FOUND response includes spentCredits = 0 — good consistency

Consistent response shape helps clients. No concerns here.


442-443: All early-return error paths consistently set spentCredits: 0

Nice attention to detail; this avoids extra null checks and conditionals in UI and analytics code.

Also applies to: 456-457, 477-478, 494-495, 508-509, 553-554, 624-625


629-654: Validation complete: spentCredits mapping and semantics correct

  • Credit deduction logic uses limited.valid appropriately; zero-cost requests report spentCredits = 0 and leave remaining unchanged.
  • Verified camelCase spentCredits maps to snake_case spent_credits in analytics/ClickHouse paths:
    • apps/api/src/routes/v1_keys_verifyKey.ts
    • apps/api/src/pkg/clickhouse-proxy.ts
    • apps/api/src/pkg/analytics.ts
    • apps/api/src/pkg/auth/root_key.ts

Approving these code changes.

apps/api/src/pkg/auth/root_key.ts (1)

66-67: Analytics logs spent_credits: 0 for root keys — correct

Root key verifications should not consume credits; explicit 0 keeps the schema uniform.

apps/dashboard/app/(app)/apis/[apiId]/_overview/components/controls/index.tsx (1)

8-19: LGTM: Metric-type control placement is correct

Import and placement of LogsMetricType between filters and date-time aligns with expected UX and doesn’t introduce prop drilling or re-renders beyond the container boundary.

apps/dashboard/lib/trpc/routers/api/keys/query-overview-logs/utils.ts (1)

52-64: creditSpendMode schema and params are consistent – no action needed

I’ve verified that:

  • The client Zod schema (keysQueryOverviewLogsPayload) defines creditSpendMode as z.boolean().optional().default(false), yielding a boolean on input.
  • The TRPC transform (transformKeysFilters) applies params.creditSpendMode ?? false, matching the client’s default.
  • The backend ClickHouse params (keysOverviewLogsParams) likewise declare creditSpendMode as z.boolean().optional().default(false), and use it to build the correct filter (args.creditSpendMode ? "spent_credits > 0" : "TRUE").

All layers use the same field name, type, default, and semantics—there’s no risk of inadvertently disabling credit-spend mode.
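The credit-aware filter verified above reduces to a tiny pure mapping; a sketch of the pattern (helper name illustrative, condition strings taken from this review):

```typescript
// Sketch of the creditSpendMode -> SQL condition mapping described above.
// Returning "TRUE" keeps the WHERE clause composable when the mode is off,
// and the ?? false fallback mirrors the Zod .default(false) on both ends.
function creditSpendCondition(creditSpendMode: boolean | undefined): string {
  return (creditSpendMode ?? false) ? "spent_credits > 0" : "TRUE";
}
```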

apps/dashboard/app/(app)/apis/[apiId]/page.tsx (1)

2-5: Good: navbar label follows selected metric type

Pulling the active label from METRIC_TYPE_LABELS[metricType] keeps the header consistent with the controls. Hook usage in a client component is appropriate.

Also applies to: 11-11, 19-19

apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/bar-chart/hooks/use-fetch-timeseries.ts (1)

159-163: Good addition: defaulting spent_credits with nullish coalescing

Using ts.y.spent_credits ?? 0 avoids unintended coalescing of valid falsy numbers and matches the pattern we want here. LGTM.

apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/table/logs-table.tsx (1)

22-25: Integration with metric-type looks good

Importing and using isCreditSpendMode to both adjust the Tags width and inject the Credits column is clean and non-invasive to the rest of the table definition.

Also applies to: 185-185, 309-310

apps/dashboard/lib/trpc/routers/index.ts (1)

52-53: The output will confirm how queryKeySpentCredits is wired under both router.key and router.api.keys and which middleware each uses, so we can verify intent, ownership checks, and RBAC at both call sites.

apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/hooks/use-logs-query.ts (1)

9-10: Threading creditSpendMode through the query is correct

Including creditSpendMode in the payload and dependency array ensures proper cache-keying and refetch on mode toggles. Looks solid.

Also applies to: 25-25, 43-44, 125-126

go/internal/services/keys/verifier.go (1)

76-122: Spent credits only tracked on successful deduction
I verified that the only write to k.spentCredits occurs in withCredits and only inside the “valid usage” branch (validation.go:51). No other methods—withIPWhitelist, withPermissions, or withRateLimits—touch spentCredits, and any early-return paths (permission checks, rate-limit failures, whitelist errors) leave it at zero. No further changes needed.

apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/components/charts/index.tsx (1)

3-6: Action Required: Wire in credit-spend hook for key-level charts

It looks like this chart component currently only imports and calls useFetchVerificationTimeseries; to support the “Credits Spent” metric you’ll need to:

  • Import the credit-spend hook:
    import { useFetchCreditSpendTimeseries } from "./credit-spend-chart/hooks/use-fetch-timeseries";
  • Retrieve the toggle state from useMetricType():
    const { isCreditSpendMode } = useMetricType();
  • Select the timeseries according to the toggle. Hooks must be called unconditionally to satisfy the Rules of Hooks, so call both and choose the result:

 import { useFetchVerificationTimeseries } from "./bar-chart/hooks/use-fetch-timeseries";
+import { useFetchCreditSpendTimeseries } from "./credit-spend-chart/hooks/use-fetch-timeseries";

 const { isCreditSpendMode } = useMetricType();

-const { data, loading } = useFetchVerificationTimeseries(apiId);
+const requests = useFetchVerificationTimeseries(apiId, keyId);
+const creditSpend = useFetchCreditSpendTimeseries(apiId, keyId);
+const { data, loading } = isCreditSpendMode ? creditSpend : requests;

  • Ensure the returned timeseries data uses the spent_credits field in credit-spend mode.

This will align the key-level charts with the logs metric-type toggle and correctly surface credit-spend data. 

⛔ Skipped due to learnings

Learnt from: unrenamed
PR: #2660
File: apps/play/app/page-bk.tsx:24-24
Timestamp: 2024-11-13T14:58:01.321Z
Learning: In React, refs created with useRef do not change between renders, so including them in the dependency array of useEffect hooks is unnecessary.

Learnt from: ogzhanolguncu
PR: #3401
File: apps/dashboard/app/(app)/logs/filters.query-params.ts:10-0
Timestamp: 2025-06-24T13:29:10.129Z
Learning: The buildQueryParams function in apps/dashboard/app/(app)/logs/filters.query-params.ts calls useFilters() hook inside it, but this is valid because the function is only called from within other React hooks, maintaining the Rules of Hooks compliance.

internal/clickhouse/src/index.ts (2)

68-75: New export: getSpentCreditsTotal wire-up looks correct

Import location and naming are consistent with other verifications exports.


111-116: It looks like the getSpentCreditsTotal function signature and its TRPC usage weren't surfaced. Let's confirm:

  • That verifications.ts exists under internal/clickhouse/src and contains an exported getSpentCreditsTotal returning a Promise<number>.
  • That the TRPC router at apps/dashboard/lib/trpc/routers/key/query-spent-credits/index.ts returns the result as a number (and/or has an appropriate zod schema).

Once we have that, we can be confident that downstream callers expect a number and update types as needed. I'll review those outputs next.
tools/local/src/cmd/seed/batch-operations.ts (2)

126-130: Good: clearer multi-line error reporting

The reformat improves readability of error context without changing behavior.


118-121: Return shape change is safe—no callsites consume the returned object

I searched the entire repository for any usages of insertVerificationEvents that destructure or assign its return value:

  • Only two references found: the definition in tools/local/src/cmd/seed/batch-operations.ts, and the call in tools/local/src/cmd/seed/apis.ts at line 390, where it is simply awaited with no assignment
  • No destructuring patterns (e.g. const { keyUsageStats } = await insertVerificationEvents(...)) exist
  • No variables capture its return value anywhere

Since no code reads from the returned object, changing it to { keyUsageStats, creditStats } cannot break existing consumers. The original warning is no longer needed.
internal/clickhouse/schema/053_create_verifications.key_verifications_per_hour_v4.sql (1)

1-2: Migration markers and naming are consistent

goose up/down markers and v4 naming look correct.
apps/dashboard/app/(app)/apis/[apiId]/keys/[keyAuthId]/[keyId]/hooks/use-metric-type.ts (1)

13-24: LGTM: clean URL-state wiring with clear mode flags

Defaulting via parseAsStringLiteral(...).withDefault("requests") is spot-on. The returned booleans are concise and expressive for render conditions.
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/index.tsx (1)

122-132: TooltipPrefix nullability confirmed

The OverviewBarChartProps type declares

```ts
tooltipPrefix?: string | null;
```

(line 45 in apps/dashboard/components/logs/overview-charts/overview-bar-chart.tsx), so it accepts null as a valid value. The default is "All" (line 61), but overriding with null will not cause a type error. No change needed here.

internal/clickhouse/schema/054_create_verifications.key_verifications_per_hour_mv_v4.sql (1)

12-12: No changes needed for sum(spent_credits) – column is non-nullable
The spent_credits column is added in migration 052 as Int64 DEFAULT 0 (non-Nullable), so sum(spent_credits) cannot yield NULL and will default to 0 when there are no rows. Switching to sumOrZero isn’t necessary in this case.

• See internal/clickhouse/schema/052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql:3
• MV uses sum(spent_credits) safely (e.g. internal/clickhouse/schema/054_create_verifications.key_verifications_per_hour_mv_v4.sql:12)

apps/dashboard/app/(app)/apis/[apiId]/_overview/components/table/logs-table.tsx (1)

27-31: No action needed: hook already handles credit-spend mode

The useKeysOverviewLogsQuery hook imports and uses useMetricType internally, then passes creditSpendMode: isCreditSpendMode into its query parameters, so the logs query automatically switches shape when in credit-spend mode. You don’t need to pass the mode from the table component.

apps/agent/pkg/clickhouse/schema/requests.go (1)

26-27: LGTM: Go struct fields align with ClickHouse DDL

  • Verified that migration 052 (052_add_spent_credits_to_verifications.raw_key_verifications_v1.sql) adds spent_credits Int64 DEFAULT 0, matching the Go int64 zero‐value and ch:"spent_credits" tag.
  • Verified that migration 027 (027_add_tags_to_verifications.raw_key_verifications_v1.sql) adds tags Array(String) DEFAULT [], matching the Go []string and ch:"tags" tag.

No further changes required—fields are correctly typed and tagged.

apps/dashboard/app/(app)/apis/[apiId]/_overview/hooks/use-metric-type.ts (1)

13-25: Hook implementation looks solid

Correct use of nuqs’ parseAsStringLiteral with a safe default and helpful booleans for consumers. No issues.
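The parsing behavior nuqs provides here boils down to this pure function; the sketch below shows the semantics, not the library's implementation, and the literal names ("requests", "creditSpend") are assumptions based on this PR:

```typescript
// Sketch of parseAsStringLiteral([...]).withDefault("requests") semantics:
// accept only known literals from the URL, otherwise fall back to the default.
type MetricType = "requests" | "creditSpend";

const METRIC_TYPES: readonly MetricType[] = ["requests", "creditSpend"];

function parseMetricType(raw: string | null): MetricType {
  return raw !== null && (METRIC_TYPES as readonly string[]).includes(raw)
    ? (raw as MetricType)
    : "requests";
}
```

Falling back to the default for unknown values (rather than throwing) is what keeps a hand-edited or stale URL from breaking the page.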

internal/clickhouse/schema/056_create_verifications.key_verifications_per_day_mv_v4.sql (1)

2-15: Ensure MV’s time type aligns with the target table

  • The target table verifications.key_verifications_per_day_v4 defines the time column as DateTime (see 055_create_verifications.key_verifications_per_day_v4.sql, line 4).
  • The MV’s SELECT uses toStartOfDay(fromUnixTimestamp64Milli(time)) – by default (with enable_extended_results_for_datetime_functions=0) this returns a DateTime, but if that setting is enabled it will return DateTime64(3).
  • To prevent creation or insert failures, please verify your cluster’s enable_extended_results_for_datetime_functions setting and either:
    • Keep the default behavior (DateTime), or
    • Explicitly cast the expression to DateTime, e.g.
      toDateTime(toStartOfDay(fromUnixTimestamp64Milli(time))) AS time
    • Or alter the target column to DateTime64(3) if sub-second precision is required.
internal/clickhouse/src/keys/keys.ts (3)

199-201: Credit-spend filter is clear and safe

Nice use of a guardable condition string; keeps the query readable and avoids injection.


270-297: Cursor condition correctness

Using Nullable(UInt64) and guarding with IS NULL for the first page is sound. No changes needed.
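The first-page guard being approved here can be sketched as a small condition builder. Names and the exact clause text are hypothetical; the point is that a null cursor collapses the condition to "no restriction":

```typescript
// Sketch of the nullable-cursor pagination guard: on the first page the
// cursor param is null, so the IS NULL branch makes the condition a no-op.
function cursorCondition(cursor: number | null): {
  clause: string;
  params: { cursor: number | null };
} {
  return {
    // Mirrors the Nullable(UInt64) + IS NULL pattern in the ClickHouse query.
    clause: "({cursor: Nullable(UInt64)} IS NULL OR time < {cursor: Nullable(UInt64)})",
    params: { cursor },
  };
}
```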


312-315: spent_credits type and nullability confirmed

The raw_key_verifications_v1 table adds spent_credits as a non-nullable Int64 with DEFAULT 0 (no Nullable(...) wrapper) and all downstream materialized-view schemas define spent_credits as Int64. Aggregating via sum(spent_credits) therefore returns an Int64 and matches the target schema—no further changes needed.

apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts (1)

59-65: Confirm BE expects “since” string vs. explicit timestamps

In this hook you pass since as a string and let the server derive the window, while useSpentCredits computes start/end locally. Consider unifying behavior across hooks.

Would you prefer both hooks to pass since and let the server derive time windows? I can align them either way.

internal/clickhouse/src/verifications.ts (1)

268-337: All referenced v2/v4 tables and MVs exist and include spent_credits.

– minute-v2 table (key_verifications_per_minute_v2) defines spent_credits (column confirmed) and is populated by its MV (key_verifications_per_minute_mv_v2) with sum(spent_credits).
– hour-v4, day-v4 and month-v4 tables each define spent_credits, and their MVs (key_verifications_per_{hour,day,month}_mv_v4) aggregate sum(spent_credits).
– The legacy day_v2 table lacks spent_credits but is not referenced by the updated code.

No missing DDLs or materialized views—ready to merge.

Comment on lines +59 to 60
spentCredits?: number;
};

🧹 Nitpick (assertive)

Consider making spentCredits required in all API responses

You already populate spentCredits on every code path (0 or cost). Making it non-optional in the types tightens the contract and simplifies consumers.

Apply this diff:

-  spentCredits?: number;
+  spentCredits: number;

Repeat for NotFoundResponse, InvalidResponse, and ValidResponse.

Also applies to: 88-89, 114-115

🤖 Prompt for AI Agents
In apps/api/src/pkg/keys/service.ts around lines 59-60 (and similarly at 88-89
and 114-115), the response types declare spentCredits as optional; since the
code always sets spentCredits (0 or cost) make spentCredits required in all
response types (replace spentCredits?: number with spentCredits: number) for the
main response and for NotFoundResponse, InvalidResponse, and ValidResponse so
the type contract matches runtime behavior; update any related type
imports/usages if needed to reflect the non-optional field.

Comment on lines +691 to 692
spentCredits,
});

🧹 Nitpick (assertive)

VALID response returns spentCredits — consider also emitting it in metrics

You already emit metric.credits.spent with cost/deducted earlier. Adding spentCredits to metric.key.verification (or another dedicated metric) could simplify downstream observability.

If you decide to do it, append spentCredits to the metrics.emit call near Lines 188-195.

🤖 Prompt for AI Agents
In apps/api/src/pkg/keys/service.ts around lines 691-692, and in the
metrics.emit call near lines 188-195: append the spentCredits value to the
emitted metric payload (either add spentCredits to the existing
metric.key.verification payload or emit a dedicated metric that includes
spentCredits). Ensure you reference the same spentCredits variable in scope, add
the spentCredits field to the object passed to metrics.emit, and update any
metric payload type/interface if required so the new property is allowed.
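
A minimal sketch of what the extended payload could look like. The real `metrics.emit` signature and the `metric.key.verification` payload fields live in the Unkey codebase and are assumptions here; only the idea of carrying `spentCredits` alongside the verification outcome is illustrated.

```typescript
// Hypothetical payload shape — not the actual Unkey metric type.
type KeyVerificationMetric = {
  metric: "metric.key.verification";
  valid: boolean;
  code: string;
  spentCredits: number; // proposed addition from this review
};

// Builds the payload that would be passed to metrics.emit(...).
function buildVerificationMetric(
  valid: boolean,
  code: string,
  spentCredits: number,
): KeyVerificationMetric {
  return { metric: "metric.key.verification", valid, code, spentCredits };
}
```

Keeping `spentCredits` in the same event as the outcome avoids a join between two metric streams downstream.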

Comment on lines +95 to +101
const creditSpendChartConfig = {
spent_credits: {
label: "Credits Spent",
color: "hsl(var(--success-9))",
},
};


🧹 Nitpick (assertive)

Chart config consistency and theming

Config looks good. Minor nit: verify the token "hsl(var(--success-9))" aligns with design for “spent” (some systems use neutral or warning shades for consumption). If “success” green is intentional, ignore.

🤖 Prompt for AI Agents
In apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/index.tsx
around lines 95 to 101, the color token for spent_credits is set to
"hsl(var(--success-9))"; confirm this matches the design token for the “spent”
series and if the design expects a different semantic (e.g., neutral or warning)
replace the token with the correct one such as "hsl(var(--neutral-9))" or
"hsl(var(--warning-9))"; if the success (green) token is intentional, leave
as-is.

Comment on lines +102 to +109
// Call onMount for credit spend mode
useEffect(() => {
if (isCreditSpendMode && chartContainerRef.current) {
const rect = chartContainerRef.current.getBoundingClientRect();
onMount(rect.top + window.scrollY);
}
}, [isCreditSpendMode, onMount]);


🛠️ Refactor suggestion

Avoid double onMount firing in credit-spend mode

The useEffect calls onMount on entry to credit-spend mode and the OverviewBarChart also receives onMount. This can emit two mount events with potentially different offsets (container vs chart), causing jittery scroll/selection behaviors.

Consider centralizing this to one place—prefer passing a wrapper onMount to the chart that measures the container—then remove the effect.

Apply this diff:

@@
-  // Call onMount for credit spend mode
-  useEffect(() => {
-    if (isCreditSpendMode && chartContainerRef.current) {
-      const rect = chartContainerRef.current.getBoundingClientRect();
-      onMount(rect.top + window.scrollY);
-    }
-  }, [isCreditSpendMode, onMount]);
+  const handleBarChartMount = () => {
+    if (!chartContainerRef.current) return;
+    const rect = chartContainerRef.current.getBoundingClientRect();
+    onMount(rect.top + window.scrollY);
+  };
@@
-          <OverviewBarChart
+          <OverviewBarChart
             data={creditSpendTimeseries}
             isLoading={creditSpendIsLoading}
             isError={creditSpendIsError}
             enableSelection
-            onMount={onMount}
+            onMount={handleBarChartMount}
             onSelectionChange={handleSelectionChange}
             config={creditSpendChartConfig}

Also applies to: 114-121

🤖 Prompt for AI Agents
In apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/index.tsx
around lines 102-109 and 114-121, the component calls onMount twice when
entering credit-spend mode (once via this useEffect measuring the container and
again via OverviewBarChart’s onMount) causing duplicate mount events and jitter;
remove the useEffect calls and instead pass a single wrapper onMount prop down
to OverviewBarChart that measures the container (use
chartContainerRef.current.getBoundingClientRect() and window.scrollY inside that
wrapper) so only the chart reports the measured offset once.

Comment on lines +135 to +147
{activeKeysTimeseries.length > 0 && (
<div className="w-full md:w-1/2 max-md:h-72">
<OverviewAreaChart
data={activeKeysTimeseries}
isLoading={activeKeysIsLoading}
isError={activeKeysIsError}
enableSelection
onSelectionChange={handleSelectionChange}
config={keysChartConfig}
labels={keysChartLabels}
/>
</div>
)}

🧹 Nitpick (assertive)

UX: Hiding the right chart collapses layout

Conditionally omitting the Active Keys chart can create asymmetric layouts and layout shifts. Consider rendering a skeleton/empty state to keep height and spacing consistent.

If desired, I can add a lightweight placeholder that preserves layout while activeKeysTimeseries is empty.

Comment on lines +74 to +83
// Track credit statistics
creditStats.totalCreditsSpent += verificationEvent.spent_credits;
if (verificationEvent.spent_credits === 0) {
creditStats.verificationsWith0Credits++;
} else if (verificationEvent.spent_credits === 1) {
creditStats.verificationsWith1Credit++;
} else {
creditStats.verificationsWithMultipleCredits++;
}


🧹 Nitpick (assertive)

Defensive math for spent_credits; bucket boundaries explicit

Ensure we aggregate integers and handle unexpected values safely.

Apply this diff:

-        creditStats.totalCreditsSpent += verificationEvent.spent_credits;
-        if (verificationEvent.spent_credits === 0) {
+        const credits = Number(verificationEvent.spent_credits) || 0;
+        creditStats.totalCreditsSpent += credits;
+        if (credits <= 0) {
           creditStats.verificationsWith0Credits++;
-        } else if (verificationEvent.spent_credits === 1) {
+        } else if (credits === 1) {
           creditStats.verificationsWith1Credit++;
         } else {
           creditStats.verificationsWithMultipleCredits++;
         }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
        // Track credit statistics
        const credits = Number(verificationEvent.spent_credits) || 0;
        creditStats.totalCreditsSpent += credits;
        if (credits <= 0) {
          creditStats.verificationsWith0Credits++;
        } else if (credits === 1) {
          creditStats.verificationsWith1Credit++;
        } else {
          creditStats.verificationsWithMultipleCredits++;
        }
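
The defensive aggregation suggested above can also be factored into a pure helper, which makes the coercion behavior (null, NaN, numeric strings) easy to unit-test in isolation. A sketch — the function and type names are illustrative, not code from this PR:

```typescript
type CreditStats = {
  totalCreditsSpent: number;
  verificationsWith0Credits: number;
  verificationsWith1Credit: number;
  verificationsWithMultipleCredits: number;
};

// Coerces unexpected values to 0 before bucketing, mirroring the suggested diff.
function tallyCredits(stats: CreditStats, raw: unknown): CreditStats {
  const credits = Number(raw) || 0;
  return {
    totalCreditsSpent: stats.totalCreditsSpent + credits,
    verificationsWith0Credits:
      stats.verificationsWith0Credits + (credits <= 0 ? 1 : 0),
    verificationsWith1Credit:
      stats.verificationsWith1Credit + (credits === 1 ? 1 : 0),
    verificationsWithMultipleCredits:
      stats.verificationsWithMultipleCredits + (credits > 1 ? 1 : 0),
  };
}
```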

Comment on lines +34 to 35
spent_credits: number;
};

💡 Verification agent

🧩 Analysis chain

Type surface extended with spent_credits: ensure all producers/consumers are aligned

Adding spent_credits to VerificationEvent looks right given the broader PR. Please double-check all seed consumers and ClickHouse ingestion paths read this field.

Run to verify end-to-end usage:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Searching seeders and analytics for spent_credits..."
rg -n -C2 --type=ts --type=sql '\bspent_credits\b' tools internal apps | sed -n '1,200p'

Length of output: 15536


Add spent_credits to the seed event generator

The VerificationEvent type surface was extended with spent_credits, and all ClickHouse ingestion paths already handle this field. However, I don’t see any code in the local seed generator that actually populates it—seeded events will end up missing spent_credits. Please update the generator to include a value (e.g. default to 0 or accept it via CLI) when constructing events.

• File: tools/local/src/cmd/seed/event-generator.ts
– Locate where the VerificationEvent payload is built and add a spent_credits property.
– Ensure you choose a sensible default or wiring so that every seeded event includes this field.

🤖 Prompt for AI Agents
In tools/local/src/cmd/seed/event-generator.ts around lines 34–35, the
VerificationEvent payload construction is missing the newly added spent_credits
field; update the place where the event object is built to include spent_credits
(default to 0 or thread through a CLI flag if available). Add spent_credits: 0
to the generated event object (or read a parsed CLI option and set that value)
so every seeded event includes the field, and ensure TypeScript types remain
satisfied.

Comment on lines 81 to 83
export function generateKeyHash(keyContent: string): string {
-  return crypto.createHash("sha256").update(keyContent).digest("hex");
+  return createHash("sha256").update(keyContent).digest("hex");
}

🧹 Nitpick (assertive)

Nit: make string encoding explicit in hash update

Minor clarity/readability: pass "utf8" explicitly to avoid any ambiguity and to match typical Node patterns.

Apply this diff:

-export function generateKeyHash(keyContent: string): string {
-  return createHash("sha256").update(keyContent).digest("hex");
-}
+export function generateKeyHash(keyContent: string): string {
+  return createHash("sha256").update(keyContent, "utf8").digest("hex");
+}
📝 Committable suggestion


Suggested change
export function generateKeyHash(keyContent: string): string {
  return createHash("sha256").update(keyContent, "utf8").digest("hex");
}
🤖 Prompt for AI Agents
In tools/local/src/cmd/seed/event-generator.ts around lines 81 to 83, the call
to createHash(...).update(keyContent) omits an explicit string encoding; change
the update call to pass "utf8" explicitly (i.e., update(keyContent, "utf8")) so
the hash uses a defined encoding and follows Node conventions.

Comment on lines +195 to +218
/**
* Calculates credit cost for a verification event
*/
function calculateCreditCost(key: KeyInfo, outcome: string): number {
// Rule 3: If request is rejected, cost should be 0
if (outcome !== "VALID") {
return 0;
}

// Only calculate cost if key has usage limits (remaining enabled)
if (!key.hasUsageLimit) {
return 0;
}

// Rule 1: Default cost is 1 credit
// Rule 2: 15% of VALID verifications should have cost > 1 (2-5 credits)
// Since we only get here for VALID outcomes, we can apply the 15% rule directly
if (Math.random() < 0.15) {
return 2 + Math.floor(Math.random() * 4); // Random between 2-5
}

return 1;
}


🧹 Nitpick (assertive)

Make credit-cost randomness injectable for deterministic tests and repeatable seeds

Using Math.random() is fine, but inject a RNG to enable reproducible runs and unit testing of distributions.

Apply this diff:

-function calculateCreditCost(key: KeyInfo, outcome: string): number {
+function calculateCreditCost(
+  key: KeyInfo,
+  outcome: string,
+  rng: () => number = Math.random,
+): number {
@@
-  if (Math.random() < 0.15) {
-    return 2 + Math.floor(Math.random() * 4); // Random between 2-5
+  if (rng() < 0.15) {
+    return 2 + Math.floor(rng() * 4); // 2-5
   }

And pass the RNG from the caller when needed (default remains unchanged).

📝 Committable suggestion


Suggested change
/**
 * Calculates credit cost for a verification event
 */
function calculateCreditCost(
  key: KeyInfo,
  outcome: string,
  rng: () => number = Math.random,
): number {
  // Rule 3: If request is rejected, cost should be 0
  if (outcome !== "VALID") {
    return 0;
  }

  // Only calculate cost if key has usage limits (remaining enabled)
  if (!key.hasUsageLimit) {
    return 0;
  }

  // Rule 1: Default cost is 1 credit
  // Rule 2: 15% of VALID verifications should have cost > 1 (2-5 credits)
  // Since we only get here for VALID outcomes, we can apply the 15% rule directly
  if (rng() < 0.15) {
    return 2 + Math.floor(rng() * 4); // 2-5
  }

  return 1;
}
🤖 Prompt for AI Agents
In tools/local/src/cmd/seed/event-generator.ts around lines 195 to 218, the
calculateCreditCost function uses Math.random() directly which prevents
deterministic tests and repeatable seeds; change the function signature to
accept an optional rng: () => number parameter defaulting to Math.random,
replace all Math.random() calls inside with rng(), keep the existing logic and
ranges (15% branch and 2-5 credits calculation), and update all call sites to
pass the RNG when deterministic behavior is required (while leaving callers that
don’t pass it to use the default Math.random).
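
Once the RNG is injectable, any seedable generator works for reproducible seed runs. One small, well-known option is mulberry32 — shown here as an illustration, not as part of the PR:

```typescript
// mulberry32: a tiny seedable PRNG returning floats in [0, 1).
function mulberry32(seed: number): () => number {
  let a = seed | 0;
  return () => {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Same seed, same sequence: a seeded run reproduces the exact credit distribution.
const rng = mulberry32(42);
```

A hypothetical call site would then be `calculateCreditCost(key, outcome, rng)`, while existing callers keep the `Math.random` default.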

Comment on lines 6 to 8
export function generateUuid() {
-  return crypto.randomUUID();
+  return globalThis.crypto.randomUUID();
}

🛠️ Refactor suggestion

Make UUID generation robust across runtimes (add Node fallback).

globalThis.crypto is not guaranteed in all Node environments (or when DOM lib types are absent). Prefer Web Crypto when present, but fall back to Node’s crypto.randomUUID() to avoid runtime errors.

Apply this diff in the function:

 export function generateUuid() {
-  return globalThis.crypto.randomUUID();
+  const webCrypto = (globalThis as any).crypto;
+  return typeof webCrypto?.randomUUID === "function"
+    ? webCrypto.randomUUID()
+    : nodeRandomUUID();
 }

And add this import at the top of the file (outside the selected range):

import { randomUUID as nodeRandomUUID } from "node:crypto";
🤖 Prompt for AI Agents
In tools/local/src/cmd/seed/utils.ts around lines 6 to 8, the generateUuid
function assumes globalThis.crypto exists and calls randomUUID there; add a
robust fallback to Node’s crypto by importing nodeRandomUUID from "node:crypto"
at the top of the file and change generateUuid to return
globalThis.crypto?.randomUUID?.() when available, otherwise call
nodeRandomUUID(); ensure the import line is added outside the selected range and
the function uses the Web Crypto path first and Node fallback to avoid runtime
errors in environments without globalThis.crypto.

@coderabbitai coderabbitai bot left a comment

Review continued from previous batch...

Comment on lines +18 to 28
RequestID string `ch:"request_id"`
Time int64 `ch:"time"`
WorkspaceID string `ch:"workspace_id"`
KeySpaceID string `ch:"key_space_id"`
KeyID string `ch:"key_id"`
Region string `ch:"region"`
Outcome string `ch:"outcome"`
IdentityID string `ch:"identity_id"`
SpentCredits int64 `ch:"spent_credits"`
Tags []string `ch:"tags"`
}

🧹 Nitpick (assertive)

🛠️ Refactor suggestion

Add upstream tag normalization before insertion

Adding Tags []string is good, but duplicates and order variance will fragment aggregated tables. Normalize before writing: unique + stable sort.

Example helper:

// CanonicalizeTags returns a sorted, de-duplicated copy.
func CanonicalizeTags(in []string) []string {
    if len(in) == 0 {
        return nil
    }
    seen := make(map[string]struct{}, len(in))
    out := make([]string, 0, len(in))
    for _, t := range in {
        if _, ok := seen[t]; ok {
            continue
        }
        seen[t] = struct{}{}
        out = append(out, t)
    }
    sort.Strings(out)
    return out
}

Ensure the writer populating KeyVerificationRequestV1 uses Tags: CanonicalizeTags(tags). This complements the DB-side MV normalization and keeps raw writes tidy.

Would you like me to locate the writer(s) and open a follow-up patch?

🤖 Prompt for AI Agents
In apps/agent/pkg/clickhouse/schema/requests.go around lines 18-28, the Tags
[]string field is stored raw which allows duplicates and order variance to
fragment aggregations; add an upstream canonicalization step that deduplicates
and stably sorts tag slices before insertion. Implement a helper (e.g.,
CanonicalizeTags) that returns nil for empty input, removes duplicates, and
returns a sorted slice, then ensure every writer that constructs
KeyVerificationRequestV1 (and any other structs written to ClickHouse with Tags)
calls that helper and assigns Tags: CanonicalizeTags(tags) before persisting.
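
If the same tags also flow through the TypeScript side (for instance when the dashboard builds filter parameters), the equivalent normalization is a one-liner. A hedged sketch, not code from this PR:

```typescript
// Sorted, de-duplicated copy of a tag list; returns [] for empty input.
// Mirrors the intent of the Go CanonicalizeTags helper above.
function canonicalizeTags(tags: string[]): string[] {
  return [...new Set(tags)].sort();
}
```

Applying the same canonical form on both the writer and the query side keeps tag-based aggregations from fragmenting on order or duplicates.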

Comment on lines +66 to +75
case "keyIds": {
if (typeof filter.value === "string" && filter.value.trim()) {
const keyIdOperator = operator === "is" || operator === "contains" ? operator : "is";

params.keyIds?.filters?.push({
operator: keyIdOperator,
value: filter.value,
});
}
break;

🧹 Nitpick (assertive)

Normalize keyIds by trimming whitespace

Prevents mismatched filters and duplicate cache keys from accidental spaces.

-            params.keyIds?.filters?.push({
+            params.keyIds?.filters?.push({
               operator: keyIdOperator,
-              value: filter.value,
+              value: filter.value.trim(),
             });
📝 Committable suggestion


Suggested change
        case "keyIds": {
          if (typeof filter.value === "string" && filter.value.trim()) {
            const keyIdOperator = operator === "is" || operator === "contains" ? operator : "is";

            params.keyIds?.filters?.push({
              operator: keyIdOperator,
              value: filter.value.trim(),
            });
          }
          break;
🤖 Prompt for AI Agents
In
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts
around lines 66 to 75, the code checks filter.value.trim() but still pushes the
original filter.value which can include leading/trailing spaces and cause
mismatched filters/duplicate cache keys; fix by computing a trimmed value (e.g.,
const trimmed = filter.value.trim()), verify it is non-empty, and push trimmed
as the value into params.keyIds.filters (use trimmed for operator/value),
replacing uses of the untrimmed filter.value.

Comment on lines +78 to +86
case "names":
case "identities": {
if (typeof filter.value === "string" && filter.value.trim()) {
params[filter.field]?.filters?.push({
operator,
value: filter.value,
});
}
break;

🧹 Nitpick (assertive)

Normalize names and identities by trimming

Same rationale as keyIds — normalize inputs early.

-            params[filter.field]?.filters?.push({
+            params[filter.field]?.filters?.push({
               operator,
-              value: filter.value,
+              value: filter.value.trim(),
             });
📝 Committable suggestion


Suggested change
        case "names":
        case "identities": {
          if (typeof filter.value === "string" && filter.value.trim()) {
            params[filter.field]?.filters?.push({
              operator,
              value: filter.value.trim(),
            });
          }
          break;
🤖 Prompt for AI Agents
In
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts
around lines 78-86, the case branches for "names" and "identities" use
filter.value directly and only check typeof and .trim() in the condition;
normalize inputs by trimming before pushing: compute a trimmedValue = (typeof
filter.value === "string") ? filter.value.trim() : filter.value, check
trimmedValue is non-empty for strings, and push the trimmedValue into
params[filter.field].filters (i.e., use trimmedValue in the value field) so
names and identities are normalized the same way keyIds are.

Comment on lines +89 to +105
case "outcomes": {
// For credit spend, we might want to include all outcomes to show credit consumption patterns
if (typeof filter.value === "string") {
params.outcomes?.filters?.push({
operator: "is",
value: filter.value as
| "VALID"
| "INSUFFICIENT_PERMISSIONS"
| "RATE_LIMITED"
| "FORBIDDEN"
| "DISABLED"
| "EXPIRED"
| "USAGE_EXCEEDED"
| "",
});
}
break;

💡 Verification agent

🧩 Analysis chain

Validate outcomes against the allowed set to avoid invalid server filters

Currently any string is pushed (including ""), which can create invalid or no‑op filters. Align with the allowed outcome set (same constant used elsewhere).

+import { KEY_VERIFICATION_OUTCOMES } from "@unkey/clickhouse/src/keys/keys";
...
-        case "outcomes": {
-          // For credit spend, we might want to include all outcomes to show credit consumption patterns
-          if (typeof filter.value === "string") {
-            params.outcomes?.filters?.push({
-              operator: "is",
-              value: filter.value as
-                | "VALID"
-                | "INSUFFICIENT_PERMISSIONS"
-                | "RATE_LIMITED"
-                | "FORBIDDEN"
-                | "DISABLED"
-                | "EXPIRED"
-                | "USAGE_EXCEEDED"
-                | "",
-            });
-          }
-          break;
-        }
+        case "outcomes": {
+          if (typeof filter.value === "string" && filter.value) {
+            type ValidOutcome = (typeof KEY_VERIFICATION_OUTCOMES)[number];
+            if (KEY_VERIFICATION_OUTCOMES.includes(filter.value as ValidOutcome)) {
+              params.outcomes?.filters?.push({
+                operator: "is",
+                value: filter.value as ValidOutcome,
+              });
+            }
+          }
+          break;
+        }

Please confirm that the import path for KEY_VERIFICATION_OUTCOMES is correct in this package. If not, point me to the shared constant for outcomes and I’ll update the patch.


🏁 Script executed:

#!/bin/bash
# Search for the declaration or usages of KEY_VERIFICATION_OUTCOMES in the repository
rg -n "KEY_VERIFICATION_OUTCOMES"

Length of output: 8391


🏁 Script executed:

#!/bin/bash
# Display the top lines of the target file to inspect existing imports
head -n 20 apps/dashboard/app/\(app\)/apis/\[apiId\]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts

Length of output: 1093


Validate outcomes against the canonical set

The import path for KEY_VERIFICATION_OUTCOMES is correct—other parts of this package (and related hooks) already import it from @unkey/clickhouse/src/keys/keys. Please apply the refactor below to guard against empty or invalid outcome values:

• Add the import at the top of
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts

import { KEY_VERIFICATION_OUTCOMES } from "@unkey/clickhouse/src/keys/keys";

• Replace the existing case "outcomes": block (lines 89–105) with:

      case "outcomes": {
-          // For credit spend, we might want to include all outcomes to show credit consumption patterns
-          if (typeof filter.value === "string") {
-            params.outcomes?.filters?.push({
-              operator: "is",
-              value: filter.value as
-                | "VALID"
-                | "INSUFFICIENT_PERMISSIONS"
-                | "RATE_LIMITED"
-                | "FORBIDDEN"
-                | "DISABLED"
-                | "EXPIRED"
-                | "USAGE_EXCEEDED"
-                | "",
-            });
-          }
+          if (typeof filter.value === "string" && filter.value) {
+            type ValidOutcome = (typeof KEY_VERIFICATION_OUTCOMES)[number];
+            if (KEY_VERIFICATION_OUTCOMES.includes(filter.value as ValidOutcome)) {
+              params.outcomes?.filters?.push({
+                operator: "is",
+                value: filter.value as ValidOutcome,
+              });
+            }
+          }
         break;
       }

This ensures only legitimate outcomes (and no empty strings) are sent to the server.

🤖 Prompt for AI Agents
In
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts
around lines 89–105, the current outcomes branch can push empty or invalid
outcome strings; import KEY_VERIFICATION_OUTCOMES from
"@unkey/clickhouse/src/keys/keys" at the top of the file and replace the case
"outcomes": block to validate filter.value: first ensure typeof filter.value ===
"string" and filter.value is non-empty, then check
KEY_VERIFICATION_OUTCOMES.includes(filter.value) before pushing to
params.outcomes.filters; only push when both checks pass so no empty or
non-canonical outcomes are sent to the server.
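
The membership check recommended above can be sketched independently of the shared constant. The outcome list here mirrors the literal union from the original code, not the imported `KEY_VERIFICATION_OUTCOMES` (which lives in `@unkey/clickhouse/src/keys/keys`):

```typescript
// Assumed to mirror KEY_VERIFICATION_OUTCOMES; the real constant is imported
// in the actual codebase rather than redeclared like this.
const OUTCOMES = [
  "VALID",
  "INSUFFICIENT_PERMISSIONS",
  "RATE_LIMITED",
  "FORBIDDEN",
  "DISABLED",
  "EXPIRED",
  "USAGE_EXCEEDED",
] as const;

type ValidOutcome = (typeof OUTCOMES)[number];

// Type guard: narrows an arbitrary string to the canonical outcome union,
// rejecting "" and any non-canonical value before it reaches the server.
function isValidOutcome(value: string): value is ValidOutcome {
  return (OUTCOMES as readonly string[]).includes(value);
}
```

Using a type guard keeps the `as` cast out of the filter-building code: after `isValidOutcome(filter.value)`, the value is already typed as `ValidOutcome`.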

Comment on lines +108 to +114
case "tags": {
if (typeof filter.value === "string" && filter.value.trim()) {
params.tags = {
operator,
value: filter.value,
};
}

🧹 Nitpick (assertive)

Trim tag values to avoid backend mismatches

Keep consistent with other filters; send trimmed values only.

-            params.tags = {
+            params.tags = {
               operator,
-              value: filter.value,
+              value: filter.value.trim(),
             };
📝 Committable suggestion


Suggested change
        case "tags": {
          if (typeof filter.value === "string" && filter.value.trim()) {
            params.tags = {
              operator,
              value: filter.value.trim(),
            };
          }
🤖 Prompt for AI Agents
In
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts
around lines 108 to 114, the "tags" branch currently forwards filter.value
untrimmed; change it to trim the string before sending to the backend and only
set params.tags when the trimmed value is non-empty (e.g. const val =
filter.value.trim(); if (val) params.tags = { operator, value: val }). Ensure
you keep the existing typeof string check and do not send whitespace-only
values.

"INSUFFICIENT_PERMISSIONS",
]),
identity_id: z.string().optional().default(""),
spent_credits: z.number().int().optional().default(0),

💡 Verification agent

🧩 Analysis chain

Enforce non-negative spent_credits at ingestion.

Unless refunds are modeled as negative credits, disallow negatives at the schema edge to prevent corrupt rows.

-      spent_credits: z.number().int().optional().default(0),
+      spent_credits: z.number().int().nonnegative().optional().default(0),

If negatives are a valid business case (e.g., adjustments/refunds), skip this and enforce validation downstream instead.


🏁 Script executed:

#!/bin/bash
set -e
# Print the first 200 lines of the ClickHouse verifications schema to review spent_credits context
sed -n '1,200p' internal/clickhouse/src/verifications.ts

Length of output: 5806


Enforce non-negative spent_credits at ingestion

Ensure that any negative spent_credits values are rejected as early as possible to prevent corrupt rows in verifications.raw_key_verifications_v1. If your domain permits negative adjustments/refunds, handle those explicitly downstream instead of allowing them here.

Locations to update:

  • internal/clickhouse/src/verifications.ts, in the insertVerification schema

Suggested change:

-      spent_credits: z.number().int().optional().default(0),
+      spent_credits: z.number().int().nonnegative().optional().default(0),
📝 Committable suggestion


Suggested change
      spent_credits: z.number().int().nonnegative().optional().default(0),
🤖 Prompt for AI Agents
In internal/clickhouse/src/verifications.ts around line 27, the
insertVerification schema currently allows negative spent_credits
(spent_credits: z.number().int().optional().default(0)); change the schema to
reject negatives by using z.number().int().min(0).optional().default(0) for
spent_credits so any negative values are validated out at ingestion; update any
related type imports or usages if necessary to reflect the tightened schema and
run tests to ensure validation errors are surfaced where insertVerification is
called.

Comment on lines +166 to 168
tags,
spent_credits
FROM verifications.raw_key_verifications_v1

🛠️ Refactor suggestion

⚠️ Potential issue

COALESCE spent_credits in SELECT to avoid Zod parse failures.

Older rows may have NULL in spent_credits. z.number() will fail on NULL; add COALESCE in the projection.

-          tags,
-          spent_credits
+          tags,
+          COALESCE(spent_credits, 0) AS spent_credits
🤖 Prompt for AI Agents
In internal/clickhouse/src/verifications.ts around lines 166 to 168, the SELECT
projects spent_credits which can be NULL for older rows and causes z.number() to
fail; modify the projection to wrap spent_credits with COALESCE (e.g.,
COALESCE(spent_credits, 0)) so the query always returns a numeric value and the
Zod parse succeeds.
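To see why the projection change matters on the client side, a small sketch (the types and names are illustrative): without COALESCE, the driver can hand back null for old rows, which a strict numeric check such as z.number() rejects; COALESCE(spent_credits, 0) moves the fallback into ClickHouse.

```typescript
// Illustrative row shape; older rows can carry NULL in spent_credits.
type RawRow = { tags: string[]; spent_credits: number | null };

const oldRow: RawRow = { tags: [], spent_credits: null };

// What z.number() effectively enforces: the value must already be numeric.
const isNumeric = (v: unknown): v is number => typeof v === "number";

// COALESCE(spent_credits, 0) in the SQL projection is the server-side
// equivalent of this client-side fallback:
const spentCredits = oldRow.spent_credits ?? 0;
```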

Comment on lines +614 to +635
// Schema for spent credits total query
const spentCreditsParams = z.object({
workspaceId: z.string(),
keyspaceId: z.string(),
keyId: z.string().optional(),
startTime: z.number().int(),
endTime: z.number().int(),
outcomes: z
.array(
z.object({
value: z.enum(KEY_VERIFICATION_OUTCOMES),
operator: z.literal("is"),
}),
)
.nullable(),
tags: z
.object({
operator: z.enum(["is", "contains", "startsWith", "endsWith"]),
value: z.string(),
})
.nullable(),
});
Contributor

🧹 Nitpick (assertive)

Unify tag filter shape with other endpoints (array vs single).

Here tags is a single object; elsewhere (keyDetailsLogs, timeseries) it’s an array. This inconsistency complicates clients and filter composition.

Consider aligning with the array-of-filters approach:

-const spentCreditsParams = z.object({
+const spentCreditsParams = z.object({
   workspaceId: z.string(),
   keyspaceId: z.string(),
   keyId: z.string().optional(),
   startTime: z.number().int(),
   endTime: z.number().int(),
   outcomes: z
     .array(
       z.object({
         value: z.enum(KEY_VERIFICATION_OUTCOMES),
         operator: z.literal("is"),
       }),
     )
     .nullable(),
-  tags: z
-    .object({
-      operator: z.enum(["is", "contains", "startsWith", "endsWith"]),
-      value: z.string(),
-    })
-    .nullable(),
+  tags: z
+    .array(
+      z.object({
+        operator: z.enum(["is", "contains", "startsWith", "endsWith"]),
+        value: z.string(),
+      }),
+    )
+    .nullable(),
 });

And update the predicate builder accordingly (see next comments).

🤖 Prompt for AI Agents
In internal/clickhouse/src/verifications.ts around lines 614 to 635, the
spentCreditsParams schema defines tags as a single object while other endpoints
expect an array of tag filters; change tags to be nullable array of the same
tag-filter object shape (operator: enum
["is","contains","startsWith","endsWith"], value: string) to match
keyDetailsLogs/timeseries schemas, and then update any predicate builder code
that consumes spentCreditsParams.tags to iterate the array, combine filters
appropriately, and preserve null handling so existing behavior remains when tags
is absent.
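A hypothetical predicate builder for the array shape might look like the following; the SQL fragments, parameter names, and helper name are illustrative, not the file's actual helpers.

```typescript
// Tag filter shape matching the array-based schema suggested above.
type TagFilter = {
  operator: "is" | "contains" | "startsWith" | "endsWith";
  value: string;
};

// Hypothetical helper: turns an array of tag filters into ClickHouse
// conditions, preserving the "no filter when null or empty" behavior.
function buildTagConditions(tags: TagFilter[] | null): string[] {
  if (!tags || tags.length === 0) {
    return [];
  }
  return tags.map((t, i) => {
    const param = `{tagValue${i}: String}`;
    switch (t.operator) {
      case "is":
        return `has(tags, ${param})`;
      case "contains":
        return `arrayExists(x -> position(x, ${param}) > 0, tags)`;
      case "startsWith":
        return `arrayExists(x -> startsWith(x, ${param}), tags)`;
      case "endsWith":
        return `arrayExists(x -> endsWith(x, ${param}), tags)`;
      default:
        throw new Error("unreachable");
    }
  });
}
```

Each filter gets its own bound parameter, so multiple tag filters compose with AND alongside the other conditions.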

Comment on lines +649 to +654
const conditions = [
"workspace_id = {workspaceId: String}",
"key_space_id = {keyspaceId: String}",
"time >= {startTime: Int64}",
"time <= {endTime: Int64}",
];
Contributor

💡 Verification agent

🧩 Analysis chain

Use consistent time types with raw table (UInt64) and BETWEEN.

Other queries on raw_key_verifications_v1 use UInt64 and BETWEEN. Mixing Int64 may cause implicit casts and hinder plan caching.

-      "time >= {startTime: Int64}",
-      "time <= {endTime: Int64}",
+      "time BETWEEN {startTime: UInt64} AND {endTime: UInt64}",

Please confirm the raw table time column is UInt64 epoch millis. If it’s DateTime64(3), then use fromUnixTimestamp64Milli(...) as in timeseries queries.


🏁 Script executed:

#!/bin/bash
# Search for references to the raw_key_verifications_v1 table (including DDL or column usage)
rg -n 'raw_key_verifications_v1' -C3

Length of output: 28071


🏁 Script executed:

#!/bin/bash
# Display the first 50 lines of the DDL to inspect the 'time' column type
sed -n '1,50p' internal/clickhouse/schema/003_create_verifications_raw_key_verifications_v1.sql

Length of output: 896


Align time parameter types and use BETWEEN for plan caching

Verified that verifications.raw_key_verifications_v1.time is defined as Int64 (epoch milliseconds) in the DDL (internal/clickhouse/schema/003_create_verifications_raw_key_verifications_v1.sql, lines 2–6) and not UInt64. To preserve type consistency and improve plan caching, combine the two Int64 comparisons into a single BETWEEN clause:

• File: internal/clickhouse/src/verifications.ts
Replace lines 649–654:

-      "time >= {startTime: Int64}",
-      "time <= {endTime: Int64}",
+      "time BETWEEN {startTime: Int64} AND {endTime: Int64}",

• Review any other queries (e.g. in src/keys/keys.ts) that currently use {startTime: UInt64} / {endTime: UInt64} and update them to Int64 to match the raw table’s column type.

🤖 Prompt for AI Agents
In internal/clickhouse/src/verifications.ts around lines 649–654, the time
comparisons use two separate Int64 comparisons and should be combined into a
single BETWEEN clause and the parameter types must be Int64 to match the DDL;
change the conditions array to use "time BETWEEN {startTime: Int64} AND
{endTime: Int64}" instead of separate >=/<= comparisons, and audit other query
files (e.g. src/keys/keys.ts) to change any {startTime: UInt64}/{endTime:
UInt64} usages to {startTime: Int64}/{endTime: Int64} so types align with the
raw table and improve plan caching.
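The resolved predicate can be sketched as follows; the `{name: Type}` placeholders follow the ClickHouse parameter syntax already used in this file, and the join is illustrative.

```typescript
// Combined predicate with a single BETWEEN and Int64 types matching the DDL.
const conditions = [
  "workspace_id = {workspaceId: String}",
  "key_space_id = {keyspaceId: String}",
  "time BETWEEN {startTime: Int64} AND {endTime: Int64}",
];

const whereClause = conditions.join(" AND ");
```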

"time <= {endTime: Int64}",
];

let paramSchemaExtension = {};
Contributor

🛠️ Refactor suggestion

⚠️ Potential issue

Type paramSchemaExtension to zod types to satisfy .extend() at compile-time.

Current {} may not satisfy ZodRawShape and can break type inference.

-    let paramSchemaExtension = {};
+    let paramSchemaExtension: Record<string, z.ZodType> = {};
🤖 Prompt for AI Agents
In internal/clickhouse/src/verifications.ts around line 656, the variable
paramSchemaExtension is declared as an untyped {} which can fail TypeScript/Zod
compile-time checks when used with .extend(); change its declaration to a Zod
type shape (e.g., import and use ZodRawShape or Record<string, ZodTypeAny>) and
initialize it with that type (for example: const paramSchemaExtension:
ZodRawShape = {}), so .extend() will have the correct compile-time type
information.
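A runnable sketch of why the typed accumulator matters, using a stand-in interface instead of the zod dependency; the real fix is the one-liner `let paramSchemaExtension: Record<string, z.ZodType> = {};` above.

```typescript
// Stand-in for z.ZodType so the sketch runs without zod; only the shape matters.
interface SchemaLike {
  parse(v: unknown): unknown;
}

const stringSchema: SchemaLike = {
  parse(v) {
    if (typeof v !== "string") {
      throw new Error("expected string");
    }
    return v;
  },
};

// Typed accumulator: every value is known to be a schema, so a consumer akin
// to zodObject.extend(paramSchemaExtension) can rely on that at compile time.
// An untyped {} would widen to the empty object type and break that guarantee.
const paramSchemaExtension: Record<string, SchemaLike> = {};
paramSchemaExtension.keyId = stringSchema;
```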

<DropdownMenuItem
key={type}
onClick={() => setMetricType(type)}
className={cn("cursor-pointer", metricType === type && "bg-gray-3")}
Collaborator

you really love pushing the limits of what JavaScript does with && operators, don't you?

});

return {
spentCredits: result.val?.[0]?.spent_credits ?? 0,
Collaborator

@chronark chronark Aug 25, 2025

doesn't this throw if result.val is an empty array?
Use result.val?.at(0)?.spent_credits ?? 0 instead.
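For what it's worth, both access patterns can be exercised quickly; under optional chaining an empty array yields undefined at index 0, so the `?? 0` fallback applies either way (the variable name below is illustrative):

```typescript
// Illustrative value standing in for result.val when the query returns no rows.
const val: Array<{ spent_credits: number }> | undefined = [];

// Neither form throws on an empty array; both short-circuit to undefined
// and then fall back to 0 via the nullish coalescing operator.
const viaIndex = val?.[0]?.spent_credits ?? 0;
const viaAt = val?.at(0)?.spent_credits ?? 0;
```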

Collaborator

why do you delete all of our schemas?

Collaborator

This and, I suspect, the following ones should be done differently.
Put these where you deleted the other ones (pkg/clickhouse/schema)

the tldr is:
internal/clickhouse is the old system that fell flat due to the mess of migrations.
pkg/clickhouse/schema is where all our new tables should go.

If you want I can revert this and clean up

Collaborator

is.. is that... a test?

Collaborator

@chronark chronark left a comment

The clickhouse schemas are not where I want them, but we can fix them

@ogzhanolguncu can you take a look at the frontend code as well

@chronark chronark self-assigned this Aug 25, 2025
@perkinsjr
Member Author

@ogzhanolguncu / @chronark

FYI none of this needs reviewing. It's in draft for a reason. The frontend is nowhere near ready for review. I have to go through it: I "made it work"; now I'll "make it ready", and then it's reviewable.

Outside of the schemas, of course.
