Closed
18 changes: 10 additions & 8 deletions apps/agent/pkg/clickhouse/schema/requests.go
@@ -15,12 +15,14 @@ type ApiRequestV1 struct {
}

type KeyVerificationRequestV1 struct {
-	RequestID string `ch:"request_id"`
-	Time int64 `ch:"time"`
-	WorkspaceID string `ch:"workspace_id"`
-	KeySpaceID string `ch:"key_space_id"`
-	KeyID string `ch:"key_id"`
-	Region string `ch:"region"`
-	Outcome string `ch:"outcome"`
-	IdentityID string `ch:"identity_id"`
+	RequestID string `ch:"request_id"`
+	Time int64 `ch:"time"`
+	WorkspaceID string `ch:"workspace_id"`
+	KeySpaceID string `ch:"key_space_id"`
+	KeyID string `ch:"key_id"`
+	Region string `ch:"region"`
+	Outcome string `ch:"outcome"`
+	IdentityID string `ch:"identity_id"`
+	SpentCredits int64 `ch:"spent_credits"`
+	Tags []string `ch:"tags"`
}
Comment on lines +18 to 28
Contributor

🧹 Nitpick (assertive)

🛠️ Refactor suggestion

Add upstream tag normalization before insertion

Adding Tags []string is good, but duplicates and order variance will fragment aggregated tables. Normalize before writing: unique + stable sort.

Example helper:

// CanonicalizeTags returns a sorted, de-duplicated copy of in.
// Assumes "sort" is present in the file's imports.
func CanonicalizeTags(in []string) []string {
    if len(in) == 0 {
        return nil
    }
    seen := make(map[string]struct{}, len(in))
    out := make([]string, 0, len(in))
    for _, t := range in {
        if _, ok := seen[t]; ok {
            continue
        }
        seen[t] = struct{}{}
        out = append(out, t)
    }
    sort.Strings(out)
    return out
}

Ensure the writer populating KeyVerificationRequestV1 uses Tags: CanonicalizeTags(tags). This complements the DB-side MV normalization and keeps raw writes tidy.

Would you like me to locate the writer(s) and open a follow-up patch?

🤖 Prompt for AI Agents
In apps/agent/pkg/clickhouse/schema/requests.go around lines 18-28, the Tags
[]string field is stored raw which allows duplicates and order variance to
fragment aggregations; add an upstream canonicalization step that deduplicates
and stably sorts tag slices before insertion. Implement a helper (e.g.,
CanonicalizeTags) that returns nil for empty input, removes duplicates, and
returns a sorted slice, then ensure every writer that constructs
KeyVerificationRequestV1 (and any other structs written to ClickHouse with Tags)
calls that helper and assigns Tags: CanonicalizeTags(tags) before persisting.
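
The Go helper targets the agent, but the API-side writers that send tags (e.g. analytics.ts) could apply the same normalization. A minimal TypeScript sketch; the function name is an assumption, not an existing export:

```typescript
// Hypothetical TypeScript counterpart of the reviewer's Go helper.
function canonicalizeTags(input: string[]): string[] {
  if (input.length === 0) {
    return [];
  }
  // Set drops duplicates while preserving first-seen order...
  const unique = Array.from(new Set(input));
  // ...and a lexicographic sort makes the result order-independent.
  unique.sort();
  return unique;
}
```

A writer would then pass `tags: canonicalizeTags(tags)` before persisting, mirroring the Go side.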

1 change: 1 addition & 0 deletions apps/api/src/pkg/analytics.ts
@@ -72,6 +72,7 @@ export class Analytics {
outcome: string;
identity_id?: string;
tags?: string[];
spent_credits?: number;
}) => {
return await wrap(
// biome-ignore lint/style/noNonNullAssertion: proxyClient existence verified above
1 change: 1 addition & 0 deletions apps/api/src/pkg/auth/root_key.ts
@@ -63,6 +63,7 @@ export async function rootKeyAuth(c: Context<HonoEnv>, permissionQuery?: Permiss
region: c.req.cf?.region,
request_id: c.get("requestId"),
tags: [],
spent_credits: 0, // Root key verifications don't consume credits
}),
);

1 change: 1 addition & 0 deletions apps/api/src/pkg/clickhouse-proxy.ts
@@ -25,6 +25,7 @@ export class ClickHouseProxyClient {
outcome: string;
identity_id?: string;
tags?: string[];
spent_credits?: number;
}>,
): Promise<void> {
await this.sendEvents("/_internal/chproxy/verifications", events);
21 changes: 19 additions & 2 deletions apps/api/src/pkg/keys/service.ts
@@ -56,6 +56,7 @@ type NotFoundResponse = {
api?: never;
ratelimit?: never;
remaining?: never;
spentCredits?: number;
};
Comment on lines +59 to 60
Contributor

🧹 Nitpick (assertive)

Consider making spentCredits required in all API responses

You already populate spentCredits on every code path (0 or cost). Making it non-optional in the types tightens the contract and simplifies consumers.

Apply this diff:

-  spentCredits?: number;
+  spentCredits: number;

Repeat for NotFoundResponse, InvalidResponse, and ValidResponse.

Also applies to: 88-89, 114-115

🤖 Prompt for AI Agents
In apps/api/src/pkg/keys/service.ts around lines 59-60 (and similarly at 88-89
and 114-115), the response types declare spentCredits as optional; since the
code always sets spentCredits (0 or cost) make spentCredits required in all
response types (replace spentCredits?: number with spentCredits: number) for the
main response and for NotFoundResponse, InvalidResponse, and ValidResponse so
the type contract matches runtime behavior; update any related type
imports/usages if needed to reflect the non-optional field.
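
The effect of the tightened contract can be sketched with a toy discriminated union (types and names here are illustrative, not the service's actual definitions):

```typescript
// Illustrative only: a reduced VerifyKeyResult-style union where
// spentCredits is required on every branch.
type SketchResult =
  | { valid: true; spentCredits: number }
  | { valid: false; code: "NOT_FOUND" | "DISABLED"; spentCredits: number };

// Consumers can read the field directly, with no `?? 0` fallback,
// because the type guarantees it exists on both branches.
function billedCredits(result: SketchResult): number {
  return result.spentCredits;
}
```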


type InvalidResponse = {
@@ -84,6 +85,7 @@ type InvalidResponse = {
permissions: string[];
roles: string[];
message?: string;
spentCredits?: number;
};

type ValidResponse = {
@@ -109,6 +111,7 @@ type ValidResponse = {
authorizedWorkspaceId: string;
permissions: string[];
roles: string[];
spentCredits?: number;
};
type VerifyKeyResult = NotFoundResponse | InvalidResponse | ValidResponse;

@@ -396,7 +399,7 @@ export class KeyService {
}

if (!data) {
-      return Ok({ valid: false, code: "NOT_FOUND" });
+      return Ok({ valid: false, code: "NOT_FOUND", spentCredits: 0 });
}

// Quick fix
@@ -436,6 +439,7 @@
permissions: data.permissions,
roles: data.roles,
message: "the key is disabled",
spentCredits: 0,
});
}

@@ -449,6 +453,7 @@
permissions: data.permissions,
roles: data.roles,
message: `the key does not belong to ${req.apiId}`,
spentCredits: 0,
});
}

@@ -469,6 +474,7 @@
permissions: data.permissions,
roles: data.roles,
message: `the key has expired on ${new Date(expires).toISOString()}`,
spentCredits: 0,
});
}
}
@@ -485,6 +491,7 @@
code: "FORBIDDEN",
permissions: data.permissions,
roles: data.roles,
spentCredits: 0,
});
}

@@ -498,6 +505,7 @@
code: "FORBIDDEN",
permissions: data.permissions,
roles: data.roles,
spentCredits: 0,
});
}
}
@@ -542,6 +550,7 @@
permissions: data.permissions,
roles: data.roles,
message: rbacResp.val.message,
spentCredits: 0,
});
}
}
@@ -612,10 +621,13 @@
ratelimit,
permissions: data.permissions,
roles: data.roles,
spentCredits: 0,
});
}

let remaining: number | undefined = undefined;
let spentCredits = 0;

if (data.key.remaining !== null) {
const t0 = performance.now();
const cost = req.remaining?.cost ?? DEFAULT_REMAINING_COST;
@@ -636,7 +648,10 @@
});

remaining = limited.remaining;
-      if (!limited.valid) {
+      if (limited.valid) {
+        // Credits were successfully spent
+        spentCredits = cost;
+      } else {
return Ok({
key: data.key,
api: data.api,
@@ -653,6 +668,7 @@
authorizedWorkspaceId: data.key.forWorkspaceId ?? data.key.workspaceId,
permissions: data.permissions,
roles: data.roles,
spentCredits: 0, // No credits spent if usage exceeded
});
}
}
@@ -672,6 +688,7 @@
authorizedWorkspaceId: data.key.forWorkspaceId ?? data.key.workspaceId,
permissions: data.permissions,
roles: data.roles,
spentCredits,
});
Comment on lines +691 to 692
Contributor

🧹 Nitpick (assertive)

VALID response returns spentCredits — consider also emitting it in metrics

You already emit metric.credits.spent with cost/deducted earlier. Adding spentCredits to metric.key.verification (or another dedicated metric) could simplify downstream observability.

If you decide to do it, append spentCredits to the metrics.emit call near Lines 188-195.

🤖 Prompt for AI Agents
In apps/api/src/pkg/keys/service.ts around lines 691-692 and add to the
metrics.emit call near lines 188-195: append the spentCredits value to the
emitted metric payload (either add spentCredits to the existing
metric.key.verification payload or emit a dedicated metric that includes
spentCredits). Ensure you reference the same spentCredits variable in scope, add
the spentCredits field to the object passed to metrics.emit, and update any
metric payload type/interface if required so the new property is allowed.

}

5 changes: 2 additions & 3 deletions apps/api/src/routes/v1_keys_verifyKey.ts
@@ -2,6 +2,7 @@ import { UnkeyApiError, openApiErrorResponses } from "@/pkg/errors";
import type { App } from "@/pkg/hono/app";
import { DisabledWorkspaceError, MissingRatelimitError } from "@/pkg/keys/service";
import { createRoute, z } from "@hono/zod-openapi";

import { SchemaError } from "@unkey/error";
import { permissionQuerySchema } from "@unkey/rbac";

@@ -331,7 +332,6 @@ export const registerV1KeysVerifyKey = (app: App) =>
app.openapi(route, async (c) => {
const req = c.req.valid("json");
const { keyService, analytics, logger } = c.get("services");

const { val, err } = await keyService.verifyKey(c, {
key: req.key,
apiId: req.apiId,
@@ -340,7 +340,6 @@
ratelimits: req.ratelimits,
remaining: req.remaining,
});

if (err) {
switch (true) {
case err instanceof SchemaError || err instanceof MissingRatelimitError:
@@ -409,6 +408,7 @@
outcome: val.code,
identity_id: val.identity?.id,
tags: req.tags ?? [],
spent_credits: val.spentCredits ?? 0,
})
.then(({ err }) => {
if (!err) {
Expand All @@ -419,6 +419,5 @@ export const registerV1KeysVerifyKey = (app: App) =>
});
}),
);

return c.json(responseBody);
});
@@ -159,6 +159,7 @@ export const useFetchVerificationTimeseries = (apiId: string | null) => {
return {
...result,
...outcomeFields,
spent_credits: ts.y.spent_credits ?? 0,
};
});
}, [data]);
@@ -0,0 +1,149 @@
import { formatTimestampForChart } from "@/components/logs/chart/utils/format-timestamp";
import { HISTORICAL_DATA_WINDOW } from "@/components/logs/constants";
import { trpc } from "@/lib/trpc/client";
import { useQueryTime } from "@/providers/query-time-provider";
import { useMemo } from "react";
import { keysOverviewFilterFieldConfig } from "../../../../filters.schema";
import { useFilters } from "../../../../hooks/use-filters";
import type { KeysOverviewQueryTimeseriesPayload } from "../../bar-chart/query-timeseries.schema";

export const useFetchCreditSpendTimeseries = (apiId: string | null) => {
const { filters } = useFilters();
const { queryTime: timestamp } = useQueryTime();

const queryParams = useMemo(() => {
const params: KeysOverviewQueryTimeseriesPayload = {
startTime: timestamp - HISTORICAL_DATA_WINDOW,
endTime: timestamp,
keyIds: { filters: [] },
outcomes: { filters: [] },
names: { filters: [] },
identities: { filters: [] },
tags: null,
apiId: apiId ?? "",
since: "",
};

if (!apiId) {
return params;
}

filters.forEach((filter) => {
if (!(filter.field in keysOverviewFilterFieldConfig)) {
return;
}

const fieldConfig = keysOverviewFilterFieldConfig[filter.field];
const validOperators = fieldConfig.operators;

const operator = validOperators.includes(filter.operator)
? filter.operator
: validOperators[0];

switch (filter.field) {
case "startTime":
case "endTime": {
const numValue =
typeof filter.value === "number"
? filter.value
: typeof filter.value === "string"
? Number(filter.value)
: Number.NaN;

if (!Number.isNaN(numValue)) {
params[filter.field] = numValue;
}
break;
}

case "since": {
if (typeof filter.value === "string") {
params.since = filter.value;
}
break;
}

case "keyIds": {
if (typeof filter.value === "string" && filter.value.trim()) {
const keyIdOperator = operator === "is" || operator === "contains" ? operator : "is";

params.keyIds?.filters?.push({
operator: keyIdOperator,
value: filter.value,
});
}
break;
Comment on lines +66 to +75
Contributor

🧹 Nitpick (assertive)

Normalize keyIds by trimming whitespace

Prevents mismatched filters and duplicate cache keys from accidental spaces.

-            params.keyIds?.filters?.push({
+            params.keyIds?.filters?.push({
               operator: keyIdOperator,
-              value: filter.value,
+              value: filter.value.trim(),
             });
🤖 Prompt for AI Agents
In
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts
around lines 66 to 75, the code checks filter.value.trim() but still pushes the
original filter.value which can include leading/trailing spaces and cause
mismatched filters/duplicate cache keys; fix by computing a trimmed value (e.g.,
const trimmed = filter.value.trim()), verify it is non-empty, and push trimmed
as the value into params.keyIds.filters (use trimmed for operator/value),
replacing uses of the untrimmed filter.value.

}

case "names":
case "identities": {
if (typeof filter.value === "string" && filter.value.trim()) {
params[filter.field]?.filters?.push({
operator,
value: filter.value,
});
}
break;
Comment on lines +78 to +86
Contributor

🧹 Nitpick (assertive)

Normalize names and identities by trimming

Same rationale as keyIds — normalize inputs early.

-            params[filter.field]?.filters?.push({
+            params[filter.field]?.filters?.push({
               operator,
-              value: filter.value,
+              value: filter.value.trim(),
             });
🤖 Prompt for AI Agents
In
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts
around lines 78-86, the case branches for "names" and "identities" use
filter.value directly and only check typeof and .trim() in the condition;
normalize inputs by trimming before pushing: compute a trimmedValue = (typeof
filter.value === "string") ? filter.value.trim() : filter.value, check
trimmedValue is non-empty for strings, and push the trimmedValue into
params[filter.field].filters (i.e., use trimmedValue in the value field) so
names and identities are normalized the same way keyIds are.

}

case "outcomes": {
// For credit spend, we might want to include all outcomes to show credit consumption patterns
if (typeof filter.value === "string") {
params.outcomes?.filters?.push({
operator: "is",
value: filter.value as
| "VALID"
| "INSUFFICIENT_PERMISSIONS"
| "RATE_LIMITED"
| "FORBIDDEN"
| "DISABLED"
| "EXPIRED"
| "USAGE_EXCEEDED"
| "",
});
}
break;
Comment on lines +89 to +105
Contributor

💡 Verification agent

🧩 Analysis chain

Validate outcomes against the allowed set to avoid invalid server filters

Currently any string is pushed (including ""), which can create invalid or no‑op filters. Align with the allowed outcome set (same constant used elsewhere).

+import { KEY_VERIFICATION_OUTCOMES } from "@unkey/clickhouse/src/keys/keys";
...
-        case "outcomes": {
-          // For credit spend, we might want to include all outcomes to show credit consumption patterns
-          if (typeof filter.value === "string") {
-            params.outcomes?.filters?.push({
-              operator: "is",
-              value: filter.value as
-                | "VALID"
-                | "INSUFFICIENT_PERMISSIONS"
-                | "RATE_LIMITED"
-                | "FORBIDDEN"
-                | "DISABLED"
-                | "EXPIRED"
-                | "USAGE_EXCEEDED"
-                | "",
-            });
-          }
-          break;
-        }
+        case "outcomes": {
+          if (typeof filter.value === "string" && filter.value) {
+            type ValidOutcome = (typeof KEY_VERIFICATION_OUTCOMES)[number];
+            if (KEY_VERIFICATION_OUTCOMES.includes(filter.value as ValidOutcome)) {
+              params.outcomes?.filters?.push({
+                operator: "is",
+                value: filter.value as ValidOutcome,
+              });
+            }
+          }
+          break;
+        }

Please confirm that the import path for KEY_VERIFICATION_OUTCOMES is correct in this package. If not, point me to the shared constant for outcomes and I’ll update the patch.


🏁 Script executed:

#!/bin/bash
# Search for the declaration or usages of KEY_VERIFICATION_OUTCOMES in the repository
rg -n "KEY_VERIFICATION_OUTCOMES"

Length of output: 8391


🏁 Script executed:

#!/bin/bash
# Display the top lines of the target file to inspect existing imports
head -n 20 apps/dashboard/app/\(app\)/apis/\[apiId\]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts

Length of output: 1093


Validate outcomes against the canonical set

The import path for KEY_VERIFICATION_OUTCOMES is correct—other parts of this package (and related hooks) already import it from @unkey/clickhouse/src/keys/keys. Please apply the refactor below to guard against empty or invalid outcome values:

• Add the import at the top of
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts

import { KEY_VERIFICATION_OUTCOMES } from "@unkey/clickhouse/src/keys/keys";

• Replace the existing case "outcomes": block (lines 89–105) with:

      case "outcomes": {
-          // For credit spend, we might want to include all outcomes to show credit consumption patterns
-          if (typeof filter.value === "string") {
-            params.outcomes?.filters?.push({
-              operator: "is",
-              value: filter.value as
-                | "VALID"
-                | "INSUFFICIENT_PERMISSIONS"
-                | "RATE_LIMITED"
-                | "FORBIDDEN"
-                | "DISABLED"
-                | "EXPIRED"
-                | "USAGE_EXCEEDED"
-                | "",
-            });
-          }
+          if (typeof filter.value === "string" && filter.value) {
+            type ValidOutcome = (typeof KEY_VERIFICATION_OUTCOMES)[number];
+            if (KEY_VERIFICATION_OUTCOMES.includes(filter.value as ValidOutcome)) {
+              params.outcomes?.filters?.push({
+                operator: "is",
+                value: filter.value as ValidOutcome,
+              });
+            }
+          }
         break;
       }

This ensures only legitimate outcomes (and no empty strings) are sent to the server.

🤖 Prompt for AI Agents
In
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts
around lines 89–105, the current outcomes branch can push empty or invalid
outcome strings; import KEY_VERIFICATION_OUTCOMES from
"@unkey/clickhouse/src/keys/keys" at the top of the file and replace the case
"outcomes": block to validate filter.value: first ensure typeof filter.value ===
"string" and filter.value is non-empty, then check
KEY_VERIFICATION_OUTCOMES.includes(filter.value) before pushing to
params.outcomes.filters; only push when both checks pass so no empty or
non-canonical outcomes are sent to the server.
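
A minimal sketch of the proposed guard; the outcome values are the ones listed in this file's case block, and the local tuple stands in for the shared KEY_VERIFICATION_OUTCOMES export:

```typescript
// Stand-in for the shared constant; the real values live in
// @unkey/clickhouse/src/keys/keys.
const OUTCOMES = [
  "VALID",
  "INSUFFICIENT_PERMISSIONS",
  "RATE_LIMITED",
  "FORBIDDEN",
  "DISABLED",
  "EXPIRED",
  "USAGE_EXCEEDED",
] as const;

type ValidOutcome = (typeof OUTCOMES)[number];

// Type guard: rejects "" and anything outside the canonical set,
// and narrows the argument to ValidOutcome for the caller.
function isValidOutcome(value: string): value is ValidOutcome {
  return (OUTCOMES as readonly string[]).includes(value);
}
```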

}

case "tags": {
if (typeof filter.value === "string" && filter.value.trim()) {
params.tags = {
operator,
value: filter.value,
};
}
Comment on lines +108 to +114
Contributor

🧹 Nitpick (assertive)

Trim tag values to avoid backend mismatches

Keep consistent with other filters; send trimmed values only.

-            params.tags = {
+            params.tags = {
               operator,
-              value: filter.value,
+              value: filter.value.trim(),
             };
🤖 Prompt for AI Agents
In
apps/dashboard/app/(app)/apis/[apiId]/_overview/components/charts/credit-spend-chart/hooks/use-fetch-timeseries.ts
around lines 108 to 114, the "tags" branch currently forwards filter.value
untrimmed; change it to trim the string before sending to the backend and only
set params.tags when the trimmed value is non-empty (e.g. const val =
filter.value.trim(); if (val) params.tags = { operator, value: val }). Ensure
you keep the existing typeof string check and do not send whitespace-only
values.
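
The same trim-then-check pattern recurs across the keyIds, names, identities, and tags branches; a small helper (name hypothetical) could centralize it:

```typescript
// Returns the trimmed string, or null for non-strings and
// whitespace-only input, so callers can skip empty filters.
function normalizedFilterValue(value: unknown): string | null {
  if (typeof value !== "string") {
    return null;
  }
  const trimmed = value.trim();
  return trimmed.length > 0 ? trimmed : null;
}
```

Each case could then do `const v = normalizedFilterValue(filter.value); if (v) { ... }` instead of repeating the typeof and trim checks inline.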

break;
}
}
});

return params;
}, [filters, timestamp, apiId]);

const { data, isLoading, isError } = trpc.api.keys.timeseries.useQuery(queryParams, {
refetchInterval: queryParams.endTime === timestamp ? 10_000 : false,
enabled: Boolean(apiId),
});

const timeseries = useMemo(() => {
if (!data?.timeseries) {
return [];
}

return data.timeseries.map((ts) => {
return {
displayX: formatTimestampForChart(ts.x, data.granularity),
originalTimestamp: ts.x,
spent_credits: ts.y.spent_credits ?? 0,
total: ts.y.spent_credits ?? 0,
};
});
}, [data]);

return {
timeseries: timeseries || [],
isLoading,
isError,
granularity: data?.granularity,
};
};