Thank you for following the naming conventions for pull request titles! 🙏
Actionable comments posted: 0
🧹 Nitpick comments (3)
apps/dashboard/lib/trpc/routers/api/keys/query-active-keys-timeseries/index.ts (2)
29-35: Surface more detailed error messages.
Throwing an internal server error when `clickhouseResult` is falsy or contains `err` is correct from a flow standpoint. However, more context (like the actual error message from `clickhouseResult.err`) might help troubleshooting.

```diff
- message: "Something went wrong when fetching data from ClickHouse.",
+ message: clickhouseResult?.err?.message || "Something went wrong when fetching data from ClickHouse.",
```
50-60: Query keys based on aggregated key IDs.
The `queryApiKeys` call effectively filters keys for the names/identities logic. This is a good example of referencing multiple filters in a single query. Watch out for performance if `allKeyIds` grows large. Consider chunking the queries or introducing a maximum limit if `allKeyIds` can become extremely large.

internal/clickhouse/src/verifications.ts (1)
403-443: Implement batch-based timeseries fetching.
Your `batchVerificationTimeseries` function is a solid approach for handling large `keyIds` arrays. Using `Promise.allSettled` ensures partial success even if some batches fail. Consider logging or handling partial failures more explicitly rather than returning an empty result for failing batches.

```diff
+ // Example simpler logging approach: console.error(`Batch ${batchIndex} query failed:`, error);
```
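For illustration, the batching-plus-partial-failure pattern described here can be sketched in a self-contained way (names such as `queryBatch` and `batchedTimeseries` are hypothetical stand-ins, not the repo's actual functions):

```typescript
// Hypothetical sketch of batch-based fetching with partial-failure tolerance.
type Point = { x: number; y: number };

// Split a large keyId list into fixed-size batches.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Stand-in for one ClickHouse batch query.
async function queryBatch(keyIds: string[]): Promise<Point[]> {
  if (keyIds.includes("bad")) {
    throw new Error("batch failed");
  }
  return keyIds.map((_, i) => ({ x: i, y: 1 }));
}

// Run all batches concurrently; a failing batch logs and yields [] instead
// of failing the whole operation (mirrors the Promise.allSettled approach).
async function batchedTimeseries(keyIds: string[], batchSize = 2): Promise<Point[]> {
  const settled = await Promise.allSettled(
    chunk(keyIds, batchSize).map((batch, i) =>
      queryBatch(batch).catch((error) => {
        console.error(`Batch ${i} query failed:`, error);
        return [] as Point[];
      }),
    ),
  );
  return settled.flatMap((r) => (r.status === "fulfilled" ? r.value : []));
}
```

Because each batch catches its own error, a call with one poisoned batch still returns the points from the healthy batches.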
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- apps/dashboard/lib/trpc/routers/api/keys/api-query.ts (1 hunks)
- apps/dashboard/lib/trpc/routers/api/keys/query-active-keys-timeseries/index.ts (1 hunks)
- apps/dashboard/lib/trpc/routers/api/keys/query-overview-logs/index.ts (2 hunks)
- apps/dashboard/lib/trpc/routers/api/keys/query-overview-timeseries/index.ts (1 hunks)
- internal/clickhouse/src/keys/active_keys.ts (2 hunks)
- internal/clickhouse/src/verifications.ts (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (18)
- GitHub Check: Test Agent Local / test_agent_local
- GitHub Check: Test Go API Local / Test
- GitHub Check: Test Packages / Test ./packages/api
- GitHub Check: Test Packages / Test ./internal/resend
- GitHub Check: Test Packages / Test ./packages/rbac
- GitHub Check: Test Packages / Test ./packages/hono
- GitHub Check: Test Packages / Test ./packages/nextjs
- GitHub Check: Test Packages / Test ./internal/keys
- GitHub Check: Test Packages / Test ./internal/clickhouse
- GitHub Check: Test Packages / Test ./internal/hash
- GitHub Check: Test Packages / Test ./packages/cache
- GitHub Check: Test Packages / Test ./internal/id
- GitHub Check: Test Packages / Test ./internal/billing
- GitHub Check: Test API / API Test Local
- GitHub Check: Build / Build
- GitHub Check: Test Packages / Test ./internal/encryption
- GitHub Check: autofix
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (22)
apps/dashboard/lib/trpc/routers/api/keys/query-overview-timeseries/index.ts (1)
30-30: Simplified error handling approach
The error handling logic for `result.err` has been removed, returning the result directly. This simplification is part of a broader refactoring and aligns with the fix for the keyId overflow issue, assuming that error handling is now managed elsewhere in the call chain.

apps/dashboard/lib/trpc/routers/api/keys/api-query.ts (1)
330-352: Well-implemented API retrieval function
The new `getApi` function efficiently fetches API data with minimal columns, properly handles database errors, and validates that the API exists and has key authentication enabled. This is a good abstraction that centralizes API retrieval logic.

internal/clickhouse/src/keys/active_keys.ts (2)
50-50: Improved schema with key_ids tracking
Adding the optional `key_ids` array to the schema is appropriate for tracking which specific keys were active in a time period, which supports the fix for the keyId overflow issue.
147-148: Enhanced query to track distinct key IDs
The SQL query now collects distinct key IDs with `groupArray(DISTINCT key_id)`, which will help with debugging and tracking key activity. This properly supports the schema change above.

apps/dashboard/lib/trpc/routers/api/keys/query-overview-logs/index.ts (4)
33-40: Improved API validation with getApi
Using the new `getApi` function for API validation is more efficient than the previous approach. The clear error message when an API is not found or doesn't have key authentication is helpful for debugging.
50-54: Better separation of filtering responsibilities
The code now only includes keyIds in ClickHouse queries when explicitly provided, while nullifying names and identities filters to handle them in the database layer. This separation of concerns is appropriate and may help address the keyId overflow issue.
72-94: Improved key filtering and mapping logic
The enhanced filtering ensures that logs are only included for keys that exist in both ClickHouse and the database, and have passed all filters. This approach is more robust and can prevent issues with non-existent or invalid keys.
99-99: Fixed pagination accuracy
The `hasMore` calculation now correctly considers the length of the filtered `keysOverviewLogs` array rather than just the raw logs from ClickHouse. This ensures pagination works properly even when some logs are filtered out.

apps/dashboard/lib/trpc/routers/api/keys/query-active-keys-timeseries/index.ts (12)
5-5: Import usage looks good.
This import statement concisely brings in both `getApi` and `queryApiKeys`, which appear to be used appropriately throughout the file.
11-18: Validate the existence of the API and keyAuth before proceeding.
Throwing a `TRPCError` when the `api` or `api.keyAuth?.id` is not found is a good practice for preventing further processing of invalid data. Just ensure other parts of the code gracefully handle this error.
20-20: Destructure transform inputs.
The extraction of `transformedInputs` and `granularity` with `transformVerificationFilters(input)` is a clean approach, keeping the transformation logic separate from business logic.
22-22: Ensure valid granularity keys.
When using `clickhouse.verifications.activeKeysTimeseries[granularity]`, confirm that `granularity` always maps to a valid property to avoid runtime errors. Could you run an additional check that ensures `granularity` is one of the accepted values in `activeKeysTimeseries`?
26-26: Check for consistent handling of empty arrays.
Currently, if `input.keyIds` is falsy, `keyIds` is set to `null`. Verify whether `clickhouse.verifications.activeKeysTimeseries[...]` expects an array instead of `null`. If it can handle null safely, this is fine; otherwise, consider defaulting to an empty array.
36-42: Return null timeseries on empty result set.
This approach of returning `null` under `timeseries` when no data is available is straightforward. Ensure that downstream consumers properly handle `timeseries: null`.
44-49: Accumulate key IDs into a set.
Gathering key IDs from all data points is a neat way to reduce potential duplicates. The usage of `(point.key_ids ?? [])` safeguards against null/undefined conditions. Nicely done.
61-62: Set creation for quick membership tests.
Transforming the resulting keys to a `Set` for membership checks is efficient and clean.
63-72: Derive filtered timeseries with minimal overhead.
Mapping over `timeseriesWithKeys` and filtering with `filteredKeyIdSet` is a straightforward approach. Just double-check that the `point.x` alignment does not require further merging logic if data points share an identical timestamp.
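As a toy, self-contained version of that map-and-filter step (the point shape and the contents of `filteredKeyIdSet` are assumptions for illustration, not the repo's actual data):

```typescript
// Illustrative only: filter timeseries points down to keys that survived
// the database-side filter, recomputing y from the surviving keys.
type TsPoint = { x: number; y: number; key_ids?: string[] };

const timeseriesWithKeys: TsPoint[] = [
  { x: 1, y: 3, key_ids: ["k1", "k2"] },
  { x: 2, y: 1, key_ids: ["k3"] },
];

// Pretend only k1 and k2 passed the names/identities filter.
const filteredKeyIdSet = new Set(["k1", "k2"]);

const filteredTimeseries = timeseriesWithKeys
  .map((point) => {
    const surviving = (point.key_ids ?? []).filter((id) => filteredKeyIdSet.has(id));
    return { x: point.x, y: surviving.length };
  })
  .filter((point) => point.y > 0);
```

Note that this assumes each `point.x` is unique; as the comment above says, duplicate timestamps would need an extra merge step.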
74-74: Concise return for filtered timeseries.
Returning `{ timeseries: filteredTimeseries, granularity }` is consistent with the earlier structure.
77-80: Map over raw timeseries results.
Preserving `point.x` and `point.y` for the `timeseriesData` object matches the approach used elsewhere in the code. The minimal transformation ensures good performance.
82-82: Finalize timeseries response.
Consistently returning `{ timeseries, granularity }` keeps the API response format uniform.

internal/clickhouse/src/verifications.ts (2)
445-486: Accumulate results across batches.
Merging points in `mergeVerificationTimeseriesResults` is well-structured. Summing numeric fields ensures correctness of final aggregated time buckets. Watch out for potential integer overflow if these sums can become very large.
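A minimal sketch of the merge-by-timestamp idea (the field names under `y` are assumptions for illustration, not the exact schema):

```typescript
// Merge timeseries batches by timestamp, summing counts per bucket.
type SeriesPoint = { x: number; y: { total?: number; valid?: number } };

function mergeTimeseries(batches: SeriesPoint[][]): SeriesPoint[] {
  const merged = new Map<number, SeriesPoint>();
  for (const batch of batches) {
    for (const point of batch) {
      const existing = merged.get(point.x);
      if (!existing) {
        merged.set(point.x, point);
      } else {
        merged.set(point.x, {
          x: point.x,
          y: {
            // Nullish coalescing guards against missing counts.
            total: (existing.y?.total ?? 0) + (point.y?.total ?? 0),
            valid: (existing.y?.valid ?? 0) + (point.y?.valid ?? 0),
          },
        });
      }
    }
  }
  // Return buckets in chronological order.
  return [...merged.values()].sort((a, b) => a.x - b.x);
}
```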
488-523: Procedure-level function wrappers.
Wrapping `getHourlyVerificationTimeseries`, `getTwoHourlyVerificationTimeseries`, etc., around `batchVerificationTimeseries` enforces a consistent interface and ensures all intervals benefit from the batching logic. This is a clean, extensible approach.
Actionable comments posted: 0
🧹 Nitpick comments (4)
internal/clickhouse/src/verifications.ts (3)
403-443: Well-implemented batch processing for handling large keyId arrays
The new `batchVerificationTimeseries` function effectively solves the keyId overflow issue by implementing batch processing. This is a robust solution that handles large datasets by breaking them into manageable chunks.

Two minor improvements to consider:

Line 429 returns `res` directly if it has no `.val` property, but this inconsistency could cause type issues in the merger function. Consider standardizing the return type.

Consider adding timeout handling for individual batch queries:

```diff
 const batchResults = await Promise.allSettled(
   keyIdBatches.map(async (batchKeyIds, batchIndex) => {
-    const batchArgs = {
-      ...args,
-      keyIds: batchKeyIds,
-    };
-    try {
-      const res = await createVerificationTimeseriesQuerier(interval)(ch)(batchArgs);
-      if (res?.val) {
-        return res.val;
-      }
-      return res; // Return res directly if no .val
-    } catch (error) {
-      console.error(`Batch ${batchIndex} query failed:`, error);
-      return [];
-    }
+    // Add timeout to prevent long-running queries
+    return Promise.race([
+      (async () => {
+        const batchArgs = {
+          ...args,
+          keyIds: batchKeyIds,
+        };
+        try {
+          const res = await createVerificationTimeseriesQuerier(interval)(ch)(batchArgs);
+          if (res?.val) {
+            return res.val;
+          }
+          return res; // Return res directly if no .val
+        } catch (error) {
+          console.error(`Batch ${batchIndex} query failed:`, error);
+          return [];
+        }
+      })(),
+      new Promise((_, reject) =>
+        setTimeout(() => reject(new Error(`Batch ${batchIndex} timed out`)), 30000),
+      ),
+    ]);
   }),
 );
```
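As a standalone illustration of that Promise.race timeout idea (the helper name `withTimeout` and the timeout value are illustrative, not the repo's actual code):

```typescript
// Race a query promise against a timer; whichever settles first wins.
function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timed out`)), ms);
  });
  // Clear the timer either way so it doesn't keep the process alive.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

A batch query could then be wrapped as `withTimeout(runBatch(), 30_000, "Batch " + batchIndex)` (names hypothetical).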
445-486: Efficient merge implementation for batch results
The `mergeVerificationTimeseriesResults` function correctly consolidates results from multiple batches, summing counts for each timestamp and maintaining proper order.

Consider adding null protection for dataPoint.y properties to make the code more resilient:

```diff
 if (!existingPoint) {
   mergedMap.set(dataPoint.x, dataPoint);
 } else {
   mergedMap.set(dataPoint.x, {
     x: dataPoint.x,
     y: {
-      total: (existingPoint.y.total ?? 0) + (dataPoint.y.total ?? 0),
+      total: (existingPoint.y?.total ?? 0) + (dataPoint.y?.total ?? 0),
       // Apply similar changes to other properties
```

This would protect against unexpected data shapes from query results.
425-434: Robust error handling for individual batch queries
The implementation properly catches and logs errors for individual batches without failing the entire operation. This increases resilience when dealing with large datasets.

Consider enhancing error reporting by including more diagnostic information:

```diff
 try {
   const res = await createVerificationTimeseriesQuerier(interval)(ch)(batchArgs);
   if (res?.val) {
     return res.val;
   }
   return res; // Return res directly if no .val
 } catch (error) {
-  console.error(`Batch ${batchIndex} query failed:`, error);
+  console.error(
+    `Batch ${batchIndex} query failed (batch size: ${batchKeyIds.length}):`,
+    error,
+    { batchKeyIds: batchKeyIds.map(k => k.value).join(',') }
+  );
   return [];
 }
```

apps/dashboard/lib/trpc/routers/api/keys/query-overview-timeseries/index.ts (1)
30-33: ❓ Verification inconclusive
Simplified error handling flow by removing explicit error checks.
The code now directly returns the result from the ClickHouse query without checking for errors. This change aligns with similar modifications in other files and streamlines the code flow.
While this simplification makes the code cleaner, ensure that error handling has been moved to a higher level in the application or that the ClickHouse client handles errors appropriately. Otherwise, clients of this API could receive raw errors instead of well-formatted error responses.
🏁 Script executed:
```shell
#!/bin/bash
# Check if the ClickHouse verification timeseries function handles errors internally
rg "verifications\.timeseries" -A 10 -B 10 --glob "*.ts" | grep -A 5 -B 5 "err\|error\|throw\|catch"
```

Length of output: 102
Action: Confirm Centralized Error Handling for ClickHouse Queries
The change in apps/dashboard/lib/trpc/routers/api/keys/query-overview-timeseries/index.ts (lines 30-33) simplifies error handling by directly returning the ClickHouse query result without explicitly catching errors. This approach is aligned with similar changes elsewhere in the codebase. However, it's important to verify one of the following before proceeding:
- Centralized Error Handling: Confirm that error handling has been moved to a higher level in the application where raw errors from the ClickHouse client are caught and transformed into well-structured API responses.
- ClickHouse Client Guarantees: Ensure that the ClickHouse client itself appropriately handles errors internally so that clients of this API do not receive raw error messages.
Since our initial automated search did not yield evidence of explicit error handling within this module, please manually verify that either of the above mechanisms is in place. This will ensure consistency and prevent unhandled errors from propagating to the API consumers.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- apps/dashboard/app/(app)/apis/_components/hooks/use-query-timeseries.ts (1 hunks)
- apps/dashboard/lib/trpc/routers/api/keys/query-overview-timeseries/index.ts (1 hunks)
- apps/dashboard/lib/trpc/routers/api/overview/query-timeseries/index.ts (1 hunks)
- internal/clickhouse/src/verifications.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
apps/dashboard/app/(app)/apis/_components/hooks/use-query-timeseries.ts (1)
Learnt from: ogzhanolguncu
PR: unkeyed/unkey#2918
File: apps/dashboard/app/(app)/apis/_components/hooks/use-query-timeseries.ts:0-0
Timestamp: 2025-03-12T13:07:07.472Z
Learning: The useFetchVerificationTimeseries hook intentionally uses TIMESERIES_DATA_WINDOW * 24 to create a daily interval (24 hours) instead of the standard 1-hour interval used in other hooks.
⏰ Context from checks skipped due to timeout of 90000ms (16)
- GitHub Check: Test Packages / Test ./packages/nextjs
- GitHub Check: Test Packages / Test ./packages/hono
- GitHub Check: Test Packages / Test ./packages/cache
- GitHub Check: Test Packages / Test ./packages/rbac
- GitHub Check: Build / Build
- GitHub Check: Test Packages / Test ./internal/hash
- GitHub Check: Test Packages / Test ./internal/billing
- GitHub Check: Test Packages / Test ./internal/clickhouse
- GitHub Check: Test API / API Test Local
- GitHub Check: Test Packages / Test ./internal/encryption
- GitHub Check: Test Packages / Test ./internal/id
- GitHub Check: Test Packages / Test ./packages/api
- GitHub Check: Test Go API Local / Test
- GitHub Check: Test Agent Local / test_agent_local
- GitHub Check: autofix
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (4)
apps/dashboard/app/(app)/apis/_components/hooks/use-query-timeseries.ts (1)
58-60: Added defensive null handling for improved robustness
Adding nullish coalescing operators for both `data?.timeseries` and `data?.granularity` improves the code's resilience against undefined values. This prevents potential runtime errors and ensures consistent behavior with sensible defaults.

This change aligns well with the server-side error handling adjustments made in the related files, creating a more robust end-to-end implementation.
internal/clickhouse/src/verifications.ts (2)
488-522: Clean function signature updates maintaining API consistency
All exported timeseries functions have been updated to use the new batch processing approach while maintaining a consistent API interface. This is a good practice that allows for internal implementation changes without breaking external consumers.
409-411: Good performance optimization with conditional batching
The function only activates the batching mechanism when necessary (when `keyIds.length > maxBatchSize`), which avoids unnecessary complexity for smaller queries.
apps/dashboard/lib/trpc/routers/api/overview/query-timeseries/index.ts (1)
17-20: ✅ Verification successful
Simplified error handling by removing explicit error checks.
Similar to changes in the `keyVerificationsTimeseries` function, this code now directly returns the result from the ClickHouse query without error validation.

This change creates a consistent pattern across the codebase, but ensure that error handling has been moved elsewhere in the application stack. Otherwise, API consumers might receive unexpected error formats.
🏁 Script executed:
```shell
#!/bin/bash
# Check for global error handling middleware or higher-level error handling
rg "errorFormatter|createTRPCRouter|errorHandler" --glob "*.ts" -A 10 -B 10 | grep -A 5 -B 5 "error\|catch"

# Check if there are any patterns of handling ClickHouse errors globally
rg "clickhouse" -A 5 -B 5 --glob "*.ts" | grep -A 3 -B 3 "catch\|try\|error"
```

Length of output: 4438
Confirm Global Error Handling for TRPC Endpoint
The changes in the `query-timeseries` endpoint (removing explicit error checks and directly returning the ClickHouse query result) are consistent with similar refactoring (e.g., in `keyVerificationsTimeseries`). Our investigation shows that errors from ClickHouse queries are caught and reformatted at lower layers (for example, in internal/clickhouse/src/client/client.ts and within other API routes). This suggests that global error handling (likely via TRPC's router configuration or middleware) is in place.
- Ensure that the global TRPC error formatter or middleware is applied to this endpoint so that any errors thrown by the ClickHouse client are caught and consistently formatted for API consumers.
Actionable comments posted: 0
🧹 Nitpick comments (2)
internal/clickhouse/src/verifications.ts (2)
429-429: Consider adding type safety for the return value.
The code returns `res` directly if `res.val` is undefined, which could lead to inconsistent return types.

```diff
- return res; // Return res directly if no .val
+ return []; // Return empty array if no results to maintain consistent return type
```
431-432: Improve error handling with more context.
When a batch fails, we're logging the error but not capturing any details about which specific keyIds failed.

```diff
- console.error(`Batch ${batchIndex} query failed:`, error);
- return [];
+ console.error(`Batch ${batchIndex} query failed with keyIds [${batchKeyIds.map(k => k.value).join(', ')}]:`, error);
+ return [];
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- internal/clickhouse/src/verifications.ts (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (6)
- GitHub Check: Test Packages / Test ./internal/clickhouse
- GitHub Check: Test API / API Test Local
- GitHub Check: Test Agent Local / test_agent_local
- GitHub Check: Build / Build
- GitHub Check: autofix
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (3)
internal/clickhouse/src/verifications.ts (3)
403-443: Excellent implementation of batch processing to solve keyId overflow issues.
This function elegantly handles the case where a large number of keyIds are provided by:
- Directly using the existing querier for small batches
- Breaking large keyId lists into manageable chunks
- Processing batches in parallel using Promise.allSettled
- Handling errors gracefully without failing the entire operation
445-486: Well-implemented result merging function with proper handling of nullable fields.
The function correctly:
- Uses a Map to efficiently group by timestamp
- Handles potentially null/undefined values with the nullish coalescing operator
- Carefully sums all count metrics while preserving the data structure
- Returns the results in chronological order
488-522: Clean and consistent update of the exported functions.
All timeseries functions have been consistently updated to use the new batching mechanism while maintaining the same API signature, making this a non-breaking change.
Actionable comments posted: 0
🧹 Nitpick comments (2)
apps/dashboard/lib/trpc/routers/api/keys/query-overview-timeseries/index.ts (2)
23-27: Consider a streamlined approach to checking multiple filters.
While the current logic works, you could simplify filter checks with an array-based approach:

```diff
- const hasKeyFilters =
-   (transformedInputs.keyIds !== null && transformedInputs.keyIds.length > 0) ||
-   (transformedInputs.names !== null && transformedInputs.names.length > 0) ||
-   (transformedInputs.identities !== null && transformedInputs.identities.length > 0);
+ const filtersList = [
+   transformedInputs.keyIds,
+   transformedInputs.names,
+   transformedInputs.identities,
+ ];
+ const hasKeyFilters = filtersList.some((arr) => arr && arr.length > 0);
```
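A runnable version of that suggestion (the filter field names mirror the review, but the type shape is an assumption):

```typescript
// Array-based check: true if any of the key-related filters is non-empty.
type KeyFilters = {
  keyIds: string[] | null;
  names: string[] | null;
  identities: string[] | null;
};

function hasKeyFilters(filters: KeyFilters): boolean {
  return [filters.keyIds, filters.names, filters.identities].some(
    (arr) => arr !== null && arr.length > 0,
  );
}
```

Adding a new filter then only means extending the array, not appending another `||` clause.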
29-29: Minimize complexity by removing the union type if null is never assigned.
Currently, `keyIds` is defined as `KeysOverviewFilterUrlValue[] | null` but always initialized to `[]`. Consider standardizing to just an array:

```diff
-let keyIds: KeysOverviewFilterUrlValue[] | null = [];
+let keyIds: KeysOverviewFilterUrlValue[] = [];
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- apps/dashboard/lib/trpc/routers/api/keys/query-overview-timeseries/index.ts (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (9)
- GitHub Check: Test Packages / Test ./internal/hash
- GitHub Check: Test Packages / Test ./packages/nextjs
- GitHub Check: Test Packages / Test ./internal/clickhouse
- GitHub Check: Test Go API Local / Test
- GitHub Check: Test API / API Test Local
- GitHub Check: Test Agent Local / test_agent_local
- GitHub Check: Build / Build
- GitHub Check: autofix
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (6)
apps/dashboard/lib/trpc/routers/api/keys/query-overview-timeseries/index.ts (6)
2-2: No concerns with the newly imported type.
This import appears correct and aligns well with usage in the key-filters logic later on.
6-6: Importing `getApi` and `queryApiKeys` is appropriate.
These new imports look consistent with the usage in this file.
12-18: Validation logic and error handling look correct.
Throwing a `NOT_FOUND` error if `api` or `api.keyAuth?.id` is missing is an appropriate approach to guard against invalid or unconfigured APIs.
20-20: Assignment of `keyspaceId` is straightforward.
No issues here; the naming and usage are clear.
54-57: ✅ Verification successful
Double-check the error handling requirement for ClickHouse queries.
Previously, partial checks may have occurred on the result. If the ClickHouse library throws on error, it’s fine. Otherwise, confirm that returning the raw result aligns with your error-handling strategy.
🏁 Script executed:
```shell
#!/bin/bash
# Searching for usage of clickhouse.verifications.timeseries calls
# to confirm if there's any recommended error-checking convention.
rg -A 5 "clickhouse\.verifications\.timeseries"
```

Length of output: 1366
Action: Confirm ClickHouse Error-Handling Behavior
After verifying similar usage in both the apps/dashboard/lib/trpc/routers/api/keys/query-overview-timeseries/index.ts and apps/dashboard/lib/trpc/routers/api/overview/query-timeseries/index.ts files, it appears that the code consistently relies on the ClickHouse library to throw errors. In both instances, the raw result is returned without additional error-checking logic. Please confirm that this approach is in line with your overall error-handling strategy: specifically, that the library does indeed throw exceptions on errors and that no extra checks are required.
31-42: ✅ Verification successful
Confirm if an empty result set should lead to an error or is acceptable.
When `hasKeyFilters` is true but no keys match, `keyIds` becomes an empty array. Depending on your business logic, this might be correct or it may require a special error.

Run this script to check how `queryApiKeys` is used elsewhere and see if empty results are intentionally handled:
🏁 Script executed:
```shell
#!/bin/bash
# Verifying usage of queryApiKeys throughout the codebase.
rg -A 5 "queryApiKeys"
```

Length of output: 4788
Clarification on Handling Empty Key Sets
Based on our review of how `queryApiKeys` is used throughout the codebase (e.g., in the logs and active keys timeseries endpoints), it appears that returning an empty array when no keys match is the expected and consistent behavior. There's no evidence elsewhere in the codebase that an empty result should trigger an error. If this aligns with your business logic, then the current implementation is acceptable.
What does this PR do?
Fixes # (issue)
If there is not an issue for this, please create one first. This is used for tracking purposes and also helps us understand why this PR exists.
Type of change
How should this be tested?
Checklist
Required
- pnpm build
- pnpm fmt
- console.logs
- git pull origin main

Appreciated
Summary by CodeRabbit