Conversation
|
Caution: Review failed. The pull request is closed.
📝 Walkthrough
WalkthroughAdds PostgreSQL materialized views and their refresh lifecycle, plus an advisory-locked periodic refresher; routes eligible dashboard analytics queries to the mat views; implements mat-view-backed aggregation/read helpers and eligibility checks; introduces concurrent COUNT + page fetch; adds frontend tab-scoped lazy loaders and filter-name deduplication; promotes
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Dashboard App
    participant Store as RDBLogStore
    participant DB as PostgreSQL
    participant BG as MatView Refresher
    Note over App,Store: Startup
    App->>Store: Initialize LogStore
    Store->>DB: ensureMatViews() (CREATE MATERIALIZED VIEW / INDEX)
    Store->>DB: refreshMatViews() (initial REFRESH CONCURRENTLY with advisory lock)
    Store->>BG: startMatViewRefresher(interval)
    Note over App,Store: Query Time
    App->>Store: GetStats(filters)
    alt canUseMatView(filters) && dialect==postgres
        Store->>DB: query mv_logs_hourly / mv_logs_filterdata
        DB-->>Store: aggregated results
    else
        Store-->>DB: query raw log tables
        DB-->>Store: raw results
    end
    Store-->>App: Stats response
    Note over BG,DB: Background Maintenance
    BG->>DB: refreshMatViews() (periodic, advisory-locked)
    DB->>DB: REFRESH MATERIALIZED VIEW CONCURRENTLY
```

```mermaid
sequenceDiagram
    participant User as Dashboard User
    participant Page as Dashboard Page
    participant API as LogStore API
    User->>Page: Open Dashboard
    Page->>Page: init per-tab refs
    Page->>API: fetchOverviewData()
    API-->>Page: Overview payloads
    Page->>User: Render Overview
    User->>Page: Switch to Provider tab
    Page->>Page: reset/load flags on filter change
    Page->>API: fetchProviderData()
    API-->>Page: Provider payloads
    Page->>User: Render Provider tab
    Note over Page: Background warm-up
    Page->>Page: wait 150ms
    Page->>API: warm other tabs in background
    API-->>Page: cached responses
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 1 | ❌ 4
❌ Failed checks (3 warnings, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
ui/app/workspace/dashboard/page.tsx (1)
279-381: ⚠️ Potential issue | 🟠 Major
The generation refs no longer stop stale responses from painting old data.
`gen` is only checked when flipping the `*FetchedRef`/`*LoadingRef` flags. The awaited fetchers still call `setHistogramData`, `setProviderCostData`, `setMcpHistogramData`, `setRankingsData`, etc. unconditionally, so a slower request for old filters can resolve last and overwrite the newer selection. Also applies to: 400-446
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ui/app/workspace/dashboard/page.tsx` around lines 279 - 381, The fetch helpers (fetchOverviewData, fetchProviderData, fetchMcpData, fetchRankingsData) unconditionally call set*Data after awaiting concurrent requests so a stale slower response can overwrite newer data; capture the current generation token/ref (the same `gen` checked when flipping the *FetchedRef/*LoadingRef flags) at the start of each fetch, and before calling each set*Data or clearing the loading flag verify the captured gen still matches the latest gen (or that the corresponding isCurrent helper returns true); only apply state updates when the gen matches and move loading flag clearing into a finally or guarded block so stale results cannot paint old data.

framework/logstore/postgres.go (1)
80-89: ⚠️ Potential issue | 🟠 Major
This hard version gate breaks backward compatibility with PostgreSQL < 16 without a migration path.
The constructor now rejects PG15 and older at startup, but the logstore migrations still carry a PG<16 fallback path (Go-based `isValidJSON()` validation instead of server-side `IS NOT JSON OBJECT`). That fallback code is now unreachable—existing deployments on PG15 cannot upgrade to this version without first upgrading PostgreSQL itself. If the codebase is already requiring PG16+ as a breaking change across all services, document it explicitly; otherwise, this needs a phased rollout or a backward-compatible flag to allow PG<16 deployments additional time.
🧹 Nitpick comments (1)
framework/logstore/rdb.go (1)
409-413: Unnecessary `ErrRecordNotFound` check after `Find()`.
GORM's `Find()` returns an empty slice and `nil` error when no records match—it does not return `gorm.ErrRecordNotFound`. This check is dead code. Consider removing it for clarity:
♻️ Proposed simplification
```diff
-	err := dataQuery.Find(&logs).Error
-	if err != nil && errors.Is(err, gorm.ErrRecordNotFound) {
-		return nil
-	}
-	return err
+	return dataQuery.Find(&logs).Error
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/logstore/rdb.go` around lines 409 - 413, The code checks for gorm.ErrRecordNotFound after calling dataQuery.Find(&logs).Error, but GORM's Find returns an empty slice and nil error when nothing matches, so the errors.Is(err, gorm.ErrRecordNotFound) branch is dead code; remove that conditional and simply propagate the error from dataQuery.Find(&logs).Error (e.g., if err != nil { return err } or just return err) and keep the rest of the surrounding logic unchanged—look for the dataQuery.Find(&logs).Error call and the logs variable to locate the exact spot.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@framework/logstore/matviews.go`:
- Around line 102-134: refreshMatViews is being executed concurrently by every
replica; wrap the refresh sequence in the same advisory-lock pattern used
elsewhere (the ensureMatViews/create logic) so only one instance refreshes at a
time: acquire the Postgres advisory lock (using the same lock key/value and
try-lock semantics as the existing pattern), check success and skip refresh if
lock not acquired, defer releasing the advisory lock, then call db.Exec("REFRESH
MATERIALIZED VIEW CONCURRENTLY ...") for each view; update startMatViewRefresher
to rely on the modified refreshMatViews (no extra changes needed) so refreshes
across replicas are serialized.
- Around line 18-21: The mvLogsHourlyDDL materialized view currently stores
exact p90/p95/p99 per hour which cannot be re-aggregated by weighted average;
remove the per-hour percentile columns from mvLogsHourlyDDL and from the other
similar materialized view(s) in the provider-latency path, and instead either
(a) store a mergeable sketch/histogram column (e.g., t-digest/tdigest,
HDRHistogram, or a serialized histogram type) such as latency_histogram that can
be unioned and re-queried for percentiles, or (b) stop precomputing percentiles
and run percentile queries directly against raw logs; update the DDL and any
downstream code that reads p90/p95/p99 to use the new histogram column and
aggregation functions (or raw-log queries) so multi-hour percentile queries are
mathematically correct.
- Around line 136-160: canUseMatView currently allows second-precision start/end
times which mismatch applyMatViewFilters' hour-truncation and causes inflated
metrics from mv_logs_hourly; update canUseMatView to only return true when the
time window is hour-aligned (i.e., if f.StartTime and f.EndTime are non-nil they
must have minutes, seconds, and nanoseconds == 0 and EndTime >= StartTime),
otherwise fall back to raw logs; reference the canUseMatView(SearchFilters)
function, the StartTime/EndTime fields, and mv_logs_hourly/applyMatViewFilters
when implementing the guard.
- Around line 56-80: The unique index mv_logs_filterdata_uniq does not include
the name columns causing REFRESH MATERIALIZED VIEW CONCURRENTLY to fail when an
ID appears with different names; update the mvLogsFilterdataUniqueIdx definition
to add selected_key_name, virtual_key_name, and routing_rule_name (matching the
SELECT DISTINCT columns in mvLogsFilterdataDDL) to the index so the uniqueness
covers both IDs and their corresponding name fields.
In `@ui/app/workspace/dashboard/page.tsx`:
- Around line 467-487: The lazy-load and warm-up useEffect blocks check
urlState.tab against the wrong literal "providers" causing the Provider Usage
tab to be treated as inactive; update both effects to use the actual tab key
"provider-usage" wherever they compare tab (i.e., replace the checks comparing
tab === "providers" and tab !== "providers" with "provider-usage") so
ensureProviderDataLoaded is called eagerly when the Provider Usage tab is active
and excluded from warm-up when it is the current tab.
In `@ui/components/filters/filterPopover.tsx`:
- Around line 39-50: The current dedup function (used when building
FILTER_OPTIONS for "Selected Keys", "Virtual Keys", and "Routing Rules")
collapses multiple entries with the same name into a single label, but
resolveValueForCategory still expects to map a label to one ID; to fix, stop
collapsing rows to a single name: change FILTER_OPTIONS to keep items keyed by
their unique IDs (e.g., use the original
availableSelectedKeys/availableVirtualKeys/availableRoutingRules arrays of
{id,name}) or else change the mapping so each label in FILTER_OPTIONS maps to an
array of IDs and update resolveValueForCategory to toggle all IDs for that
label; update references to dedup, FILTER_OPTIONS, and resolveValueForCategory
accordingly so duplicate names remain reachable via their distinct IDs.
---
Outside diff comments:
In `@ui/app/workspace/dashboard/page.tsx`:
- Around line 279-381: The fetch helpers (fetchOverviewData, fetchProviderData,
fetchMcpData, fetchRankingsData) unconditionally call set*Data after awaiting
concurrent requests so a stale slower response can overwrite newer data; capture
the current generation token/ref (the same `gen` checked when flipping the
*FetchedRef/*LoadingRef flags) at the start of each fetch, and before calling
each set*Data or clearing the loading flag verify the captured gen still matches
the latest gen (or that the corresponding isCurrent helper returns true); only
apply state updates when the gen matches and move loading flag clearing into a
finally or guarded block so stale results cannot paint old data.
---
Nitpick comments:
In `@framework/logstore/rdb.go`:
- Around line 409-413: The code checks for gorm.ErrRecordNotFound after calling
dataQuery.Find(&logs).Error, but GORM's Find returns an empty slice and nil
error when nothing matches, so the errors.Is(err, gorm.ErrRecordNotFound) branch
is dead code; remove that conditional and simply propagate the error from
dataQuery.Find(&logs).Error (e.g., if err != nil { return err } or just return
err) and keep the rest of the surrounding logic unchanged—look for the
dataQuery.Find(&logs).Error call and the logs variable to locate the exact spot.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0dddbadb-b263-46e6-be17-1e73f2927f99
📒 Files selected for processing (8)
- framework/go.mod
- framework/logstore/matviews.go
- framework/logstore/migrations.go
- framework/logstore/postgres.go
- framework/logstore/rdb.go
- framework/logstore/sqlite.go
- ui/app/workspace/dashboard/page.tsx
- ui/components/filters/filterPopover.tsx
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@framework/logstore/matviews.go`:
- Around line 745-748: The current prevMap uses a string key built by
concatenating r.Model + ":" + r.Provider which can collide if fields contain
":"; define a small composite key type (e.g., modelProvKey struct{ Model,
Provider string }) and change prevMap to map[modelProvKey]int, populate it with
prevMap[modelProvKey{Model: r.Model, Provider: r.Provider}] = i in the loop that
builds prevMap, and update all lookups (the code that reads prevMap later,
including the similar usage at the other occurrence) to use the same composite
key type instead of the concatenated string.
- Around line 260-268: The matview-based histogram functions
(getHistogramFromMatView, getTokenHistogramFromMatView,
getCostHistogramFromMatView, getModelHistogramFromMatView,
getLatencyHistogramFromMatView, getProviderCostHistogramFromMatView,
getProviderTokenHistogramFromMatView, getProviderLatencyHistogramFromMatView)
currently re-bucket hourly materialized data into arbitrary bucketSizeSeconds
which breaks sub-hour resolutions; add a guard at the start of each of these
functions to reject bucketSizeSeconds < 3600 (return an error) or,
alternatively, detect that case and route to the raw-log query path (the
existing non-matview logic) so finer resolutions use raw logs instead of
mv_logs_hourly, ensuring you check and handle bucketSizeSeconds before building
the SQL SELECT/CAST re-bucketing expression.
- Around line 106-115: refreshMatViews currently uses separate db.WithContext
calls so the pg_try_advisory_lock and pg_advisory_unlock may run on different
pooled connections; change refreshMatViews to obtain a dedicated *sql.Conn from
db (following the advisoryLock pattern in migrations.go) and run the acquire
(pg_try_advisory_lock), the refresh work, and the release (pg_advisory_unlock)
on that same connection so the session-scoped lock is correctly held and
released; also check and handle the error/result from pg_advisory_unlock (do not
ignore its return value) and reference matviewRefreshAdvisoryLockKey in the
queries.
In `@ui/components/filters/filterPopover.tsx`:
- Around line 56-59: The active-filter badge currently counts raw ID arrays
(selected_key_ids, virtual_key_ids, routing_rule_ids) which inflates the visible
count; change the badge/count logic to map those ID arrays to their visible
names using the same lookups used to render options (availableSelectedKeys,
availableVirtualKeys, availableRoutingRules / dedup), collapse/uniq by the
visible label, and then sum those unique labels for the badge. Update both
places noted (the badge computation and the other count usage around the block
that renders options) so the badge uses deduped visible labels instead of raw ID
length.
✅ Files skipped from review due to trivial changes (3)
- framework/logstore/sqlite.go
- framework/go.mod
- framework/logstore/rdb.go
🚧 Files skipped from review as they are similar to previous changes (3)
- framework/logstore/postgres.go
- ui/app/workspace/dashboard/page.tsx
- framework/logstore/migrations.go
Actionable comments posted: 2
♻️ Duplicate comments (1)
framework/logstore/matviews.go (1)
106-123: ⚠️ Potential issue | 🟠 Major
Session-scoped advisory lock may leak due to connection pooling.
`pg_try_advisory_lock` and `pg_advisory_unlock` are session-scoped, but each `db.WithContext(ctx)` call can acquire a different connection from GORM's pool. If the acquire (line 109) and unlock (line 115) run on different connections, the original lock is never released and all future refresh attempts will silently skip forever. Use a dedicated `*sql.Conn` (similar to the `advisoryLock` pattern in `migrations.go`) to pin acquire, refresh, and unlock to the same database session.
*sql.Conn(similar to theadvisoryLockpattern inmigrations.go) to pin acquire, refresh, and unlock to the same database session.🔧 Suggested fix using dedicated connection
```diff
 func refreshMatViews(ctx context.Context, db *gorm.DB) error {
-	// Try to acquire advisory lock; skip refresh if another replica holds it.
-	var acquired bool
-	if err := db.WithContext(ctx).Raw("SELECT pg_try_advisory_lock(?)", matviewRefreshAdvisoryLockKey).Row().Scan(&acquired); err != nil {
-		return fmt.Errorf("failed to try advisory lock for matview refresh: %w", err)
+	sqlDB, err := db.DB()
+	if err != nil {
+		return fmt.Errorf("failed to get sql.DB: %w", err)
 	}
-	if !acquired {
-		return nil // another replica is refreshing
+	conn, err := sqlDB.Conn(ctx)
+	if err != nil {
+		return fmt.Errorf("failed to get dedicated connection for matview refresh: %w", err)
 	}
-	defer db.WithContext(ctx).Exec("SELECT pg_advisory_unlock(?)", matviewRefreshAdvisoryLockKey)
+	defer conn.Close()
+
+	// Try to acquire advisory lock; skip refresh if another replica holds it.
+	var acquired bool
+	if err := conn.QueryRowContext(ctx, "SELECT pg_try_advisory_lock($1)", matviewRefreshAdvisoryLockKey).Scan(&acquired); err != nil {
+		return fmt.Errorf("failed to try advisory lock for matview refresh: %w", err)
+	}
+	if !acquired {
+		return nil // another replica is refreshing
+	}
+	defer func() {
+		_, _ = conn.ExecContext(ctx, "SELECT pg_advisory_unlock($1)", matviewRefreshAdvisoryLockKey)
+	}()

 	for _, view := range []string{"mv_logs_hourly", "mv_logs_filterdata"} {
-		if err := db.WithContext(ctx).Exec("REFRESH MATERIALIZED VIEW CONCURRENTLY " + view).Error; err != nil {
+		if _, err := conn.ExecContext(ctx, "REFRESH MATERIALIZED VIEW CONCURRENTLY "+view); err != nil {
 			return fmt.Errorf("failed to refresh %s: %w", view, err)
 		}
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/logstore/matviews.go` around lines 106 - 123, refreshMatViews currently uses session-scoped advisory locks across multiple db.WithContext calls which may pick different pooled connections and leak the lock; change it to obtain and use a single dedicated SQL connection for the entire operation (pin the session) similar to the advisoryLock pattern in migrations.go: get the underlying *sql.DB (via db.DB()), call .Conn(ctx) to get a *sql.Conn, run the pg_try_advisory_lock(...) on that conn, run the REFRESH MATERIALIZED VIEW CONCURRENTLY commands on that same conn, and finally run pg_advisory_unlock(...) and close the *sql.Conn in a defer; ensure all Exec/Query calls reference that *sql.Conn so the acquire, refresh, and unlock happen on the same session and errors are handled/wrapped (refer to refreshMatViews and matviewRefreshAdvisoryLockKey for placement).
🧹 Nitpick comments (1)
framework/logstore/matviews.go (1)
745-748: String key collision possible when model/provider contains ":".
The `prevMap` key is built as `r.Model+":"+r.Provider`, which can collide if either field contains `:`. For example, model `"a:b"` with provider `"c"` collides with model `"a"` with provider `"b:c"`.
♻️ Use a struct key to avoid collision
```diff
+	type rankingKey struct {
+		model    string
+		provider string
+	}
-	prevMap := make(map[string]int, len(prevResults))
+	prevMap := make(map[rankingKey]int, len(prevResults))
 	for i, r := range prevResults {
-		prevMap[r.Model+":"+r.Provider] = i
+		prevMap[rankingKey{model: r.Model, provider: r.Provider}] = i
 	}
 	...
-	if idx, ok := prevMap[r.Model+":"+r.Provider]; ok {
+	if idx, ok := prevMap[rankingKey{model: r.Model, provider: r.Provider}]; ok {
```

Also applies to: 767-768
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/logstore/matviews.go` around lines 745 - 748, The map key construction using string concatenation prevMap[r.Model+":"+r.Provider] can produce collisions when Model or Provider contain ":"; replace the string key with a composite struct key (e.g., type prevKey struct { Model, Provider string }) and use prevMap[prevKey{r.Model, r.Provider}] when populating and looking up entries (apply the same change to the other instance at the similar block around the code that mirrors lines 767-768), ensuring all accesses use the new struct key to avoid collisions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@framework/logstore/matviews.go`:
- Around line 564-576: The SELECT includes SUM(total_cached_read_tokens) AS
cached_read_tokens but the destination scan struct used to populate results
doesn't have a corresponding field, so add a field (e.g., CachedReadTokens int64
or appropriate type) to the result struct that backs the results variable and
ensure its db/json/gorm tag matches "cached_read_tokens" (or remove the SUM(...)
AS cached_read_tokens from the q.Select call if you intentionally don't need
that value); update the type/tag to match how other aggregated fields
(prompt_tokens/completion_tokens/total_tkns) are declared so the value is
captured into results after q.Select(...).Find(&results).
- Around line 260-268: getHistogramFromMatView must defend against sub-hour
buckets because mv_logs_hourly contains hourly rows; add a check at the start of
getHistogramFromMatView that if bucketSizeSeconds > 0 and bucketSizeSeconds <
3600 you immediately return an error (or normalize to 3600) instead of running
the SQL that rebuckets hourly data. Reference the function
getHistogramFromMatView and the mv_logs_hourly aggregation and validate
bucketSizeSeconds before the SELECT/CAST(...) SQL block so callers cannot pass 0
< bucketSizeSeconds < 3600.
---
Duplicate comments:
In `@framework/logstore/matviews.go`:
- Around line 106-123: refreshMatViews currently uses session-scoped advisory
locks across multiple db.WithContext calls which may pick different pooled
connections and leak the lock; change it to obtain and use a single dedicated
SQL connection for the entire operation (pin the session) similar to the
advisoryLock pattern in migrations.go: get the underlying *sql.DB (via db.DB()),
call .Conn(ctx) to get a *sql.Conn, run the pg_try_advisory_lock(...) on that
conn, run the REFRESH MATERIALIZED VIEW CONCURRENTLY commands on that same conn,
and finally run pg_advisory_unlock(...) and close the *sql.Conn in a defer;
ensure all Exec/Query calls reference that *sql.Conn so the acquire, refresh,
and unlock happen on the same session and errors are handled/wrapped (refer to
refreshMatViews and matviewRefreshAdvisoryLockKey for placement).
---
Nitpick comments:
In `@framework/logstore/matviews.go`:
- Around line 745-748: The map key construction using string concatenation
prevMap[r.Model+":"+r.Provider] can produce collisions when Model or Provider
contain ":"; replace the string key with a composite struct key (e.g., type
prevKey struct { Model, Provider string }) and use prevMap[prevKey{r.Model,
r.Provider}] when populating and looking up entries (apply the same change to
the other instance at the similar block around the code that mirrors lines
767-768), ensuring all accesses use the new struct key to avoid collisions.
✅ Files skipped from review due to trivial changes (2)
- framework/logstore/sqlite.go
- framework/go.mod
🚧 Files skipped from review as they are similar to previous changes (3)
- ui/components/filters/filterPopover.tsx
- framework/logstore/postgres.go
- framework/logstore/rdb.go
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
ui/components/filters/filterPopover.tsx (1)
124-171: ⚠️ Potential issue | 🟠 Major
Don't let grouped filters enter a partial visible state.
Line 125 and Line 144 require every backing ID to be selected, but Lines 149-156 count a grouped label as active when any backing ID matches. If `selected_key_ids`, `virtual_key_ids`, or `routing_rule_ids` arrive with only one duplicate ID set, the badge shows an active filter, the row renders unchecked, and the first click broadens the filter instead of clearing it.
🐛 One way to keep the row state aligned with the active filter state
```diff
-  // Check if ALL resolved IDs are already selected (toggle all together)
-  const allSelected = resolvedIds.every((id) => currentValues.includes(id));
-  const newValues = allSelected
+  const anySelected = resolvedIds.some((id) => currentValues.includes(id));
+  const newValues = anySelected
     ? currentValues.filter((v) => !resolvedIds.includes(v))
     : [...currentValues, ...resolvedIds.filter((id) => !currentValues.includes(id))];
@@
-  return Array.isArray(currentValues) && resolvedIds.every((id) => currentValues.includes(id));
+  return Array.isArray(currentValues) && resolvedIds.some((id) => currentValues.includes(id));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ui/components/filters/filterPopover.tsx` around lines 124 - 171, The badge/count logic is inconsistent with selection logic: isSelected requires ALL backing IDs for a grouped label to be present, but countUniqueNames currently treats a name as active if ANY backing ID matches, causing mismatched row state. Update countUniqueNames (used with dedupedCountKeys for selected_key_ids, virtual_key_ids, routing_rule_ids) to count a name only when every mappedId for that name is included in the current ids (i.e., use mappedIds.every(id => ids.includes(id))) so the deduped count and isSelected semantics align.
🧹 Nitpick comments (2)
ui/components/filters/filterPopover.tsx (1)
34-59: Derive the visible labels from the grouped maps.
`groupByName()` and `dedup()` currently encode the same grouping rule twice. If name normalization ever changes in only one path, the rendered options, ID resolution, and badge counting will drift apart. Reusing the `Map` keys here keeps the visible labels on the same source of truth as the selection logic.
♻️ Suggested cleanup
```diff
-  // Deduplicate by name to avoid React key collisions (e.g. multiple deleted keys with the same name)
-  const dedup = (items: { name: string }[]) => [...new Map(items.map((i) => [i.name, i])).values()].map((i) => i.name);
-
   const FILTER_OPTIONS: Record<string, string[]> = {
     Status: [...Statuses],
     Providers: providersLoading ? [] : availableProviders.map((provider) => provider.name),
     Type: [...RequestTypes],
     Models: filterDataLoading ? [] : availableModels,
-    "Selected Keys": filterDataLoading ? [] : dedup(availableSelectedKeys),
-    "Virtual Keys": filterDataLoading ? [] : dedup(availableVirtualKeys),
+    "Selected Keys": filterDataLoading ? [] : [...selectedKeyNameToIds.keys()],
+    "Virtual Keys": filterDataLoading ? [] : [...virtualKeyNameToIds.keys()],
     "Routing Engines": filterDataLoading ? [] : availableRoutingEngines,
-    "Routing Rules": filterDataLoading ? [] : dedup(availableRoutingRules),
+    "Routing Rules": filterDataLoading ? [] : [...routingRuleNameToIds.keys()],
   };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ui/components/filters/filterPopover.tsx` around lines 34 - 59, The dedup logic duplicates the name-normalization already produced by groupByName, causing drift between rendered labels and selection maps; replace uses of dedup(availableSelectedKeys), dedup(availableVirtualKeys), and dedup(availableRoutingRules) in FILTER_OPTIONS with the keys from the corresponding grouped maps (selectedKeyNameToIds, virtualKeyNameToIds, routingRuleNameToIds) — e.g., when filterDataLoading is false use Array.from(selectedKeyNameToIds.keys()) (and similarly for virtualKeyNameToIds and routingRuleNameToIds) so the visible labels are derived from the same Map used for ID resolution and badge counting.framework/logstore/matviews.go (1)
815-823: Whitelist the allowed `(idCol, nameCol)` pairs.
`idCol` and `nameCol` are interpolated directly into both SQL fragments here. Even if current callers only pass constants, this helper makes identifier injection one refactor away. Constrain it to the supported column pairs and reject anything else.
Possible guard
```diff
 func (s *RDBLogStore) getDistinctKeyPairsFromMatView(ctx context.Context, idCol, nameCol string) ([]KeyPairResult, error) {
+	validPairs := map[string]string{
+		"selected_key_id": "selected_key_name",
+		"virtual_key_id":  "virtual_key_name",
+		"routing_rule_id": "routing_rule_name",
+	}
+	if validPairs[idCol] != nameCol {
+		return nil, fmt.Errorf("unsupported key pair columns")
+	}
+
 	var results []KeyPairResult
 	if err := s.db.WithContext(ctx).Table("mv_logs_filterdata").
 		Select(fmt.Sprintf("DISTINCT %s AS id, %s AS name", idCol, nameCol)).
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/logstore/matviews.go` around lines 815 - 823, The helper getDistinctKeyPairsFromMatView currently interpolates idCol and nameCol into SQL directly; restrict allowed column pairs by adding a whitelist (e.g., only allow the known safe pairs used elsewhere) and validate the incoming idCol/nameCol against that set before building the query; if the pair is not allowed return a clear error and do not execute the query (use the function name getDistinctKeyPairsFromMatView and reference the mv_logs_filterdata table when adding the check so reviewers can find it).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@framework/logstore/matviews.go`:
- Around line 56-74: The materialized view mv_logs_filterdata only keeps 30 days
which causes getDistinctModelsFromMatView, getDistinctKeyPairsFromMatView, and
getDistinctRoutingEnginesFromMatView to lose filter values for dashboard
lookbacks up to 60 days; update the mvLogsFilterdataDDL WHERE clause to retain
the full dashboard-supported horizon (e.g., change INTERVAL '30 days' to
INTERVAL '60 days' or make the interval configurable) so the mat view matches
the dashboard max lookback, or alternatively implement a fallback in those
functions to run raw distinct queries when the requested window exceeds the
mat-view retention.
---
Outside diff comments:
In `@ui/components/filters/filterPopover.tsx`:
- Around line 124-171: The badge/count logic is inconsistent with selection
logic: isSelected requires ALL backing IDs for a grouped label to be present,
but countUniqueNames currently treats a name as active if ANY backing ID
matches, causing mismatched row state. Update countUniqueNames (used with
dedupedCountKeys for selected_key_ids, virtual_key_ids, routing_rule_ids) to
count a name only when every mappedId for that name is included in the current
ids (i.e., use mappedIds.every(id => ids.includes(id))) so the deduped count and
isSelected semantics align.
---
Nitpick comments:
In `@framework/logstore/matviews.go`:
- Around line 815-823: The helper getDistinctKeyPairsFromMatView currently
interpolates idCol and nameCol into SQL directly; restrict allowed column pairs
by adding a whitelist (e.g., only allow the known safe pairs used elsewhere) and
validate the incoming idCol/nameCol against that set before building the query;
if the pair is not allowed return a clear error and do not execute the query
(use the function name getDistinctKeyPairsFromMatView and reference the
mv_logs_filterdata table when adding the check so reviewers can find it).
In `@ui/components/filters/filterPopover.tsx`:
- Around line 34-59: The dedup logic duplicates the name-normalization already
produced by groupByName, causing drift between rendered labels and selection
maps; replace uses of dedup(availableSelectedKeys), dedup(availableVirtualKeys),
and dedup(availableRoutingRules) in FILTER_OPTIONS with the keys from the
corresponding grouped maps (selectedKeyNameToIds, virtualKeyNameToIds,
routingRuleNameToIds) — e.g., when filterDataLoading is false use
Array.from(selectedKeyNameToIds.keys()) (and similarly for virtualKeyNameToIds
and routingRuleNameToIds) so the visible labels are derived from the same Map
used for ID resolution and badge counting.
✅ Files skipped from review due to trivial changes (4)
- framework/logstore/sqlite.go
- framework/go.mod
- ui/app/workspace/dashboard/page.tsx
- framework/logstore/postgres.go
🚧 Files skipped from review as they are similar to previous changes (2)
- framework/logstore/migrations.go
- framework/logstore/rdb.go