feat: add session log storage and realtime request normalization (#2338)
Conversation
📝 Walkthrough (Summary by CodeRabbit)
Adds session-scoped paginated log retrieval and summaries, ParentRequestID filtering and index registration, matview gating for ParentRequestID, realtime output/history projection tweaks, realtime request normalization with a pricing fallback, and observability-plugin flushing on trace completion.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Store as RDBLogStore
    participant DB as Database
    Client->>Store: GetSessionLogs(ctx, sessionID, pagination)
    Store->>Store: clamp limit (≤50), normalize offset (≥0), set ORDER BY timestamp,id
    par ConcurrentQueries
        Store->>DB: SELECT COUNT(*) WHERE parent_request_id = ? AND session_id = ?
        Store->>DB: SELECT rows WHERE parent_request_id = ? AND session_id = ? ORDER BY timestamp,id LIMIT ? OFFSET ?
    end
    DB-->>Store: count, rows
    Store->>Store: normalize timestamps, compute ReturnedCount and HasMore
    Store-->>Client: SessionDetailResult (logs, counts, pagination metadata)
```
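The clamp/normalize step in the diagram can be sketched as a small helper. The cap of 50 comes from the diagram above; the helper name and the choice to fall back to the cap for non-positive limits are illustrative, not the actual rdb.go code.

```go
package main

import "fmt"

// clampPagination mirrors the normalization step in the diagram: limits are
// capped at 50 (non-positive limits fall back to the cap), and negative
// offsets are reset to 0. Name and fallback behavior are illustrative.
func clampPagination(limit, offset int) (int, int) {
	if limit <= 0 || limit > 50 {
		limit = 50
	}
	if offset < 0 {
		offset = 0
	}
	return limit, offset
}

func main() {
	limit, offset := clampPagination(500, -3)
	fmt.Println(limit, offset) // 50 0
}
```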
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Confidence Score: 5/5. Safe to merge: no P0 or P1 issues found across all changed files. All code paths are correct: sync.WaitGroup.Go is valid for Go 1.26.1, trace release is deferred with panic recovery, parallel session queries avoid shared GORM state, and timestamp normalization handles both SQLite and PostgreSQL return types. Prior review concerns are addressed; no blocking findings remain, and no files require special attention.
Reviews (14). Last reviewed commit: "feat: add session log storage and realti..."
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@framework/logstore/rdb.go`:
- Around line 610-639: listSelectColumns() currently always expands the full
realtime.turn fields, which causes SearchLogs() to return full cumulative turn
payloads; split the projection so only session/detail paths include the full
turn projection. Fix by updating listSelectColumns() to accept a mode flag
(e.g., includeFullTurn bool or an enum) or create a new helper (e.g.,
sessionSelectColumns()) used by GetSessionLogs()/GetSessionDetail() while
SearchLogs() continues to use the trimmed/default projection; update callers
(SearchLogs, GetSessionLogs) to pass the appropriate flag or call the new helper
and ensure the postgres/sqlite CASE expressions (the same strings currently
assigned to inputHistoryExpr/responsesInputExpr/outputMessageExpr) are only
included when includeFullTurn is true.
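The mode-flag split suggested above can be sketched as follows. Only the includeFullTurn idea comes from the comment; the column names and the trimmed base set are placeholders, not the real projection in rdb.go.

```go
package main

import "fmt"

// listSelectColumns sketches the suggested split: every caller gets the
// trimmed base projection, and only the session/detail paths opt into the
// full realtime.turn expansion. Column names are placeholders.
func listSelectColumns(includeFullTurn bool) []string {
	cols := []string{"id", "timestamp", "provider", "model", "status"}
	if includeFullTurn {
		// Only GetSessionLogs/GetSessionDetail would pass true, so
		// SearchLogs never pays for the cumulative turn payload.
		cols = append(cols, "input_history", "responses_input", "output_message")
	}
	return cols
}

func main() {
	fmt.Println(len(listSelectColumns(false)), len(listSelectColumns(true))) // 5 8
}
```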
- Around line 89-91: canUseMatView currently allows matview fast paths even when
filters.ParentRequestID is set, causing aggregates (e.g., in GetStats,
GetHistogram, GetTokenHistogram, GetCostHistogram, GetModelHistogram,
GetLatencyHistogram, GetModelRankings, GetProviderCostHistogram,
GetProviderTokenHistogram, GetProviderLatencyHistogram) to ignore the
ParentRequestID filter; update canUseMatView (matviews.go) to include &&
f.ParentRequestID == "" in its return condition so matviews are only used when
ParentRequestID is empty, and also add handling for ParentRequestID in
applyMatViewFilters (matviews.go:181) to ensure the raw-logs path is used when
ParentRequestID is present.
- Around line 531-533: Change timestamp parsing and formatting to preserve
sub-second precision: update the parsing at GetSessionLogs where startedAt and
latestAt are parsed (currently using time.RFC3339) to use time.RFC3339Nano so
DurationMs (latestTime.Sub(startedTime).Milliseconds()) is accurate, and also
update normalizeAggregateTimestamp() to format timestamps with time.RFC3339Nano
instead of time.RFC3339; ensure any other places that round timestamps to
strings for GetSessionLogs use the RFC3339Nano constant so startedAt/latestAt
retain fractional seconds.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 11440d72-e987-4011-bc32-3b6e8b2648b8
📒 Files selected for processing (5)
- framework/logstore/migrations.go
- framework/logstore/rdb.go
- framework/logstore/store.go
- framework/logstore/tables.go
- framework/modelcatalog/utils.go
2f86e4c → 16d59fd
de28bc1 → 8b08931
16d59fd → edba174
8b08931 → 6a8d154
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@framework/logstore/rdb.go`:
- Around line 464-467: GetSessionLogs() currently claims whatever
pagination.SortBy was passed while always ordering by timestamp, and ORDER BY
timestamp alone is non-deterministic for offset pagination; update the handler
to normalize pagination.SortBy to "timestamp" before use (so the response
metadata matches actual sorting) and make the SQL ORDER BY deterministic by
appending a unique tiebreaker column (e.g. "id" or a session-specific sequence
column) to the ordering (e.g. ORDER BY timestamp ASC/DESC, id ASC) using the
existing orderDir variable; apply the same change where similar pagination logic
appears (the other pagination handling blocks that set orderDir) so both the
returned metadata and the SQL ordering are consistent and stable across pages.
In `@framework/modelcatalog/utils.go`:
- Around line 69-70: The normalizeStreamRequestType mapping currently remaps
schemas.RealtimeRequest to schemas.ResponsesRequest which breaks
PricingOverride.validateRequestTypes by making "realtime" appear unsupported;
update the case handling in normalizeStreamRequestType so that
schemas.RealtimeRequest returns schemas.RealtimeRequest (i.e., remove the remap)
to preserve runtime normalization and keep PricingOverride.validateRequestTypes
compatible with the first-class "realtime" request type.
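The fix can be sketched with stand-in constants; the string values below are invented, and only the constant names mirror the schemas.RequestType identifiers named in the comment.

```go
package main

import "fmt"

// RequestType and its constants are stand-ins for schemas.RequestType;
// the string values are invented for this sketch.
type RequestType string

const (
	ResponsesRequest       RequestType = "responses"
	ResponsesStreamRequest RequestType = "responses_stream"
	RealtimeRequest        RequestType = "realtime"
)

// normalizeStreamRequestType applies the fix described above: stream requests
// still collapse to their base type, but RealtimeRequest maps to itself, so
// PricingOverride.validateRequestTypes keeps seeing "realtime" as supported.
func normalizeStreamRequestType(rt RequestType) RequestType {
	switch rt {
	case ResponsesStreamRequest:
		return ResponsesRequest
	case RealtimeRequest:
		return RealtimeRequest // previously remapped to ResponsesRequest
	default:
		return rt
	}
}

func main() {
	fmt.Println(normalizeStreamRequestType(RealtimeRequest))       // realtime
	fmt.Println(normalizeStreamRequestType(ResponsesStreamRequest)) // responses
}
```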
📒 Files selected for processing (6)
- framework/logstore/matviews.go
- framework/logstore/migrations.go
- framework/logstore/rdb.go
- framework/logstore/store.go
- framework/logstore/tables.go
- framework/modelcatalog/utils.go
🚧 Files skipped from review as they are similar to previous changes (2)
- framework/logstore/matviews.go
- framework/logstore/migrations.go
edba174 → a3e85c4
6a8d154 → 4978f02
a3e85c4 → 78a4b1f
90ebca7 → f38794a
78a4b1f → 51db921
Actionable comments posted: 1
♻️ Duplicate comments (1)
framework/logstore/rdb.go (1)
464-469: ⚠️ Potential issue | 🟡 Minor: Normalize the returned `order` value too.
The SQL now has a deterministic sort, but `Pagination.Order` is still echoed back as `""`, `"DESC"`, or any other caller value, even though the query falls back to ascending unless the input is exactly `"desc"`. That leaves the response metadata inconsistent with the actual ordering.
🛠️ Proposed fix

```diff
 pagination.SortBy = "timestamp"
+pagination.Order = strings.ToLower(pagination.Order)
+if pagination.Order != "desc" {
+	pagination.Order = "asc"
+}
 orderDir := "ASC"
 if pagination.Order == "desc" {
 	orderDir = "DESC"
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/logstore/rdb.go` around lines 464 - 469, The Pagination.Order field is not normalized even though orderDir determines SQL ordering; after computing orderDir (based on pagination.Order) and building orderClause, set pagination.Order to a normalized value that matches the applied sort (e.g., "asc" when orderDir == "ASC" and "desc" when orderDir == "DESC") so returned metadata reflects the actual ordering; update the block around pagination.SortBy, orderDir, and orderClause to assign pagination.Order accordingly (refer to pagination.SortBy, pagination.Order, orderDir, and orderClause).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@framework/logstore/rdb.go`:
- Around line 450-453: GetSessionLogs currently overwrites the caller's
sessionID by assigning sessionID = strings.TrimSpace(sessionID), which changes
the exact ID later returned and used for lookups; instead, perform
strings.TrimSpace(sessionID) only for the whitespace-only guard (e.g., if
strings.TrimSpace(sessionID) == "") but keep and pass the original sessionID
variable unchanged to the underlying store and in any returned structures so the
exact caller-provided ID is preserved (update references in GetSessionLogs that
use sessionID for queries/return values to use the original value).
📒 Files selected for processing (8)
- framework/logstore/matviews.go
- framework/logstore/migrations.go
- framework/logstore/rdb.go
- framework/logstore/store.go
- framework/logstore/tables.go
- framework/modelcatalog/pricing.go
- framework/modelcatalog/pricing_test.go
- framework/modelcatalog/utils.go
✅ Files skipped from review due to trivial changes (1)
- framework/modelcatalog/pricing_test.go
🚧 Files skipped from review as they are similar to previous changes (5)
- framework/modelcatalog/utils.go
- framework/logstore/store.go
- framework/modelcatalog/pricing.go
- framework/logstore/matviews.go
- framework/logstore/migrations.go
f38794a → 8e9f5b5
51db921 → c180d40
8e9f5b5 → 8c17c3b
c180d40 → aee487f
🧹 Nitpick comments (3)
framework/tracing/tracer.go (2)
392-418: Consider adding panic recovery to prevent trace leaks.
If any plugin's `Inject` method panics, `ReleaseTrace` on line 417 won't execute, so the trace object is never returned to the pool. Since plugins are external code, defensive programming is warranted.
🛡️ Proposed fix to add panic recovery

```diff
 go func() {
+	defer func() {
+		if r := recover(); r != nil && t.logger != nil {
+			t.logger.Error("panic in CompleteAndFlushTrace: %v", r)
+		}
+	}()
 	completedTrace := t.EndTrace(strings.TrimSpace(traceID))
 	if completedTrace == nil {
 		return
 	}
+	defer t.ReleaseTrace(completedTrace)
 	var obsPlugins []schemas.ObservabilityPlugin
 	if loaded := t.obsPlugins.Load(); loaded != nil {
 		obsPlugins = *loaded
 	}
 	seen := make(map[string]struct{}, len(obsPlugins))
 	for _, plugin := range obsPlugins {
 		if plugin == nil {
 			continue
 		}
 		name := plugin.GetName()
 		if _, exists := seen[name]; exists {
 			continue
 		}
 		seen[name] = struct{}{}
 		if err := plugin.Inject(context.Background(), completedTrace); err != nil && t.logger != nil {
 			t.logger.Warn("observability plugin %s failed to inject trace: %v", name, err)
 		}
 	}
-
-	t.ReleaseTrace(completedTrace)
 }()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/tracing/tracer.go` around lines 392 - 418, The goroutine that calls t.EndTrace(...), iterates obsPlugins and calls plugin.Inject(...) can panic inside a plugin and skip t.ReleaseTrace(completedTrace); wrap the goroutine body with a defer that recovers from panics and ensures t.ReleaseTrace(completedTrace) always runs; on panic capture the recovered value and log it via t.logger (e.g., t.logger.Warn) along with the plugin name/context so the trace is never leaked even if plugin.Inject panics and failures are observable.
403-414: Plugin mutations to the trace are visible to subsequent plugins.
All plugins receive the same `*schemas.Trace` pointer. If a plugin mutates the trace (e.g., modifying span attributes), subsequent plugins in the loop will see those mutations. This matches the `ObservabilityPlugin.Inject` contract, which states plugins should "convert the trace to their backend's format" (implying read-only access), but there is no enforcement. If plugin isolation is desired, consider either documenting the read-only expectation more explicitly or passing a deep copy to each plugin.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/tracing/tracer.go` around lines 403 - 414, The loop over obsPlugins calls plugin.Inject(context.Background(), completedTrace) passing the same *schemas.Trace pointer so mutations by one plugin affect subsequent ones; to fix, make a deep copy of completedTrace for each plugin (before calling plugin.Inject) so each plugin receives an isolated *schemas.Trace instance, or alternatively enforce read-only access by documenting the expectation; update the loop that iterates obsPlugins (the code calling plugin.GetName() and plugin.Inject(..., completedTrace)) to clone completedTrace per iteration and pass the clone to plugin.Inject to prevent cross-plugin mutation.
framework/tracing/tracer_test.go (1)
33-60: Test correctly validates async plugin injection, but has a potential race on trace release.
The test validates the happy path, but there is a subtle race-condition risk: `CompleteAndFlushTrace` spawns a goroutine that calls `ReleaseTrace(completedTrace)` after plugin injection. Since the test's `Inject` does a shallow copy (`traceCopy := *trace`), the original trace could be released (and potentially reused) while the test is still accessing `trace.TraceID` on line 50.
The current implementation works because:
- The shallow copy captures `TraceID` (a string, which is immutable)
- The pool release only happens after all plugins complete
However, if future plugins or tests access mutable fields (slices/maps), this could become a data race. Consider documenting this constraint or using a deeper copy in the test double.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/tracing/tracer_test.go` around lines 33 - 60, The test's plugin Inject (testRealtimeObservabilityPlugin.Inject) does a shallow copy (traceCopy := *trace) which risks a data race because CompleteAndFlushTrace spawns a goroutine that calls ReleaseTrace(completedTrace) — fix by making the plugin capture a deep copy of the trace (clone all mutable fields like slices/maps, not just the struct) before sending to plugin.injected, or add a helper clone/DeepCopyTrace used by the test double so the test accesses its own independent copy and cannot race with ReleaseTrace.
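The deep copy suggested above can be sketched as below; the Trace shape here is a stand-in with one immutable and one mutable field, not the real schemas.Trace.

```go
package main

import "fmt"

// Trace is a stand-in: the real schemas.Trace has more fields of both kinds
// (and an embedded mutex, so it must never be copied by value).
type Trace struct {
	TraceID string
	Spans   []string
}

// cloneTrace copies the mutable slice too, so the test can keep reading its
// copy even after the original is released back to the pool and reused.
func cloneTrace(t *Trace) *Trace {
	if t == nil {
		return nil
	}
	c := &Trace{TraceID: t.TraceID}
	c.Spans = append([]string(nil), t.Spans...)
	return c
}

func main() {
	orig := &Trace{TraceID: "t1", Spans: []string{"a"}}
	snap := cloneTrace(orig)
	orig.Spans[0] = "mutated" // simulates pool reuse after ReleaseTrace
	fmt.Println(snap.Spans[0]) // a
}
```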
📒 Files selected for processing (10)
- framework/logstore/matviews.go
- framework/logstore/migrations.go
- framework/logstore/rdb.go
- framework/logstore/store.go
- framework/logstore/tables.go
- framework/modelcatalog/pricing.go
- framework/modelcatalog/pricing_test.go
- framework/modelcatalog/utils.go
- framework/tracing/tracer.go
- framework/tracing/tracer_test.go
✅ Files skipped from review due to trivial changes (3)
- framework/modelcatalog/utils.go
- framework/logstore/matviews.go
- framework/modelcatalog/pricing.go
🚧 Files skipped from review as they are similar to previous changes (4)
- framework/modelcatalog/pricing_test.go
- framework/logstore/store.go
- framework/logstore/rdb.go
- framework/logstore/migrations.go
aee487f → 98d9691
8c17c3b → 861b89e
🧹 Nitpick comments (2)
framework/tracing/tracer_test.go (1)
11-31: Test double implementation looks good, with one caveat.
The `testRealtimeObservabilityPlugin` correctly implements the `ObservabilityPlugin` interface, and the defensive nil check in `Inject` (lines 24-26) is reasonable. However, the shallow copy on line 28 (`traceCopy := *trace`) copies the `sync.Mutex` embedded in `schemas.Trace`. While functionally safe for this test (only `TraceID` is read), `go vet` will flag this as `copies lock value`.
Consider sending only what you need instead:
♻️ Optional: Avoid copying the mutex

```diff
 func (p *testRealtimeObservabilityPlugin) Inject(_ context.Context, trace *schemas.Trace) error {
 	if trace == nil {
 		p.injected <- nil
 		return nil
 	}
-	traceCopy := *trace
-	p.injected <- &traceCopy
+	// Send the pointer directly since the test only reads TraceID
+	// and the trace is about to be released anyway
+	p.injected <- trace
 	return nil
 }
```

Alternatively, if isolation is truly needed, construct a minimal struct with just the fields under test.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/tracing/tracer_test.go` around lines 11 - 31, The Inject method in testRealtimeObservabilityPlugin currently does a shallow copy of the incoming *schemas.Trace via "traceCopy := *trace", which copies the embedded sync.Mutex and triggers "copies lock value" warnings; instead, allocate a new schemas.Trace and copy only the fields needed for the test (e.g., TraceID or other non-lock fields) and send that pointer on p.injected (or construct a minimal struct with only the tested fields) so you avoid copying the mutex while preserving isolation; update the Inject method in testRealtimeObservabilityPlugin accordingly.framework/logstore/rdb.go (1)
534-557: Optional: consolidate count into the aggregate query.
The separate `Count()` query could be merged into the aggregate SELECT as `COUNT(*) AS count`, reducing two parallel queries to one. Since errgroup already runs them concurrently, the latency benefit is marginal, but it would halve the DB round-trips for this endpoint.

```sql
SELECT COUNT(*) AS count, COALESCE(SUM(cost), 0) AS total_cost, ...
```

Not blocking; the current parallel approach is correct and clear.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/logstore/rdb.go` around lines 534 - 557, The code runs two parallel g.Go tasks to fetch a count and aggregates; consolidate them into a single query by adding COUNT(*) AS count to the SELECT in the second g.Go closure that builds the Row() (the block using s.db.WithContext(...).Model(&Log{}).Where("parent_request_id = ?", sessionID).Select(...).Row()), then scan the additional count column into the existing count variable and keep using normalizeAggregateTimestamp for startedRaw/latestRaw; update the Row().Scan call to include &count as the first target and remove the separate g.Go that calls Count() so there is only one database round-trip for count, totalCost, totalTokens, startedRaw, and latestRaw.
📒 Files selected for processing (10)
- framework/logstore/matviews.go
- framework/logstore/migrations.go
- framework/logstore/rdb.go
- framework/logstore/store.go
- framework/logstore/tables.go
- framework/modelcatalog/pricing.go
- framework/modelcatalog/pricing_test.go
- framework/modelcatalog/utils.go
- framework/tracing/tracer.go
- framework/tracing/tracer_test.go
✅ Files skipped from review due to trivial changes (3)
- framework/modelcatalog/utils.go
- framework/logstore/matviews.go
- framework/modelcatalog/pricing.go
🚧 Files skipped from review as they are similar to previous changes (5)
- framework/logstore/store.go
- framework/modelcatalog/pricing_test.go
- framework/logstore/migrations.go
- framework/logstore/tables.go
- framework/tracing/tracer.go
861b89e → a48651f
98d9691 → 480d94e
♻️ Duplicate comments (1)
framework/logstore/rdb.go (1)
466-470: ⚠️ Potential issue | 🟡 Minor: Normalize `pagination.Order` before building the SQL order clause.
`GetSessionLogs` only treats an exact `"desc"` as descending. Mixed-case or whitespace-padded values (e.g. `"DESC"`) are executed as ascending while the response metadata can still echo the original value, causing sort-metadata drift.
🔧 Suggested fix

```diff
 pagination.SortBy = "timestamp"
+pagination.Order = strings.ToLower(strings.TrimSpace(pagination.Order))
+if pagination.Order != "desc" {
+	pagination.Order = "asc"
+}
 orderDir := "ASC"
 if pagination.Order == "desc" {
 	orderDir = "DESC"
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/logstore/rdb.go` around lines 466 - 470, Normalize the incoming pagination.Order before constructing the SQL ORDER clause: in the GetSessionLogs code path where pagination.SortBy and orderDir are set, trim whitespace and lowercase pagination.Order (e.g., via strings.TrimSpace and strings.ToLower) and then compare against "desc" to set orderDir = "DESC" or "ASC"; also consider writing the normalized value back to pagination.Order so response metadata reflects the actual ordering used.
📒 Files selected for processing (10)
- framework/logstore/matviews.go
- framework/logstore/migrations.go
- framework/logstore/rdb.go
- framework/logstore/store.go
- framework/logstore/tables.go
- framework/modelcatalog/pricing.go
- framework/modelcatalog/pricing_test.go
- framework/modelcatalog/utils.go
- framework/tracing/tracer.go
- framework/tracing/tracer_test.go
✅ Files skipped from review due to trivial changes (5)
- framework/logstore/matviews.go
- framework/tracing/tracer_test.go
- framework/modelcatalog/pricing_test.go
- framework/logstore/migrations.go
- framework/logstore/tables.go
🚧 Files skipped from review as they are similar to previous changes (3)
- framework/modelcatalog/utils.go
- framework/logstore/store.go
- framework/tracing/tracer.go
a48651f → f39f1ed
480d94e → 4d142a4
Actionable comments posted: 3
🧹 Nitpick comments (1)
framework/modelcatalog/pricing.go (1)
836-837: Consider centralizing the "responses-like request" predicate.
The repeated condition is correct, but deduplicating it would reduce future drift across fallback branches.
♻️ Proposed refactor

```diff
+func isResponsesLikeRequestType(requestType schemas.RequestType) bool {
+	return requestType == schemas.ResponsesRequest ||
+		requestType == schemas.ResponsesStreamRequest ||
+		requestType == schemas.RealtimeRequest
+}
 ...
-	if requestType == schemas.ResponsesRequest || requestType == schemas.ResponsesStreamRequest || requestType == schemas.RealtimeRequest {
+	if isResponsesLikeRequestType(requestType) {
```

The same replacement applies at each call site. Also applies to: 856-857, 876-877, 887-888
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/modelcatalog/pricing.go` around lines 836 - 837, The repeated predicate checking requestType against schemas.ResponsesRequest, schemas.ResponsesStreamRequest, and schemas.RealtimeRequest should be centralized: add a single helper function (e.g., isResponsesLike(requestType schemas.RequestType) bool or a method on the model catalog receiver like (mc *ModelCatalog) isResponsesLike(requestType schemas.RequestType) bool) that returns true for those three constants, then replace each duplicated condition (the occurrences around mc.logger.Debug and the fallback branches) with a call to that helper to eliminate duplication and reduce drift.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@framework/tracing/tracer.go`:
- Around line 385-391: CompleteAndFlushTrace lacks a nil receiver guard and may
panic when called on a nil *Tracer due to calling strings.TrimSpace; add the
same nil receiver check used in SetObservabilityPlugins at the start of
CompleteAndFlushTrace (i.e., if t == nil { return }) so the method safely
returns when the receiver is nil before performing any operations with traceID
or strings.TrimSpace.
- Around line 392-419: The fire-and-forget goroutine that calls t.EndTrace(...),
iterates obsPlugins and calls plugin.Inject(...) can outlive Tracer.Stop() and
cause lost traces or use-after-cleanup; add a sync.WaitGroup field (e.g., wg) to
the Tracer struct, call t.wg.Add(1) immediately before spawning the goroutine
and defer t.wg.Done() as the first statement in the goroutine, and then update
Stop() to call t.wg.Wait() (after initiating shutdown/closing resources) so Stop
blocks until all in-flight flush goroutines finish; keep using EndTrace,
obsPlugins.Load, plugin.Inject, ReleaseTrace as-is inside the tracked goroutine.
- Around line 412-417: The ReleaseTrace(completedTrace) is called immediately
after invoking plugin.Inject(...) which is unsafe because the
ObservabilityPlugin docs allow async processing; update the contract and call
site so traces aren't returned to the pool while plugins may still use them:
either change the ObservabilityPlugin.Inject signature to provide a completion
signal (e.g., return an error plus a done channel or a context-aware WaitGroup)
that callers can wait on, or require plugins to copy the trace for async work
and document Inject as synchronous-only; then modify the tracer.go call site
(the loop that calls plugin.Inject(context.Background(), completedTrace)) to
collect/await those completion signals before calling
ReleaseTrace(completedTrace); reference the ObservabilityPlugin interface, its
Inject method, and the ReleaseTrace(completedTrace) call (also update OtelPlugin
and LoggerPlugin implementations to follow the new sync/async contract).
📒 Files selected for processing (10)
- framework/logstore/matviews.go
- framework/logstore/migrations.go
- framework/logstore/rdb.go
- framework/logstore/store.go
- framework/logstore/tables.go
- framework/modelcatalog/pricing.go
- framework/modelcatalog/pricing_test.go
- framework/modelcatalog/utils.go
- framework/tracing/tracer.go
- framework/tracing/tracer_test.go
✅ Files skipped from review due to trivial changes (3)
- framework/modelcatalog/utils.go
- framework/modelcatalog/pricing_test.go
- framework/logstore/rdb.go
🚧 Files skipped from review as they are similar to previous changes (4)
- framework/logstore/matviews.go
- framework/logstore/store.go
- framework/tracing/tracer_test.go
- framework/logstore/migrations.go
4d142a4 to 3048cdf (Compare)
f39f1ed to 217a6b5 (Compare)
Actionable comments posted: 1
♻️ Duplicate comments (1)
framework/logstore/rdb.go (1)
466-470: ⚠️ Potential issue | 🟡 Minor: Normalize `pagination.Order` before building the session query.

This path defaults to ascending unless `pagination.Order == "desc"`, but it returns the caller's raw value. If `order` is omitted or sent as `"DESC"`, the logs are sorted ascending while the response metadata still says `""` or `"DESC"`.

💡 Suggested fix

```diff
  pagination.SortBy = "timestamp"
+ pagination.Order = strings.ToLower(strings.TrimSpace(pagination.Order))
+ if pagination.Order != "desc" {
+ 	pagination.Order = "asc"
+ }
  orderDir := "ASC"
  if pagination.Order == "desc" {
  	orderDir = "DESC"
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/logstore/rdb.go` around lines 466 - 470, Normalize pagination.Order to a canonical lowercase value before building the DB session: compute orderDir by normalizing pagination.Order (e.g., strings.ToLower(pagination.Order)) and mapping empty/unknown to "asc" and "desc" to "desc"; use that orderDir when building the query and assign the normalized value back to pagination.Order so the response metadata matches the actual sort direction (refer to pagination.Order, pagination.SortBy, and orderDir in the session-building block).
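The normalization the reviewer asks for can be isolated into one pure function. A minimal sketch, assuming the store only supports `asc`/`desc` and treats anything else as ascending:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeOrder maps any caller-supplied order string to the canonical
// lowercase value used to build the query, so the echoed pagination
// metadata can never disagree with the executed sort direction.
func normalizeOrder(order string) string {
	order = strings.ToLower(strings.TrimSpace(order))
	if order != "desc" {
		return "asc"
	}
	return "desc"
}

func main() {
	fmt.Println(normalizeOrder("DESC")) // desc
	fmt.Println(normalizeOrder(""))     // asc
	fmt.Println(normalizeOrder(" Asc")) // asc
}
```

Writing the normalized value back to `pagination.Order` before building `orderDir` keeps the response metadata and the SQL `ORDER BY` in lockstep.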
🧹 Nitpick comments (1)
framework/modelcatalog/pricing.go (1)
836-837: Consider extracting a helper for responses-like fallback checks.

The same request-type OR-chain is repeated in multiple branches; a shared predicate will reduce drift risk.

♻️ Suggested refactor

```diff
+func isResponsesLikeRequestType(requestType schemas.RequestType) bool {
+	return requestType == schemas.ResponsesRequest ||
+		requestType == schemas.ResponsesStreamRequest ||
+		requestType == schemas.RealtimeRequest
+}
 ...
-	if requestType == schemas.ResponsesRequest || requestType == schemas.ResponsesStreamRequest || requestType == schemas.RealtimeRequest {
+	if isResponsesLikeRequestType(requestType) {
```

(The same substitution repeats at each occurrence.)

Also applies to: 856-857, 876-877, 887-888
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/modelcatalog/pricing.go` around lines 836 - 837, Multiple branches repeat the same request-type OR-chain (requestType == schemas.ResponsesRequest || requestType == schemas.ResponsesStreamRequest || requestType == schemas.RealtimeRequest); extract a helper predicate (e.g., isResponsesLikeRequest(requestType) or method on the same package) that returns true for those three constants, then replace the OR-chain uses in the pricing.go branches (the lines around mc.logger.Debug and the other occurrences noted) with calls to that helper to centralize the logic and avoid drift.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@framework/logstore/rdb.go`:
- Around line 532-557: GetSessionSummary currently issues two queries (one Count
and one SUM/MIN/MAX) causing inconsistent reads; change it to a single aggregate
query by selecting COUNT(*) AS count together with COALESCE(SUM(cost),0) AS
total_cost, COALESCE(SUM(total_tokens),0) AS total_tokens, MIN(timestamp) AS
started_at, MAX(timestamp) AS latest_at in the same
s.db.WithContext(...).Model(&Log{}).Where("parent_request_id = ?",
sessionID).Select(...).Row(), then Scan the row into count, totalCost,
totalTokens, startedRaw, latestRaw, and keep using normalizeAggregateTimestamp
for startedAt/latestAt and return any scan error—this consolidates reads into
one atomic aggregate and removes the separate g.Go Count goroutine.
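The consolidated read the prompt describes amounts to one aggregate statement. A sketch of the query, with column names taken from the review text (the table and column names are assumptions about the schema):

```sql
-- One atomic aggregate instead of a separate COUNT query plus a
-- SUM/MIN/MAX query, so count and totals come from the same snapshot.
SELECT COUNT(*)                       AS count,
       COALESCE(SUM(cost), 0)         AS total_cost,
       COALESCE(SUM(total_tokens), 0) AS total_tokens,
       MIN(timestamp)                 AS started_at,
       MAX(timestamp)                 AS latest_at
FROM logs
WHERE parent_request_id = ?;
```

Scanning all five columns from a single `Row()` removes the need for the separate `g.Go` count goroutine entirely.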
---
Duplicate comments:
In `@framework/logstore/rdb.go`:
- Around line 466-470: Normalize pagination.Order to a canonical lowercase value
before building the DB session: compute orderDir by normalizing pagination.Order
(e.g., strings.ToLower(pagination.Order)) and mapping empty/unknown to "asc" and
"desc" to "desc"; use that orderDir when building the query and assign the
normalized value back to pagination.Order so the response metadata matches the
actual sort direction (refer to pagination.Order, pagination.SortBy, and
orderDir in the session-building block).
---
Nitpick comments:
In `@framework/modelcatalog/pricing.go`:
- Around line 836-837: Multiple branches repeat the same request-type OR-chain
(requestType == schemas.ResponsesRequest || requestType ==
schemas.ResponsesStreamRequest || requestType == schemas.RealtimeRequest);
extract a helper predicate (e.g., isResponsesLikeRequest(requestType) or method
on the same package) that returns true for those three constants, then replace
the OR-chain uses in the pricing.go branches (the lines around mc.logger.Debug
and the other occurrences noted) with calls to that helper to centralize the
logic and avoid drift.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 5278999b-d9e3-478b-b4b0-e118fe8213f2
📒 Files selected for processing (11)
- core/schemas/plugin.go
- framework/logstore/matviews.go
- framework/logstore/migrations.go
- framework/logstore/rdb.go
- framework/logstore/store.go
- framework/logstore/tables.go
- framework/modelcatalog/pricing.go
- framework/modelcatalog/pricing_test.go
- framework/modelcatalog/utils.go
- framework/tracing/tracer.go
- framework/tracing/tracer_test.go
✅ Files skipped from review due to trivial changes (3)
- core/schemas/plugin.go
- framework/tracing/tracer_test.go
- framework/logstore/migrations.go
🚧 Files skipped from review as they are similar to previous changes (3)
- framework/modelcatalog/utils.go
- framework/logstore/matviews.go
- framework/tracing/tracer.go
3048cdf to acc3988 (Compare)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
framework/tracing/tracer.go (1)
378-384: ⚠️ Potential issue | 🟠 Major: Block new flush registrations before `Stop()` waits.

`core/bifrost.go` still invokes `CompleteAndFlushTrace` from request handling. Line 379 waits only for flushes that are already registered, so a late request can still enter `CompleteAndFlushTrace` after the wait and race `t.store.Stop()` / `t.accumulator.Cleanup()` on lines 380-384. Add a shutdown gate around registration, then drain, then tear down.

Also applies to: 398-399
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/tracing/tracer.go` around lines 378 - 384, The Stop() method races with late CompleteAndFlushTrace registrations because flushWG.Wait() only waits for currently-registered flushes; add a shutdown gate (e.g., a boolean like stopping protected by a mutex or an atomic plus a registration channel) that CompleteAndFlushTrace checks before registering with flushWG to refuse/short-circuit new registrations, then in Tracer.Stop() set the gate to disallow new registrations, drain/ensure no in-flight registration callbacks can start, call flushWG.Wait(), and only after that call t.store.Stop() and t.accumulator.Cleanup(); apply the same gate/drain pattern to the other shutdown site referenced around the 398-399 area so no new registrations can race teardown.
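The gate-then-drain pattern the prompt describes can be sketched on its own. This is a hypothetical standalone version (the `flushGate` type and its method names are illustration only, not the tracer's actual API):

```go
package main

import (
	"fmt"
	"sync"
)

// flushGate refuses registrations once shutdown begins; the mutex covers
// both the stopping flag and the WaitGroup Add, so no registration can
// slip in between the flag flip and the Wait.
type flushGate struct {
	mu       sync.Mutex
	stopping bool
	wg       sync.WaitGroup
}

// register returns false once shutdown has begun; callers must then skip the flush.
func (g *flushGate) register() bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.stopping {
		return false
	}
	g.wg.Add(1)
	return true
}

func (g *flushGate) done() { g.wg.Done() }

// stop closes the gate, then drains outstanding flushes; teardown such as
// store.Stop() would run only after stop() returns.
func (g *flushGate) stop() {
	g.mu.Lock()
	g.stopping = true
	g.mu.Unlock()
	g.wg.Wait()
}

func main() {
	var g flushGate
	fmt.Println(g.register()) // true: gate still open
	g.done()
	g.stop()
	fmt.Println(g.register()) // false: registrations refused after stop
}
```

In the tracer, `CompleteAndFlushTrace` would call `register()` before spawning its flush goroutine, and `Stop()` would call `stop()` before touching the store or accumulator.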
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@framework/logstore/migrations.go`:
- Around line 113-115: The migrationAddParentRequestIDIndex migration currently
builds the parent_request_id index synchronously inside triggerMigrations();
change it to follow the deferred index pattern used by
migrationAddProviderHistogramIndex, migrationAddMetadataGINIndex, and
migrationAddLogsAndDashboardPerformanceIndexes by setting UseTransaction = false
and making Migrate a no-op so the actual index creation is postponed and
performed by ensurePerformanceIndexes() using CREATE INDEX CONCURRENTLY in the
post-startup background goroutine; update migrationAddParentRequestIDIndex and
any registration in triggerMigrations() accordingly so startup isn't blocked by
an AccessExclusiveLock.
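The deferred index the prompt asks for reduces to one statement run outside a transaction. A sketch, assuming PostgreSQL; the index name is hypothetical:

```sql
-- Run post-startup by ensurePerformanceIndexes(), not inside a migration
-- transaction: CONCURRENTLY avoids the AccessExclusiveLock a plain
-- CREATE INDEX takes, so reads and writes proceed while the index builds.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_logs_parent_request_id
    ON logs (parent_request_id);
```

Note that `CREATE INDEX CONCURRENTLY` cannot run inside a transaction block, which is exactly why the migration must set `UseTransaction = false` and defer the work.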
In `@framework/tracing/tracer.go`:
- Around line 25-26: The middleware's SetObservabilityPlugins updates only
m.obsPlugins but never forwards the new slice to the tracer, leaving
t.obsPlugins empty (used by CompleteAndFlushTrace); fix by updating
TracingMiddleware.SetObservabilityPlugins to store the slice on m.obsPlugins
and, if m.tracer.Load() returns a non-nil tracer, call
tracer.SetObservabilityPlugins(obsPlugins) so the tracer's t.obsPlugins is
populated; apply the same change to the other SetObservabilityPlugins
occurrences mentioned (lines around 42-47 and 404-407) to ensure all
middleware-to-tracer flows forward the update.
- Around line 399-423: The code in the function processing observability plugins
lacks panic isolation and does not defer the call to ReleaseTrace, risking
resource leaks if a panic occurs. To fix this, move the call to ReleaseTrace
into a defer statement immediately after successfully calling EndTrace, and wrap
the calls to plugin.GetName() and plugin.Inject() inside an anonymous function
that uses defer and recover to catch and suppress panics, preventing the
goroutine from crashing and ensuring the trace is always properly released. This
update will isolate plugin panics and maintain trace pooling integrity.
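The two fixes (deferred release plus per-plugin panic isolation) compose into one loop shape. A minimal standalone sketch: the `trace`, `plugin`, and `releaseTrace` names stand in for the real `TraceData`, `ObservabilityPlugin`, and `ReleaseTrace` and are illustration only.

```go
package main

import "fmt"

type trace struct{ id string }

type plugin interface {
	GetName() string
	Inject(t *trace)
}

// panicky simulates a misbehaving observability plugin.
type panicky struct{}

func (panicky) GetName() string { return "panicky" }
func (panicky) Inject(*trace)   { panic("boom") }

// released records whether the stand-in releaseTrace ran.
var released bool

func releaseTrace(*trace) { released = true }

// flushToPlugins defers the release so it runs even if a plugin panics,
// and wraps each Inject so one bad plugin cannot crash the goroutine or
// skip the remaining plugins.
func flushToPlugins(t *trace, plugins []plugin) {
	defer releaseTrace(t)
	for _, p := range plugins {
		func() {
			defer func() {
				if r := recover(); r != nil {
					fmt.Printf("plugin %s panicked: %v\n", p.GetName(), r)
				}
			}()
			p.Inject(t)
		}()
	}
}

func main() {
	flushToPlugins(&trace{id: "t1"}, []plugin{panicky{}})
	fmt.Println("released:", released) // released: true
}
```

Because the recover lives in a per-iteration closure, a panic in one plugin is logged and the loop continues to the next plugin; the deferred release still runs last.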
---
Outside diff comments:
In `@framework/tracing/tracer.go`:
- Around line 378-384: The Stop() method races with late CompleteAndFlushTrace
registrations because flushWG.Wait() only waits for currently-registered
flushes; add a shutdown gate (e.g., a boolean like stopping protected by a mutex
or an atomic plus a registration channel) that CompleteAndFlushTrace checks
before registering with flushWG to refuse/short-circuit new registrations, then
in Tracer.Stop() set the gate to disallow new registrations, drain/ensure no
in-flight registration callbacks can start, call flushWG.Wait(), and only after
that call t.store.Stop() and t.accumulator.Cleanup(); apply the same gate/drain
pattern to the other shutdown site referenced around the 398-399 area so no new
registrations can race teardown.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fbf56d5a-fae3-4545-9fff-9feaa50c529a
📒 Files selected for processing (11)
core/schemas/plugin.goframework/logstore/matviews.goframework/logstore/migrations.goframework/logstore/rdb.goframework/logstore/store.goframework/logstore/tables.goframework/modelcatalog/pricing.goframework/modelcatalog/pricing_test.goframework/modelcatalog/utils.goframework/tracing/tracer.goframework/tracing/tracer_test.go
✅ Files skipped from review due to trivial changes (3)
- framework/modelcatalog/pricing_test.go
- framework/modelcatalog/pricing.go
- framework/tracing/tracer_test.go
🚧 Files skipped from review as they are similar to previous changes (5)
- framework/modelcatalog/utils.go
- framework/logstore/matviews.go
- core/schemas/plugin.go
- framework/logstore/store.go
- framework/logstore/tables.go
acc3988 to 95e9aa6 (Compare)
♻️ Duplicate comments (1)
framework/logstore/rdb.go (1)
466-470: ⚠️ Potential issue | 🟡 Minor: Normalize the returned `pagination.Order` too.

The `timestamp,id` tie-breaker is in place, but `pagination.Order` is still echoed verbatim. `order=""` or `order="DESC"` will execute as ascending while the response metadata says otherwise.

🛠️ Suggested tweak

```diff
  pagination.SortBy = "timestamp"
+ pagination.Order = strings.ToLower(strings.TrimSpace(pagination.Order))
+ if pagination.Order != "desc" {
+ 	pagination.Order = "asc"
+ }
  orderDir := "ASC"
  if pagination.Order == "desc" {
  	orderDir = "DESC"
  }
```

Also applies to: 506-514
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/logstore/rdb.go` around lines 466 - 470, The code sets orderDir based on pagination.Order but doesn't normalize pagination.Order itself, causing response metadata to be misleading; update the block that sets pagination.SortBy and orderDir so that after determining orderDir you also set pagination.Order = "desc" when orderDir == "DESC" and pagination.Order = "asc" otherwise (i.e., normalize empty/unknown to "asc"), and make the same change in the other similar block that builds orderDir (the one around the second pagination handling) so the returned pagination.Order matches the actual sorting applied.
🧹 Nitpick comments (1)
framework/tracing/tracer.go (1)
42-48: Publish an immutable plugin snapshot.

Line 47 only swaps the slice header. The backing array is still caller-owned, so later element edits or slice reuse can leak into in-flight flushes despite the atomic store. Clone the slice before publishing it.

💡 Suggested change

```diff
 func (t *Tracer) SetObservabilityPlugins(obsPlugins []schemas.ObservabilityPlugin) {
 	if t == nil {
 		return
 	}
-	t.obsPlugins.Store(&obsPlugins)
+	snapshot := append([]schemas.ObservabilityPlugin(nil), obsPlugins...)
+	t.obsPlugins.Store(&snapshot)
 }
```
Verify each finding against the current code and only fix it if needed. In `@framework/tracing/tracer.go` around lines 42 - 48, SetObservabilityPlugins currently atomically stores the incoming slice header (t.obsPlugins.Store(&obsPlugins)) which can leak caller-owned backing arrays into consumers; fix by cloning the slice and its backing array before publishing: inside Tracer.SetObservabilityPlugins, after the nil check, allocate a new slice with the same length, copy the elements from obsPlugins into it, and then call t.obsPlugins.Store(&clonedSlice) so consumers get an immutable snapshot; keep using the Tracer type and t.obsPlugins.Store for the atomic swap.
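The clone-before-publish pattern is easy to demonstrate in isolation. A standalone sketch, using `[]string` as a stand-in for `[]schemas.ObservabilityPlugin` (the `pluginSet` type is hypothetical):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// pluginSet publishes a snapshot whose backing array is owned by the set,
// so later edits to the caller's slice cannot leak into readers that
// loaded an earlier snapshot.
type pluginSet struct {
	plugins atomic.Pointer[[]string]
}

func (s *pluginSet) set(in []string) {
	snapshot := append([]string(nil), in...) // copies into a fresh backing array
	s.plugins.Store(&snapshot)
}

func (s *pluginSet) get() []string {
	if p := s.plugins.Load(); p != nil {
		return *p
	}
	return nil
}

func main() {
	var s pluginSet
	caller := []string{"otel"}
	s.set(caller)
	caller[0] = "mutated" // does not affect the published snapshot
	fmt.Println(s.get()[0]) // otel
}
```

The `append([]T(nil), in...)` idiom allocates a new backing array of exactly the right length, which is the whole fix: the atomic store was already safe, only the aliasing was not.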
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@framework/logstore/rdb.go`:
- Around line 466-470: The code sets orderDir based on pagination.Order but
doesn't normalize pagination.Order itself, causing response metadata to be
misleading; update the block that sets pagination.SortBy and orderDir so that
after determining orderDir you also set pagination.Order = "desc" when orderDir
== "DESC" and pagination.Order = "asc" otherwise (i.e., normalize empty/unknown
to "asc"), and make the same change in the other similar block that builds
orderDir (the one around the second pagination handling) so the returned
pagination.Order matches the actual sorting applied.
---
Nitpick comments:
In `@framework/tracing/tracer.go`:
- Around line 42-48: SetObservabilityPlugins currently atomically stores the
incoming slice header (t.obsPlugins.Store(&obsPlugins)) which can leak
caller-owned backing arrays into consumers; fix by cloning the slice and its
backing array before publishing: inside Tracer.SetObservabilityPlugins, after
the nil check, allocate a new slice with the same length, copy the elements from
obsPlugins into it, and then call t.obsPlugins.Store(&clonedSlice) so consumers
get an immutable snapshot; keep using the Tracer type and t.obsPlugins.Store for
the atomic swap.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c77116a8-c0db-49e2-9ac4-2fbbb7f8a534
📒 Files selected for processing (11)
- core/schemas/plugin.go
- framework/logstore/matviews.go
- framework/logstore/migrations.go
- framework/logstore/rdb.go
- framework/logstore/store.go
- framework/logstore/tables.go
- framework/modelcatalog/pricing.go
- framework/modelcatalog/pricing_test.go
- framework/modelcatalog/utils.go
- framework/tracing/tracer.go
- framework/tracing/tracer_test.go
✅ Files skipped from review due to trivial changes (2)
- framework/modelcatalog/pricing_test.go
- framework/tracing/tracer_test.go
🚧 Files skipped from review as they are similar to previous changes (6)
- framework/modelcatalog/utils.go
- framework/logstore/matviews.go
- framework/modelcatalog/pricing.go
- core/schemas/plugin.go
- framework/logstore/tables.go
- framework/logstore/migrations.go
Merge activity

The base branch was changed.
Summary
Adds database support for querying logs grouped by realtime session. Introduces `parent_request_id` filtering, `GetSessionLogs` and `GetSessionSummary` methods with pagination and aggregate metrics, a migration for the `parent_request_id` index, `RealtimeRequest` normalization in the model catalog for pricing lookups, and a `CompleteAndFlushTrace` implementation in the tracing framework for explicit trace flushing outside the HTTP request lifecycle.
Changes
- Added `ParentRequestID` to `SearchFilters` for filtering logs by session
- Added `SessionDetailResult` type with pagination and aggregates (total cost, tokens, started_at, latest_at, duration_ms)
- Added `SessionSummaryResult` lightweight struct for polling aggregate stats (total cost, tokens, started_at, latest_at, duration_ms, total count)
- Added `GetSessionLogs(ctx, sessionID, pagination)` to the logstore interface and RDB implementation
- Added `GetSessionSummary(ctx, sessionID)` to the logstore interface and RDB implementation — single-query aggregate without loading individual logs
- Clamped the pagination `limit`
- Added `migrationAddParentRequestIDIndex` creating an index on `logs.parent_request_id`
- `realtime.turn` object type: preserves full `input_history`, `responses_input_history`, and `output_message` columns
- Added `normalizeAggregateTimestamp` helper for cross-DB timestamp handling
- `RealtimeRequest` → `"responses"` mapping in the model catalog for pricing normalization, with test coverage
- Added `CompleteAndFlushTrace` to `Tracer` — ends a trace and forwards it to observability plugins asynchronously; used by realtime transports that bypass the HTTP tracing middleware
- Added `SetObservabilityPlugins` to allow runtime plugin injection into the tracer
- `CompleteAndFlushTrace` observability plugin injection

Type of change
Affected areas
How to test
Screenshots/Recordings
N/A
Breaking changes
Related issues
N/A
Security considerations
- `sessionID` is validated and trimmed before use in queries
- `CompleteAndFlushTrace` runs async — observability plugin errors are logged, not propagated

Checklist

- I have read docs/contributing/README.md and followed the guidelines