Improve Activity event handling in the UI #254
Conversation
- Bug in #252 found that the Activity page showed activity inconsistent with `/api/metrics`
- Changed the metrics data structure to an array
- Improves synchronization performance and reliability between the backend and UI
**Walkthrough**

Adds a Go CLI benchmark for chat completions; changes proxy metrics transmission to batched slices and SSE parsing; the UI now exposes connection status, shows a ConnectionStatus component in the header, and the Activity page sorts metrics descending with a new Id column. No exported/public API signatures changed.

**Changes**
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    participant PM as ProxyManager
    participant API as Server API
    participant UI as Browser APIProvider
    participant Page as ActivityPage
    PM->>API: sendMetrics([]TokenMetrics)
    Note right of API: SSE stream emits metrics arrays
    API-->>UI: SSE "metrics": Metrics[]
    UI->>UI: state.metrics = [...newMetrics, ...prev]
    UI-->>Page: metrics prop updated
    Page->>Page: sort by id desc and render
```
```mermaid
sequenceDiagram
    participant User
    participant CLI as benchmark-chatcompletion
    participant Pool as Worker Pool
    participant Svc as /chat/completions API
    User->>CLI: run with -baseurl -model -requests -par
    CLI->>Pool: enqueue totalRequests
    loop parallel workers (par)
        Pool->>Svc: POST /chat/completions (JSON)
        alt success
            Svc-->>Pool: HTTP response
            Pool->>CLI: increment statusCounts[resp.StatusCode]
        else client error
            Pool->>CLI: increment statusCounts[-1]
        end
    end
    CLI->>User: periodic progress + final status summary
```
**Estimated code review effort**: 🎯 3 (Moderate) | ⏱️ ~25 minutes

**Possibly related PRs**
Actionable comments posted: 0
🧹 Nitpick comments (7)
ui/src/contexts/APIProvider.tsx (1)
111-114: Handle array vs single metric, deduplicate by id, and cap growth to avoid duplicates and unbounded memory

SSE reconnections send a full initial batch. Prepending arrays wholesale will duplicate entries and grow unbounded over time. Also, a guard for backward compatibility (single metric object) is prudent.
Apply this diff to make ingestion robust:
```diff
-        const newMetrics = JSON.parse(message.data) as Metrics[];
-        setMetrics((prevMetrics) => {
-          return [...newMetrics, ...prevMetrics];
-        });
+        const parsed = JSON.parse(message.data) as Metrics | Metrics[];
+        const batch = Array.isArray(parsed) ? parsed : [parsed];
+        setMetrics((prevMetrics) => {
+          const seen = new Set(prevMetrics.map((m) => m.id));
+          const unique = batch.filter((m) => !seen.has(m.id));
+          const next = [...unique, ...prevMetrics];
+          const MAX_METRICS = 2000; // prevent unbounded growth
+          return next.length > MAX_METRICS ? next.slice(0, MAX_METRICS) : next;
+        });
```

ui/src/pages/Activity.tsx (2)
46-49: Use stable keys; avoid index in keys once metrics are de-duplicated

Index in the key causes unnecessary remounts on resorting and hides duplicate-id issues. After de-duplicating by id in the provider, keying by id alone is cleaner.
Apply this diff:
```diff
-            {sortedMetrics.map((metric, index) => (
-              <tr key={`${metric.id}-${index}`}>
+            {sortedMetrics.map((metric) => (
+              <tr key={metric.id}>
```
35-35: Nit: Use "ID" capitalization

Minor consistency nit with common table headers.
```diff
-            <th className="px-4 py-3 text-left text-xs font-medium uppercase tracking-wider">Id</th>
+            <th className="px-4 py-3 text-left text-xs font-medium uppercase tracking-wider">ID</th>
```

proxy/proxymanager_api.go (1)
135-145: Batching metrics over SSE looks good; note potential drops when buffer is full

Switching to []TokenMetrics for both initial send and per-event updates aligns with the UI and reduces overhead. One caveat: sendBuffer is size 25, and the non-blocking select with default means bursts can drop messages. If metrics integrity is more important than logs, consider prioritizing metrics, increasing the buffer size, or using a small ring buffer.
Also applies to: 171-171, 178-178
misc/benchmark-chatcompletion/main.go (3)
46-49: Fix CLI error message to match the actual flag (-par)

The message mentions "-parallelization", but the flag is "-par".
```diff
-		fmt.Println("Error: -parallelization must be greater than 0.")
+		fmt.Println("Error: -par must be greater than 0.")
```
10-20: Avoid per-request string building and URL joins; precompute endpoint and payload

Precompute the endpoint (trim trailing slash) and payload bytes once to reduce allocations and avoid double slashes if baseurl ends with "/".
Apply this diff:
```diff
@@
 import (
 	"bytes"
 	"flag"
 	"fmt"
 	"io"
 	"log"
 	"net/http"
 	"os"
+	"strings"
 	"sync"
 	"time"
 )
@@
-	// ----- Worker pool -------------------------------------------------------
+	// ----- Worker pool -------------------------------------------------------
+	endpoint := fmt.Sprintf("%s/chat/completions", strings.TrimRight(baseurl, "/"))
+	payloadBytes := []byte(`{"model":"` + modelName + `","max_tokens":100,"stream":false,"messages":[{"role":"user","content":"write a snake game in python"}]}`)
 	var wg sync.WaitGroup
 	for i := 0; i < parallelization; i++ {
 		wg.Add(1)
 		go func(workerID int) {
 			defer wg.Done()
@@
-			for reqID := range requests {
-				// Build request payload as a single line JSON string
-				payload := `{"model":"` + modelName + `","max_tokens":100,"stream":false,"messages":[{"role":"user","content":"write a snake game in python"}]}`
-
-				// Send POST request
-				req, err := http.NewRequest(http.MethodPost,
-					fmt.Sprintf("%s/chat/completions", baseurl),
-					bytes.NewReader([]byte(payload)))
+			for reqID := range requests {
+				// Send POST request
+				req, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(payloadBytes))
```

Also applies to: 71-75, 78-86
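The trailing-slash concern is easy to check in isolation. A minimal TypeScript sketch of the same normalization the Go diff does with `strings.TrimRight` (`joinEndpoint` is an illustrative name, not from the PR):

```typescript
// Join a base URL and path without producing double slashes,
// regardless of whether the base URL ends with "/".
function joinEndpoint(baseurl: string, path: string): string {
  return baseurl.replace(/\/+$/, "") + path;
}
```

Either form of the base URL now yields the same endpoint string, which is what precomputing it once relies on.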
78-81: Prefer json.Marshal over manual JSON string to avoid malformed JSON on special characters

If modelName contains quotes or special characters, the current string concatenation yields invalid JSON. Using encoding/json prevents this.
Example outside the selected range:
```go
type ChatRequest struct {
	Model     string              `json:"model"`
	MaxTokens int                 `json:"max_tokens"`
	Stream    bool                `json:"stream"`
	Messages  []map[string]string `json:"messages"`
}

reqBody, _ := json.Marshal(ChatRequest{
	Model:     modelName,
	MaxTokens: 100,
	Stream:    false,
	Messages:  []map[string]string{{"role": "user", "content": "write a snake game in python"}},
})
req, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(reqBody))
```
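The underlying pitfall is language-agnostic: interpolating a raw string into a JSON body breaks on quotes, while a serializer escapes them. A hypothetical TypeScript sketch contrasting the two approaches (function names are illustrative):

```typescript
// Naive concatenation: a model name containing a quote yields invalid JSON.
function naiveBody(model: string): string {
  return `{"model":"` + model + `"}`;
}

// Serializer-based: special characters are escaped correctly.
function safeBody(model: string): string {
  return JSON.stringify({ model });
}
```

`safeBody('a"b')` round-trips through `JSON.parse`, while `naiveBody('a"b')` produces a string that fails to parse — the same failure mode json.Marshal avoids in Go.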
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (4)
- misc/benchmark-chatcompletion/main.go (1 hunks)
- proxy/proxymanager_api.go (2 hunks)
- ui/src/contexts/APIProvider.tsx (1 hunks)
- ui/src/pages/Activity.tsx (4 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
ui/src/contexts/APIProvider.tsx (1)
ui/src/pages/Models.tsx (2)
`totalRequests` (136-145), `sum` (143-143)
ui/src/pages/Activity.tsx (1)
ui/src/pages/Models.tsx (5)
`StatsPanel` (133-173), `ModelsPage` (9-37), `sum` (143-143), `ModelsPanel` (39-131), `totalRequests` (136-145)
misc/benchmark-chatcompletion/main.go (1)
misc/simple-responder/simple-responder.go (1)
main(18-319)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: run-tests
🔇 Additional comments (1)
ui/src/pages/Activity.tsx (1)
18-20: Sorting newest-first is correct and memoized appropriately

Copying before sort avoids mutating context state, and useMemo keyed by metrics is appropriate.
Actionable comments posted: 1
🔭 Outside diff range comments (1)
ui/src/contexts/APIProvider.tsx (1)
66-73: Prevent reconnection after disable and cleanly manage reconnect timers.

If a reconnect timer is scheduled in onerror and the consumer calls enableAPIEvents(false) before it fires, the current code will still reconnect. Also, the timer isn't tracked/cleared, and apiEventSource.current isn't nulled on error, which can leave stale state.
Add an enabledRef guard and track/clear the reconnect timer; also null out the ref on error.
Apply these diffs:
```diff
@@
-  const apiEventSource = useRef<EventSource | null>(null);
+  const apiEventSource = useRef<EventSource | null>(null);
+  const reconnectTimerRef = useRef<number | null>(null);
+  const eventsEnabledRef = useRef(false);
@@
-  const enableAPIEvents = useCallback((enabled: boolean) => {
-    if (!enabled) {
+  const enableAPIEvents = useCallback((enabled: boolean) => {
+    eventsEnabledRef.current = enabled;
+    if (!enabled) {
       apiEventSource.current?.close();
-      apiEventSource.current = null;
-      setMetrics([]);
+      apiEventSource.current = null;
+      if (reconnectTimerRef.current !== null) {
+        clearTimeout(reconnectTimerRef.current);
+        reconnectTimerRef.current = null;
+      }
+      setMetrics([]);
       return;
     }
@@
-      eventSource.onerror = () => {
-        eventSource.close();
-        retryCount++;
-        const delay = Math.min(initialDelay * Math.pow(2, retryCount - 1), 5000);
-        setTimeout(connect, delay);
-      };
+      eventSource.onerror = () => {
+        eventSource.close();
+        apiEventSource.current = null;
+        if (!eventsEnabledRef.current) {
+          return;
+        }
+        retryCount++;
+        const delay = Math.min(initialDelay * Math.pow(2, retryCount - 1), 5000);
+        if (reconnectTimerRef.current !== null) {
+          clearTimeout(reconnectTimerRef.current);
+        }
+        reconnectTimerRef.current = window.setTimeout(connect, delay);
+      };
```

And guard connect itself to avoid races:
```diff
@@
-    const connect = () => {
+    const connect = () => {
+      if (!eventsEnabledRef.current) {
+        return;
+      }
       const eventSource = new EventSource("/api/events");
```

Also applies to: 130-139, 55-56
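The capped exponential backoff used in these handlers can be factored into a pure function, which makes the retry schedule easy to check on its own. A sketch (`backoffDelay` is an illustrative name; the 5000 ms ceiling matches the formula in the handler):

```typescript
// Delay before the n-th retry (1-based): initialDelay * 2^(n-1), capped at maxDelay.
function backoffDelay(retryCount: number, initialDelay: number, maxDelay = 5000): number {
  return Math.min(initialDelay * Math.pow(2, retryCount - 1), maxDelay);
}
```

With `initialDelay = 500` the schedule is 500, 1000, 2000, 4000 ms, then 5000 ms for every retry after that.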
🧹 Nitpick comments (2)
ui/src/contexts/APIProvider.tsx (2)
80-86: Reset backoff on successful connect; confirm snapshot semantics before clearing logs.
- Resetting retryCount on open avoids long backoff after a successful reconnect.
- Clearing proxy/upstream logs on every reconnect can permanently drop log history if the server does not replay logs.
Apply this small improvement:
```diff
       eventSource.onopen = () => {
+        // reset backoff after a successful connection
+        retryCount = 0;
         // clear everything out on connect to keep things in sync
         setProxyLogs("");
         setUpstreamLogs("");
         setMetrics([]); // clear metrics on reconnect
         setModels([]); // clear models on reconnect
       };
```

Please confirm the server emits full replacement snapshots for models and metrics (and ideally logs) on every connect/reconnect. If logs are not replayed, consider not clearing them here (or only clearing metrics/models).
97-101: Comparator can be clearer and avoid string-concatenation artifacts.

String concatenation may sort id "10" before "2" within the same name. A two-step comparator is simpler and precise.
Apply this diff:
```diff
-    // sort models by name and id
-    models.sort((a, b) => {
-      return (a.name + a.id).localeCompare(b.name + b.id);
-    });
+    // sort models by name then by id
+    models.sort((a, b) => {
+      const byName = a.name.localeCompare(b.name);
+      return byName !== 0 ? byName : a.id.localeCompare(b.id);
+    });
```

If `id` is numeric, use `Number(a.id) - Number(b.id)` instead.
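A minimal sketch of the two-step comparator as a standalone function (assuming string `name` and `id` fields; `compareModels` is an illustrative name):

```typescript
type Model = { name: string; id: string };

// Compare by name first; only fall back to id when names are equal.
// Avoids the artifacts of comparing concatenated "name + id" strings.
function compareModels(a: Model, b: Model): number {
  const byName = a.name.localeCompare(b.name);
  return byName !== 0 ? byName : a.id.localeCompare(b.id);
}
```

Passing this to `Array.prototype.sort` groups models by name and orders ties by id, with no cross-field bleed between the two keys.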
📜 Review details
📒 Files selected for processing (2)
- ui/src/contexts/APIProvider.tsx (2 hunks)
- ui/src/pages/Activity.tsx (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- ui/src/pages/Activity.tsx
🔇 Additional comments (1)
ui/src/contexts/APIProvider.tsx (1)
148-151: Cleanup on unmount is correctly disabling SSE.

Returning enableAPIEvents(false) in the effect cleanup is the right call to avoid dangling EventSource connections.
```diff
       const newMetrics = JSON.parse(message.data) as Metrics[];
       setMetrics((prevMetrics) => {
-        return [newMetric, ...prevMetrics];
+        return [...newMetrics, ...prevMetrics];
```
🛠️ Refactor suggestion
Batch handling for metrics should dedupe by id, cap length, and be backward-compatible with single-object payloads.
Current logic can:
- Duplicate entries when the backend re-sends overlapping batches (common with replay-on-reconnect).
- Grow unbounded in memory.
- Break if a single Metrics object is sent (during transition or bugs).
Apply this diff to harden the handler:
```diff
-          const newMetrics = JSON.parse(message.data) as Metrics[];
-          setMetrics((prevMetrics) => {
-            return [...newMetrics, ...prevMetrics];
-          });
+          const parsed = JSON.parse(message.data);
+          const incoming: Metrics[] = Array.isArray(parsed) ? parsed : [parsed];
+          setMetrics((prevMetrics) => {
+            // Deduplicate by id and cap list size to avoid unbounded growth
+            const seen = new Set(prevMetrics.map((m) => m.id));
+            const merged = incoming.filter((m) => !seen.has(m.id)).concat(prevMetrics);
+            return merged.slice(0, MAX_METRICS);
+          });
```

Add a top-level constant near LOG_LENGTH_LIMIT:
```typescript
// Near top-level constants
const MAX_METRICS = 2000; // cap to prevent unbounded memory growth on the Activity page
```

🤖 Prompt for AI Agents
In ui/src/contexts/APIProvider.tsx around lines 119 to 121, the metrics handler
naively parses message.data and prepends newMetrics causing duplicates,
unbounded growth, and failures when a single Metrics object is sent; add a
top-level constant near LOG_LENGTH_LIMIT: const MAX_METRICS = 2000, then update
the handler to: parse message.data and normalize it to an array if a single
object is received, merge the incoming array with existing metrics while
deduplicating by id (prefer the newer entry from the incoming batch), maintain
desired ordering (new first), and finally trim the resulting array to
MAX_METRICS before calling setMetrics.
Actionable comments posted: 0
♻️ Duplicate comments (1)
ui/src/contexts/APIProvider.tsx (1)
128-133: Harden metrics handler: handle single-object payloads, dedupe by id, and cap list size

Current logic assumes an array, prepends blindly, and can grow unbounded and duplicate entries on overlap/replay. Normalize, dedupe, and bound length.
Apply this diff:
```diff
-          const newMetrics = JSON.parse(message.data) as Metrics[];
-          setMetrics((prevMetrics) => {
-            return [...newMetrics, ...prevMetrics];
-          });
+          const parsed = JSON.parse(message.data);
+          const incoming: Metrics[] = Array.isArray(parsed) ? parsed : [parsed];
+          setMetrics((prev) => {
+            // Deduplicate by id (prefer incoming) and cap size
+            const seen = new Set(prev.map((m) => m.id));
+            const merged = incoming.filter((m) => !seen.has(m.id)).concat(prev);
+            return merged.slice(0, MAX_METRICS);
+          });
```

And define a cap near other top-level constants:
```typescript
// near LOG_LENGTH_LIMIT
const MAX_METRICS = 2000;
```
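The dedupe-and-cap logic suggested here is easy to verify in isolation as a pure helper. A minimal sketch (the `mergeMetrics` name and the stripped-down `Metric` shape are illustrative, not from the PR):

```typescript
type Metric = { id: number };

const MAX_METRICS = 2000; // cap to bound memory use

// Prepend an incoming batch (or single object), dropping ids already
// present and trimming the result to the cap.
function mergeMetrics(prev: Metric[], incoming: Metric | Metric[]): Metric[] {
  const batch = Array.isArray(incoming) ? incoming : [incoming];
  const seen = new Set(prev.map((m) => m.id));
  const merged = batch.filter((m) => !seen.has(m.id)).concat(prev);
  return merged.slice(0, MAX_METRICS);
}
```

New entries land first, replayed ids are dropped, and a single-object payload is normalized to a one-element batch — the three properties the review asks for.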
🧹 Nitpick comments (8)
ui/src/contexts/APIProvider.tsx (3)
91-97: Reset retry backoff on successful (re)connection

Without resetting `retryCount` on a successful `onopen`, later transient failures will continue using an inflated backoff. Reset it to keep reconnect latency low after a good connection.

Apply this diff:
```diff
       eventSource.onopen = () => {
+        retryCount = 0;
         // clear everything out on connect to keep things in sync
         setProxyLogs("");
         setUpstreamLogs("");
         setMetrics([]); // clear metrics on reconnect
         setModels([]); // clear models on reconnect
       };
```
77-83: Prevent stray reconnects after disable/unmount and clear scheduled retries

A pending retry `setTimeout` can fire after `enableAPIEvents(false)`, recreating an EventSource despite being disabled. Track and clear the timeout; also avoid multiple scheduled reconnects.

Apply these diffs:
```diff
@@
   const enableAPIEvents = useCallback((enabled: boolean) => {
     if (!enabled) {
       apiEventSource.current?.close();
       apiEventSource.current = null;
-      setMetrics([]);
+      // Clear any pending reconnection attempts
+      if (reconnectTimeoutRef.current !== null) {
+        clearTimeout(reconnectTimeoutRef.current);
+        reconnectTimeoutRef.current = null;
+      }
+      setMetrics([]);
       return;
     }
@@
-      eventSource.onerror = () => {
-        eventSource.close();
-        retryCount++;
-        const delay = Math.min(initialDelay * Math.pow(2, retryCount - 1), 5000);
-        setTimeout(connect, delay);
-      };
+      eventSource.onerror = () => {
+        eventSource.close();
+        retryCount++;
+        const delay = Math.min(initialDelay * Math.pow(2, retryCount - 1), 5000);
+        // Avoid stacking multiple timers
+        if (reconnectTimeoutRef.current !== null) {
+          clearTimeout(reconnectTimeoutRef.current);
+        }
+        reconnectTimeoutRef.current = window.setTimeout(() => {
+          reconnectTimeoutRef.current = null;
+          connect();
+        }, delay);
+      };
```

Add the ref near your other refs:
```typescript
// just below:
const apiEventSource = useRef<EventSource | null>(null);
const reconnectTimeoutRef = useRef<number | null>(null);
```

Optional verification steps:
- Unmount/mount the provider quickly while the network is offline and ensure no new connections appear after disable.
- Flip `autoStartAPIEvents` and confirm no reconnection occurs when disabled.

Also applies to: 141-147
206-219: Include getConnectionStatus in the memo deps (minor)

It's stable due to an empty dependency array, but adding it keeps eslint/react-hooks happy and avoids future surprises if the callback ever gains deps.
```diff
   const value = useMemo(
     () => ({
       models,
       listModels,
       unloadAllModels,
       loadModel,
       enableAPIEvents,
       proxyLogs,
       upstreamLogs,
       metrics,
       getConnectionStatus,
     }),
-    [models, listModels, unloadAllModels, loadModel, enableAPIEvents, proxyLogs, upstreamLogs, metrics]
+    [models, listModels, unloadAllModels, loadModel, enableAPIEvents, proxyLogs, upstreamLogs, metrics, getConnectionStatus]
   );
```

ui/src/components/ConnectionStatus.tsx (5)
10-15: Fix effect dependencies to avoid recreating the interval on every render

Without a dependency array, a new interval is created each render (the previous is cleaned up), causing unnecessary churn. Tie it to `getConnectionStatus` and initialize immediately to avoid 1s of stale state.

```diff
-  useEffect(() => {
-    const interval = setInterval(() => {
-      setEventStreamStatus(getConnectionStatus());
-    }, 1000);
-    return () => clearInterval(interval);
-  });
+  useEffect(() => {
+    // Initialize immediately
+    setEventStreamStatus(getConnectionStatus());
+    const interval = setInterval(() => {
+      setEventStreamStatus(getConnectionStatus());
+    }, 1000);
+    return () => clearInterval(interval);
+  }, [getConnectionStatus]);
```
7-9: Initialize from current status to prevent initial flicker

Seeding with "disconnected" can briefly show a red dot even when connected. Use a lazy initializer to read the current status.

```diff
-  const { getConnectionStatus } = useAPI();
-  const [eventStreamStatus, setEventStreamStatus] = useState<ConnectionStatus>("disconnected");
+  const { getConnectionStatus } = useAPI();
+  const [eventStreamStatus, setEventStreamStatus] = useState<ConnectionStatus>(() => getConnectionStatus());
```
29-33: Improve accessibility: provide a readable label and live region

A title attribute is not consistently announced by screen readers. Add an SR-only label and a live region so status changes are announced.

```diff
-  return (
-    <div className="flex items-center" title={`event stream: ${eventStreamStatus}`}>
-      <span className={`inline-block w-3 h-3 rounded-full ${eventStatusColor} mr-2`}></span>
-    </div>
-  );
+  return (
+    <div className="flex items-center" aria-live="polite" aria-atomic="true">
+      <span className={`inline-block w-3 h-3 rounded-full ${eventStatusColor} mr-2`} aria-hidden="true"></span>
+      <span className="sr-only">{`event stream: ${eventStreamStatus}`}</span>
+    </div>
+  );
```
17-27: Minor: useMemo is unnecessary here

Mapping a small union to a class string is trivial and won't benefit from memoization. Keeping it as a plain function/inline switch is fine.
If you prefer, extract a small helper:
```typescript
const statusToColor = (s: ConnectionStatus) =>
  s === "connected" ? "bg-green-500" : s === "connecting" ? "bg-yellow-500" : "bg-red-500";
```

Then `const eventStatusColor = statusToColor(eventStreamStatus);`.
4-4: Name shadowing: consider renaming the type to avoid confusion with the component identifier

Type/value namespaces are separate in TS, so this compiles, but `type ConnectionStatus` and `const ConnectionStatus` can be confusing. Optional rename to `ConnectionState`.

```diff
-type ConnectionStatus = "disconnected" | "connecting" | "connected";
+type ConnectionState = "disconnected" | "connecting" | "connected";
@@
-const ConnectionStatus = () => {
+const ConnectionStatus = () => {
@@
-  const [eventStreamStatus, setEventStreamStatus] = useState<ConnectionStatus>(() => getConnectionStatus());
+  const [eventStreamStatus, setEventStreamStatus] = useState<ConnectionState>(() => getConnectionStatus());
```

Also applies to: 6-6, 8-8
📜 Review details
📒 Files selected for processing (3)
- ui/src/App.tsx (2 hunks)
- ui/src/components/ConnectionStatus.tsx (1 hunks)
- ui/src/contexts/APIProvider.tsx (5 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
ui/src/App.tsx (1)
ui/src/contexts/ThemeProvider.tsx (1)
`ThemeProvider` (18-60)
ui/src/components/ConnectionStatus.tsx (2)
ui/src/contexts/APIProvider.tsx (1)
`useAPI` (224-230)

ui/src/pages/LogViewer.tsx (1)

`useAPI` (15-37)
🔇 Additional comments (3)
ui/src/contexts/APIProvider.tsx (2)
23-24: Public API surface for connection status looks good

Adding `getConnectionStatus` to the context is a clear, minimal way to expose SSE readiness to the UI.
67-75: Deriving status from EventSource.readyState is correct

The mapping OPEN → "connected", CONNECTING → "connecting", else → "disconnected" is appropriate for the current UI needs.
ui/src/App.tsx (1)
7-7: LGTM: Surface connection status in the header

Importing and rendering ConnectionStatus beside the theme toggle provides a clear, low-friction signal to users.
Also applies to: 35-36
Summary by CodeRabbit
New Features
Refactor
UI Behavior