
Improve Activity event handling in the UI#254

Merged
mostlygeek merged 3 commits into main from issue-252-activity-events
Aug 16, 2025

Conversation

@mostlygeek
Owner

@mostlygeek mostlygeek commented Aug 16, 2025

Summary by CodeRabbit

  • New Features

    • Added a CLI benchmark tool for chat-completion APIs.
    • New ConnectionStatus component and header display; exposes connection status via the API context.
  • Refactor

    • Metrics are now processed and sent as batched arrays.
    • Activity entries now shown newest-first and include an ID column.
  • UI Behavior

    • On reconnect, logs and metrics are cleared.
    • Activity page no longer shows a transient error banner.

- A bug found in #252: the Activity page showed activity inconsistent
  with /api/metrics
- Changed the metrics data structure so metrics are sent as an array.
- Improves synchronization performance and reliability between the backend and UI.
@coderabbitai

coderabbitai bot commented Aug 16, 2025

Walkthrough

Adds a Go CLI benchmark for chat completions; changes proxy metrics transmission to batched slices and SSE parsing; UI now exposes connection status, shows a ConnectionStatus component in the header, and Activity page sorts metrics descending with a new Id column. No exported/public API signatures changed.

Changes

Cohort / File(s) Summary of changes
Benchmark CLI tool
misc/benchmark-chatcompletion/main.go
New standalone Go program: parses flags (-baseurl, -model, -requests, -par), runs a worker pool to POST /chat/completions, tracks HTTP status counts (including client-side errors as -1), logs per-second progress, drains responses, and prints a final summary.
Proxy metrics batching
proxy/proxymanager_api.go
sendMetrics signature changed to accept []TokenMetrics; event handler now calls sendMetrics([]TokenMetrics{...}) for single events and initialization calls sendMetrics(pm.metricsMonitor.GetMetrics()) to send batches; JSON marshals a slice.
UI SSE & API provider
ui/src/contexts/APIProvider.tsx
SSE onmessage metrics handler now parses Metrics[] and prepends received items to state; added onopen handler that clears proxyLogs, upstreamLogs, metrics, and models on reconnect; added and exposed `getConnectionStatus: () => "connected" | "connecting" | "disconnected"` via the API context.
Activity page rendering
ui/src/pages/Activity.tsx
Replaced useState/useEffect-based error handling with useMemo sorting: sortedMetrics = [...metrics].sort((a,b)=>b.id-a.id); renders sortedMetrics, adds a 1-based Id column, and removed the error UI.
Connection status UI
ui/src/components/ConnectionStatus.tsx
New React component exported as default: polls getConnectionStatus() every second via setInterval (effect created without dependency array), maps status to a color class, and renders status text plus a colored dot.
Header integration
ui/src/App.tsx
Imports and renders ConnectionStatus in the header beside the theme toggle.

Sequence Diagram(s)

sequenceDiagram
  participant PM as ProxyManager
  participant API as Server API
  participant UI as Browser APIProvider
  participant Page as ActivityPage

  PM->>API: sendMetrics([]TokenMetrics)
  Note right of API: SSE stream emits metrics arrays
  API-->>UI: SSE "metrics": Metrics[]
  UI->>UI: state.metrics = [...newMetrics, ...prev]
  UI-->>Page: metrics prop updated
  Page->>Page: sort by id desc and render
sequenceDiagram
  participant User
  participant CLI as benchmark-chatcompletion
  participant Pool as Worker Pool
  participant Svc as /chat/completions API

  User->>CLI: run with -baseurl -model -requests -par
  CLI->>Pool: enqueue totalRequests
  loop parallel workers (par)
    Pool->>Svc: POST /chat/completions (JSON)
    alt success
      Svc-->>Pool: HTTP response
      Pool->>CLI: increment statusCounts[resp.StatusCode]
    else client error
      Pool->>CLI: increment statusCounts[-1]
    end
  end
  CLI->>User: periodic progress + final status summary

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (7)
ui/src/contexts/APIProvider.tsx (1)

111-114: Handle array vs single metric, deduplicate by id, and cap growth to avoid duplicates and unbounded memory

SSE reconnections send a full initial batch. Prepending arrays wholesale will duplicate entries and grow unbounded over time. Also, a guard for backward compatibility (single metric object) is prudent.

Apply this diff to make ingestion robust:

-                const newMetrics = JSON.parse(message.data) as Metrics[];
-                setMetrics((prevMetrics) => {
-                  return [...newMetrics, ...prevMetrics];
-                });
+                const parsed = JSON.parse(message.data) as Metrics | Metrics[];
+                const batch = Array.isArray(parsed) ? parsed : [parsed];
+                setMetrics((prevMetrics) => {
+                  const seen = new Set(prevMetrics.map((m) => m.id));
+                  const unique = batch.filter((m) => !seen.has(m.id));
+                  const next = [...unique, ...prevMetrics];
+                  const MAX_METRICS = 2000; // prevent unbounded growth
+                  return next.length > MAX_METRICS ? next.slice(0, MAX_METRICS) : next;
+                });
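The hardened handler boils down to a pure merge function that can be exercised outside React. A minimal sketch (the simplified Metrics shape is an assumption; the MAX_METRICS value of 2000 follows the suggestion above):

```typescript
// Hypothetical minimal shape; the real Metrics type lives in the UI codebase.
interface Metrics {
  id: number;
}

const MAX_METRICS = 2000; // cap to prevent unbounded growth

// Merge an incoming batch (or a single object, for backward compatibility)
// into the existing list: newest first, deduplicated by id, capped in length.
function mergeMetrics(prev: Metrics[], incoming: Metrics | Metrics[]): Metrics[] {
  const batch = Array.isArray(incoming) ? incoming : [incoming];
  const seen = new Set(prev.map((m) => m.id));
  const unique = batch.filter((m) => !seen.has(m.id));
  const next = [...unique, ...prev];
  return next.length > MAX_METRICS ? next.slice(0, MAX_METRICS) : next;
}
```

Inside the provider this would simply be `setMetrics((prev) => mergeMetrics(prev, parsed))`, keeping the SSE handler itself trivial.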
ui/src/pages/Activity.tsx (2)

46-49: Use stable keys; avoid index in keys once metrics are de-duplicated

Index in the key causes unnecessary remounts on resorting and hides duplicate-id issues. After de-duplicating by id in the provider, keying by id alone is cleaner.

Apply this diff:

-              {sortedMetrics.map((metric, index) => (
-                <tr key={`${metric.id}-${index}`}>
+              {sortedMetrics.map((metric) => (
+                <tr key={metric.id}>

35-35: Nit: Use “ID” capitalization

Minor consistency nit with common table headers.

-                <th className="px-4 py-3 text-left text-xs font-medium uppercase tracking-wider">Id</th>
+                <th className="px-4 py-3 text-left text-xs font-medium uppercase tracking-wider">ID</th>
proxy/proxymanager_api.go (1)

135-145: Batching metrics over SSE looks good; note potential drops when buffer is full

Switching to []TokenMetrics for both the initial send and per-event updates aligns with the UI and reduces overhead. One caveat: sendBuffer is size 25, and the non-blocking select with default means bursts can drop messages. If metrics integrity is more important than logs, consider prioritizing metrics, increasing the buffer size, or using a small ring buffer.

Also applies to: 171-171, 178-178

misc/benchmark-chatcompletion/main.go (3)

46-49: Fix CLI error message to match the actual flag (-par)

The message mentions “-parallelization”, but the flag is “-par”.

-    fmt.Println("Error: -parallelization must be greater than 0.")
+    fmt.Println("Error: -par must be greater than 0.")

10-20: Avoid per-request string building and URL joins; precompute endpoint and payload

Precompute endpoint (trim trailing slash) and payload bytes once to reduce allocations and avoid double slashes if baseurl ends with “/”.

Apply this diff:

@@
 import (
 	"bytes"
 	"flag"
 	"fmt"
 	"io"
 	"log"
 	"net/http"
 	"os"
+	"strings"
 	"sync"
 	"time"
 )
@@
-	// ----- Worker pool -------------------------------------------------------
+	// ----- Worker pool -------------------------------------------------------
+	endpoint := fmt.Sprintf("%s/chat/completions", strings.TrimRight(baseurl, "/"))
+	payloadBytes := []byte(`{"model":"` + modelName + `","max_tokens":100,"stream":false,"messages":[{"role":"user","content":"write a snake game in python"}]}`)
 	var wg sync.WaitGroup
 	for i := 0; i < parallelization; i++ {
 		wg.Add(1)
 		go func(workerID int) {
 			defer wg.Done()
@@
-			for reqID := range requests {
-				// Build request payload as a single line JSON string
-				payload := `{"model":"` + modelName + `","max_tokens":100,"stream":false,"messages":[{"role":"user","content":"write a snake game in python"}]}`
-
-				// Send POST request
-				req, err := http.NewRequest(http.MethodPost,
-					fmt.Sprintf("%s/chat/completions", baseurl),
-					bytes.NewReader([]byte(payload)))
+			for reqID := range requests {
+				// Send POST request
+				req, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(payloadBytes))

Also applies to: 71-75, 78-86


78-81: Prefer json.Marshal over manual JSON string to avoid malformed JSON on special characters

If modelName contains quotes or special characters, the current string concatenation yields invalid JSON. Using encoding/json prevents this.

Example outside the selected range:

type ChatRequest struct {
  Model     string                   `json:"model"`
  MaxTokens int                      `json:"max_tokens"`
  Stream    bool                     `json:"stream"`
  Messages  []map[string]string      `json:"messages"`
}

reqBody, _ := json.Marshal(ChatRequest{
  Model: modelName,
  MaxTokens: 100,
  Stream: false,
  Messages: []map[string]string{{"role":"user","content":"write a snake game in python"}},
})
req, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(reqBody))
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 4662cf7 and b8baabd.

📒 Files selected for processing (4)
  • misc/benchmark-chatcompletion/main.go (1 hunks)
  • proxy/proxymanager_api.go (2 hunks)
  • ui/src/contexts/APIProvider.tsx (1 hunks)
  • ui/src/pages/Activity.tsx (4 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
ui/src/contexts/APIProvider.tsx (1)
ui/src/pages/Models.tsx (2)
  • totalRequests (136-145)
  • sum (143-143)
ui/src/pages/Activity.tsx (1)
ui/src/pages/Models.tsx (5)
  • StatsPanel (133-173)
  • ModelsPage (9-37)
  • sum (143-143)
  • ModelsPanel (39-131)
  • totalRequests (136-145)
misc/benchmark-chatcompletion/main.go (1)
misc/simple-responder/simple-responder.go (1)
  • main (18-319)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: run-tests
🔇 Additional comments (1)
ui/src/pages/Activity.tsx (1)

18-20: Sorting newest-first is correct and memoized appropriately

Copying before sort avoids mutating context state, and useMemo keyed by metrics is appropriate.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (1)
ui/src/contexts/APIProvider.tsx (1)

66-73: Prevent reconnection after disable and cleanly manage reconnect timers.

If a reconnect timer is scheduled in onerror and the consumer calls enableAPIEvents(false) before it fires, the current code will still reconnect. Also, the timer isn’t tracked/cleared, and apiEventSource.current isn’t nulled on error, which can leave stale state.

Add an enabledRef guard and track/clear the reconnect timer; also null out the ref on error.

Apply these diffs:

@@
-  const apiEventSource = useRef<EventSource | null>(null);
+  const apiEventSource = useRef<EventSource | null>(null);
+  const reconnectTimerRef = useRef<number | null>(null);
+  const eventsEnabledRef = useRef(false);
@@
-  const enableAPIEvents = useCallback((enabled: boolean) => {
-    if (!enabled) {
+  const enableAPIEvents = useCallback((enabled: boolean) => {
+    eventsEnabledRef.current = enabled;
+    if (!enabled) {
       apiEventSource.current?.close();
-      apiEventSource.current = null;
-      setMetrics([]);
+      apiEventSource.current = null;
+      if (reconnectTimerRef.current !== null) {
+        clearTimeout(reconnectTimerRef.current);
+        reconnectTimerRef.current = null;
+      }
+      setMetrics([]);
       return;
     }
@@
-      eventSource.onerror = () => {
-        eventSource.close();
-        retryCount++;
-        const delay = Math.min(initialDelay * Math.pow(2, retryCount - 1), 5000);
-        setTimeout(connect, delay);
-      };
+      eventSource.onerror = () => {
+        eventSource.close();
+        apiEventSource.current = null;
+        if (!eventsEnabledRef.current) {
+          return;
+        }
+        retryCount++;
+        const delay = Math.min(initialDelay * Math.pow(2, retryCount - 1), 5000);
+        if (reconnectTimerRef.current !== null) {
+          clearTimeout(reconnectTimerRef.current);
+        }
+        reconnectTimerRef.current = window.setTimeout(connect, delay);
+      };

And guard connect itself to avoid races:

@@
-    const connect = () => {
+    const connect = () => {
+      if (!eventsEnabledRef.current) {
+        return;
+      }
       const eventSource = new EventSource("/api/events");

Also applies to: 130-139, 55-56
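The capped exponential backoff used in onerror can be factored into a small pure helper, which also makes the "reset retryCount on open" suggestion easy to reason about (the 1000 ms initial delay below is illustrative, not taken from the codebase):

```typescript
// Capped exponential backoff: initialDelay doubled per retry, 5000 ms ceiling.
// Mirrors: Math.min(initialDelay * Math.pow(2, retryCount - 1), 5000)
function backoffDelay(retryCount: number, initialDelay: number): number {
  return Math.min(initialDelay * Math.pow(2, retryCount - 1), 5000);
}
```

Without resetting retryCount on a successful open, backoffDelay keeps returning the 5000 ms ceiling after a few failures, so the first retry after any later outage is needlessly slow.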

🧹 Nitpick comments (2)
ui/src/contexts/APIProvider.tsx (2)

80-86: Reset backoff on successful connect; confirm snapshot semantics before clearing logs.

  • Resetting retryCount on open avoids long backoff after a successful reconnect.
  • Clearing proxy/upstream logs on every reconnect can permanently drop log history if the server does not replay logs.

Apply this small improvement:

 eventSource.onopen = () => {
+  // reset backoff after a successful connection
+  retryCount = 0;
   // clear everything out on connect to keep things in sync
   setProxyLogs("");
   setUpstreamLogs("");
   setMetrics([]); // clear metrics on reconnect
   setModels([]); // clear models on reconnect
 };

Please confirm the server emits full replacement snapshots for models and metrics (and ideally logs) on every connect/reconnect. If logs are not replayed, consider not clearing them here (or only clearing metrics/models).


97-101: Comparator can be clearer and avoid string-concatenation artifacts.

String concatenation may sort id “10” before “2” within the same name. A two-step comparator is simpler and precise.

Apply this diff:

-                // sort models by name and id
-                models.sort((a, b) => {
-                  return (a.name + a.id).localeCompare(b.name + b.id);
-                });
+                // sort models by name then by numeric id
+                models.sort((a, b) => {
+                  const byName = a.name.localeCompare(b.name);
+                  return byName !== 0 ? byName : a.id.localeCompare(b.id);
+                });

If id is numeric, use Number(a.id) - Number(b.id) instead.
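The concatenation artifact described above is easy to demonstrate: with equal names, id "10" sorts before id "2" under the concatenated comparator, while the two-step comparator (using the numeric variant suggested for numeric ids) orders them correctly. The Model shape here is a simplified assumption:

```typescript
interface Model {
  name: string;
  id: string;
}

// Original comparator: concatenates name and id before comparing.
const byConcat = (a: Model, b: Model) =>
  (a.name + a.id).localeCompare(b.name + b.id);

// Suggested comparator: compare by name first, then by numeric id.
const byNameThenId = (a: Model, b: Model) => {
  const byName = a.name.localeCompare(b.name);
  return byName !== 0 ? byName : Number(a.id) - Number(b.id);
};
```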

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between b8baabd and 913ba00.

📒 Files selected for processing (2)
  • ui/src/contexts/APIProvider.tsx (2 hunks)
  • ui/src/pages/Activity.tsx (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • ui/src/pages/Activity.tsx
🔇 Additional comments (1)
ui/src/contexts/APIProvider.tsx (1)

148-151: Cleanup on unmount is correctly disabling SSE.

Returning enableAPIEvents(false) in the effect cleanup is the right call to avoid dangling EventSource connections.

Comment on lines +119 to +121
  const newMetrics = JSON.parse(message.data) as Metrics[];
  setMetrics((prevMetrics) => {
-   return [newMetric, ...prevMetrics];
+   return [...newMetrics, ...prevMetrics];


🛠️ Refactor suggestion

Batch handling for metrics should dedupe by id, cap length, and be backward-compatible with single-object payloads.

Current logic can:

  • Duplicate entries when the backend re-sends overlapping batches (common with replay-on-reconnect).
  • Grow unbounded in memory.
  • Break if a single Metrics object is sent (during transition or bugs).

Apply this diff to harden the handler:

-                const newMetrics = JSON.parse(message.data) as Metrics[];
-                setMetrics((prevMetrics) => {
-                  return [...newMetrics, ...prevMetrics];
-                });
+                const parsed = JSON.parse(message.data);
+                const incoming: Metrics[] = Array.isArray(parsed) ? parsed : [parsed];
+                setMetrics((prevMetrics) => {
+                  // Deduplicate by id and cap list size to avoid unbounded growth
+                  const seen = new Set(prevMetrics.map((m) => m.id));
+                  const merged = incoming.filter((m) => !seen.has(m.id)).concat(prevMetrics);
+                  return merged.slice(0, MAX_METRICS);
+                });

Add a top-level constant near LOG_LENGTH_LIMIT:

// Near top-level constants
const MAX_METRICS = 2000; // cap to prevent unbounded memory growth on the Activity page
🤖 Prompt for AI Agents
In ui/src/contexts/APIProvider.tsx around lines 119 to 121, the metrics handler
naively parses message.data and prepends newMetrics causing duplicates,
unbounded growth, and failures when a single Metrics object is sent; add a
top-level constant near LOG_LENGTH_LIMIT: const MAX_METRICS = 2000, then update
the handler to: parse message.data and normalize it to an array if a single
object is received, merge the incoming array with existing metrics while
deduplicating by id (prefer the newer entry from the incoming batch), maintain
desired ordering (new first), and finally trim the resulting array to
MAX_METRICS before calling setMetrics.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
ui/src/contexts/APIProvider.tsx (1)

128-133: Harden metrics handler: handle single-object payloads, dedupe by id, and cap list size

Current logic assumes an array, prepends blindly, and can grow unbounded and duplicate entries on overlap/replay. Normalize, dedupe, and bound length.

Apply this diff:

-                const newMetrics = JSON.parse(message.data) as Metrics[];
-                setMetrics((prevMetrics) => {
-                  return [...newMetrics, ...prevMetrics];
-                });
+                const parsed = JSON.parse(message.data);
+                const incoming: Metrics[] = Array.isArray(parsed) ? parsed : [parsed];
+                setMetrics((prev) => {
+                  // Deduplicate by id (prefer incoming) and cap size
+                  const seen = new Set(prev.map((m) => m.id));
+                  const merged = incoming.filter((m) => !seen.has(m.id)).concat(prev);
+                  return merged.slice(0, MAX_METRICS);
+                });

And define a cap near other top-level constants:

// near LOG_LENGTH_LIMIT
const MAX_METRICS = 2000;
🧹 Nitpick comments (8)
ui/src/contexts/APIProvider.tsx (3)

91-97: Reset retry backoff on successful (re)connection

Without resetting retryCount on a successful onopen, later transient failures will continue using an inflated backoff. Reset it to keep reconnect latency low after a good connection.

Apply this diff:

       eventSource.onopen = () => {
+        retryCount = 0;
         // clear everything out on connect to keep things in sync
         setProxyLogs("");
         setUpstreamLogs("");
         setMetrics([]); // clear metrics on reconnect
         setModels([]); // clear models on reconnect
       };

77-83: Prevent stray reconnects after disable/unmount and clear scheduled retries

A pending retry setTimeout can fire after enableAPIEvents(false), recreating an EventSource despite being disabled. Track and clear the timeout; also avoid multiple scheduled reconnects.

Apply these diffs:

@@
   const enableAPIEvents = useCallback((enabled: boolean) => {
     if (!enabled) {
       apiEventSource.current?.close();
       apiEventSource.current = null;
-      setMetrics([]);
+      // Clear any pending reconnection attempts
+      if (reconnectTimeoutRef.current !== null) {
+        clearTimeout(reconnectTimeoutRef.current);
+        reconnectTimeoutRef.current = null;
+      }
+      setMetrics([]);
       return;
     }
@@
-      eventSource.onerror = () => {
-        eventSource.close();
-        retryCount++;
-        const delay = Math.min(initialDelay * Math.pow(2, retryCount - 1), 5000);
-        setTimeout(connect, delay);
-      };
+      eventSource.onerror = () => {
+        eventSource.close();
+        retryCount++;
+        const delay = Math.min(initialDelay * Math.pow(2, retryCount - 1), 5000);
+        // Avoid stacking multiple timers
+        if (reconnectTimeoutRef.current !== null) {
+          clearTimeout(reconnectTimeoutRef.current);
+        }
+        reconnectTimeoutRef.current = window.setTimeout(() => {
+          reconnectTimeoutRef.current = null;
+          connect();
+        }, delay);
+      };

Add the ref near your other refs:

// just below: const apiEventSource = useRef<EventSource | null>(null);
const reconnectTimeoutRef = useRef<number | null>(null);

Optional verification steps:

  • Unmount/mount the provider quickly while the network is offline and ensure no new connections appear after disable.
  • Flip autoStartAPIEvents and confirm no reconnection occurs when disabled.

Also applies to: 141-147


206-219: Include getConnectionStatus in the memo deps (minor)

It’s stable due to an empty dependency array, but adding it keeps eslint/react-hooks happy and avoids future surprises if the callback ever gains deps.

   const value = useMemo(
     () => ({
       models,
       listModels,
       unloadAllModels,
       loadModel,
       enableAPIEvents,
       proxyLogs,
       upstreamLogs,
       metrics,
       getConnectionStatus,
     }),
-    [models, listModels, unloadAllModels, loadModel, enableAPIEvents, proxyLogs, upstreamLogs, metrics]
+    [models, listModels, unloadAllModels, loadModel, enableAPIEvents, proxyLogs, upstreamLogs, metrics, getConnectionStatus]
   );
ui/src/components/ConnectionStatus.tsx (5)

10-15: Fix effect dependencies to avoid recreating the interval on every render

Without a dependency array, a new interval is created each render (the previous is cleaned up), causing unnecessary churn. Tie it to getConnectionStatus and initialize immediately to avoid 1s of stale state.

-  useEffect(() => {
-    const interval = setInterval(() => {
-      setEventStreamStatus(getConnectionStatus());
-    }, 1000);
-    return () => clearInterval(interval);
-  });
+  useEffect(() => {
+    // Initialize immediately
+    setEventStreamStatus(getConnectionStatus());
+    const interval = setInterval(() => {
+      setEventStreamStatus(getConnectionStatus());
+    }, 1000);
+    return () => clearInterval(interval);
+  }, [getConnectionStatus]);

7-9: Initialize from current status to prevent initial flicker

Seeding with "disconnected" can briefly show a red dot even when connected. Use a lazy initializer to read the current status.

-  const { getConnectionStatus } = useAPI();
-  const [eventStreamStatus, setEventStreamStatus] = useState<ConnectionStatus>("disconnected");
+  const { getConnectionStatus } = useAPI();
+  const [eventStreamStatus, setEventStreamStatus] = useState<ConnectionStatus>(() => getConnectionStatus());

29-33: Improve accessibility: provide a readable label and live region

A title attribute is not consistently announced by screen readers. Add an SR-only label and a live region so status changes are announced.

-  return (
-    <div className="flex items-center" title={`event stream: ${eventStreamStatus}`}>
-      <span className={`inline-block w-3 h-3 rounded-full ${eventStatusColor} mr-2`}></span>
-    </div>
-  );
+  return (
+    <div className="flex items-center" aria-live="polite" aria-atomic="true">
+      <span className={`inline-block w-3 h-3 rounded-full ${eventStatusColor} mr-2`} aria-hidden="true"></span>
+      <span className="sr-only">{`event stream: ${eventStreamStatus}`}</span>
+    </div>
+  );

17-27: Minor: useMemo is unnecessary here

Mapping a small union to a class string is trivial and won’t benefit from memoization. Keeping it as a plain function/inline switch is fine.

If you prefer, extract a small helper:

const statusToColor = (s: ConnectionStatus) =>
  s === "connected" ? "bg-green-500" : s === "connecting" ? "bg-yellow-500" : "bg-red-500";

Then const eventStatusColor = statusToColor(eventStreamStatus);


4-4: Name shadowing: consider renaming the type to avoid confusion with the component identifier

Type/value namespaces are separate in TS, so this compiles, but type ConnectionStatus and const ConnectionStatus can be confusing. Optional rename to ConnectionState.

-type ConnectionStatus = "disconnected" | "connecting" | "connected";
+type ConnectionState = "disconnected" | "connecting" | "connected";
@@
-const ConnectionStatus = () => {
+const ConnectionStatus = () => {
@@
-  const [eventStreamStatus, setEventStreamStatus] = useState<ConnectionStatus>(() => getConnectionStatus());
+  const [eventStreamStatus, setEventStreamStatus] = useState<ConnectionState>(() => getConnectionStatus());

Also applies to: 6-6, 8-8

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 913ba00 and c3f7bb4.

📒 Files selected for processing (3)
  • ui/src/App.tsx (2 hunks)
  • ui/src/components/ConnectionStatus.tsx (1 hunks)
  • ui/src/contexts/APIProvider.tsx (5 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
ui/src/App.tsx (1)
ui/src/contexts/ThemeProvider.tsx (1)
  • ThemeProvider (18-60)
ui/src/components/ConnectionStatus.tsx (2)
ui/src/contexts/APIProvider.tsx (1)
  • useAPI (224-230)
ui/src/pages/LogViewer.tsx (1)
  • useAPI (15-37)
🔇 Additional comments (3)
ui/src/contexts/APIProvider.tsx (2)

23-24: Public API surface for connection status looks good

Adding getConnectionStatus to the context is a clear, minimal way to expose SSE readiness to the UI.


67-75: Deriving status from EventSource.readyState is correct

The mapping OPEN → "connected", CONNECTING → "connecting", else → "disconnected" is appropriate for the current UI needs.

ui/src/App.tsx (1)

7-7: LGTM: Surface connection status in the header

Importing and rendering ConnectionStatus beside the theme toggle provides a clear, low-friction signal to users.

Also applies to: 35-36

@mostlygeek mostlygeek merged commit 04fc673 into main Aug 16, 2025
3 checks passed
@mostlygeek mostlygeek deleted the issue-252-activity-events branch August 16, 2025 04:44