status sync (Dev#10)
- add `GET /v1/guilds/{guildSlug}/members` route, schemas, handler, and
router wiring
- fetch right-sidebar member data via `apiClient` instead of
`getFullOrganization`
- derive sidebar member types from API client (`InferResponseType`) and
remove inline hardcoded member types
- restore/simplify guild members panel rendering with online/offline
grouping
- subscribe to socket presence events in the sidebar and patch React
Query cache on:
- `connect`
- `presence:ready`
- `presence:user:update`
- simplify channel route sidebar view state to `{ type, guildSlug,
channelId }`
- add shared presence Redis key/events in realtime types and update
realtime presence service typing
- update API/realtime package deps and lockfile for Redis + shared
realtime types
📝 Walkthrough

Adds Redis-backed presence (server + client types), a `GET /v1/guilds/{guildSlug}/members` API that merges DB and Redis presence, a Socket.IO Redis adapter and presence flows, and a right-side guild-members panel in the web app. Environment and type schemas now require `REDIS_URL` and presence-related types.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Web as Web Client
    participant API as API Server
    participant Realtime as Realtime Server
    participant Redis as Redis
    participant DB as Database
    Web->>API: GET /v1/guilds/{guildSlug}/members
    API->>DB: Query guild members + user data
    DB-->>API: Members list
    API->>Redis: listOnlineUserIds(userIds)
    Redis-->>API: Online user IDs
    API-->>Web: Respond with members + status
    Web->>Realtime: Socket connect + emit presence:subscribe(guildSlug)
    Realtime->>Redis: listOnlineUserIds for guild
    Redis-->>Realtime: Online user IDs
    Realtime-->>Web: presence:ready / snapshot
    Note over Realtime,Redis: user disconnects on another node
    Realtime->>Redis: markUserDisconnected(userId, socketId)
    Redis-->>Realtime: becameOffline
    Realtime-->>Web: broadcast presence:user:update {userId, status: offline}
    Web->>Web: Mutate cache and re-render member status
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 9
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api/src/routes/v1/guilds/handlers.ts`:
- Line 7: Replace the relative import of the route type with the API source
alias: change the import that brings in ListGuildMembersRoute (currently using
"./routes") to use the "@/..." alias for the routes module so it follows the
apps/api src alias convention and will be rewritten by tsc-alias; update the
import statement referencing ListGuildMembersRoute accordingly.
- Around line 14-21: The current code issues one concurrent redis.sIsMember per
user (membership) which can flood Redis; change it to a bounded approach by
batching userIds or using Redis SMISMEMBER if supported: split userIds into
chunks (e.g. 100-500 ids), call redis.sMIsMember(PRESENCE_ONLINE_USERS_SET_KEY,
...chunk) or, if SMISMEMBER is unavailable, run sIsMember for each chunk with
Promise.all and await each chunk sequentially (or use a concurrency limiter) and
then reconstruct the membership array before computing onlineIds; update the
code that references membership and onlineIds to use the aggregated results.
In `@apps/api/src/routes/v1/guilds/index.ts`:
- Around line 2-3: Replace the relative imports at the top of this file with the
project path-alias form: change the imports that reference "./handlers" and
"./routes" to use the "@/..." alias (e.g., import * as handlers from
"@/routes/v1/guilds/handlers" and import * as routes from
"@/routes/v1/guilds/routes") so the API source uses the `@/*` aliases that
tsc-alias will rewrite during build; update the import specifiers for the
handlers and routes symbols accordingly.
In `@apps/api/src/routes/v1/guilds/routes.ts`:
- Around line 23-31: The route responses map is missing a 404 entry even though
guildAuthMiddleware (used by this route) can return NOT_FOUND; update the
responses object in routes.ts for this endpoint to include
[HttpStatusCodes.NOT_FOUND]: notFoundSchema (matching other routes like
listChannels), and add the necessary import for notFoundSchema if it's not
already imported so the OpenAPI docs declare the NOT_FOUND response.
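For illustration, the missing entry could look like the sketch below. `HttpStatusCodes`, the two schemas, and the shape of the responses map are local stand-ins; the real route imports these from the project's shared modules.

```typescript
// Local stand-ins (assumptions) so the sketch is self-contained.
const HttpStatusCodes = { OK: 200, NOT_FOUND: 404 } as const
const listGuildMembersResponseSchema = { description: "member list" }
const notFoundSchema = { description: "Guild not found" }

// The responses map for the list-members route, now declaring the 404 that
// guildAuthMiddleware can return, mirroring routes like listChannels.
export const responses = {
  [HttpStatusCodes.OK]: listGuildMembersResponseSchema,
  [HttpStatusCodes.NOT_FOUND]: notFoundSchema,
}
```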
In `@apps/realtime/src/index.ts`:
- Around line 196-204: The presence:subscribe handler uses socket.data.guildIds
before connection initialization can populate it, causing false "Forbidden"
responses; modify the flow so the handler waits for connection initialization
(e.g., await a socket.data.initPromise or check socket.data.initialized) before
parsing/authorizing: in the socket.on("presence:subscribe", ...) callback, if
socket.data.guildIds is undefined or socket.data.initialized is false, await the
init promise (set during your connection initialization logic) or defer handling
until initialization completes, then call presenceSubscribePayloadSchema.parse
and perform the guildIds.includes check; ensure the connection initialization
code creates the shared init promise/flag on socket.data so both sides
coordinate.
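One way to coordinate the two sides is sketched below. It assumes, per the prompt, that `socket.data` carries `guildIds`, `initialized`, and `initPromise`; `SocketLike` is a local stand-in for the Socket.IO socket type, not the real one.

```typescript
// Assumed fields on socket.data (per the review comment).
export type SocketData = {
  guildIds?: string[]
  initialized?: boolean
  initPromise?: Promise<void>
}

interface SocketLike {
  data: SocketData
  emit(event: string, payload: unknown): void
}

// Connection setup: create the shared init promise BEFORE wiring handlers,
// so any event arriving early has something to await.
export function setupConnection(
  socket: SocketLike,
  loadGuildIds: () => Promise<string[]>
) {
  socket.data.initPromise = loadGuildIds().then((ids) => {
    socket.data.guildIds = ids
    socket.data.initialized = true
  })
}

// presence:subscribe handler: gate authorization on initialization.
export async function handlePresenceSubscribe(
  socket: SocketLike,
  guildId: string
): Promise<boolean> {
  if (!socket.data.initialized) {
    await socket.data.initPromise // resolves once guildIds is populated
  }
  const allowed = socket.data.guildIds?.includes(guildId) ?? false
  if (!allowed) {
    socket.emit("presence:error", { message: "Forbidden" })
  }
  return allowed
}
```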
In `@apps/realtime/src/services/presence.ts`:
- Around line 26-43: markUserDisconnected has a race where sRem then sCard can
interleave with other disconnects/connects; replace the multi-step logic with
an atomic Redis Lua script (EVAL/EVALSHA) that takes userId and socketId,
performs sRem on userSocketsKey(userId), checks sCard(userSocketsKey(userId)),
and only if count==0 removes userId from PRESENCE_ONLINE_USERS_SET_KEY and
deletes the userSocketsKey, then returns a boolean (becameOffline). Update
markUserDisconnected to call that script and map the script result to {
becameOffline } so the removal is atomic and race-free.
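A hedged sketch of that atomic script is below. The key names are assumptions, and the `eval` options shape follows node-redis v4 to the best of my knowledge; check it against your installed client.

```typescript
const PRESENCE_ONLINE_USERS_SET_KEY = "presence:online_users" // assumed key
const userSocketsKey = (userId: string) => `presence:sockets:${userId}` // assumed

// KEYS[1] = userSocketsKey(userId), KEYS[2] = online-users set
// ARGV[1] = socketId, ARGV[2] = userId
const DISCONNECT_SCRIPT = `
redis.call('SREM', KEYS[1], ARGV[1])
if redis.call('SCARD', KEYS[1]) == 0 then
  redis.call('SREM', KEYS[2], ARGV[2])
  redis.call('DEL', KEYS[1])
  return 1
end
return 0
`

// Minimal slice of the client surface used here (assumed shape).
interface EvalRedis {
  eval(script: string, options: { keys: string[]; arguments: string[] }): Promise<unknown>
}

export async function markUserDisconnected(
  redis: EvalRedis,
  userId: string,
  socketId: string
) {
  const result = await redis.eval(DISCONNECT_SCRIPT, {
    keys: [userSocketsKey(userId), PRESENCE_ONLINE_USERS_SET_KEY],
    arguments: [socketId, userId],
  })
  // The script runs as one atomic unit, so no connect/disconnect can
  // interleave between the SREM and the SCARD check.
  return { becameOffline: result === 1 }
}
```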
- Around line 10-24: markUserConnected has a race between sAdd and sCard; make
the add-and-check atomic by replacing the two separate calls with a single Redis
Lua script (invoked via redis.eval or equivalent on the RedisClient) that: 1)
performs sAdd on userSocketsKey(userId) with socketId, 2) checks the set
cardinality (sCard), 3) if cardinality is 1 then sAdd
PRESENCE_ONLINE_USERS_SET_KEY with userId, and 4) returns a boolean/flag
indicating becameOnline; update markUserConnected to call that Lua script and
map the script's result to the returned { becameOnline } object (reference
functions/keys: markUserConnected, userSocketsKey,
PRESENCE_ONLINE_USERS_SET_KEY).
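The connect side mirrors the disconnect script. Again a sketch under the same assumptions (key names, node-redis v4 `eval` options shape):

```typescript
// KEYS[1] = per-user socket set, KEYS[2] = online-users set
// ARGV[1] = socketId, ARGV[2] = userId
const CONNECT_SCRIPT = `
redis.call('SADD', KEYS[1], ARGV[1])
if redis.call('SCARD', KEYS[1]) == 1 then
  redis.call('SADD', KEYS[2], ARGV[2])
  return 1
end
return 0
`

interface EvalRedis {
  eval(script: string, options: { keys: string[]; arguments: string[] }): Promise<unknown>
}

const userSocketsKey = (userId: string) => `presence:sockets:${userId}` // assumed

export async function markUserConnected(
  redis: EvalRedis,
  userId: string,
  socketId: string,
  onlineSetKey = "presence:online_users" // assumed key
) {
  const result = await redis.eval(CONNECT_SCRIPT, {
    keys: [userSocketsKey(userId), onlineSetKey],
    arguments: [socketId, userId],
  })
  // becameOnline is true only for the user's first concurrent socket.
  return { becameOnline: result === 1 }
}
```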
- Around line 45-55: Replace the N individual sIsMember calls in
listOnlineUserIds with a single batch call using
redis.sMisMember(PRESENCE_ONLINE_USERS_SET_KEY, ...userIds); then map the
returned results to filter the original userIds (e.g., keep those where the
corresponding sMisMember result indicates membership). Keep the early return for
empty userIds, and ensure you handle the sMisMember response shape (booleans or
integers depending on client) when filtering in listOnlineUserIds.
In `@apps/web/src/components/sidebar/right-panel/guild-members-panel.tsx`:
- Around line 96-165: Extract data?.guildId into a stable local constant (e.g.,
const guildId = data?.guildId) outside the useEffect and use that guildId inside
the effect and its callbacks (applySnapshot, requestSnapshot, onPresenceReady,
onConnect, onPresenceUpdate) so the closures reference a stable value; update
the effect dependency array to reference guildId instead of data?.guildId and
ensure each callback checks for guildId presence and compares payload.guildId to
that guildId before mutating queryClient with queryKey so updates are always
applied for the current guild.
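The cache-patch logic can be extracted into a pure function so the `guildId` closure is stable, as sketched below. The member and payload shapes are assumptions about the component's types; in the component this would run inside `queryClient.setQueryData`, with `const guildId = data?.guildId` hoisted out of the effect and listed in its dependency array.

```typescript
export type Member = { userId: string; status: "online" | "offline" }
export type MembersData = { guildId: string; members: Member[] }
export type PresencePayload = {
  guildId: string
  userId: string
  status: Member["status"]
}

// Returns the next cache value, or the current one unchanged when there is
// no guildId yet or the payload targets a different guild.
export function applyPresenceUpdate(
  guildId: string | undefined,
  current: MembersData | undefined,
  payload: PresencePayload
): MembersData | undefined {
  if (!guildId || !current || payload.guildId !== guildId) return current
  return {
    ...current,
    members: current.members.map((m) =>
      m.userId === payload.userId ? { ...m, status: payload.status } : m
    ),
  }
}
```

Because the function closes over nothing, every callback in the effect (`onConnect`, `onPresenceReady`, `onPresenceUpdate`) can call it with the hoisted `guildId` without stale-closure risk.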
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (22)
.env.example
apps/api/package.json
apps/api/src/app.ts
apps/api/src/lib/redis.ts
apps/api/src/routes/v1/guilds/handlers.ts
apps/api/src/routes/v1/guilds/index.ts
apps/api/src/routes/v1/guilds/routes.ts
apps/api/src/routes/v1/guilds/schema.ts
apps/realtime/package.json
apps/realtime/src/index.ts
apps/realtime/src/services/presence.ts
apps/web/src/components/sidebar/index.tsx
apps/web/src/components/sidebar/right-panel/guild-members-panel.tsx
apps/web/src/components/sidebar/right-panel/right-sidebar-context.tsx
apps/web/src/components/sidebar/right-panel/right-sidebar-panel.tsx
apps/web/src/components/sidebar/right-panel/right-sidebar-types.ts
apps/web/src/lib/api-types.ts
apps/web/src/routes/_authenticated/$guildSlug/$channelId.tsx
apps/web/src/routes/_authenticated/dms.tsx
packages/auth/src/lib/auth.ts
packages/env/src/server.ts
packages/realtime-types/src/events.ts
```typescript
export async function listOnlineUserIds(redis: RedisClient, userIds: string[]) {
  if (userIds.length === 0) return []

  const membership = await Promise.all(
    userIds.map((userId) =>
      redis.sIsMember(PRESENCE_ONLINE_USERS_SET_KEY, userId)
    )
  )

  return userIds.filter((_, index) => membership[index] === true)
}
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🌐 Web query:
node-redis smIsMember SMISMEMBER command support
💡 Result:
Yes: node-redis (the redis / @redis/client package) supports Redis's SMISMEMBER as both:
- Raw command: `client.SMISMEMBER(key, ...members)`
- Camel-case alias: `client.sMisMember(key, members)` (the camel-cased variant of the raw command name)

node-redis exposes Redis commands under their raw names plus a friendlier camel-cased version, and anything outside its command surface can fall back to `client.sendCommand([...])`. [1]
SMISMEMBER itself is available since Redis 6.2.0, so it won't work on Redis 6.0/5.x even if the client API exists. [2]
References: [1] [2]
🏁 Script executed:

```shell
# Check the actual file to verify the current implementation matches the snippet
cat -n apps/realtime/src/services/presence.ts | head -60
```

Repository: BuckyMcYolo/townhall
Length of output: 1870
Use sMisMember for batch membership check.
The current implementation makes N separate sIsMember calls. Redis 6.2+ supports SMISMEMBER which checks multiple members in a single call via node-redis's sMisMember method, reducing network round trips.
⚡ Proposed optimization

```diff
 export async function listOnlineUserIds(redis: RedisClient, userIds: string[]) {
   if (userIds.length === 0) return []
-  const membership = await Promise.all(
-    userIds.map((userId) =>
-      redis.sIsMember(PRESENCE_ONLINE_USERS_SET_KEY, userId)
-    )
-  )
+  const membership = await redis.sMisMember(PRESENCE_ONLINE_USERS_SET_KEY, userIds)
   return userIds.filter((_, index) => membership[index] === true)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/realtime/src/services/presence.ts` around lines 45 - 55, Replace the N
individual sIsMember calls in listOnlineUserIds with a single batch call using
redis.sMisMember(PRESENCE_ONLINE_USERS_SET_KEY, ...userIds); then map the
returned results to filter the original userIds (e.g., keep those where the
corresponding sMisMember result indicates membership). Keep the early return for
empty userIds, and ensure you handle the sMisMember response shape (booleans or
integers depending on client) when filtering in listOnlineUserIds.
gating, batched membership) and align guild API imports/docs
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api/src/routes/v1/guilds/routes.ts`:
- Line 11: Replace the relative import of the schema module with the project
path-alias form: update the import that brings in guildSlugParamsSchema and
listGuildMembersResponseSchema (currently imported from "./schema") to use the
"@/..." alias that matches the file's module path (so the API build's tsc-alias
can rewrite it); keep the same exported symbols (guildSlugParamsSchema,
listGuildMembersResponseSchema) and only change the module specifier to the
`@/*` alias for this routes.ts file.
In `@apps/realtime/src/index.ts`:
- Around line 77-89: The Redis clients (redisPubClient, redisSubClient,
redisPresenceClient created via createClient) only log errors; add an explicit
reconnection/health strategy by configuring a retry_strategy/backoff in
createClient options or attaching handlers for 'reconnecting', 'ready', and
'end' to track connection state, increment a failure counter on repeated
disconnects and trigger a graceful shutdown or circuit-breaker action when a
threshold is exceeded, and expose a simple health-check function (e.g.,
isRedisHealthy) that returns overall status used by the app to prevent work when
Redis is unhealthy.
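A minimal sketch of such a strategy is below: a bounded exponential backoff for node-redis's `socket.reconnectStrategy` option plus a tiny health flag driven by the client's lifecycle events. The option shape matches node-redis v4 to the best of my knowledge; verify against the installed version, and treat the thresholds as placeholders.

```typescript
const MAX_RETRIES = 10 // assumed threshold before giving up

// node-redis v4 calls this with the retry count; returning a number waits
// that many ms before reconnecting, returning an Error stops retrying.
export function reconnectStrategy(retries: number): number | Error {
  if (retries > MAX_RETRIES) {
    return new Error("Redis reconnect retries exhausted")
  }
  // Exponential backoff capped at 5s: 50ms, 100ms, 200ms, ...
  return Math.min(50 * 2 ** retries, 5000)
}

// Simple health flag, to be flipped by the client's event handlers, e.g.:
//   const client = createClient({ url: env.REDIS_URL, socket: { reconnectStrategy } })
//   client.on("ready", () => setRedisHealthy(true))
//   client.on("error", () => setRedisHealthy(false))
//   client.on("end",   () => setRedisHealthy(false))
let healthy = false
export const isRedisHealthy = () => healthy
export const setRedisHealthy = (v: boolean) => { healthy = v }
```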
- Around line 350-353: Assign socket.data.initPromise before registering event
handlers to avoid the race where the presence:subscribe handler awaits an
undefined promise; specifically, call and store the result of
initializeConnection(socket) into socket.data.initPromise (and set
socket.data.initialized inside its .then) prior to wiring up handlers (the
presence:subscribe handler that does await socket.data.initPromise), so any
incoming events will properly await initialization.
In `@apps/web/src/components/sidebar/right-panel/guild-members-panel.tsx`:
- Around line 134-152: The presence update handler onPresenceUpdate currently
maps current.members and silently ignores payload.userId that isn't present;
modify the setQueryData callback in the onPresenceUpdate handler to detect when
no member matched (e.g., track a boolean or compare lengths) and, if the user
was not found, trigger a refetch or invalidation for the members list using
queryClient.invalidateQueries(queryKey) (or call a fetch to append the new
member) so newly-joined users appear without requiring a manual refresh; keep
the existing path for updating an existing member's status in the same mapping
logic.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (6)
apps/api/src/routes/v1/guilds/handlers.tsapps/api/src/routes/v1/guilds/index.tsapps/api/src/routes/v1/guilds/routes.tsapps/realtime/src/index.tsapps/realtime/src/services/presence.tsapps/web/src/components/sidebar/right-panel/guild-members-panel.tsx
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/realtime/src/index.ts`:
- Around line 161-175: The init/disconnect race can cause a socket to be marked
online after its disconnect cleanup ran; update initializeConnection() so after
calling markUserConnected(redisPresenceClient, socket.data.user.id, socket.id)
you verify the socket is still connected and the socket.id hasn't changed before
broadcasting: check socket.connected (or a per-socket alive flag set/cleared by
your disconnect handler) and only emit presence:user:update for
guildRoom(guildId) when becameOnline is true AND the socket is still the same
active socket; additionally ensure your disconnect cleanup clears that alive
flag immediately so the post-init check will prevent stale online broadcasts.
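The liveness re-check can be sketched as below; `markUserConnected` and `broadcast` are injected so the sketch stays self-contained, and `socket.connected` stands in for whatever alive flag the disconnect handler clears.

```typescript
type SocketLike = { id: string; connected: boolean }

// After the Redis round trip, re-check that the socket is still alive before
// broadcasting: the disconnect handler may have run during the await, and
// broadcasting then would leave a stale "online" state.
export async function initializePresence(
  socket: SocketLike,
  markUserConnected: () => Promise<{ becameOnline: boolean }>,
  broadcast: () => void
): Promise<void> {
  const { becameOnline } = await markUserConnected()
  if (becameOnline && socket.connected) broadcast()
}
```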
- Around line 153-159: Current code joins presence rooms using a snapshot stored
in socket.data.guildIds at connect time (guildMembershipRows -> guildIds ->
socket.join), which lets sockets keep receiving updates after membership is
revoked; instead, in the subscribe/subscribe-to-presence handler (the code that
calls socket.join with guildRoom(guildId)) perform a live membership check
against the authoritative membership store before joining any guildRoom and
update socket.data.guildIds to reflect the current memberships (remove joins for
revoked guilds and leave those rooms). Also add the same live membership check
in the presence-publish path (where presence updates are emitted) so sockets are
validated per-send. Locate and change the logic around guildMembershipRows /
guildIds, socket.data.guildIds, guildRoom(...) and socket.join(...) and ensure
joins/leaves are driven by the fresh membership query rather than the initial
connect snapshot.
- Around line 355-373: The bootstrap function awaits Redis connections but calls
httpServer.listen(realtimePort, ...) without awaiting or handling listen errors,
so bind failures bypass bootstrap().catch; update bootstrap to await the server
bind by converting httpServer.listen into a Promise (or attach a one-time
'error' listener) and reject that Promise on bind errors so that bind failures
propagate from bootstrap; modify the httpServer.listen call within bootstrap
(referenced as httpServer.listen and realtimePort) to return/await a Promise
that resolves on 'listening' and rejects on 'error' to ensure bootstrap() fails
fast.
Summary
This PR adds Redis-backed real-time guild member presence across API, realtime, and web frontends. It introduces presence tracking and propagation, a new API endpoint to list guild members with online/offline status, UI components to display live presence in a right sidebar, and related types and configuration.
Confidence Score: 3/5
Reasoning: Implementation is well-structured, follows established patterns, and addresses race conditions with atomic Lua scripts. However, there are notable gaps before production readiness: missing automated tests for critical presence logic and Lua scripts, startup behavior that fails hard on Redis unavailability, and limited verification that realtime events and API cache updates remain perfectly synchronized. These are follow-ups rather than blockers for code review.