feat(api): Slack integration with AI agent #1027
📝 Walkthrough

Adds a Slack integration: DB enum and types, API routes for OAuth/connect/callback/events, Slack event processors and agent runner (Claude + MCP), new MCP package and many MCP tools, web/desktop UI for Slack integration, env/workflow updates, and supporting utilities and manifests.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Slack as Slack
    participant API as Superset API
    participant DB as Database
    participant QStash as QStash
    participant Processor as Event Processor
    participant MCP as MCP (in-memory / superset)
    participant Claude as Claude (Anthropic)

    Slack->>API: POST /events (signed)
    API->>API: verify signature & challenge
    API-->>Slack: 200 OK (challenge or ack)
    API->>QStash: enqueue (app_mention / message)
    QStash->>Processor: deliver job (process-mention)
    Processor->>DB: lookup integration by teamId
    DB-->>Processor: integration (access token)
    Processor->>MCP: createSupersetMcpClient(org,user)
    Processor->>MCP: createSlackMcpClient(token,team)
    Processor->>Claude: runSlackAgent (context + tools)
    Claude-->>Processor: agent result (text + actions)
    Processor->>Slack: post message / unfurl / presentDetails
    Slack-->>Processor: response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
🧹 Preview Cleanup Complete

The following preview resources have been cleaned up:
Thank you for your contribution! 🎉
Actionable comments posted: 15
🤖 Fix all issues with AI agents
In `@apps/api/slack-app-manifest.json`:
- Around line 34-36: The manifest currently hardcodes a dev ngrok callback URL
in the JSON "redirect_urls" array; replace that entry with either the production
callback (e.g., https://api.superset.sh/api/integrations/slack/callback), a
clearly marked placeholder (e.g., "REPLACE_WITH_CALLBACK_URL"), or wire the
manifest to environment-specific values so the "redirect_urls" field is not tied
to a local ngrok address; update the "redirect_urls" array accordingly and
ensure any deployment pipeline or config-management reads the correct value for
production.
- Around line 56-58: The manifest currently hardcodes a development ngrok URL in
the JSON key "event_subscriptions" -> "request_url"; update the manifest so the
request_url points to the production events endpoint (or make it
environment-driven) instead of
"https://a9be19c152b9.ngrok-free.app/api/integrations/slack/events": either
replace the hardcoded value with the correct production URL, add a placeholder
(e.g. an env-var template) and document that the manifest must be generated
per-environment, or introduce a small manifest-generation script that
substitutes the environment-appropriate URL into
"event_subscriptions.request_url" before deployment.
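The manifest-generation approach suggested in the two bullets above can be sketched as a small substitution step. This is only an illustration: the `{{PLACEHOLDER}}` token syntax and the `SLACK_CALLBACK_URL` / `SLACK_EVENTS_URL` names are assumptions, not part of the PR.

```typescript
// Hypothetical manifest templating: replace {{PLACEHOLDER}} tokens in the
// manifest JSON with environment-specific values before deployment.
function renderManifest(
  template: string,
  values: Record<string, string>,
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, key: string) => {
    const value = values[key];
    if (value === undefined) {
      throw new Error(`Missing manifest value: ${key}`);
    }
    return value;
  });
}

// Example: a template fragment mirroring the two fields flagged above.
const templateFragment = JSON.stringify({
  oauth_config: { redirect_urls: ["{{SLACK_CALLBACK_URL}}"] },
  settings: { event_subscriptions: { request_url: "{{SLACK_EVENTS_URL}}" } },
});

const rendered = renderManifest(templateFragment, {
  SLACK_CALLBACK_URL: "https://api.superset.sh/api/integrations/slack/callback",
  SLACK_EVENTS_URL: "https://api.superset.sh/api/integrations/slack/events",
});
```

A CI step (or local script) would read the template, pull values from the environment, and write the rendered manifest, so no ngrok URL ever lands in the committed file.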
In `@apps/api/src/app/api/agent/`[transport]/route.ts:
- Around line 8-31: The internal-auth branch that returns token:"internal" based
solely on X-Internal-Organization-Id and X-Internal-User-Id (variables
internalOrgId/internalUserId) is vulnerable to header spoofing; change it to
validate a shared secret or signed header before trusting these values: require
an additional header (e.g. X-Internal-Auth-Secret or X-Internal-Signature),
verify it against a server-side secret (process.env.INTERNAL_AUTH_SECRET) or
validate an HMAC signature/timestamp, and only then return the internal
token/clientId/scopes and mcpContext; if validation fails, continue to the
normal session/OAuth auth flow or reject the request.
- Around line 9-26: Add Zod validation for the internal headers before
constructing mcpContext: define a small Zod schema (e.g., InternalAuthSchema
with organizationId and userId as nonempty strings or UUIDs) in the route
handler in apps/api/src/app/api/agent/[transport]/route.ts, run safeParse on {
organizationId: internalOrgId, userId: internalUserId }, and if validation fails
log the error and reject/return a 400 or unauthorized response; if it passes,
use the parsed values (not the raw headers) to populate the McpContext and the
returned token/clientId/scopes structure.
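A minimal sketch of the hardened check described in the two bullets above, assuming a shared secret in `INTERNAL_AUTH_SECRET` and a hex HMAC-SHA256 signature over `orgId.userId` sent in a hypothetical `X-Internal-Signature` header (header name, payload layout, and env var name are assumptions):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a signed internal header before trusting X-Internal-* values.
// Also rejects empty org/user IDs, covering the boundary-validation concern.
function verifyInternalSignature(params: {
  orgId: string;
  userId: string;
  signature: string; // hex-encoded HMAC-SHA256 from X-Internal-Signature
  secret: string; // process.env.INTERNAL_AUTH_SECRET
}): boolean {
  if (!params.orgId || !params.userId || !params.signature) return false;
  const expected = createHmac("sha256", params.secret)
    .update(`${params.orgId}.${params.userId}`)
    .digest();
  const provided = Buffer.from(params.signature, "hex");
  // timingSafeEqual throws on length mismatch, so compare lengths first.
  return (
    provided.length === expected.length && timingSafeEqual(provided, expected)
  );
}
```

Only when this returns true should the route build the internal `mcpContext`; otherwise it falls through to the normal session/OAuth flow.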
In `@apps/api/src/app/api/integrations/slack/callback/route.ts`:
- Around line 72-77: Replace the hardcoded ngrok URL by reading a development
redirect base from an env var instead of using the literal string in the isDev
branch; update the redirectUri construction in the block that defines
isDev/redirectUri (the variables isDev and redirectUri) to use
env.NEXT_PUBLIC_API_URL (or a new env like env.DEV_API_URL / NEXT_PUBLIC_API_URL
if you prefer) for development so developers can set their local ngrok URL in
their .env and the production branch continues to use env.NEXT_PUBLIC_API_URL.
In `@apps/api/src/app/api/integrations/slack/connect/route.ts`:
- Around line 37-63: The dev shortcut currently assigns devUserId when isDev is
true and skips the membership check, so always validate membership after
resolving userId: keep the devUserId assignment logic but remove the early
else-branch that bypasses auth; after setting userId (regardless of path) run
the existing membership lookup using db.query.members.findFirst (with
and(eq(members.organizationId, organizationId), eq(members.userId, userId))) and
return the same 403 response if no membership; ensure auth.api.getSession is
still used when devUserId is not provided and preserve the 401 handling for
missing session.
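The corrected control flow above can be sketched as a pure function; `findMembership` and `sessionUserId` stand in for the drizzle lookup and `auth.api.getSession`, so the names are assumptions:

```typescript
// Dev shortcut only short-circuits *user resolution*, never authorization:
// the membership check runs for every resolved userId.
function authorizeConnect(params: {
  organizationId: string;
  devUserId?: string;
  sessionUserId: string | null;
  findMembership: (orgId: string, userId: string) => { id: string } | null;
}): { status: number; userId?: string } {
  const userId = params.devUserId ?? params.sessionUserId;
  if (!userId) return { status: 401 }; // missing session
  const membership = params.findMembership(params.organizationId, userId);
  if (!membership) return { status: 403 }; // not a member, even in dev
  return { status: 200, userId };
}
```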
In `@apps/api/src/app/api/integrations/slack/events/route.ts`:
- Around line 248-265: The queued URL
/api/integrations/slack/jobs/process-assistant-message does not exist; either
expose the existing handler or point qstash to the correct job route. Fix by
creating the job route directory and POST handler that calls the existing
processAssistantMessage function (located in
apps/api/src/app/api/integrations/slack/events/process-assistant-message.ts) and
exports it as the job API, or change the qstash.publishJSON url in route.ts to
the existing job endpoint (e.g., process-mention) so the queued job points to a
valid route; ensure the handler signature and expected body (event, teamId,
eventId) match between the queued payload and the job endpoint.
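A sketch of the missing job route described above. In the real file the processor would be imported from `../../events/process-assistant-message`; it is stubbed here so the example is self-contained, and the payload shape (`event`, `teamId`, `eventId`) is taken from the review text:

```typescript
// Stub standing in for the real import of processAssistantMessage.
async function processAssistantMessage(_payload: {
  event: unknown;
  teamId: string;
  eventId: string;
}): Promise<void> {}

// POST handler matching the payload qstash.publishJSON enqueues.
export async function POST(request: Request): Promise<Response> {
  let payload: { event: unknown; teamId: string; eventId: string };
  try {
    payload = await request.json();
  } catch {
    return Response.json({ error: "Invalid JSON" }, { status: 400 });
  }
  if (
    !payload ||
    typeof payload.teamId !== "string" ||
    typeof payload.eventId !== "string"
  ) {
    return Response.json({ error: "Invalid payload" }, { status: 400 });
  }
  await processAssistantMessage(payload);
  return Response.json({ ok: true });
}
```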
In `@apps/api/src/app/api/integrations/slack/jobs/process-mention/route.ts`:
- Around line 49-53: Wrap the JSON.parse call in a try/catch before calling
payloadSchema.safeParse so malformed JSON won’t throw; in the catch block log
the parse error (e.g., console.error("[slack/process-mention] Malformed JSON:",
err)) and return Response.json({ error: "Invalid JSON" }, { status: 400 }); then
proceed to call payloadSchema.safeParse(JSON.parse(body)) (or reuse the
already-parsed object) and keep the existing handling of parsed.success and
Response.json for schema validation failures.
- Around line 55-57: Wrap the call to processSlackMention(parsed.data) in a
try/catch inside route.ts, and in the catch log the error with the established
pattern (e.g., console.error("[slack/process-mention] Failed to process
mention:", error)); after logging rethrow the error so the existing behavior
that returns 500/QStash retry remains unchanged.
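Both fixes above reduce to the same shape: parse defensively, log with the `[slack/process-mention]` prefix, and keep schema validation separate. A minimal sketch of the parse step (the helper name is an assumption):

```typescript
type ParseResult = { ok: true; value: unknown } | { ok: false };

// Defensive JSON.parse so malformed QStash bodies become a 400
// instead of an unhandled throw.
function safeJsonParse(body: string): ParseResult {
  try {
    return { ok: true, value: JSON.parse(body) };
  } catch (err) {
    console.error("[slack/process-mention] Malformed JSON:", err);
    return { ok: false };
  }
}
```

The route would return `Response.json({ error: "Invalid JSON" }, { status: 400 })` when `ok` is false, then hand `value` to `payloadSchema.safeParse` as before.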
In `@apps/api/src/lib/slack-agent/run-agent.ts`:
- Around line 186-188: There's a typo in the Slack formatting example in the
message/template inside run-agent.ts: the italic example uses mismatched
delimiters `_italic*`; update that example to use matching underscores
(`_italic_`). Locate the string or comment block used for Slack formatting
examples (e.g., the message/template near the runAgent function or variable that
contains the bullets "When creating tasks, extract key details...") and replace
`_italic*` with `_italic_` so the example shows correct Slack italic formatting.
- Around line 261-263: The filter is comparing prefixed anthropic tool names to
entries in DENIED_SUPERSET_TOOLS (unprefixed), so denied tools never match; fix
by filtering before mapping or by normalizing names: either change the pipeline
to supersetToolsResult.tools.filter(t =>
!DENIED_SUPERSET_TOOLS.has(t.name)).map(t => mcpToolToAnthropicTool(t,
"superset")) so you check the original unprefixed names, or keep the pipeline
but compare against stripped names (e.g., in the .filter use t =>
!DENIED_SUPERSET_TOOLS.has(t.name.replace(/^superset_/, ''))), referencing
supersetToolsResult.tools, mcpToolToAnthropicTool, and DENIED_SUPERSET_TOOLS.
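The second fix above is easiest to see as filter-before-map. The tool shape, denylist contents, and the `mcpToolToAnthropicTool` stand-in below are illustrative assumptions:

```typescript
interface McpTool {
  name: string;
}

// Illustrative denylist entry; the real contents live in run-agent.ts.
const DENIED_SUPERSET_TOOLS = new Set(["delete_workspace"]);

// Stand-in for mcpToolToAnthropicTool: prefixes the tool name.
function mcpToolToAnthropicTool(tool: McpTool, prefix: string): { name: string } {
  return { name: `${prefix}_${tool.name}` };
}

// Filter on the ORIGINAL unprefixed names, then map to prefixed tools,
// so denylist entries actually match.
function buildSupersetTools(tools: McpTool[]): { name: string }[] {
  return tools
    .filter((t) => !DENIED_SUPERSET_TOOLS.has(t.name))
    .map((t) => mcpToolToAnthropicTool(t, "superset"));
}
```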
In `@apps/desktop/src/main/index.ts`:
- Around line 57-74: The local variable named `path` in the deep-link handler
shadows the imported `node:path` module; rename it (e.g., `routePath` or
`deepLinkPath`) and update all uses in this block (the assignment from
`parsed.pathname || "/"` and the
`mainWindow.webContents.send("deep-link-navigate", ...)` call) to the new
identifier to avoid collisions with the `path` module; keep the rest of the flow
(calling `focusMainWindow()`, getting `BrowserWindow.getAllWindows()`, and the
try/catch) unchanged.
In `@apps/desktop/src/renderer/index.tsx`:
- Around line 36-41: The renderer currently listens for deep links via
window.ipcRenderer.on and handleDeepLink; replace this raw IPC usage with the
project's tRPC pattern: add a deep-link subscription/procedure to the window
router (follow apps/desktop/src/lib/trpc/routers/window.ts for naming and
structure) and invoke/subscribe to that procedure from the renderer instead of
window.ipcRenderer.on, then call router.navigate({ to: path }) inside the tRPC
callback; keep the existing HMR cleanup logic intact.
In `@apps/web/src/app/task/`[slug]/page.tsx:
- Around line 8-11: The file-level JSDoc at the top of the task page (the
comment block in page.tsx) incorrectly says "falls back to dashboard" while the
component actually renders a clickable fallback link to the same deep link URL
(superset://task/{slug}); update that JSDoc to accurately describe current
behavior (e.g., "attempts to open Superset desktop app, falls back to a
clickable deep-link URL") or change the implementation to perform a dashboard
redirect if you prefer that behavior; locate the top comment in
apps/web/src/app/task/[slug]/page.tsx and edit the text to match the implemented
fallback (or implement a redirect in the page component if you choose the
alternate fix).
In `@packages/trpc/src/router/integration/slack/slack.ts`:
- Around line 83-89: The code builds a SlackConfig with botUserId defaulting to
an empty string which can produce an invalid SlackConfig; update the logic
around currentConfig and SlackConfig construction in slack.ts (the currentConfig
variable and the config object creation) to validate that currentConfig and
currentConfig.botUserId exist and are non-empty before proceeding—if botUserId
is missing, either throw a clear error (e.g., using the same error type your
router uses) or return a validation failure instead of using "" so downstream
code that uses SlackConfig.botUserId will never receive an empty value.
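A sketch of the guard described above; the `SlackConfig` shape and the error wording are assumptions based on the review text:

```typescript
interface SlackConfig {
  botUserId: string;
}

// Fail loudly instead of defaulting botUserId to "" when the stored
// config is missing or incomplete.
function buildSlackConfig(
  currentConfig: { botUserId?: string } | undefined,
): SlackConfig {
  const botUserId = currentConfig?.botUserId;
  if (!botUserId) {
    throw new Error(
      "[slack/config] Connection has no botUserId; reconnect the Slack integration",
    );
  }
  return { botUserId };
}
```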
🧹 Nitpick comments (24)
apps/web/src/app/task/[slug]/page.tsx (1)
13-15: Consider validating the slug before constructing the deep link. If `slug` is unexpectedly undefined or empty (e.g., due to routing edge cases), the deep link would be malformed. Adding a guard improves robustness.

💡 Suggested improvement

```diff
 export default function TaskDeepLinkPage() {
   const params = useParams<{ slug: string }>();
   const slug = params.slug;
+
+  if (!slug || typeof slug !== "string") {
+    return (
+      <div className="flex min-h-screen items-center justify-center">
+        <p className="text-muted-foreground">Invalid task link</p>
+      </div>
+    );
+  }
+
   const deepLink = `superset://task/${slug}`;
```

apps/desktop/src/main/index.ts (1)
66-71: Consider centralizing main window access. Both here and in `focusMainWindow()`, `BrowserWindow.getAllWindows()[0]` is used to get the main window. This assumes the first window is always the main window, which could break if other windows (e.g., devtools, settings) are created first. Since `MainWindow` is already imported (line 20), consider exposing a static reference or singleton accessor for safer retrieval. This is a pre-existing pattern, so deferring is acceptable.

apps/api/src/lib/slack-agent/mcp-clients.ts (3)
43-44: Add error handling for MCP client connection failures. Both `createSupersetMcpClient` and `createSlackMcpClient` call `client.connect(transport)` without error handling. Connection failures will propagate as unhandled exceptions without context.

Suggested improvement

```diff
-  await client.connect(transport);
-  return client;
+  try {
+    await client.connect(transport);
+    return client;
+  } catch (error) {
+    console.error("[slack-agent/mcp] Failed to connect to Superset MCP server", error);
+    throw error;
+  }
```

Apply a similar pattern to `createSlackMcpClient`. As per coding guidelines: "Never silently swallow errors... Log errors at minimum if not re-throwing."

Also applies to: 73-74
99-105: Edge case: `parseToolName` returns an empty `toolName` when there is no underscore. When `prefixedName` contains no underscore, the function returns `{ prefix: prefixedName, toolName: "" }`. This may cause issues if callers expect a valid tool name.

Consider throwing or documenting the edge case

```diff
 export function parseToolName(prefixedName: string): {
   prefix: string;
   toolName: string;
 } {
   const underscoreIndex = prefixedName.indexOf("_");
   if (underscoreIndex === -1) {
-    return { prefix: prefixedName, toolName: "" };
+    // Tool names should always be prefixed; log unexpected format
+    console.warn("[slack-agent/mcp] Unexpected tool name format (no prefix):", prefixedName);
+    return { prefix: "", toolName: prefixedName };
   }
```

Alternatively, document this behavior in the JSDoc.
58-66: Consider pinning the MCP server package version for reliability. Using `npx -y` downloads the package on demand at runtime, introducing a network dependency and potential version drift. If the Slack MCP server is a runtime dependency, add it to `package.json` and invoke it directly; otherwise, pin an explicit version.

Suggested improvement

Pin to the latest stable version (currently 2025.4.25):

```diff
 const transport = new StdioClientTransport({
   command: "npx",
-  args: ["-y", "@modelcontextprotocol/server-slack"],
+  args: ["-y", "@modelcontextprotocol/server-slack@2025.4.25"],
   env: {
```

Or, preferably, add `@modelcontextprotocol/server-slack` to your project dependencies and use the direct executable path.

apps/api/src/lib/slack-agent/work-objects.ts (2)
20-22: Address the TODO: extract the hardcoded URL to an environment variable. The hardcoded `WEB_APP_URL` will cause issues when deploying to different environments. This same constant also appears in `slack-blocks.ts` (line 49).

🔧 Suggested approach

Consider extracting this to `env.ts` and importing from there:

```diff
-// TODO: Use env var in production
-const WEB_APP_URL = "https://app.superset.sh";
+import { env } from "@/env";
+const WEB_APP_URL = env.NEXT_PUBLIC_WEB_URL;
```

This would also allow consolidating with `slack-blocks.ts`, which uses both the hardcoded constant and `env.NEXT_PUBLIC_WEB_URL`.
262-271: Duplicate priority formatting logic. This `formatPriorityLabel` function duplicates `formatPriority` in `slack-blocks.ts` (lines 249-258), though with different output formats (plain text vs emoji-prefixed). Consider extracting to a shared module if both are needed, or unifying if the emoji format is acceptable everywhere.
apps/api/src/lib/slack-agent/slack-blocks.ts (2)
48-49: Inconsistent URL source: hardcoded vs env variable. Line 49 uses a hardcoded `WEB_APP_URL` while line 93 uses `env.NEXT_PUBLIC_WEB_URL`. This inconsistency could cause unfurl links to not match attachment links in non-production environments.

🔧 Suggested fix

```diff
-// Production web app URL for unfurl links (localhost won't unfurl)
-const WEB_APP_URL = "https://app.superset.sh";
+// Use env var for consistency with attachment URLs
+const WEB_APP_URL = env.NEXT_PUBLIC_WEB_URL;
```

If the hardcoded URL is intentionally different for unfurl domain matching, add a comment explaining why.
51-83: Consider using a lookup object for action type handling. The repeated `if-else` chain for action types could be simplified with a lookup pattern per coding guidelines.

🔧 Example refactor

```typescript
const actionFormatters: Record<AgentActionType, (action: AgentAction) => string[]> = {
  task_created: (a) => ('tasks' in a ? a.tasks : []).map(t => `Created task <${WEB_APP_URL}/task/${t.slug}|${t.slug}>`),
  task_updated: (a) => ('tasks' in a ? a.tasks : []).map(t => `Updated task <${WEB_APP_URL}/task/${t.slug}|${t.slug}>`),
  task_deleted: (a) => ('tasks' in a ? a.tasks : []).map(t => `Deleted task ${t.slug}`),
  workspace_created: (a) => ('workspaces' in a ? a.workspaces : []).map(ws => `Created workspace *${ws.name}*${ws.branch ? ` on branch \`${ws.branch}\`` : ""}`),
  workspace_switched: (a) => ('workspaces' in a ? a.workspaces : []).map(ws => `Switched to workspace *${ws.name}*`),
};

export function formatActionsAsText(actions: AgentAction[]): string {
  return actions.flatMap(action => actionFormatters[action.type](action)).join("\n");
}
```

As per coding guidelines: "Use lookup objects/maps instead of repeated `if (provider === ...)` conditionals".

apps/api/src/app/api/integrations/slack/events/process-link-shared.ts (1)
64-98: Consider batching database queries for multiple links. The current implementation makes a separate DB query for each link. For messages with multiple Superset URLs, this could be optimized by collecting slugs first and querying in batch.

🔧 Suggested optimization

```typescript
// Collect all valid slugs first
const slugsToQuery = event.links
  .map(link => ({ url: link.url, slug: parseTaskSlugFromUrl(link.url) }))
  .filter((item): item is { url: string; slug: string } => item.slug !== null);

if (slugsToQuery.length === 0) return;

// Batch query all tasks
const tasksFound = await db.query.tasks.findMany({
  where: and(
    eq(tasks.organizationId, connection.organizationId),
    inArray(tasks.slug, slugsToQuery.map(s => s.slug)),
  ),
  with: { status: true, assignee: true, creator: true },
});

// Map tasks back to URLs
const taskBySlug = new Map(tasksFound.map(t => [t.slug, t]));
for (const { url, slug } of slugsToQuery) {
  const task = taskBySlug.get(slug);
  if (task) {
    const entity = createTaskWorkObject(task);
    entity.app_unfurl_url = url;
    entities.push(entity);
  }
}
```

apps/api/src/lib/slack-agent/run-agent.ts (3)
89-100: Hardcoded status "Backlog" may be incorrect. The status is hardcoded to `"Backlog"`, but the created task might have a different default status based on organization settings or workflow configuration. Consider using the actual status from the response data if available.

```diff
 if (toolName === "create_task" && data.created) {
   return {
     type: "task_created",
     tasks: data.created.map(
-      (t: { id: string; slug: string; title: string }) => ({
+      (t: { id: string; slug: string; title: string; status?: string }) => ({
         id: t.id,
         slug: t.slug,
         title: t.title,
-        status: "Backlog",
+        status: t.status ?? "Backlog",
       }),
     ),
   };
 }
```
296-303: Extract magic numbers to named constants. Per coding guidelines, magic numbers should be extracted to named constants at the module top. The model name and max_tokens are repeated at lines 298-299 and 385-386.

♻️ Suggested refactor

```diff
+const CLAUDE_MODEL = "claude-sonnet-4-5";
+const MAX_TOKENS = 2048;
+
 const SYSTEM_PROMPT = `You are a helpful assistant...`;
```

Then use these constants in both API calls:

```diff
 let response = await anthropic.messages.create({
-  model: "claude-sonnet-4-5",
-  max_tokens: 2048,
+  model: CLAUDE_MODEL,
+  max_tokens: MAX_TOKENS,
   system: contextualSystem,
```
402-417: Empty catch blocks silently swallow errors. Per coding guidelines, errors should not be silently swallowed. While cleanup errors during close are often non-critical, they should at minimum be logged for debugging.

♻️ Suggested fix

```diff
 if (supersetMcp) {
   try {
     await supersetMcp.close();
   } catch (err) {
-    // Ignore close errors
+    console.warn("[slack-agent] Failed to close Superset MCP client:", err);
   }
 }
 if (slackMcp) {
   try {
     await slackMcp.close();
   } catch (err) {
-    // Ignore close errors
+    console.warn("[slack-agent] Failed to close Slack MCP client:", err);
   }
 }
```

apps/api/src/app/api/integrations/slack/events/process-mention.ts (1)
98-106: Error message posting could fail silently. If `slack.chat.postMessage` fails while posting the error message, it will throw but won't be caught, potentially leaving the user without feedback. Consider wrapping it in a try-catch.

♻️ Suggested fix

```diff
 } catch (err) {
   console.error("[slack/process-mention] Agent error:", err);
   // Post error message to the channel
+  try {
     await slack.chat.postMessage({
       channel: event.channel,
       thread_ts: threadTs,
       text: `Sorry, something went wrong: ${err instanceof Error ? err.message : "Unknown error"}`,
     });
+  } catch (postErr) {
+    console.error("[slack/process-mention] Failed to post error message:", postErr);
+  }
 } finally {
```

apps/api/src/app/api/integrations/slack/events/process-assistant-message.ts (1)
40-54: Consider extracting shared connection lookup logic. This DB query pattern is duplicated across `processSlackMention`, `processAssistantMessage`, `processEntityDetails`, and `processLinkShared`. Consider extracting it to a shared helper.

♻️ Example helper

```typescript
// In a shared file like slack-connection.ts
export async function getSlackConnection(teamId: string) {
  const connection = await db.query.integrationConnections.findFirst({
    where: and(
      eq(integrationConnections.provider, "slack"),
      eq(integrationConnections.externalOrgId, teamId),
    ),
  });
  return connection;
}
```

apps/api/src/app/api/integrations/slack/events/route.ts (2)
172-172: Consider wrapping JSON.parse in a try-catch. While Slack should always send valid JSON, a malformed payload would cause an unhandled exception. Defensive parsing is recommended.

♻️ Suggested fix

```diff
-  const payload: SlackPayload = JSON.parse(body);
+  let payload: SlackPayload;
+  try {
+    payload = JSON.parse(body);
+  } catch {
+    console.error("[slack/events] Invalid JSON body");
+    return Response.json({ error: "Invalid JSON" }, { status: 400 });
+  }
```
13-121: Consider sharing interface definitions. The event interfaces (`SlackAppMentionEvent`, `SlackMessageImEvent`, etc.) are duplicated between this file and the individual processor files. Consider extracting them to a shared types file.

♻️ Example organization

```typescript
// apps/api/src/lib/slack-agent/types.ts
export interface SlackAppMentionEvent {
  type: "app_mention";
  user: string;
  // ...
}

export interface SlackMessageImEvent {
  type: "message";
  channel_type: "im";
  // ...
}
// etc.
```

apps/api/src/app/api/integrations/slack/connect/route.ts (2)
23-33: Validate query params with a Zod schema at the boundary. Manual checks miss `userId` shape validation and split error handling. A schema keeps validation consistent and future-proof.

🔧 Suggested refactor

```diff
 import { and, eq } from "drizzle-orm";
+import { z } from "zod";

 const SLACK_SCOPES = [
   // Core bot functionality
   "app_mentions:read",
@@
   "users:read",
 ].join(",");
+
+const SlackConnectQuerySchema = z.object({
+  organizationId: z.uuid(),
+  userId: z.uuid().optional(),
+});

 export async function GET(request: Request) {
   const url = new URL(request.url);
-  const organizationId = url.searchParams.get("organizationId");
+  const parsedQuery = SlackConnectQuerySchema.safeParse(
+    Object.fromEntries(url.searchParams),
+  );
+  if (!parsedQuery.success) {
+    return Response.json(
+      { error: "Invalid query parameters" },
+      { status: 400 },
+    );
+  }
+  const { organizationId, userId: devUserId } = parsedQuery.data;
   const isDev = env.NODE_ENV === "development";
-
-  if (!organizationId) {
-    return Response.json(
-      { error: "Missing organizationId parameter" },
-      { status: 400 },
-    );
-  }
```

As per coding guidelines (`apps/*/src/**/+(api|trpc)/**/*.{ts,tsx}`): Validate at boundaries using Zod schemas for tRPC inputs and API route bodies.
71-74: Avoid hardcoding the ngrok redirect URI. This will require code edits per developer. Prefer an env-driven base URL so devs can set `NEXT_PUBLIC_API_URL` (or a dedicated Slack redirect env var) to their ngrok domain.

♻️ Suggested change

```diff
-  const redirectUri = isDev
-    ? "https://6b3ce1c0b374.ngrok-free.app/api/integrations/slack/callback"
-    : `${env.NEXT_PUBLIC_API_URL}/api/integrations/slack/callback`;
+  const redirectUri = `${env.NEXT_PUBLIC_API_URL}/api/integrations/slack/callback`;
```

packages/trpc/src/router/integration/slack/utils.ts (2)
5-21: Consider using a params object and adding logging. Per coding guidelines, functions with 2+ parameters should accept a single params object with named properties. Additionally, the error path lacks logging, which would aid debugging.

Suggested refactor

```diff
-export async function verifyOrgMembership(
-  userId: string,
-  organizationId: string,
-) {
+export async function verifyOrgMembership({
+  userId,
+  organizationId,
+}: {
+  userId: string;
+  organizationId: string;
+}) {
   const membership = await db.query.members.findFirst({
     where: and(
       eq(members.organizationId, organizationId),
       eq(members.userId, userId),
     ),
   });
   if (!membership) {
+    console.error("[slack/verifyOrgMembership] Membership not found", {
+      userId,
+      organizationId,
+    });
     throw new Error("Not a member of this organization");
   }
   return { membership };
 }
```

As per coding guidelines: "Functions with 2+ parameters should accept a single params object with named properties" and "Use prefixed console logging with pattern `[domain/operation] message` for all logging".
23-31: Same refactor opportunity as `verifyOrgMembership`. For consistency with the suggested refactor above, this function would also benefit from a params object and logging on the error path.
apps/api/src/app/api/integrations/slack/callback/route.ts (2)
79-106: Consider adding a timeout to the token exchange request. The `fetch` call has no timeout configured. If Slack's API is slow or unresponsive, this request could hang indefinitely. Consider using `AbortController` with a timeout.

Example with timeout

```typescript
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 10000);
try {
  const tokenResponse = await fetch("https://slack.com/api/oauth.v2.access", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      client_id: env.SLACK_CLIENT_ID,
      client_secret: env.SLACK_CLIENT_SECRET,
      redirect_uri: redirectUri,
      code,
    }),
    signal: controller.signal,
  });
  // ...
} finally {
  clearTimeout(timeoutId);
}
```
113-138: Database operation lacks error handling. If the database insert/upsert fails, the error will bubble up unhandled, potentially showing a generic error page to the user. Consider wrapping it in a try-catch to redirect with a friendly error message.

Suggested error handling

```diff
+  try {
     await db
       .insert(integrationConnections)
       .values({
         organizationId,
         connectedByUserId: userId,
         provider: "slack",
         accessToken: tokenData.access_token,
         externalOrgId: tokenData.team.id,
         externalOrgName: tokenData.team.name,
         config,
       })
       .onConflictDoUpdate({
         target: [
           integrationConnections.organizationId,
           integrationConnections.provider,
         ],
         set: {
           accessToken: tokenData.access_token,
           externalOrgId: tokenData.team.id,
           externalOrgName: tokenData.team.name,
           connectedByUserId: userId,
           config,
           updatedAt: new Date(),
         },
       });
+  } catch (error) {
+    console.error("[slack/callback] Database error:", error);
+    return Response.redirect(
+      `${env.NEXT_PUBLIC_WEB_URL}/integrations/slack?error=database_error`,
+    );
+  }
```

apps/api/src/app/api/agent/[transport]/route.ts (1)
7-7: Prefer a params object for `verifyToken`. This aligns with the codebase guideline for functions with 2+ parameters. As per coding guidelines: functions with 2+ parameters should accept a single params object with named properties instead of positional arguments.

♻️ Suggested refactor

```diff
-async function verifyToken(req: Request, bearerToken?: string) {
+async function verifyToken({
+  req,
+  bearerToken,
+}: {
+  req: Request;
+  bearerToken?: string;
+}) {
```

```diff
-const handler = withMcpAuth(baseHandler, verifyToken, { required: true });
+const handler = withMcpAuth(
+  baseHandler,
+  (req, bearerToken) => verifyToken({ req, bearerToken }),
+  { required: true },
+);
```

Also applies to: 99-99
```typescript
// 1. Try internal service auth (for Slack agent and other internal services)
const internalOrgId = req.headers.get("X-Internal-Organization-Id");
const internalUserId = req.headers.get("X-Internal-User-Id");
if (internalOrgId && internalUserId) {
  // Internal requests are trusted when running in the same process
  // This is used by the Slack agent to call MCP tools on behalf of users
  console.log("[mcp/auth] Internal service auth:", {
    organizationId: internalOrgId,
    userId: internalUserId,
  });
  return {
    token: "internal",
    clientId: "slack-agent",
    scopes: ["mcp:full"],
    extra: {
      mcpContext: {
        userId: internalUserId,
        organizationId: internalOrgId,
      } satisfies McpContext,
    },
  };
}

// 2. Try session auth
```
Block header-only internal auth to prevent spoofed privilege escalation.
Any external caller that can set X-Internal-* headers can impersonate any user/org and receive mcp:full because this path runs before session/OAuth checks. Require a shared secret (or signed header) and reject mismatches to prevent auth bypass.
🔒 Suggested hardening
```diff
+  const internalAuth = req.headers.get("X-Internal-Auth");
   const internalOrgId = req.headers.get("X-Internal-Organization-Id");
   const internalUserId = req.headers.get("X-Internal-User-Id");
   if (internalOrgId && internalUserId) {
+    if (!internalAuth || internalAuth !== env.INTERNAL_SERVICE_TOKEN) {
+      console.error("[mcp/auth] Invalid internal auth token");
+      return undefined;
+    }
     // Internal requests are trusted when running in the same process
     // This is used by the Slack agent to call MCP tools on behalf of users
     console.log("[mcp/auth] Internal service auth:", {
```

Also applies to: 54-54
🤖 Prompt for AI Agents
In `@apps/api/src/app/api/agent/`[transport]/route.ts around lines 8 - 31, The
internal-auth branch that returns token:"internal" based solely on
X-Internal-Organization-Id and X-Internal-User-Id (variables
internalOrgId/internalUserId) is vulnerable to header spoofing; change it to
validate a shared secret or signed header before trusting these values: require
an additional header (e.g. X-Internal-Auth-Secret or X-Internal-Signature),
verify it against a server-side secret (process.env.INTERNAL_AUTH_SECRET) or
validate an HMAC signature/timestamp, and only then return the internal
token/clientId/scopes and mcpContext; if validation fails, continue to the
normal session/OAuth auth flow or reject the request.
```typescript
const internalOrgId = req.headers.get("X-Internal-Organization-Id");
const internalUserId = req.headers.get("X-Internal-User-Id");
if (internalOrgId && internalUserId) {
  // Internal requests are trusted when running in the same process
  // This is used by the Slack agent to call MCP tools on behalf of users
  console.log("[mcp/auth] Internal service auth:", {
    organizationId: internalOrgId,
    userId: internalUserId,
  });
  return {
    token: "internal",
    clientId: "slack-agent",
    scopes: ["mcp:full"],
    extra: {
      mcpContext: {
        userId: internalUserId,
        organizationId: internalOrgId,
      } satisfies McpContext,
```
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the full route file to understand the complete context
cat -n apps/api/src/app/api/agent/[transport]/route.ts

Repository: superset-sh/superset
Length of output: 3574
🏁 Script executed:
# Search for McpContext type definition to understand expected format
rg -A 5 -B 5 "type McpContext|interface McpContext" apps/

Repository: superset-sh/superset
Length of output: 253
🏁 Script executed:
# Check if Zod is already used for API route validation in this codebase
rg "import.*from.*zod" apps/api/src/app/api --type ts --type tsx

Repository: superset-sh/superset
Length of output: 91
🏁 Script executed:
# Look for other API route handlers in the same directory/pattern to see validation patterns
find apps/api/src/app/api -name "route.ts" -type f | head -5 | xargs -I {} bash -c 'echo "=== {} ===" && head -30 {}'Repository: superset-sh/superset
Length of output: 3666
🏁 Script executed:
# Search for Zod usage in the entire api directory
rg "import.*zod" apps/api/src/app --type tsRepository: superset-sh/superset
Length of output: 488
🏁 Script executed:
# Check what auth library returns and if values are pre-validated
rg -A 3 "getSession|getMcpSession" apps/api/src --type ts | head -40Repository: superset-sh/superset
Length of output: 2578
🏁 Script executed:
# Look at how the internal headers are being used downstream
rg "mcpContext\|userId\|organizationId" apps/api/src/lib/mcp --type ts -A 2 -B 2Repository: superset-sh/superset
Length of output: 46
🏁 Script executed:
# Check the mcp-handler library or how mcpContext is used downstream
rg "mcpContext" apps/ --type ts -B 2 -A 2Repository: superset-sh/superset
Length of output: 1992
🏁 Script executed:
# Look for any ID format validation patterns in the codebase
rg "userId.*string\|organizationId.*string" apps/ --type ts -B 3 -A 3 | head -50Repository: superset-sh/superset
Length of output: 46
🏁 Script executed:
# Check if there's any validation applied to the internal headers elsewhere
rg "X-Internal-Organization-Id\|X-Internal-User-Id" apps/ --type ts -B 3 -A 3Repository: superset-sh/superset
Length of output: 46
🏁 Script executed:
# Verify the actual threat model - check if empty/null values would cause issues downstream
rg -A 10 "export.*McpContext|interface.*McpContext" apps/api/src/lib/mcp/auth.tsRepository: superset-sh/superset
Length of output: 137
Validate internal auth headers at the boundary using Zod.
This API route must validate the internal org and user ID headers before using them in mcpContext, per the API route validation guideline.
✅ Example validation
+import { z } from "zod";
+
+const internalAuthSchema = z.object({
+ internalOrgId: z.string().min(1),
+ internalUserId: z.string().min(1),
+});
+
async function verifyToken(req: Request, bearerToken?: string) {
// 1. Try internal service auth (for Slack agent and other internal services)
const internalOrgId = req.headers.get("X-Internal-Organization-Id");
const internalUserId = req.headers.get("X-Internal-User-Id");
if (internalOrgId && internalUserId) {
+ const parsed = internalAuthSchema.safeParse({ internalOrgId, internalUserId });
+ if (!parsed.success) {
+ console.error("[mcp/auth] Invalid internal auth headers");
+ return undefined;
+ }
+ const { internalOrgId: orgId, internalUserId: userId } = parsed.data;
// Internal requests are trusted when running in the same process
// This is used by the Slack agent to call MCP tools on behalf of users
console.log("[mcp/auth] Internal service auth:", {
- organizationId: internalOrgId,
- userId: internalUserId,
+ organizationId: orgId,
+ userId: userId,
});
return {
token: "internal",
clientId: "slack-agent",
scopes: ["mcp:full"],
extra: {
mcpContext: {
- userId: internalUserId,
- organizationId: internalOrgId,
+ userId,
+ organizationId: orgId,
} satisfies McpContext,
},
};
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

const internalOrgId = req.headers.get("X-Internal-Organization-Id");
const internalUserId = req.headers.get("X-Internal-User-Id");
if (internalOrgId && internalUserId) {
  // Internal requests are trusted when running in the same process
  // This is used by the Slack agent to call MCP tools on behalf of users
  console.log("[mcp/auth] Internal service auth:", {
    organizationId: internalOrgId,
    userId: internalUserId,
  });
  return {
    token: "internal",
    clientId: "slack-agent",
    scopes: ["mcp:full"],
    extra: {
      mcpContext: {
        userId: internalUserId,
        organizationId: internalOrgId,
      } satisfies McpContext,

After:

const internalOrgId = req.headers.get("X-Internal-Organization-Id");
const internalUserId = req.headers.get("X-Internal-User-Id");
if (internalOrgId && internalUserId) {
  const parsed = internalAuthSchema.safeParse({ internalOrgId, internalUserId });
  if (!parsed.success) {
    console.error("[mcp/auth] Invalid internal auth headers");
    return undefined;
  }
  const { internalOrgId: orgId, internalUserId: userId } = parsed.data;
  // Internal requests are trusted when running in the same process
  // This is used by the Slack agent to call MCP tools on behalf of users
  console.log("[mcp/auth] Internal service auth:", {
    organizationId: orgId,
    userId: userId,
  });
  return {
    token: "internal",
    clientId: "slack-agent",
    scopes: ["mcp:full"],
    extra: {
      mcpContext: {
        userId,
        organizationId: orgId,
      } satisfies McpContext,
    },
  };
}
🤖 Prompt for AI Agents
In `@apps/api/src/app/api/agent/[transport]/route.ts` around lines 9 - 26, add Zod
validation for the internal headers before constructing mcpContext: define a
small Zod schema (e.g., InternalAuthSchema with organizationId and userId as
nonempty strings or UUIDs) in the route handler in
apps/api/src/app/api/agent/[transport]/route.ts, run safeParse on {
organizationId: internalOrgId, userId: internalUserId }, and if validation fails
log the error and reject/return a 400 or unauthorized response; if it passes,
use the parsed values (not the raw headers) to populate the McpContext and the
returned token/clientId/scopes structure.
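For comparison, the shape of that boundary check can be shown without the Zod dependency; the helper name and the Headers-like parameter type below are illustrative, not code from this PR:

```typescript
// Trust the internal headers only when both are present and non-empty.
// Header names match the route under review.
function parseInternalAuthHeaders(headers: {
  get(name: string): string | null;
}): { organizationId: string; userId: string } | null {
  const organizationId = headers.get("X-Internal-Organization-Id");
  const userId = headers.get("X-Internal-User-Id");
  // Reject missing, empty, or whitespace-only values
  if (!organizationId?.trim() || !userId?.trim()) {
    return null;
  }
  return { organizationId, userId };
}
```

The Zod `safeParse` in the suggested diff enforces the same invariant, with the added benefit of a reusable, composable schema.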
const supersetTools = supersetToolsResult.tools
  .map((t) => mcpToolToAnthropicTool(t, "superset"))
  .filter((t) => !DENIED_SUPERSET_TOOLS.has(t.name));
Bug: Filter comparison uses prefixed names but denylist has unprefixed names.
The mcpToolToAnthropicTool function adds a prefix (e.g., "superset_navigate_to_workspace"), but DENIED_SUPERSET_TOOLS contains unprefixed names (e.g., "navigate_to_workspace"). The filter will never match and denied tools will not be excluded.
const supersetTools = supersetToolsResult.tools
.map((t) => mcpToolToAnthropicTool(t, "superset"))
- .filter((t) => !DENIED_SUPERSET_TOOLS.has(t.name));
+ .filter((t) => !DENIED_SUPERSET_TOOLS.has(t.name.replace("superset_", "")));

Alternatively, update the denylist to use prefixed names:
const DENIED_SUPERSET_TOOLS = new Set([
- "navigate_to_workspace",
- "switch_workspace",
- "get_app_context",
+ "superset_navigate_to_workspace",
+ "superset_switch_workspace",
+ "superset_get_app_context",
]);

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

const supersetTools = supersetToolsResult.tools
  .map((t) => mcpToolToAnthropicTool(t, "superset"))
  .filter((t) => !DENIED_SUPERSET_TOOLS.has(t.name));

After:

const supersetTools = supersetToolsResult.tools
  .map((t) => mcpToolToAnthropicTool(t, "superset"))
  .filter((t) => !DENIED_SUPERSET_TOOLS.has(t.name.replace("superset_", "")));
🤖 Prompt for AI Agents
In `@apps/api/src/lib/slack-agent/run-agent.ts` around lines 261 - 263, The filter
is comparing prefixed anthropic tool names to entries in DENIED_SUPERSET_TOOLS
(unprefixed), so denied tools never match; fix by filtering before mapping or by
normalizing names: either change the pipeline to
supersetToolsResult.tools.filter(t => !DENIED_SUPERSET_TOOLS.has(t.name)).map(t
=> mcpToolToAnthropicTool(t, "superset")) so you check the original unprefixed
names, or keep the pipeline but compare against stripped names (e.g., in the
.filter use t => !DENIED_SUPERSET_TOOLS.has(t.name.replace(/^superset_/, ''))),
referencing supersetToolsResult.tools, mcpToolToAnthropicTool, and
DENIED_SUPERSET_TOOLS.
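The mismatch is easy to reproduce in isolation. A minimal sketch with illustrative tool names, where `addPrefix` stands in for what `mcpToolToAnthropicTool` does to names:

```typescript
// Mirrors the unprefixed denylist from the review comment
const DENIED = new Set(["navigate_to_workspace"]);

function addPrefix(name: string): string {
  return `superset_${name}`;
}

// Buggy: compares prefixed names against unprefixed denylist entries,
// so nothing is ever filtered out.
function filterBuggy(names: string[]): string[] {
  return names.map(addPrefix).filter((n) => !DENIED.has(n));
}

// Fixed: strip the prefix (anchored, so only a leading match) before lookup.
function filterFixed(names: string[]): string[] {
  return names
    .map(addPrefix)
    .filter((n) => !DENIED.has(n.replace(/^superset_/, "")));
}
```

Note the anchored `/^superset_/` pattern: an unanchored `String.replace` with a plain string only replaces the first occurrence anyway, but the anchor makes the intent explicit.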
// For all other deep links, extract path and navigate in renderer
// e.g. superset://workspace/123 -> /workspace/123
try {
  const parsed = new URL(url);
  const path = parsed.pathname || "/";

  const result = await handleAuthCallback(authParams);
  if (result.success) {
    focusMainWindow();
  } else {
    console.error("[main] Auth deep link failed:", result.error);

  // Navigate in renderer via loading the route directly
  const windows = BrowserWindow.getAllWindows();
  if (windows.length > 0) {
    const mainWindow = windows[0];
    // Send navigation request to renderer
    mainWindow.webContents.send("deep-link-navigate", path);
  }
} catch (err) {
  console.error("[main] Failed to parse deep link:", err);
}
Variable path shadows the node:path import.
Line 61 declares const path = parsed.pathname || "/" which shadows the path module imported at line 1. If anyone later adds code that needs path.join(), path.resolve(), etc. within this function, it will fail silently or throw a confusing error.
Rename to avoid the collision:
Proposed fix
try {
const parsed = new URL(url);
- const path = parsed.pathname || "/";
+ const targetPath = parsed.pathname || "/";
focusMainWindow();
// Navigate in renderer via loading the route directly
const windows = BrowserWindow.getAllWindows();
if (windows.length > 0) {
const mainWindow = windows[0];
// Send navigation request to renderer
- mainWindow.webContents.send("deep-link-navigate", path);
+ mainWindow.webContents.send("deep-link-navigate", targetPath);
}
} catch (err) {
console.error("[main] Failed to parse deep link:", err);
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

// For all other deep links, extract path and navigate in renderer
// e.g. superset://workspace/123 -> /workspace/123
try {
  const parsed = new URL(url);
  const path = parsed.pathname || "/";
  const result = await handleAuthCallback(authParams);
  if (result.success) {
    focusMainWindow();
  } else {
    console.error("[main] Auth deep link failed:", result.error);
  // Navigate in renderer via loading the route directly
  const windows = BrowserWindow.getAllWindows();
  if (windows.length > 0) {
    const mainWindow = windows[0];
    // Send navigation request to renderer
    mainWindow.webContents.send("deep-link-navigate", path);
  }
} catch (err) {
  console.error("[main] Failed to parse deep link:", err);
}

After:

// For all other deep links, extract path and navigate in renderer
// e.g. superset://workspace/123 -> /workspace/123
try {
  const parsed = new URL(url);
  const targetPath = parsed.pathname || "/";
  focusMainWindow();
  // Navigate in renderer via loading the route directly
  const windows = BrowserWindow.getAllWindows();
  if (windows.length > 0) {
    const mainWindow = windows[0];
    // Send navigation request to renderer
    mainWindow.webContents.send("deep-link-navigate", targetPath);
  }
} catch (err) {
  console.error("[main] Failed to parse deep link:", err);
}
🤖 Prompt for AI Agents
In `@apps/desktop/src/main/index.ts` around lines 57 - 74, The local variable
named `path` in the deep-link handler shadows the imported `node:path` module;
rename it (e.g., `routePath` or `deepLinkPath`) and update all uses in this
block (the assignment from `parsed.pathname || "/"` and the
`mainWindow.webContents.send("deep-link-navigate", ...)` call) to the new
identifier to avoid collisions with the `path` module; keep the rest of the flow
(calling `focusMainWindow()`, getting `BrowserWindow.getAllWindows()`, and the
try/catch) unchanged.
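One more detail worth testing in this handler: for non-special schemes like `superset://`, the WHATWG URL parser puts the first segment after `//` into the host, not the pathname, so `parsed.pathname` alone may not be the full route the comment implies. A small sketch; `deepLinkToRoute` is our illustrative helper, not code from this PR:

```typescript
// superset://workspace/123 parses as hostname "workspace", pathname "/123",
// so a renderer route like "/workspace/123" must be rebuilt from both parts.
function deepLinkToRoute(url: string): string {
  const parsed = new URL(url);
  return `/${parsed.hostname}${parsed.pathname}`;
}
```

If the renderer routes really expect `/workspace/123`, sending only `parsed.pathname` would navigate to `/123`; this is worth a unit test alongside the rename.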
// Handle deep link navigation from main process
const handleDeepLink = (path: string) => {
  console.log("[deep-link] Navigating to:", path);
  router.navigate({ to: path });
};
window.ipcRenderer.on("deep-link-navigate", handleDeepLink);
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
🏁 Script executed:
# Find preload script and IPC-related files
fd -t f "preload" --extension ts --extension jsRepository: superset-sh/superset
Length of output: 46
🏁 Script executed:
# Check for contextBridge and ipcRenderer exposure
rg "contextBridge|exposeInMainWorld" --type ts --type js -B 2 -A 5Repository: superset-sh/superset
Length of output: 1866
🏁 Script executed:
# Examine the actual renderer file under review
cat -n apps/desktop/src/renderer/index.tsxRepository: superset-sh/superset
Length of output: 1890
🏁 Script executed:
# Check for tRPC setup in the project
fd -t f "trpc" --extension ts --extension jsRepository: superset-sh/superset
Length of output: 276
🏁 Script executed:
# Get full preload/index.ts to see how ipcRendererAPI is defined
cat -n apps/desktop/src/preload/index.tsRepository: superset-sh/superset
Length of output: 2951
🏁 Script executed:
# Check how deep-link-navigate is sent from main process
rg "deep-link-navigate" --type ts --type tsx -B 3 -A 3Repository: superset-sh/superset
Length of output: 91
🏁 Script executed:
# Examine electron-trpc setup to understand the pattern
cat -n apps/desktop/src/renderer/lib/electron-trpc.tsRepository: superset-sh/superset
Length of output: 615
🏁 Script executed:
# Check api-trpc-client to see how it's used
head -80 apps/desktop/src/renderer/lib/api-trpc-client.tsRepository: superset-sh/superset
Length of output: 829
🏁 Script executed:
# Search for deep-link-navigate being sent from main process
rg "deep-link-navigate" --type ts -B 5 -A 5Repository: superset-sh/superset
Length of output: 1854
🏁 Script executed:
# Check the tRPC router setup to see existing patterns
find apps/desktop -name "*router*" -o -name "*trpc*" | grep -E "\.(ts|tsx)$" | head -20Repository: superset-sh/superset
Length of output: 250
🏁 Script executed:
# Look for tRPC router definition in main process
fd "router" --extension ts --path "*/lib/trpc*"Repository: superset-sh/superset
Length of output: 297
🏁 Script executed:
# Check if there's a tRPC router in the main process
find apps/desktop/src/main -name "*trpc*" -o -name "*router*" | head -20Repository: superset-sh/superset
Length of output: 46
🏁 Script executed:
# Look for existing IPC procedures in tRPC
rg "procedure\|router\." apps/desktop/src/main/lib/trpc --type ts -B 2 -A 5 2>/dev/null || echo "File not found, searching more broadly"Repository: superset-sh/superset
Length of output: 102
🏁 Script executed:
# Search for any tRPC router definition
rg "t\.procedure\|router\.add\|createRouter" apps/desktop/src --type ts -B 2 -A 5 | head -50Repository: superset-sh/superset
Length of output: 46
🏁 Script executed:
# Check if there's a tRPC router anywhere in the project
find . -name "*.ts" -type f | xargs rg "createTRPCRouter|router\(" | grep -v node_modules | head -30Repository: superset-sh/superset
Length of output: 2280
🏁 Script executed:
# Search for main.ts or index.ts in main directory to see overall structure
cat -n apps/desktop/src/main/index.ts | head -100Repository: superset-sh/superset
Length of output: 3803
🏁 Script executed:
# Look for any existing tRPC patterns in the codebase
rg "electronTrpc\|createTRPC" apps/desktop --type ts -B 2 -A 5 | head -60Repository: superset-sh/superset
Length of output: 46
🏁 Script executed:
# Verify one more tRPC procedure example to confirm the pattern
cat apps/desktop/src/lib/trpc/routers/window.ts | head -50Repository: superset-sh/superset
Length of output: 1483
Use tRPC for deep-link navigation instead of raw IPC.
The project enforces tRPC for all Electron interprocess communication. Create a procedure in the tRPC router and call it from the renderer, following the established pattern in apps/desktop/src/lib/trpc/routers/window.ts.
The HMR cleanup is correct.
🤖 Prompt for AI Agents
In `@apps/desktop/src/renderer/index.tsx` around lines 36 - 41, The renderer
currently listens for deep links via window.ipcRenderer.on and handleDeepLink;
replace this raw IPC usage with the project's tRPC pattern: add a deep-link
subscription/procedure to the window router (follow
apps/desktop/src/lib/trpc/routers/window.ts for naming and structure) and
invoke/subscribe to that procedure from the renderer instead of
window.ipcRenderer.on, then call router.navigate({ to: path }) inside the tRPC
callback; keep the existing HMR cleanup logic intact.
/**
 * Deep link passthrough page for tasks.
 * Attempts to open the Superset desktop app, falls back to dashboard.
 */
Comment is inconsistent with actual behavior.
The JSDoc states "falls back to dashboard" but the actual fallback is a clickable link to the same deep link URL (superset://task/{slug}), not a dashboard redirect.
📝 Suggested fix
/**
* Deep link passthrough page for tasks.
- * Attempts to open the Superset desktop app, falls back to dashboard.
+ * Attempts to open the Superset desktop app with a manual retry link as fallback.
 */

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

/**
 * Deep link passthrough page for tasks.
 * Attempts to open the Superset desktop app, falls back to dashboard.
 */

After:

/**
 * Deep link passthrough page for tasks.
 * Attempts to open the Superset desktop app with a manual retry link as fallback.
 */
🤖 Prompt for AI Agents
In `@apps/web/src/app/task/`[slug]/page.tsx around lines 8 - 11, The file-level
JSDoc at the top of the task page (the comment block in page.tsx) incorrectly
says "falls back to dashboard" while the component actually renders a clickable
fallback link to the same deep link URL (superset://task/{slug}); update that
JSDoc to accurately describe current behavior (e.g., "attempts to open Superset
desktop app, falls back to a clickable deep-link URL") or change the
implementation to perform a dashboard redirect if you prefer that behavior;
locate the top comment in apps/web/src/app/task/[slug]/page.tsx and edit the
text to match the implemented fallback (or implement a redirect in the page
component if you choose the alternate fix).
Add web app integration pages (list card, detail page, ConnectionControls, ErrorHandler) and desktop app integration card for Slack. Replace hardcoded ngrok URLs in OAuth routes and manifest with env var / production URLs.
Force-pushed from 493d18b to 9502566.
Actionable comments posted: 14
🤖 Fix all issues with AI agents
In @.mcp.json:
- Around line 5-9: The .mcp.json file currently contains hardcoded localhost
endpoints ("url", "oauth.authorizationUrl", "oauth.tokenUrl") and should not be
tracked; add .mcp.json to .gitignore and commit a template like
.mcp.json.example containing placeholder values for "url", "oauth.clientId",
"oauth.authorizationUrl", and "oauth.tokenUrl" (or alternatively update the
config to support environment variable interpolation for those keys if the
loader supports it), and update any README/docs to instruct developers to copy
the example to .mcp.json and fill in their environment-specific values.
In `@apps/api/src/app/api/integrations/slack/callback/route.ts`:
- Around line 75-99: Wrap the token exchange/fetch and JSON parsing in a
try/catch in the Slack callback handler so network errors or invalid JSON are
caught; specifically protect the call that assigns tokenResponse (fetch to
"https://slack.com/api/oauth.v2.access") and the subsequent tokenData = await
tokenResponse.json() and on any thrown error log it (include the error object)
and return the same user-facing redirect used for token failures (e.g., redirect
to `${env.NEXT_PUBLIC_WEB_URL}/integrations/slack?error=token_exchange_failed`);
make sure to reference and handle tokenResponse and tokenData paths in the catch
so you don’t leak internal stack traces.
- Around line 1-42: Validate the OAuth callback query params at the API boundary
by adding a Zod schema and parsing the incoming request in the GET handler:
import z from zod, define a schema like z.object({ code: z.string().nonempty(),
state: z.string().nonempty() }) and parse the URL searchParams into that schema
before using code or state; on parse failure redirect to
`${env.NEXT_PUBLIC_WEB_URL}/integrations/slack?error=missing_params` (same
redirect used currently) and only proceed to call verifySignedState or further
logic after successful validation so code and state are guaranteed non-empty.
In `@apps/api/src/app/api/integrations/slack/events/process-assistant-message.ts`:
- Around line 115-123: The catch block for slack.assistant.threads.setStatus
silently swallows errors; change it to log the error with context instead of
ignoring it — e.g., in the try/catch around slack.assistant.threads.setStatus({
channel_id: event.channel, thread_ts: threadTs, status: "" }) call, capture the
caught error and call your module logger (or console.error) with a descriptive
message including event.channel and threadTs and the error object so failures to
clear status are recorded.
- Around line 9-60: Add a guard inside processAssistantMessage to ignore
bot/self messages by checking fields on the incoming SlackMessageImEvent (e.g.,
if event.bot_id is set or event.subtype === "bot_message" or if event.user is
missing) and return early; place this right after the initial console.log and
before using threadTs or creating replies. Also update the catch block that
clears the Slack status (the error handler after calling the Slack client via
createSlackClient) to log the caught error using console.error or
processLogger.error with context (include teamId, eventId, and channel/threadTs)
instead of silently swallowing it.
In `@apps/api/src/app/api/integrations/slack/events/process-link-shared.ts`:
- Around line 100-110: Update the slack.chat.unfurl call in
process-link-shared.ts to use the (unfurl_id, source) signature instead of
(channel, ts) when handling link_shared events: extract event.unfurl_id and
event.source and pass both to slack.chat.unfurl along with the metadata.entities
payload, and remove the conflicting channel and ts fields when unfurl_id/source
are present; apply this (unfurl_id, source) approach for both "composer" and
"conversations_history" cases so the API call is always valid and consistent.
- Around line 12-36: Add runtime validation for incoming Slack link_shared
payloads by defining a Zod schema matching the SlackLinkSharedEvent shape (type
literal "link_shared", user, channel, message_ts, unfurl_id, source enum, links
array of {url, domain}, event_ts) and use it in the API boundary (route.ts) to
parse/validate the raw event before calling processLinkShared; replace the
direct call to processLinkShared({ event, teamId: team_id, eventId: event_id })
with one that passes the parsed/validated object (e.g., const validatedEvent =
SlackLinkSharedEventSchema.parse(event); processLinkShared({ event:
validatedEvent, teamId: team_id, eventId: event_id })). Ensure the Zod schema
name and the validate-and-pass step are colocated in route.ts near the existing
invocation so processLinkShared and the SlackLinkSharedEvent type receive only
validated data.
In `@apps/api/src/app/api/integrations/slack/events/process-mention.ts`:
- Around line 108-116: The catch block that swallows errors from
slack.reactions.remove should be changed to capture the error (e.g., catch
(err)) and log it with context: include the action ("remove reaction"), the
reaction name ("eyes"), and identifiers event.channel and event.ts, using the
module's logger (or console.error if none). Update the try/catch around
slack.reactions.remove to log the error and relevant event metadata instead of
leaving the catch empty.
In `@apps/api/src/lib/slack-agent/run-agent.ts`:
- Around line 405-418: Replace the empty catch blocks that silently swallow
errors in the MCP cleanup with logs: when calling cleanupSuperset() (symbol
cleanupSuperset) and slackMcp.close() (symbol slackMcp) catch the error as e and
log it using a prefixed console message like "[slack-agent/cleanup] Failed to
close <cleanupSuperset|slackMcp>: " plus the error details so failures are
diagnosable; ensure the log includes which operation failed (cleanupSuperset vs
slackMcp) and the error object/string.
- Around line 152-166: The parseTextContent function currently swallows JSON
parse errors and uses any; change its signature to accept content: unknown,
narrow types inside (use content?.[0] after asserting array/object), and in the
catch block log the parse failure once with context using a prefixed message
(e.g., "parseTextContent: failed to parse tool response") along with the caught
error and the raw contentItem.text, then continue to return null to preserve the
graceful fallback; reference the function name parseTextContent and the
identifiers contentItem and contentItem.text when making these edits.
In `@apps/api/src/lib/slack-agent/work-objects.ts`:
- Around line 281-295: The parseTaskSlugFromUrl function currently swallows URL
parse errors and returns null; update its catch block to log the error with
context instead of silently returning—use the project's logger (e.g.,
processLogger or the established logger used elsewhere in this file) to emit a
descriptive message including the input url and the caught error, then return
null as before; make sure the log call is added inside the catch in
parseTaskSlugFromUrl so failures are recorded for debugging.
- Around line 17-23: Replace the hardcoded WEB_APP_URL constant with a value
read from environment/config so deployments can override it: change the
WEB_APP_URL declaration in work-objects.ts to pull from the app config or
process.env (e.g., process.env.WEB_APP_URL or your central config export) and
remove the TODO; keep SUPERSET_PRODUCT_NAME as-is, and ensure the new constant
has an appropriate fallback or type so TypeScript compilation still passes
(e.g., assert string or provide a default from config).
In
`@apps/web/src/app/`(dashboard)/integrations/slack/components/ConnectionControls/ConnectionControls.tsx:
- Around line 34-45: The disconnect mutation (disconnectMutation using
trpc.integration.slack.disconnect.mutationOptions) is missing error handling so
failures are silently ignored; add an onError callback to the mutation options
that logs the error with a prefixed message (e.g.,
console.error("[slack/disconnect] ...", error)) and optionally surface user
feedback (toast or set state) so the UI informs the user on failure; keep the
existing onSuccess logic (queryClient.invalidateQueries for
trpc.integration.slack.getConnection with organizationId and router.refresh)
intact and only augment the mutation options with the onError handler.
In `@packages/db/drizzle/0016_slack_integration.sql`:
- Line 1: Remove the manual duplicate migration file 0015_slack_integration.sql
so only the auto-generated 0016_slack_integration.sql remains (the latter is
registered in packages/db/drizzle/meta/_journal.json at idx 16); verify that
ALTER TYPE "public"."integration_provider" ADD VALUE 'slack'; exists only in
0016_slack_integration.sql and then delete 0015_slack_integration.sql from the
repository and commit the change.
🧹 Nitpick comments (12)
apps/web/src/app/tasks/[slug]/page.tsx (1)
24-29: Extract image dimensions into named constants.

Keeps magic numbers out of JSX and aligns with the codebase guideline. As per coding guidelines: Avoid magic numbers by extracting them to named constants at module top.

♻️ Suggested refactor

+const TITLE_IMAGE_WIDTH = 280;
+const TITLE_IMAGE_HEIGHT = 86;
+
 export default function TaskDeepLinkPage() {
   const params = useParams<{ slug: string }>();
   const slug = params.slug;
   const deepLink = `superset://tasks/${slug}`;
@@
       <Image
         src="/title.svg"
         alt="Superset"
-        width={280}
-        height={86}
+        width={TITLE_IMAGE_WIDTH}
+        height={TITLE_IMAGE_HEIGHT}
         priority
       />

apps/web/src/app/(dashboard)/integrations/slack/components/ConnectionControls/ConnectionControls.tsx (1)
47-49: Consider adding error handling for the OAuth redirect.

If the environment variable is misconfigured, the redirect URL could be malformed. A defensive check would improve robustness.

♻️ Optional: Add URL validation

 const handleConnect = () => {
+  const connectUrl = `${env.NEXT_PUBLIC_API_URL}/api/integrations/slack/connect?organizationId=${organizationId}`;
+  try {
+    new URL(connectUrl); // Validate URL is well-formed
+  } catch {
+    console.error("[slack/connect] Invalid connect URL:", connectUrl);
+    return;
+  }
-  window.location.href = `${env.NEXT_PUBLIC_API_URL}/api/integrations/slack/connect?organizationId=${organizationId}`;
+  window.location.href = connectUrl;
 };

packages/mcp/src/in-memory.ts (1)
22-29: Consider extracting hardcoded values to named constants or parameters.

The auth injection uses magic strings ("internal", "slack-agent", "mcp:full") that are embedded inline. Additionally, clientId: "slack-agent" couples this generic MCP utility to a specific use case.

♻️ Proposed refactor to improve configurability

+const DEFAULT_CLIENT_ID = "superset-internal";
+const INTERNAL_TOKEN = "internal";
+const INTERNAL_SCOPES = ["mcp:full"] as const;
+
 export async function createInMemoryMcpClient({
   organizationId,
   userId,
+  clientId = DEFAULT_CLIENT_ID,
 }: {
   organizationId: string;
   userId: string;
+  clientId?: string;
 }): Promise<{ client: Client; cleanup: () => Promise<void> }> {
   // ...
   clientTransport.send = (message, options) =>
     originalSend(message, {
       ...options,
       authInfo: {
-        token: "internal",
-        clientId: "slack-agent",
-        scopes: ["mcp:full"],
+        token: INTERNAL_TOKEN,
+        clientId,
+        scopes: [...INTERNAL_SCOPES],
         extra: {
           mcpContext: { userId, organizationId } satisfies McpContext,
         },
       },
     });

As per coding guidelines: "Avoid magic numbers by extracting them to named constants at module top" (applies to magic strings as well).
packages/trpc/src/router/integration/slack/utils.ts (1)
5-24: Prefer params object for multi‑arg helpers.

verifyOrgMembership/verifyOrgAdmin take positional args; consider { userId, organizationId } to prevent mis‑ordered calls and align with the project guideline. As per coding guidelines: Functions with 2+ parameters should accept a single params object with named properties instead of positional arguments.
apps/api/src/lib/slack-agent/mcp-clients.ts (1)
60-68: Prefer params object for multi‑arg helpers.

mcpToolToAnthropicTool(tool, prefix) should take a single { tool, prefix } object to align with the guideline and reduce call‑site ambiguity. As per coding guidelines: Functions with 2+ parameters should accept a single params object with named properties instead of positional arguments.
packages/mcp/src/tools/utils/register-tool.ts (2)
15-17: Replace any in ZodShape.

Use unknown to avoid any while preserving flexibility.

♻️ Proposed change

- type ZodShape = Record<string, any>;
+ type ZodShape = Record<string, unknown>;

As per coding guidelines: Avoid the any type unless necessary.
46-57: Prefer params object for registerTool.

Consider a single params object { name, config, handler } to align with the guideline and make future extensions less breaking. As per coding guidelines: Functions with 2+ parameters should accept a single params object with named properties instead of positional arguments.
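The params-object convention that several of these nitpicks cite can be sketched with illustrative names (these are not the real helpers): with positional string parameters a transposed call type-checks and silently produces the wrong result, while the object form is order-independent.

```typescript
// Positional form: nothing stops callers from swapping the two strings.
function membershipKeyPositional(userId: string, organizationId: string): string {
  return `${organizationId}/${userId}`;
}

// Params-object form: property names travel with the values.
function membershipKey({
  userId,
  organizationId,
}: {
  userId: string;
  organizationId: string;
}): string {
  return `${organizationId}/${userId}`;
}
```

This is the failure mode behind the "mis‑ordered calls" concern: both IDs are plain strings, so the compiler cannot catch a transposition in the positional form.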
apps/api/src/lib/slack-agent/work-objects.ts (1)
85-87: Extract ms→s conversion to a named constant.

♻️ Suggested refactor
 const SUPERSET_PRODUCT_NAME = "Superset";
+const MS_PER_SECOND = 1000;
@@
-      metadata_last_modified: Math.floor(
-        new Date(task.updatedAt).getTime() / 1000,
-      ),
+      metadata_last_modified: Math.floor(
+        new Date(task.updatedAt).getTime() / MS_PER_SECOND,
+      ),
@@
-          value: Math.floor(new Date(task.createdAt).getTime() / 1000),
+          value: Math.floor(new Date(task.createdAt).getTime() / MS_PER_SECOND),
@@
-          value: Math.floor(new Date(task.updatedAt).getTime() / 1000),
+          value: Math.floor(new Date(task.updatedAt).getTime() / MS_PER_SECOND),

As per coding guidelines: Avoid magic numbers by extracting them to named constants at module top.
Also applies to: 209-217
apps/api/src/lib/slack-agent/run-agent.ts (3)
21-58: Standardize the log prefix in `fetchThreadContext`.

The prefix currently doesn't follow the `[domain/operation]` pattern; please align it here and mirror the same pattern on other logs in this file for consistency. As per coding guidelines, Use prefixed console logging with pattern `[domain/operation] message` for all logging.

🧩 Example adjustment

-    console.warn("[slack-agent] Failed to fetch thread context:", error);
+    console.warn(
+      "[slack-agent/fetch-thread-context] Failed to fetch thread context:",
+      error,
+    );
21-31: Extract repeated numeric literals to module constants.

The default thread limit, max tokens, and tool-iteration limit are magic numbers scattered in-function; lifting them to module constants will make tuning safer and clearer. As per coding guidelines, Avoid magic numbers by extracting them to named constants at module top.
♻️ Suggested refactor
+const THREAD_CONTEXT_LIMIT = 20;
+const MAX_TOKENS = 2048;
+const MAX_TOOL_ITERATIONS = 10;
+
 async function fetchThreadContext({
   token,
   channelId,
   threadTs,
-  limit = 20,
+  limit = THREAD_CONTEXT_LIMIT,
 }: {
   token: string;
   channelId: string;
   threadTs: string;
   limit?: number;
 }): Promise<string> {

   let response = await anthropic.messages.create({
     model: "claude-sonnet-4-5",
-    max_tokens: 2048,
+    max_tokens: MAX_TOKENS,
     system: contextualSystem,
     tools,
     messages,
   });

-  const MAX_TOOL_ITERATIONS = 10;
   let iterations = 0;

     response = await anthropic.messages.create({
       model: "claude-sonnet-4-5",
-      max_tokens: 2048,
+      max_tokens: MAX_TOKENS,
       system: contextualSystem,
       tools,
       messages,
     });

Also applies to: 298-309, 300-305, 386-388
79-147: Use a params object and avoid `any` in tool-result handling.

This helper takes 2 positional params and uses `any`. A small params object plus an `unknown`/typed shape keeps it aligned with TS guidelines and makes call sites clearer. As per coding guidelines, Avoid `any` type unless necessary; Functions with 2+ parameters should accept a single params object with named properties instead of positional arguments.

♻️ Suggested refactor

+type ToolResult = {
+  structuredContent?: unknown;
+  content?: unknown;
+};
+
-function getActionFromToolResult(
-  toolName: string,
-  // biome-ignore lint/suspicious/noExplicitAny: MCP result varies by tool
-  result: any,
-): AgentAction | null {
+function getActionFromToolResult({
+  toolName,
+  result,
+}: {
+  toolName: string;
+  result: ToolResult;
+}): AgentAction | null {
   // Prefer structuredContent (typed) over parsing text
   const data = result.structuredContent ?? parseTextContent(result.content);
   if (!data) return null;

-      const action = getActionFromToolResult(toolName, result);
+      const action = getActionFromToolResult({ toolName, result });

Also applies to: 349-352
apps/api/src/lib/slack-agent/slack-blocks.ts (1)
40-50: Use params objects and extract repeated constants in attachment builders.

Both attachment builders take 2 positional params and duplicate literal values (color, epoch divisor). A params object plus shared constants improves clarity and reduces drift. As per coding guidelines, Functions with 2+ parameters should accept a single params object with named properties instead of positional arguments; Avoid magic numbers by extracting them to named constants at module top.
♻️ Suggested refactor
 const SUPERSET_ICON_URL = "https://superset.sh/favicon-192.png";
+const SUPERSET_PURPLE = "#7C3AED";
+const MS_PER_SECOND = 1000;
+const nowUnixSeconds = () => Math.floor(Date.now() / MS_PER_SECOND);

-function createTaskAttachment(
-  task: TaskData,
-  _actionType: TaskActionType,
-): MessageAttachment {
+function createTaskAttachment({
+  task,
+  actionType: _actionType,
+}: {
+  task: TaskData;
+  actionType: TaskActionType;
+}): MessageAttachment {
   const taskUrl = `${env.NEXT_PUBLIC_WEB_URL}/tasks/${task.slug}`;
@@
   return {
-    color: "#7C3AED", // Superset purple
+    color: SUPERSET_PURPLE,
     author_icon: SUPERSET_ICON_URL,
     author_name: task.title,
     author_link: taskUrl,
     text,
     fields: fields.length > 0 ? fields : undefined,
-    ts: String(Math.floor(Date.now() / 1000)),
+    ts: String(nowUnixSeconds()),
   };
 }

-function createWorkspaceAttachment(
-  workspace: WorkspaceData,
-  _actionType: "workspace_created" | "workspace_switched",
-): MessageAttachment {
+function createWorkspaceAttachment({
+  workspace,
+  actionType: _actionType,
+}: {
+  workspace: WorkspaceData;
+  actionType: "workspace_created" | "workspace_switched";
+}): MessageAttachment {
@@
   return {
-    color: "#7C3AED", // Superset purple
+    color: SUPERSET_PURPLE,
     author_icon: SUPERSET_ICON_URL,
     author_name: workspace.name,
     author_link: deepLink,
     text: subtitle,
     fields: fields.length > 0 ? fields : undefined,
-    ts: String(Math.floor(Date.now() / 1000)),
+    ts: String(nowUnixSeconds()),
   };
 }

-      attachments.push(createTaskAttachment(task, action.type));
+      attachments.push(createTaskAttachment({ task, actionType: action.type }));
@@
-      attachments.push(createWorkspaceAttachment(workspace, action.type));
+      attachments.push(
+        createWorkspaceAttachment({ workspace, actionType: action.type }),
+      );

Also applies to: 89-162, 176-188
| "url": "http://localhost:3001/api/agent/mcp", | ||
| "oauth": { | ||
| "clientId": "claude-code", | ||
| "authorizationUrl": "https://api.superset.sh/api/auth/mcp/authorize", | ||
| "tokenUrl": "https://api.superset.sh/api/auth/mcp/token", | ||
| "authorizationUrl": "http://localhost:3001/api/auth/mcp/authorize", | ||
| "tokenUrl": "http://localhost:3001/api/auth/mcp/token", |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if .mcp.json is in gitignore
echo "=== Checking .gitignore for .mcp.json ==="
if [ -f .gitignore ]; then
grep -n "\.mcp\.json" .gitignore 2>/dev/null || echo ".mcp.json not found in .gitignore"
else
echo "No .gitignore file found"
fi
echo ""
echo "=== Checking for example/template MCP config files ==="
fd -t f -e json "mcp" --max-depth 3 2>/dev/null || find . -name "*mcp*" -type f 2>/dev/null | head -20
echo ""
echo "=== Current .mcp.json content ==="
if [ -f .mcp.json ]; then
cat -n .mcp.json
else
echo ".mcp.json not found"
fi
echo ""
echo "=== Checking git history of .mcp.json ==="
git log --oneline -n 20 -- .mcp.json 2>/dev/null || echo "No git history available"
echo ""
echo "=== Recent changes to .mcp.json ==="
git diff HEAD~5 -- .mcp.json 2>/dev/null | head -50 || echo "Unable to show recent diffs"

Repository: superset-sh/superset
Length of output: 828
Add .mcp.json to .gitignore or provide a template file.
The .mcp.json file contains localhost URLs (localhost:3001) and is tracked in the repository. This will break MCP integration for other developers and in CI/CD environments since they'll clone a config pointing to a non-existent local server.
Either:
- Add `.mcp.json` to `.gitignore` and provide `.mcp.json.example` as a template, or
- Use environment variable interpolation if the tooling supports it
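The interpolation option could look like the following generation step; the `${VAR}` placeholder syntax, the `MCP_BASE_URL` name, and the template itself are assumptions, not a documented feature of the MCP tooling.

```typescript
// Hypothetical build step expanding .mcp.json.example into .mcp.json.
// The ${VAR} syntax and MCP_BASE_URL name are illustrative assumptions.
function interpolate({
  raw,
  env,
}: {
  raw: string;
  env: Record<string, string | undefined>;
}): string {
  // Unknown placeholders are left intact so misconfiguration is visible.
  return raw.replace(/\$\{(\w+)\}/g, (match, name: string) => env[name] ?? match);
}

const template = `{
  "url": "\${MCP_BASE_URL}/api/agent/mcp",
  "oauth": {
    "authorizationUrl": "\${MCP_BASE_URL}/api/auth/mcp/authorize",
    "tokenUrl": "\${MCP_BASE_URL}/api/auth/mcp/token"
  }
}`;

const resolved = interpolate({
  raw: template,
  env: { MCP_BASE_URL: "http://localhost:3001" },
});
console.log(JSON.parse(resolved).url); // http://localhost:3001/api/agent/mcp
```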
🤖 Prompt for AI Agents
In @.mcp.json around lines 5 - 9, The .mcp.json file currently contains
hardcoded localhost endpoints ("url", "oauth.authorizationUrl",
"oauth.tokenUrl") and should not be tracked; add .mcp.json to .gitignore and
commit a template like .mcp.json.example containing placeholder values for
"url", "oauth.clientId", "oauth.authorizationUrl", and "oauth.tokenUrl" (or
alternatively update the config to support environment variable interpolation
for those keys if the loader supports it), and update any README/docs to
instruct developers to copy the example to .mcp.json and fill in their
environment-specific values.
import { db } from "@superset/db/client";
import type { SlackConfig } from "@superset/db/schema";
import { integrationConnections, members } from "@superset/db/schema";
import { and, eq } from "drizzle-orm";

import { env } from "@/env";
import { verifySignedState } from "@/lib/oauth-state";

interface SlackOAuthResponse {
  ok: boolean;
  error?: string;
  access_token: string;
  token_type: string;
  scope: string;
  bot_user_id: string;
  app_id: string;
  team: {
    id: string;
    name: string;
  };
  authed_user: {
    id: string;
  };
}

export async function GET(request: Request) {
  const url = new URL(request.url);
  const code = url.searchParams.get("code");
  const state = url.searchParams.get("state");
  const error = url.searchParams.get("error");

  if (error) {
    return Response.redirect(
      `${env.NEXT_PUBLIC_WEB_URL}/integrations/slack?error=oauth_denied`,
    );
  }

  if (!code || !state) {
    return Response.redirect(
      `${env.NEXT_PUBLIC_WEB_URL}/integrations/slack?error=missing_params`,
    );
  }
🧩 Analysis chain
🏁 Script executed:
cat -n apps/api/src/app/api/integrations/slack/callback/route.ts

Repository: superset-sh/superset
Length of output: 4751
Add Zod parsing for OAuth callback query params.
Guidelines call for Zod validation at API boundaries; validate code and state as non-empty before use.
🔧 Suggested fix
import { integrationConnections, members } from "@superset/db/schema";
import { and, eq } from "drizzle-orm";
+import { z } from "zod";
@@
interface SlackOAuthResponse {
@@
}
+const callbackParamsSchema = z.object({
+ code: z.string().min(1),
+ state: z.string().min(1),
+});
+
export async function GET(request: Request) {
const url = new URL(request.url);
- const code = url.searchParams.get("code");
- const state = url.searchParams.get("state");
const error = url.searchParams.get("error");
+ const parsedParams = callbackParamsSchema.safeParse({
+ code: url.searchParams.get("code"),
+ state: url.searchParams.get("state"),
+ });
+ if (!parsedParams.success) {
+ return Response.redirect(
+ `${env.NEXT_PUBLIC_WEB_URL}/integrations/slack?error=missing_params`,
+ );
+ }
+  const { code, state } = parsedParams.data;

As per coding guidelines: Validate at boundaries using Zod schemas for tRPC inputs and API route bodies.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
import { db } from "@superset/db/client";
import type { SlackConfig } from "@superset/db/schema";
import { integrationConnections, members } from "@superset/db/schema";
import { and, eq } from "drizzle-orm";
import { z } from "zod";

import { env } from "@/env";
import { verifySignedState } from "@/lib/oauth-state";

interface SlackOAuthResponse {
  ok: boolean;
  error?: string;
  access_token: string;
  token_type: string;
  scope: string;
  bot_user_id: string;
  app_id: string;
  team: {
    id: string;
    name: string;
  };
  authed_user: {
    id: string;
  };
}

const callbackParamsSchema = z.object({
  code: z.string().min(1),
  state: z.string().min(1),
});

export async function GET(request: Request) {
  const url = new URL(request.url);
  const error = url.searchParams.get("error");

  const parsedParams = callbackParamsSchema.safeParse({
    code: url.searchParams.get("code"),
    state: url.searchParams.get("state"),
  });

  if (!parsedParams.success) {
    return Response.redirect(
      `${env.NEXT_PUBLIC_WEB_URL}/integrations/slack?error=missing_params`,
    );
  }

  const { code, state } = parsedParams.data;

  if (error) {
    return Response.redirect(
      `${env.NEXT_PUBLIC_WEB_URL}/integrations/slack?error=oauth_denied`,
    );
  }
🤖 Prompt for AI Agents
In `@apps/api/src/app/api/integrations/slack/callback/route.ts` around lines 1 -
42, Validate the OAuth callback query params at the API boundary by adding a Zod
schema and parsing the incoming request in the GET handler: import z from zod,
define a schema like z.object({ code: z.string().nonempty(), state:
z.string().nonempty() }) and parse the URL searchParams into that schema before
using code or state; on parse failure redirect to
`${env.NEXT_PUBLIC_WEB_URL}/integrations/slack?error=missing_params` (same
redirect used currently) and only proceed to call verifySignedState or further
logic after successful validation so code and state are guaranteed non-empty.
interface SlackMessageImEvent {
  type: "message";
  channel_type: "im";
  user?: string;
  text: string;
  ts: string;
  channel: string;
  event_ts: string;
  thread_ts?: string;
  bot_id?: string;
  subtype?: string;
}

interface ProcessAssistantMessageParams {
  event: SlackMessageImEvent;
  teamId: string;
  eventId: string;
}

export async function processAssistantMessage({
  event,
  teamId,
  eventId,
}: ProcessAssistantMessageParams): Promise<void> {
  console.log("[slack/process-assistant-message] Processing message:", {
    eventId,
    teamId,
    channel: event.channel,
    user: event.user,
  });

  // Find connection by Slack team ID
  const connection = await db.query.integrationConnections.findFirst({
    where: and(
      eq(integrationConnections.provider, "slack"),
      eq(integrationConnections.externalOrgId, teamId),
    ),
  });

  if (!connection) {
    console.error(
      "[slack/process-assistant-message] No connection found for team:",
      teamId,
    );
    return;
  }

  const slack = createSlackClient(connection.accessToken);

  // Use thread_ts if in a thread, otherwise use message ts
  const threadTs = event.thread_ts ?? event.ts;
🧩 Analysis chain
🏁 Script executed:
wc -l apps/api/src/app/api/integrations/slack/events/process-assistant-message.ts

Repository: superset-sh/superset
Length of output: 143
🏁 Script executed:
cat -n apps/api/src/app/api/integrations/slack/events/process-assistant-message.ts

Repository: superset-sh/superset
Length of output: 3997
🌐 Web query:
Slack Events API message.im bot_message subtype and bot_id field documentation
💡 Result:
- `message.im` is an Events API subscription that delivers events whose inner payload is a `type: "message"` (messages posted in direct messages). The message event family supports message subtypes via the `subtype` field (when present). [1]
- A bot-authored DM message may arrive as `subtype: "bot_message"`. However, Slack notes that since ~Jan 2021, most bot messages no longer include the `bot_message` subtype (you may still see it with "classic" bot tokens / older behaviors). [2]
- When Slack does include `subtype: "bot_message"`, the message payload can include `bot_id`, which identifies the bot instance in that workspace. Slack's `bots.info` docs explicitly state that `bot_id` appears in `bot_message` message events (and in message objects returned by APIs like `conversations.history`). [3]
- The Slack Node SDK event typing for `BotMessageEvent` includes a required `bot_id: string` field, reflecting the same payload shape. [4]
- Background on `bot_id` in message objects: Slack has documented API responses where messages can include both `user` and `bot_id` fields, and clarified how `bot_id` should be interpreted. [5]
Sources
[1] Slack “message event” (includes message.im and subtype concept): (api.slack.com)
[2] Slack Bolt for Python reference (note about bot_message subtype no longer common since Jan 2021): (docs.slack.dev)
[3] Slack bots.info method docs (states bot_id appears in bot_message events): (docs.slack.dev)
[4] Slack Node SDK docs: BotMessageEvent includes bot_id: (docs.slack.dev)
[5] Slack Platform blog post on new bot_id field in message responses: (medium.com)
Citations:
- 1: https://api.slack.com/events/message?utm_source=openai
- 2: https://docs.slack.dev/tools/bolt-python/reference/app/app.html?utm_source=openai
- 3: https://docs.slack.dev/reference/methods/bots.info?utm_source=openai
- 4: https://docs.slack.dev/tools/node-slack-sdk/reference/web-api/interfaces/BotMessageEvent?utm_source=openai
- 5: https://medium.com/slack-developer-blog/api-update-new-field-in-api-responses-d23076ea2ef3?utm_source=openai
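The documented `bot_id`/`subtype` behavior above reduces to a small predicate; a hedged sketch, with the event shape assumed to mirror the reviewed `SlackMessageImEvent` interface:

```typescript
// Guard derived from the Slack docs summarized above: a message counts as
// bot-authored if bot_id is present (common post-2021) or if it still
// carries the legacy bot_message subtype.
interface SlackMessageImEvent {
  user?: string;
  bot_id?: string;
  subtype?: string;
}

function isBotMessage(event: SlackMessageImEvent): boolean {
  return Boolean(event.bot_id) || event.subtype === "bot_message";
}

console.log(isBotMessage({ bot_id: "B123" })); // true
console.log(isBotMessage({ user: "U123" })); // false
```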
Add guard against bot/self messages and log errors when clearing status.
The function lacks a guard against bot-authored messages; without one, the assistant can respond to its own posts or other bots' messages, creating a reply loop. Additionally, the status cleanup catch block at lines 122–124 silently swallows errors—errors must be logged.
🔧 Suggested fixes
Add bot message guard after initial logging:
console.log("[slack/process-assistant-message] Processing message:", {
eventId,
teamId,
channel: event.channel,
user: event.user,
});
+
+ if (event.bot_id || event.subtype === "bot_message") {
+ console.log("[slack/process-assistant-message] Ignoring bot message:", {
+ eventId,
+ teamId,
+ channel: event.channel,
+ });
+ return;
+  }

Log errors when clearing status:
} finally {
// Clear the status
try {
await slack.assistant.threads.setStatus({
channel_id: event.channel,
thread_ts: threadTs,
status: "",
});
- } catch {
- // Ignore errors clearing status
+ } catch (err) {
+ console.warn(
+ "[slack/process-assistant-message] Failed to clear status:",
+ err,
+ );
}
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
interface SlackMessageImEvent {
  type: "message";
  channel_type: "im";
  user?: string;
  text: string;
  ts: string;
  channel: string;
  event_ts: string;
  thread_ts?: string;
  bot_id?: string;
  subtype?: string;
}

interface ProcessAssistantMessageParams {
  event: SlackMessageImEvent;
  teamId: string;
  eventId: string;
}

export async function processAssistantMessage({
  event,
  teamId,
  eventId,
}: ProcessAssistantMessageParams): Promise<void> {
  console.log("[slack/process-assistant-message] Processing message:", {
    eventId,
    teamId,
    channel: event.channel,
    user: event.user,
  });

  if (event.bot_id || event.subtype === "bot_message") {
    console.log("[slack/process-assistant-message] Ignoring bot message:", {
      eventId,
      teamId,
      channel: event.channel,
    });
    return;
  }

  // Find connection by Slack team ID
  const connection = await db.query.integrationConnections.findFirst({
    where: and(
      eq(integrationConnections.provider, "slack"),
      eq(integrationConnections.externalOrgId, teamId),
    ),
  });

  if (!connection) {
    console.error(
      "[slack/process-assistant-message] No connection found for team:",
      teamId,
    );
    return;
  }

  const slack = createSlackClient(connection.accessToken);

  // Use thread_ts if in a thread, otherwise use message ts
  const threadTs = event.thread_ts ?? event.ts;
🤖 Prompt for AI Agents
In `@apps/api/src/app/api/integrations/slack/events/process-assistant-message.ts`
around lines 9 - 60, Add a guard inside processAssistantMessage to ignore
bot/self messages by checking fields on the incoming SlackMessageImEvent (e.g.,
if event.bot_id is set or event.subtype === "bot_message" or if event.user is
missing) and return early; place this right after the initial console.log and
before using threadTs or creating replies. Also update the catch block that
clears the Slack status (the error handler after calling the Slack client via
createSlackClient) to log the caught error using console.error or
processLogger.error with context (include teamId, eventId, and channel/threadTs)
instead of silently swallowing it.
    // Clear the status
    try {
      await slack.assistant.threads.setStatus({
        channel_id: event.channel,
        thread_ts: threadTs,
        status: "",
      });
    } catch {
      // Ignore errors clearing status
Don’t silently ignore status-clear failures.
🔧 Suggested fix
- } catch {
- // Ignore errors clearing status
- }
+ } catch (err) {
+ console.warn(
+ "[slack/process-assistant-message] Failed to clear status:",
+ err,
+ );
+  }

As per coding guidelines: Never swallow errors silently; at minimum log them with context.
🤖 Prompt for AI Agents
In `@apps/api/src/app/api/integrations/slack/events/process-assistant-message.ts`
around lines 115 - 123, The catch block for slack.assistant.threads.setStatus
silently swallows errors; change it to log the error with context instead of
ignoring it — e.g., in the try/catch around slack.assistant.threads.setStatus({
channel_id: event.channel, thread_ts: threadTs, status: "" }) call, capture the
caught error and call your module logger (or console.error) with a descriptive
message including event.channel and threadTs and the error object so failures to
clear status are recorded.
    // Cleanup: close MCP clients
    if (cleanupSuperset) {
      try {
        await cleanupSuperset();
      } catch {
        // Ignore close errors
      }
    }
    if (slackMcp) {
      try {
        await slackMcp.close();
      } catch {
        // Ignore close errors
      }
Log MCP cleanup failures instead of ignoring them.
Both cleanup blocks silently swallow errors. Please log with a prefixed message so failures are diagnosable. As per coding guidelines, Never swallow errors silently; at minimum log them with context; Use prefixed console logging with pattern `[domain/operation] message` for all logging.
🧹 Suggested fix
if (cleanupSuperset) {
try {
await cleanupSuperset();
- } catch {
- // Ignore close errors
+ } catch (error) {
+ console.warn(
+ "[slack-agent/cleanup] Failed to close Superset MCP client:",
+ error,
+ );
}
}
if (slackMcp) {
try {
await slackMcp.close();
- } catch {
- // Ignore close errors
+ } catch (error) {
+ console.warn(
+ "[slack-agent/cleanup] Failed to close Slack MCP client:",
+ error,
+ );
}
}
}
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
    // Cleanup: close MCP clients
    if (cleanupSuperset) {
      try {
        await cleanupSuperset();
      } catch (error) {
        console.warn(
          "[slack-agent/cleanup] Failed to close Superset MCP client:",
          error,
        );
      }
    }
    if (slackMcp) {
      try {
        await slackMcp.close();
      } catch (error) {
        console.warn(
          "[slack-agent/cleanup] Failed to close Slack MCP client:",
          error,
        );
      }
    }
🤖 Prompt for AI Agents
In `@apps/api/src/lib/slack-agent/run-agent.ts` around lines 405 - 418, Replace
the empty catch blocks that silently swallow errors in the MCP cleanup with
logs: when calling cleanupSuperset() (symbol cleanupSuperset) and
slackMcp.close() (symbol slackMcp) catch the error as e and log it using a
prefixed console message like "[slack-agent/cleanup] Failed to close
<cleanupSuperset|slackMcp>: " plus the error details so failures are
diagnosable; ensure the log includes which operation failed (cleanupSuperset vs
slackMcp) and the error object/string.
export function parseTaskSlugFromUrl(url: string): string | null {
  try {
    const parsed = new URL(url);
    // Try web app format first: /tasks/{slug}
    const webMatch = parsed.pathname.match(/^\/tasks\/([^/]+)/);
    if (webMatch?.[1]) {
      return webMatch[1];
    }
    // Fall back to legacy API format: /api/integrations/slack/tasks/{slug}
    const apiMatch = parsed.pathname.match(
      /^\/api\/integrations\/slack\/tasks\/([^/]+)/,
    );
    return apiMatch?.[1] ?? null;
  } catch {
    return null;
Log URL parse failures rather than silently returning null.
🔧 Suggested fix
- } catch {
- return null;
- }
+ } catch (err) {
+ console.warn("[slack/work-objects] Failed to parse task URL:", {
+ url,
+ err,
+ });
+ return null;
+  }

As per coding guidelines: Never swallow errors silently; at minimum log them with context.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
export function parseTaskSlugFromUrl(url: string): string | null {
  try {
    const parsed = new URL(url);
    // Try web app format first: /tasks/{slug}
    const webMatch = parsed.pathname.match(/^\/tasks\/([^/]+)/);
    if (webMatch?.[1]) {
      return webMatch[1];
    }
    // Fall back to legacy API format: /api/integrations/slack/tasks/{slug}
    const apiMatch = parsed.pathname.match(
      /^\/api\/integrations\/slack\/tasks\/([^/]+)/,
    );
    return apiMatch?.[1] ?? null;
  } catch (err) {
    console.warn("[slack/work-objects] Failed to parse task URL:", {
      url,
      err,
    });
    return null;
  }
}
🤖 Prompt for AI Agents
In `@apps/api/src/lib/slack-agent/work-objects.ts` around lines 281 - 295, The
parseTaskSlugFromUrl function currently swallows URL parse errors and returns
null; update its catch block to log the error with context instead of silently
returning—use the project's logger (e.g., processLogger or the established
logger used elsewhere in this file) to emit a descriptive message including the
input url and the caught error, then return null as before; make sure the log
call is added inside the catch in parseTaskSlugFromUrl so failures are recorded
for debugging.
  const disconnectMutation = useMutation(
    trpc.integration.slack.disconnect.mutationOptions({
      onSuccess: () => {
        queryClient.invalidateQueries({
          queryKey: trpc.integration.slack.getConnection.queryKey({
            organizationId,
          }),
        });
        router.refresh();
      },
    }),
  );
Add error handling for the disconnect mutation.
The mutation lacks an onError callback. If the disconnect fails, the user receives no feedback, and the error is silently ignored. This violates the guideline to never swallow errors silently.
🛡️ Proposed fix to add error handling
const disconnectMutation = useMutation(
trpc.integration.slack.disconnect.mutationOptions({
onSuccess: () => {
queryClient.invalidateQueries({
queryKey: trpc.integration.slack.getConnection.queryKey({
organizationId,
}),
});
router.refresh();
},
+ onError: (error) => {
+ console.error("[slack/disconnect] Failed to disconnect:", error);
+ // Consider adding a toast notification to inform the user
+ },
}),
  );

As per coding guidelines: "Never swallow errors silently; at minimum log them with context" and "Use prefixed console logging with pattern [domain/operation] message".
🤖 Prompt for AI Agents
In
`@apps/web/src/app/`(dashboard)/integrations/slack/components/ConnectionControls/ConnectionControls.tsx
around lines 34 - 45, The disconnect mutation (disconnectMutation using
trpc.integration.slack.disconnect.mutationOptions) is missing error handling so
failures are silently ignored; add an onError callback to the mutation options
that logs the error with a prefixed message (e.g.,
console.error("[slack/disconnect] ...", error)) and optionally surface user
feedback (toast or set state) so the UI informs the user on failure; keep the
existing onSuccess logic (queryClient.invalidateQueries for
trpc.integration.slack.getConnection with organizationId and router.refresh)
intact and only augment the mutation options with the onError handler.
@@ -0,0 +1 @@
+ALTER TYPE "public"."integration_provider" ADD VALUE 'slack';
\ No newline at end of file
🧩 Analysis chain
🏁 Script executed:
# Check if the drizzle journal exists and contains info about this migration
find . -type f -name "_journal.json" -path "*/drizzle/*"Repository: superset-sh/superset
Length of output: 151
🏁 Script executed:
# Look for schema files defining integration_provider
find packages/db/src/schema -type f -name "*.ts" | head -20Repository: superset-sh/superset
Length of output: 325
🏁 Script executed:
# Search for integration_provider enum definition
rg "integration_provider" packages/db/src/schema --type ts -B 2 -A 2Repository: superset-sh/superset
Length of output: 400
🏁 Script executed:
# Check the migration file's contents and nearby migrations for context
ls -la packages/db/drizzle/ | grep -E "\.sql|_journal" | head -20Repository: superset-sh/superset
Length of output: 1561
🏁 Script executed:
# Read the drizzle journal to see registered migrations
cat packages/db/drizzle/meta/_journal.json | jq '.' | grep -E "slack|0015|0016" -A 5 -B 5Repository: superset-sh/superset
Length of output: 380
🏁 Script executed:
# Read both slack integration migration files
echo "=== 0015_slack_integration.sql ===" && cat packages/db/drizzle/0015_slack_integration.sql && echo -e "\n=== 0016_slack_integration.sql ===" && cat packages/db/drizzle/0016_slack_integration.sqlRepository: superset-sh/superset
Length of output: 257
🏁 Script executed:
# Check integrationProviderValues in schema.ts
rg "integrationProviderValues" packages/db/src/schema/schema.ts -B 5 -A 10Repository: superset-sh/superset
Length of output: 845
🏁 Script executed:
# Check git log to see when these files were added
git log --oneline packages/db/drizzle/0015_slack_integration.sql packages/db/drizzle/0016_slack_integration.sql 2>/dev/null | head -10Repository: superset-sh/superset
Length of output: 144
Remove duplicate migration file 0015_slack_integration.sql; 0016_slack_integration.sql is properly generated.
The file 0016_slack_integration.sql is correctly registered in packages/db/drizzle/meta/_journal.json (entry idx 16), confirming it was auto-generated by drizzle-kit. However, 0015_slack_integration.sql contains identical SQL but is not registered in the journal, indicating it's a manual duplicate that should be removed.
🤖 Prompt for AI Agents
In `@packages/db/drizzle/0016_slack_integration.sql` at line 1, Remove the manual
duplicate migration file 0015_slack_integration.sql so only the auto-generated
0016_slack_integration.sql remains (the latter is registered in
packages/db/drizzle/meta/_journal.json at idx 16); verify that ALTER TYPE
"public"."integration_provider" ADD VALUE 'slack'; exists only in
0016_slack_integration.sql and then delete 0015_slack_integration.sql from the
repository and commit the change.
Move Slack-specific lib code from src/lib/slack-agent/ into the route handler directory at integrations/slack/events/utils/. Wrap each event processor in its own folder with barrel exports. Move the app manifest into the slack integration directory. Fix implicit-any lint issue in callback/route.ts by restructuring try/catch to use const.
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In
`@apps/api/src/app/api/integrations/slack/events/process-entity-details/process-entity-details.ts`:
- Around line 86-92: The task lookup misses a soft-delete filter; update the
db.query.tasks.findFirst call(s) (the queries that produce the variable `task`)
to add isNull(tasks.deletedAt) into the and(...) where clause so soft-deleted
rows are excluded; apply the same change to both places where this query is used
(the modules handling the Slack events that call db.query.tasks.findFirst, e.g.,
the code handling `taskSlug` and `connection.organizationId`) so the flexpane
never returns deleted tasks.
In
`@apps/api/src/app/api/integrations/slack/events/process-link-shared/process-link-shared.ts`:
- Around line 59-63: The query that fetches a task via db.query.tasks.findFirst
(assigning to variable task) needs a soft-delete filter added so deleted tasks
aren't returned; update the where clause passed to findFirst to include the
existing soft-delete predicate used elsewhere (e.g., add isNull(tasks.deletedAt)
inside the and(...) alongside eq(tasks.organizationId,
connection.organizationId) and eq(tasks.slug, taskSlug)) so only non-deleted
tasks are returned for Slack unfurls.
In `@apps/api/src/app/api/integrations/slack/events/route.ts`:
- Around line 44-72: In POST, avoid directly calling JSON.parse and branching on
an unvalidated payload: define a Zod schema (e.g., SlackEventPayloadSchema) that
matches the Slack payload shape (type, challenge, event, team_id, event_id,
etc.), then parse/validate the raw body with that schema before using
payload.type; if parsing/validation fails return a 400 with an error and do not
proceed to event handling. Keep the signature/timestamp verification
(verifySlackSignature) as-is but only validate the JSON payload via the schema
after reading body and before the "url_verification" / "event_callback" branches
so malformed or unexpected payloads are rejected early.
In
`@apps/api/src/app/api/integrations/slack/events/utils/slack-blocks/slack-blocks.ts`:
- Around line 48-49: The file defines a hardcoded WEB_APP_URL constant which
conflicts with uses of env.NEXT_PUBLIC_WEB_URL (causing inconsistent unfurl/task
links); change the implementation so WEB_APP_URL is derived from the environment
variable (use env.NEXT_PUBLIC_WEB_URL as the primary value with a sensible
fallback like the existing production host) and update any references to use
that single source of truth (look for the WEB_APP_URL constant and usages around
slack block creation and the code that references env.NEXT_PUBLIC_WEB_URL).
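The env-derived constant that this prompt describes can be sketched like this; the helper name and the fallback host are assumptions for illustration, not the actual values in the codebase:

```typescript
// Sketch: derive the base URL from the environment with a production fallback.
// DEFAULT_WEB_APP_URL is an assumed value for illustration.
const DEFAULT_WEB_APP_URL = "https://app.superset.sh";

function resolveWebAppUrl(envUrl: string | undefined): string {
  // Prefer the configured value; trim trailing slashes so link
  // concatenation like `${WEB_APP_URL}/tasks/...` stays consistent.
  const base = envUrl?.trim() || DEFAULT_WEB_APP_URL;
  return base.replace(/\/+$/, "");
}

// Usage (env var name taken from the review):
// const WEB_APP_URL = resolveWebAppUrl(env.NEXT_PUBLIC_WEB_URL);
```

Normalizing the trailing slash in one place keeps unfurl and task links identical regardless of how the env var was written.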
In `@packages/mcp/src/tools/organizations/list-members/list-members.ts`:
- Around line 63-64: The ilike() calls in list-members.ts are vulnerable to
wildcard injection because the search string is interpolated directly into
`%${search}%`; escape `%`, `_`, and `\` in the search input before building the
pattern (e.g., transform the search variable by replacing /[\\%_]/g with escaped
versions) and then use ilike(users.name, `%${escapedSearch}%`) and
ilike(users.email, `%${escapedSearch}%`) (or alternatively use a raw sql
template with an ESCAPE clause) so that users.name and users.email comparisons
treat those characters as literals.
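The escaping step described above can be sketched as follows; `escapeLike` is a hypothetical helper name, not an existing function in the codebase:

```typescript
// Sketch: escape LIKE/ILIKE metacharacters so user input matches literally.
function escapeLike(input: string): string {
  // Escape backslash, percent, and underscore in a single pass so the
  // escapes we add are never re-processed.
  return input.replace(/[\\%_]/g, (ch) => `\\${ch}`);
}

// Usage with Drizzle (shape assumed from the review):
// ilike(users.name, `%${escapeLike(search)}%`)
```

Searching for "50%_off" then matches that literal string instead of treating % and _ as wildcards.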
In `@packages/mcp/src/tools/tasks/list-tasks/list-tasks.ts`:
- Around line 181-214: The pagination currently sets data.count =
tasksList.length and data.hasMore = tasksList.length === limit which is wrong;
run a separate COUNT query using the same where(and(...conditions)) (e.g., const
total = await db.select({total:
sql<number>`count(*)`}).from(tasks).leftJoin(...).where(and(...conditions))) to
get the total matching rows, then set data.count = total and data.hasMore =
offset + tasksList.length < total; keep using the existing tasksList, limit,
offset, db, tasks and conditions symbols so the count query mirrors the page
query.
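The corrected pagination arithmetic is easy to isolate; this sketch assumes `total` comes from the separate COUNT query described above:

```typescript
// Sketch of the corrected pagination metadata; `total` is assumed to be
// the result of a COUNT(*) over the same where conditions as the page query.
function paginationMeta({
  pageLength,
  offset,
  total,
}: {
  pageLength: number;
  offset: number;
  total: number;
}): { count: number; hasMore: boolean } {
  return {
    count: total, // total matching rows, not the current page size
    hasMore: offset + pageLength < total,
  };
}
```

Note the exact-fit case: a full page that is also the last page now reports hasMore as false, where the old length-equals-limit heuristic reported true.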
In `@packages/mcp/src/tools/tasks/update-task/update-task.ts`:
- Around line 138-141: The code silently skips tasks when update().returning()
yields no rows (the block that pushes into updatedTasks), which can hide a
TOCTOU race where a task was deleted between the earlier existence check and the
transaction; modify the update flow (the transaction that calls
update().returning()) to detect when returning() is empty and handle it
explicitly by either logging a warning/error with the task id and context or by
adding that task id to a returned partial-failure list (e.g., a failedTaskIds
array) so callers can observe which tasks failed to update instead of silently
omitting them; ensure you update any result object to include the partial
failures and use the same unique symbols updatedTasks and update().returning()
when implementing the change.
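The partial-failure bookkeeping described in this prompt can be sketched like so; the `applyUpdate` callback is a hypothetical stand-in for the Drizzle update().returning() call inside the transaction:

```typescript
// Sketch of the partial-failure bookkeeping; `applyUpdate` stands in for
// the real per-task update().returning() call and is an assumption.
type UpdateOutcome<T> = { updated: T[]; failedTaskIds: string[] };

async function updateTasksTracked<T>(
  taskIds: string[],
  applyUpdate: (id: string) => Promise<T[]>,
): Promise<UpdateOutcome<T>> {
  const updated: T[] = [];
  const failedTaskIds: string[] = [];
  for (const id of taskIds) {
    const rows = await applyUpdate(id);
    if (rows.length === 0) {
      // Surface the TOCTOU race instead of silently omitting the task.
      console.warn(
        `[tasks/update] No row returned for ${id}; possibly deleted concurrently`,
      );
      failedTaskIds.push(id);
    } else {
      updated.push(...rows);
    }
  }
  return { updated, failedTaskIds };
}
```

Returning failedTaskIds alongside the updated rows lets the caller decide whether a partial failure is an error or merely informational.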
🧹 Nitpick comments (19)
packages/mcp/src/tools/tasks/list-tasks/list-tasks.ts (2)
17-47: Reuse PRIORITIES in the schema to prevent drift.
The Zod enum duplicates the tuple, so schema validation can diverge from isPriority. Consider reusing the constant.

♻️ Suggested change

-  priority: z
-    .enum(["urgent", "high", "medium", "low", "none"])
-    .optional(),
+  priority: z.enum(PRIORITIES).optional(),

Please confirm z.enum(PRIORITIES) is supported in the Zod version used here.
57-58: Extract pagination limits into named constants.
This avoids magic numbers and centralizes tuning.

♻️ Suggested change

 const PRIORITIES = ["urgent", "high", "medium", "low", "none"] as const;
 type TaskPriority = (typeof PRIORITIES)[number];
+const DEFAULT_LIMIT = 50;
+const MAX_LIMIT = 100;
+const MIN_LIMIT = 1;
+const DEFAULT_OFFSET = 0;
@@
-  limit: z.number().int().min(1).max(100).default(50),
-  offset: z.number().int().min(0).default(0),
+  limit: z
+    .number()
+    .int()
+    .min(MIN_LIMIT)
+    .max(MAX_LIMIT)
+    .default(DEFAULT_LIMIT),
+  offset: z.number().int().min(DEFAULT_OFFSET).default(DEFAULT_OFFSET),

As per coding guidelines, avoid magic numbers by extracting them to named constants at module top.
packages/trpc/src/router/integration/slack/utils.ts (1)
5-14: Minor: Redundant null coalescing.
db.query.integrationConnections.findFirst() already returns undefined when no record is found. The ?? null on line 13 converts undefined to null, which is fine for API consistency, but consider adding an explicit return type to document this intentional behavior.

-export async function getSlackConnection(organizationId: string) {
+export async function getSlackConnection(organizationId: string): Promise<typeof connection | null> {

Alternatively, if the goal is to return null explicitly (rather than undefined), the current implementation is correct; just noting that it's an intentional normalization.
packages/mcp/src/tools/devices/get-app-context/get-app-context.ts (1)
11-24: Consider schema-level validation for non-empty strings.
The inputSchema defines deviceId as z.string(), which allows empty strings. The runtime check if (!deviceId) on line 19 catches this, but the validation could be pushed to the schema level for consistency:

 inputSchema: {
-  deviceId: z.string().describe("Target device ID"),
+  deviceId: z.string().min(1).describe("Target device ID"),
 },

This would make the manual check redundant and provide better error messages from Zod validation. That said, the current pattern is consistent with other MCP tools in this codebase (e.g., list-projects), so this is optional.
list-projects), so this is optional.packages/mcp/src/tools/tasks/get-task/get-task.ts (1)
68-74: Date fields may serialize unexpectedly.
The dueDate, createdAt, and updatedAt fields are selected directly from the database as Date objects, but the output schema declares them as z.string(). When these are serialized via JSON.stringify, they'll become ISO strings, which works, but the structuredContent will contain Date objects that may not match the declared schema during validation.
Consider explicitly converting dates to ISO strings for consistency:

♻️ Suggested fix

 .select({
   id: tasks.id,
   slug: tasks.slug,
   // ... other fields ...
   dueDate: tasks.dueDate,
   // ... other fields ...
   createdAt: tasks.createdAt,
   updatedAt: tasks.updatedAt,
 })

Then in the return:

 return {
-  structuredContent: { task },
+  structuredContent: {
+    task: {
+      ...task,
+      dueDate: task.dueDate?.toISOString() ?? null,
+      createdAt: task.createdAt.toISOString(),
+      updatedAt: task.updatedAt.toISOString(),
+    },
+  },
   content: [{ type: "text", text: JSON.stringify({ task }, null, 2) }],
 };
return { - structuredContent: { task }, + structuredContent: { + task: { + ...task, + dueDate: task.dueDate?.toISOString() ?? null, + createdAt: task.createdAt.toISOString(), + updatedAt: task.updatedAt.toISOString(), + }, + }, content: [{ type: "text", text: JSON.stringify({ task }, null, 2) }], };packages/mcp/src/tools/devices/navigate-to-workspace/navigate-to-workspace.ts (1)
10-20: Inconsistent schema validation compared to switch-workspace.
In switch-workspace.ts, workspaceId uses .uuid() validation (line 14), but here it's just .string().optional(). For consistency and input validation, consider adding the UUID constraint if workspace IDs are expected to be UUIDs.

♻️ Suggested fix

 workspaceId: z
   .string()
+  .uuid()
   .optional()
   .describe("Workspace ID to navigate to"),
17-25: Redundant deviceId validation.
The deviceId parameter is already defined as a required z.string() in the input schema (line 11). The Zod validation should reject calls with a missing deviceId before the handler executes, making the manual check on lines 20-25 unnecessary.
This pattern appears in other device tools as well, so it may be intentional defensive coding. If you want to keep it for defense-in-depth, consider also validating workspaceId for consistency.
51-53: Consider consistent error handling for "not found" case.The
disconnectmutation returns{ success: false, error: "No connection found" }when no connection exists. This differs from the pattern used inverifyOrgAdminwhich throws aTRPCError.While returning an error object allows graceful client-side handling, consider whether this should throw a
TRPCErrorwith code"NOT_FOUND"for consistency with other mutation error patterns, or document that this is intentionally idempotent.packages/mcp/src/tools/tasks/update-task/update-task.ts (1)
65-91: Sequential task resolution queries.Each task is resolved individually (N queries for N updates). While the 25-item limit keeps this bounded, consider batching the existence checks into a single query for better performance if this tool sees heavy usage.
packages/mcp/src/tools/devices/list-devices/list-devices.ts (1)
38-40: Extract magic number to named constant.The multiplier
10for calculatingofflineThresholdshould be extracted to a named constant for clarity and maintainability.♻️ Proposed fix
+const OFFLINE_THRESHOLD_MULTIPLIER = 10; + export function register(server: McpServer) { server.registerTool( "list_devices",Then at usage:
const offlineThreshold = new Date( - Date.now() - DEVICE_ONLINE_THRESHOLD_MS * 10, + Date.now() - DEVICE_ONLINE_THRESHOLD_MS * OFFLINE_THRESHOLD_MULTIPLIER, );As per coding guidelines: "Avoid magic numbers by extracting them to named constants at module top"
packages/mcp/src/tools/tasks/create-task/create-task.ts (2)
40-46: Extract slug length limit to named constant.
The value 50 for the maximum slug length should be extracted to a named constant.

♻️ Proposed fix

+const MAX_SLUG_LENGTH = 50;
+
 function generateBaseSlug(title: string): string {
   return title
     .toLowerCase()
     .replace(/[^a-z0-9]+/g, "-")
     .replace(/^-|-$/g, "")
-    .slice(0, 50);
+    .slice(0, MAX_SLUG_LENGTH);
 }

As per coding guidelines: "Avoid magic numbers by extracting them to named constants at module top"
48-60: Unbounded loop in slug generation.
The while loop in generateUniqueSlug has no upper bound on the counter. In pathological cases (e.g., if existingSlugs is corrupted or the check fails), this could loop indefinitely. Consider adding a maximum iteration guard.

🛡️ Defensive improvement

+const MAX_SLUG_COLLISION_ATTEMPTS = 1000;
+
 function generateUniqueSlug(
   baseSlug: string,
   existingSlugs: Set<string>,
 ): string {
   let slug = baseSlug;
   if (existingSlugs.has(slug)) {
     let counter = 1;
-    while (existingSlugs.has(slug)) {
+    while (existingSlugs.has(slug) && counter < MAX_SLUG_COLLISION_ATTEMPTS) {
       slug = `${baseSlug}-${counter++}`;
     }
+    if (counter >= MAX_SLUG_COLLISION_ATTEMPTS) {
+      throw new Error(`Unable to generate unique slug for: ${baseSlug}`);
+    }
   }
   return slug;
 }
packages/mcp/src/tools/organizations/list-members/list-members.ts (1)
48-69: Reduce query duplication by conditionally adding search filter.
The entire query is rebuilt when a search term is provided. Consider extending the base query conditionally to avoid duplicating the select/join/limit logic.

♻️ Proposed refactor

-  let query = db
+  const baseConditions = [eq(members.organizationId, ctx.organizationId)];
+
+  if (search) {
+    baseConditions.push(
+      or(
+        ilike(users.name, `%${search}%`),
+        ilike(users.email, `%${search}%`),
+      )!,
+    );
+  }
+
+  const membersList = await db
     .select({
       id: users.id,
       name: users.name,
       email: users.email,
       image: users.image,
       role: members.role,
     })
     .from(members)
     .innerJoin(users, eq(members.userId, users.id))
-    .where(and(...conditions))
+    .where(and(...baseConditions))
     .limit(limit);
-
-  if (search) {
-    query = db
-      .select({
-        id: users.id,
-        name: users.name,
-        email: users.email,
-        image: users.image,
-        role: members.role,
-      })
-      .from(members)
-      .innerJoin(users, eq(members.userId, users.id))
-      .where(
-        and(
-          ...conditions,
-          or(
-            ilike(users.name, `%${search}%`),
-            ilike(users.email, `%${search}%`),
-          ),
-        ),
-      )
-      .limit(limit);
-  }
-
-  const membersList = await query;
68-71: Consider enabling token rotation for improved security.
token_rotation_enabled: falsemeans OAuth tokens won't automatically rotate. While this simplifies token management, enabling rotation provides better security by limiting the exposure window if a token is compromised. Consider enabling this in production and implementing the token refresh logic.packages/mcp/src/tools/tasks/delete-task/delete-task.ts (2)
9-29: Avoid implicitanyin the tool handler by inferring from a Zod schema.
argsis untyped and needsas string[]. Defining a Zod object schema and inferring the handler args keeps types safe and removes the cast.♻️ Suggested typed schema + inferred args
+const deleteTaskInputSchema = z.object({ + taskIds: z + .array(z.string()) + .min(1) + .max(25) + .describe("Task IDs (uuid or slug) to delete (1-25)"), +}); +const deleteTaskOutputSchema = z.object({ + deleted: z.array(z.string()), + txid: z.string(), +}); + export function register(server: McpServer) { server.registerTool( "delete_task", { description: "Soft delete one or more tasks", - inputSchema: { - taskIds: z - .array(z.string()) - .min(1) - .max(25) - .describe("Task IDs (uuid or slug) to delete (1-25)"), - }, - outputSchema: { - deleted: z.array(z.string()), - txid: z.string(), - }, + inputSchema: deleteTaskInputSchema, + outputSchema: deleteTaskOutputSchema, }, - async (args, extra) => { + async (args: z.infer<typeof deleteTaskInputSchema>, extra) => { const ctx = getMcpContext(extra); - const taskIds = args.taskIds as string[]; + const taskIds = args.taskIds;As per coding guidelines: Avoid
anytype unless necessary.
15-19: Replace min/max literals with named constants.
This keeps the limits self-documenting and centralized.

♻️ Suggested refactor

+const TASK_ID_MIN = 1;
+const TASK_ID_MAX = 25;
+
 inputSchema: {
   taskIds: z
     .array(z.string())
-    .min(1)
-    .max(25)
-    .describe("Task IDs (uuid or slug) to delete (1-25)"),
+    .min(TASK_ID_MIN)
+    .max(TASK_ID_MAX)
+    .describe(
+      `Task IDs (uuid or slug) to delete (${TASK_ID_MIN}-${TASK_ID_MAX})`,
+    ),
 },

As per coding guidelines: Avoid magic numbers by extracting them to named constants at module top.
apps/api/src/app/api/integrations/slack/events/route.ts (1)
22-25: Extract replay window and retry counts into constants.
Helps avoid scattered policy changes and clarifies intent.

♻️ Suggested refactor

+const SLACK_REPLAY_WINDOW_SEC = 60 * 5;
+const QSTASH_RETRIES = 3;
@@
-  if (Math.abs(now - timestampSec) > 60 * 5) {
+  if (Math.abs(now - timestampSec) > SLACK_REPLAY_WINDOW_SEC) {
@@
-        retries: 3,
+        retries: QSTASH_RETRIES,
@@
-        retries: 3,
+        retries: QSTASH_RETRIES,

As per coding guidelines: Avoid magic numbers by extracting them to named constants at module top.
Also applies to: 96-104, 136-144
apps/api/src/app/api/integrations/slack/events/utils/run-agent/mcp-clients.ts (2)
30-54: Slack MCP client lacks a returned cleanup function unlike the Superset client.
The Superset client returns { client, cleanup } but this returns only Client. The spawned npx process should be cleaned up via client.close() in the caller, but the API inconsistency could lead to resource leaks if callers expect a cleanup function.
Consider returning a consistent interface:

🔧 Suggested fix

 export async function createSlackMcpClient({
   token,
   teamId,
 }: {
   token: string;
   teamId: string;
-}): Promise<Client> {
+}): Promise<{ client: Client; cleanup: () => Promise<void> }> {
   const transport = new StdioClientTransport({
     command: "npx",
     args: ["-y", "@modelcontextprotocol/server-slack"],
     env: {
       ...process.env,
       SLACK_BOT_TOKEN: token,
       SLACK_TEAM_ID: teamId,
     },
   });
   const client = new Client({
     name: "slack-agent-slack",
     version: "1.0.0",
   });
   await client.connect(transport);
-  return client;
+  return { client, cleanup: () => client.close() };
 }
74-85: Edge case: parseToolName returns empty toolName when no underscore is found.
When the input has no underscore (e.g., "sometool"), the function returns { prefix: "sometool", toolName: "" }. This could cause the caller to invoke an MCP tool with an empty name. Consider logging a warning for this edge case or documenting the expected behavior.
// Fetch the task from the database with full relations for flexpane
const task = await db.query.tasks.findFirst({
  where: and(
    eq(tasks.organizationId, connection.organizationId),
    eq(tasks.slug, taskSlug),
  ),
  with: {
🧩 Analysis chain
🏁 Script executed:
cat -n "apps/api/src/app/api/integrations/slack/events/process-entity-details/process-entity-details.ts" | head -120Repository: superset-sh/superset
Length of output: 3811
🏁 Script executed:
find . -type f -name "*.ts" | xargs grep -l "export const tasks" | head -5Repository: superset-sh/superset
Length of output: 177
🏁 Script executed:
grep -A 40 "export const tasks" packages/db/src/schema/schema.ts | head -50Repository: superset-sh/superset
Length of output: 1209
🏁 Script executed:
grep -A 80 "export const tasks" packages/db/src/schema/schema.ts | tail -50Repository: superset-sh/superset
Length of output: 1708
🏁 Script executed:
rg "db.query.tasks.findFirst" -A 8 --type ts --type tsx | head -80Repository: superset-sh/superset
Length of output: 91
🏁 Script executed:
rg "db.query.tasks.findFirst" -A 8 --type ts | head -100Repository: superset-sh/superset
Length of output: 3605
🏁 Script executed:
rg "tasks.*deletedAt" --type ts | head -20Repository: superset-sh/superset
Length of output: 1285
🏁 Script executed:
sed -n '80,110p' "apps/api/src/app/api/integrations/slack/events/process-link-shared/process-link-shared.ts"Repository: superset-sh/superset
Length of output: 634
🏁 Script executed:
grep -n "db.query.tasks.findFirst" "apps/api/src/app/api/integrations/slack/events/process-link-shared/process-link-shared.ts"Repository: superset-sh/superset
Length of output: 114
🏁 Script executed:
sed -n '55,70p' "apps/api/src/app/api/integrations/slack/events/process-link-shared/process-link-shared.ts"Repository: superset-sh/superset
Length of output: 324
Add soft-delete filter to task lookups in Slack events.
The task queries in both process-entity-details.ts and process-link-shared.ts do not exclude soft-deleted tasks, allowing them to be exposed via the flexpane. Add isNull(tasks.deletedAt) filters to both queries.
🛡️ Suggested fixes
process-entity-details.ts:
-import { and, eq } from "drizzle-orm";
+import { and, eq, isNull } from "drizzle-orm";
@@
const task = await db.query.tasks.findFirst({
where: and(
eq(tasks.organizationId, connection.organizationId),
eq(tasks.slug, taskSlug),
+ isNull(tasks.deletedAt),
),process-link-shared.ts:
const task = await db.query.tasks.findFirst({
where: and(
eq(tasks.organizationId, connection.organizationId),
eq(tasks.slug, taskSlug),
+ isNull(tasks.deletedAt),
),🤖 Prompt for AI Agents
In
`@apps/api/src/app/api/integrations/slack/events/process-entity-details/process-entity-details.ts`
around lines 86 - 92, The task lookup misses a soft-delete filter; update the
db.query.tasks.findFirst call(s) (the queries that produce the variable `task`)
to add isNull(tasks.deletedAt) into the and(...) where clause so soft-deleted
rows are excluded; apply the same change to both places where this query is used
(the modules handling the Slack events that call db.query.tasks.findFirst, e.g.,
the code handling `taskSlug` and `connection.organizationId`) so the flexpane
never returns deleted tasks.
const task = await db.query.tasks.findFirst({
  where: and(
    eq(tasks.organizationId, connection.organizationId),
    eq(tasks.slug, taskSlug),
  ),
🧩 Analysis chain
🏁 Script executed:
find . -type f -name "process-link-shared.ts" | head -5Repository: superset-sh/superset
Length of output: 155
🏁 Script executed:
# Check the file structure
git ls-files | grep -E "process-link-shared|tasks.*schema" | head -20Repository: superset-sh/superset
Length of output: 229
🏁 Script executed:
# Look for the specific file in the apps/api directory
fd "process-link-shared.ts" apps/api/Repository: superset-sh/superset
Length of output: 153
🏁 Script executed:
wc -l apps/api/src/app/api/integrations/slack/events/process-link-shared/process-link-shared.tsRepository: superset-sh/superset
Length of output: 157
🏁 Script executed:
cat -n apps/api/src/app/api/integrations/slack/events/process-link-shared/process-link-shared.tsRepository: superset-sh/superset
Length of output: 3332
🏁 Script executed:
fd "schema" apps/api/src -type f | grep -E "\.(ts|tsx)$" | head -10Repository: superset-sh/superset
Length of output: 234
🏁 Script executed:
# Search for tasks table/schema definition
rg "export const tasks" apps/api/src -A 20Repository: superset-sh/superset
Length of output: 46
🏁 Script executed:
find . -path "*/db/schema*" -type f -name "*.ts" 2>/dev/null | head -10Repository: superset-sh/superset
Length of output: 46
🏁 Script executed:
# Look for schema files more broadly
fd schema -type f apps/ | grep -i "\.ts$" | head -15Repository: superset-sh/superset
Length of output: 234
🏁 Script executed:
# The file imports from "@superset/db/schema", let's find that
find . -path "*db/schema*" 2>/dev/null | head -20Repository: superset-sh/superset
Length of output: 46
🏁 Script executed:
# Search for files containing "export const tasks"
rg "export const tasks" --type tsRepository: superset-sh/superset
Length of output: 298
🏁 Script executed:
wc -l packages/db/src/schema/schema.tsRepository: superset-sh/superset
Length of output: 100
🏁 Script executed:
# Find the tasks table definition
rg "export const tasks = pgTable" packages/db/src/schema/schema.ts -A 50Repository: superset-sh/superset
Length of output: 1521
🏁 Script executed:
# Search for other task queries that filter deletedAt to see the pattern
rg "tasks.findFirst" apps/ -A 8 | grep -E "(findFirst|isNull|deletedAt)" | head -30Repository: superset-sh/superset
Length of output: 456
🏁 Script executed:
# More specific search for isNull on deletedAt with tasks
rg "isNull.*deletedAt|deletedAt.*isNull" --type tsRepository: superset-sh/superset
Length of output: 1198
🏁 Script executed:
# Check the other two task queries to see if they also need deletion filters
rg "const task = await db.query.tasks.findFirst" apps/api/src/app/api/integrations -A 10Repository: superset-sh/superset
Length of output: 3587
🏁 Script executed:
# Check the linear sync-task file
rg "const task = await db.query.tasks.findFirst" apps/api/src -A 10Repository: superset-sh/superset
Length of output: 3587
Add soft-delete filter when querying tasks for Slack unfurls.
Tasks table uses soft deletes (deletedAt field), but this query doesn't filter them. Deleted tasks can still be unfurled in Slack. Update the where clause to exclude soft-deleted records using the pattern established across the codebase.
Suggested fix
-import { and, eq } from "drizzle-orm";
+import { and, eq, isNull } from "drizzle-orm";
@@
const task = await db.query.tasks.findFirst({
where: and(
eq(tasks.organizationId, connection.organizationId),
eq(tasks.slug, taskSlug),
+ isNull(tasks.deletedAt),
),📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
import { and, eq, isNull } from "drizzle-orm";

const task = await db.query.tasks.findFirst({
  where: and(
    eq(tasks.organizationId, connection.organizationId),
    eq(tasks.slug, taskSlug),
    isNull(tasks.deletedAt),
  ),
🤖 Prompt for AI Agents
In
`@apps/api/src/app/api/integrations/slack/events/process-link-shared/process-link-shared.ts`
around lines 59 - 63, The query that fetches a task via db.query.tasks.findFirst
(assigning to variable task) needs a soft-delete filter added so deleted tasks
aren't returned; update the where clause passed to findFirst to include the
existing soft-delete predicate used elsewhere (e.g., add isNull(tasks.deletedAt)
inside the and(...) alongside eq(tasks.organizationId,
connection.organizationId) and eq(tasks.slug, taskSlug)) so only non-deleted
tasks are returned for Slack unfurls.
export async function POST(request: Request) {
  const body = await request.text();
  const signature = request.headers.get("x-slack-signature");
  const timestamp = request.headers.get("x-slack-request-timestamp");

  if (!signature || !timestamp) {
    return Response.json(
      { error: "Missing signature headers" },
      { status: 401 },
    );
  }

  // Verify signature
  if (!verifySlackSignature({ body, signature, timestamp })) {
    console.error("[slack/events] Signature verification failed");
    return Response.json({ error: "Invalid signature" }, { status: 401 });
  }

  const payload = JSON.parse(body);

  // Handle URL verification challenge (Slack sends this when setting up Events URL)
  if (payload.type === "url_verification") {
    return Response.json({ challenge: payload.challenge });
  }

  // Handle event callbacks
  if (payload.type === "event_callback") {
    const { event, team_id, event_id } = payload;
🧩 Analysis chain
🏁 Script executed:
cat -n apps/api/src/app/api/integrations/slack/events/route.tsRepository: superset-sh/superset
Length of output: 7843
Validate the Slack payload with Zod before branching.
JSON.parse at line 62 can throw if the body is malformed, and the payload shape is unchecked. This API route boundary must validate inputs with a Zod schema per the coding guidelines.
🛈 Suggested pattern
+import { z } from "zod";
+
+const slackPayloadSchema = z
+ .object({
+ type: z.enum(["url_verification", "event_callback"]),
+ })
+ .passthrough();
+
export async function POST(request: Request) {
const body = await request.text();
@@
- const payload = JSON.parse(body);
+ let raw: unknown;
+ try {
+ raw = JSON.parse(body);
+ } catch (err) {
+ console.error("[slack/events] Invalid JSON payload:", err);
+ return Response.json({ error: "Invalid JSON" }, { status: 400 });
+ }
+ const parsed = slackPayloadSchema.safeParse(raw);
+ if (!parsed.success) {
+ console.error("[slack/events] Invalid payload:", parsed.error);
+ return Response.json({ error: "Invalid payload" }, { status: 400 });
+ }
+ const payload = parsed.data;📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
import { z } from "zod";

const slackPayloadSchema = z
  .object({
    type: z.enum(["url_verification", "event_callback"]),
  })
  .passthrough();

export async function POST(request: Request) {
  const body = await request.text();
  const signature = request.headers.get("x-slack-signature");
  const timestamp = request.headers.get("x-slack-request-timestamp");

  if (!signature || !timestamp) {
    return Response.json(
      { error: "Missing signature headers" },
      { status: 401 },
    );
  }

  // Verify signature
  if (!verifySlackSignature({ body, signature, timestamp })) {
    console.error("[slack/events] Signature verification failed");
    return Response.json({ error: "Invalid signature" }, { status: 401 });
  }

  let raw: unknown;
  try {
    raw = JSON.parse(body);
  } catch (err) {
    console.error("[slack/events] Invalid JSON payload:", err);
    return Response.json({ error: "Invalid JSON" }, { status: 400 });
  }
  const parsed = slackPayloadSchema.safeParse(raw);
  if (!parsed.success) {
    console.error("[slack/events] Invalid payload:", parsed.error);
    return Response.json({ error: "Invalid payload" }, { status: 400 });
  }
  const payload = parsed.data;

  // Handle URL verification challenge (Slack sends this when setting up Events URL)
  if (payload.type === "url_verification") {
    return Response.json({ challenge: payload.challenge });
  }

  // Handle event callbacks
  if (payload.type === "event_callback") {
    const { event, team_id, event_id } = payload;
```
🤖 Prompt for AI Agents
In `@apps/api/src/app/api/integrations/slack/events/route.ts` around lines 44 -
72, In POST, avoid directly calling JSON.parse and branching on an unvalidated
payload: define a Zod schema (e.g., SlackEventPayloadSchema) that matches the
Slack payload shape (type, challenge, event, team_id, event_id, etc.), then
parse/validate the raw body with that schema before using payload.type; if
parsing/validation fails return a 400 with an error and do not proceed to event
handling. Keep the signature/timestamp verification (verifySlackSignature) as-is
but only validate the JSON payload via the schema after reading body and before
the "url_verification" / "event_callback" branches so malformed or unexpected
payloads are rejected early.
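The route depends on a `verifySlackSignature` helper that is not shown in this comment. For context, Slack's documented v0 signing scheme can be sketched as follows; the function name, parameter shape, and replay window here are illustrative assumptions, not the repo's actual helper:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of Slack's v0 request-signing check. Slack sends
// x-slack-signature ("v0=<hex hmac>") and x-slack-request-timestamp;
// the HMAC-SHA256 is computed over "v0:<timestamp>:<raw body>" with the
// app's signing secret.
function verifySlackSignatureSketch(params: {
  body: string;
  signature: string; // "v0=<hex hmac>"
  timestamp: string; // seconds since epoch, as sent by Slack
  signingSecret: string;
  nowSeconds?: number; // injectable clock for testing
}): boolean {
  const { body, signature, timestamp, signingSecret } = params;
  const now = params.nowSeconds ?? Math.floor(Date.now() / 1000);
  // Reject stale requests to limit replay attacks (Slack suggests ~5 minutes).
  if (Math.abs(now - Number(timestamp)) > 60 * 5) return false;
  const base = `v0:${timestamp}:${body}`;
  const expected = `v0=${createHmac("sha256", signingSecret).update(base).digest("hex")}`;
  // timingSafeEqual throws on length mismatch, so guard first.
  if (expected.length !== signature.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}
```

Comparing with `timingSafeEqual` rather than `===` avoids leaking, via timing, how much of the signature prefix matched.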
```ts
          ilike(users.name, `%${search}%`),
          ilike(users.email, `%${search}%`),
```
🧩 Analysis chain

🌐 Web query:

```
Drizzle ORM ilike escape special characters wildcard matching
```

💡 Result:

Drizzle's `ilike()` just generates `ILIKE <pattern>` — it does not add an ESCAPE clause or auto-escape `%` / `_` for you (so user input containing those will act as wildcards). [1]

Safe "contains" search that treats `%` / `_` literally (PostgreSQL):

- Escape user input for `LIKE`/`ILIKE` patterns (`\`, `%`, `_`), then wrap with `%…%`.
- Use a raw `sql` WHERE with `ESCAPE '\\'` (still parameterized).

```ts
import { sql } from "drizzle-orm";

function escapeLike(input: string) {
  // escape \, %, _
  return input.replace(/[\\%_]/g, "\\$&");
}

const q = "50%_off"; // user input
const pattern = `%${escapeLike(q)}%`;

db.select()
  .from(users)
  .where(sql`${users.name} ILIKE ${pattern} ESCAPE '\\'`);
```

Why this works: in PostgreSQL, `%` and `_` are wildcards, and you escape them by prefixing with the escape character (default is `\`, or specify one via `ESCAPE`). [3] Drizzle's `sql` template keeps `${pattern}` parameterized (so you're not string-concatenating SQL). [2]

Note: If you actually want wildcard matching from user input, don't escape `%` / `_`; just build the pattern you want and use `ilike(users.name, pattern)` directly. [1]

Sources: [1] Drizzle operators (like/ilike) docs, [2] Drizzle sql template docs, [3] PostgreSQL pattern matching / ESCAPE behavior.
Escape special characters in the search input to prevent wildcard injection.
Drizzle ORM's ilike() does not automatically escape % and _ in user input. Characters like %, _, and \ will be treated as wildcards, not literals. Escape the search parameter before interpolating it into the pattern (e.g., search.replace(/[\\%_]/g, '\\$&')) or use a raw sql template with an ESCAPE clause to safely match literal characters.
🤖 Prompt for AI Agents
In `@packages/mcp/src/tools/organizations/list-members/list-members.ts` around
lines 63 - 64, The ilike() calls in list-members.ts are vulnerable to wildcard
injection because the search string is interpolated directly into `%${search}%`;
escape `%`, `_`, and `\` in the search input before building the pattern (e.g.,
transform the search variable by replacing /[\\%_]/g with escaped versions) and
then use ilike(users.name, `%${escapedSearch}%`) and ilike(users.email,
`%${escapedSearch}%`) (or alternatively use a raw sql template with an ESCAPE
clause) so that users.name and users.email comparisons treat those characters as
literals.
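The escaping step described above is easy to verify in isolation. A minimal sketch of the helper (the name `escapeLike` is illustrative; the ESCAPE-clause wiring would live in the Drizzle query itself):

```typescript
// Hypothetical helper: escape LIKE/ILIKE metacharacters so user input
// matches literally. The character class handles backslash, percent, and
// underscore in one pass; "\\$&" prefixes each match with a backslash.
function escapeLike(input: string): string {
  return input.replace(/[\\%_]/g, "\\$&");
}

// The escaped value is then safe to embed in a contains-style pattern,
// e.g. `%${escapeLike(search)}%`, paired with ESCAPE '\' in the SQL.
```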
```ts
    const tasksList = await db
      .select({
        id: tasks.id,
        slug: tasks.slug,
        title: tasks.title,
        description: tasks.description,
        priority: tasks.priority,
        statusId: tasks.statusId,
        statusName: status.name,
        statusType: status.type,
        assigneeId: tasks.assigneeId,
        assigneeName: assignee.name,
        creatorId: tasks.creatorId,
        creatorName: creator.name,
        labels: tasks.labels,
        dueDate: tasks.dueDate,
        estimate: tasks.estimate,
        createdAt: tasks.createdAt,
        deletedAt: tasks.deletedAt,
      })
      .from(tasks)
      .leftJoin(assignee, eq(tasks.assigneeId, assignee.id))
      .leftJoin(creator, eq(tasks.creatorId, creator.id))
      .leftJoin(status, eq(tasks.statusId, status.id))
      .where(and(...conditions))
      .orderBy(desc(tasks.createdAt))
      .limit(limit)
      .offset(offset);

    const data = {
      tasks: tasksList,
      count: tasksList.length,
      hasMore: tasksList.length === limit,
    };
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat > /tmp/test_pagination.js << 'EOF'
// Test scenarios for current implementation
const scenarios = [
  { total: 100, limit: 50, offset: 0, name: "First page, even total" },
  { total: 100, limit: 50, offset: 50, name: "Last page, even total" },
  { total: 75, limit: 50, offset: 0, name: "First page, odd total" },
  { total: 75, limit: 50, offset: 50, name: "Last page, odd total" },
  { total: 150, limit: 50, offset: 100, name: "Middle page" },
  { total: 1, limit: 50, offset: 0, name: "Single item, small limit" },
];

console.log("Current implementation: hasMore = tasksList.length === limit\n");
scenarios.forEach(s => {
  const tasksList_length = Math.min(s.limit, Math.max(0, s.total - s.offset));
  const hasMore_current = tasksList_length === s.limit;
  const hasMore_correct = (s.offset + tasksList_length) < s.total;
  const match = hasMore_current === hasMore_correct ? "✓" : "✗ BUG";
  console.log(`${s.name}`);
  console.log(`  Items returned: ${tasksList_length}, Current hasMore: ${hasMore_current}, Correct: ${hasMore_correct} ${match}\n`);
});
EOF
node /tmp/test_pagination.js
```

Length of output: 635
🏁 Script executed:
# Check git history for this file to see if there are any comments about count/hasMore intent
cd packages/mcp && git log --oneline -n 20 src/tools/tasks/list-tasks/list-tasks.tsRepository: superset-sh/superset
Length of output: 138
🏁 Script executed:
# Search for any code that consumes this tool's count field
cd packages && rg "\.count|count:" --type ts | grep -i "task\|list" | head -20Repository: superset-sh/superset
Length of output: 309
🏁 Script executed:
# Look for related issues or TODOs in the file
cd packages/mcp/src/tools/tasks/list-tasks && grep -i "todo\|fixme\|bug\|note" list-tasks.tsRepository: superset-sh/superset
Length of output: 46
🏁 Script executed:
cd packages/mcp/src/tools/tasks/list-tasks && sed -n '165,175p' list-tasks.tsRepository: superset-sh/superset
Length of output: 330
Fix pagination logic: count and hasMore are inaccurate for paged results.
Currently, count equals the number of items in the current page, and hasMore is true whenever the page is full. This breaks on the last page when total items equal an exact multiple of limit—the page will be full, but hasMore will incorrectly return true.
To fix this, query the total count of matching tasks and compute hasMore from the total:
Example fix

```diff
+  const [{ count: totalCount }] = await db
+    .select({ count: sql<number>`count(*)` })
+    .from(tasks)
+    .where(and(...conditions));
+
   const tasksList = await db
     .select({
       id: tasks.id,
@@
     .limit(limit)
     .offset(offset);

   const data = {
     tasks: tasksList,
-    count: tasksList.length,
-    hasMore: tasksList.length === limit,
+    count: totalCount,
+    hasMore: offset + tasksList.length < totalCount,
   };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
    const [{ count: totalCount }] = await db
      .select({ count: sql<number>`count(*)` })
      .from(tasks)
      .where(and(...conditions));

    const tasksList = await db
      .select({
        id: tasks.id,
        slug: tasks.slug,
        title: tasks.title,
        description: tasks.description,
        priority: tasks.priority,
        statusId: tasks.statusId,
        statusName: status.name,
        statusType: status.type,
        assigneeId: tasks.assigneeId,
        assigneeName: assignee.name,
        creatorId: tasks.creatorId,
        creatorName: creator.name,
        labels: tasks.labels,
        dueDate: tasks.dueDate,
        estimate: tasks.estimate,
        createdAt: tasks.createdAt,
        deletedAt: tasks.deletedAt,
      })
      .from(tasks)
      .leftJoin(assignee, eq(tasks.assigneeId, assignee.id))
      .leftJoin(creator, eq(tasks.creatorId, creator.id))
      .leftJoin(status, eq(tasks.statusId, status.id))
      .where(and(...conditions))
      .orderBy(desc(tasks.createdAt))
      .limit(limit)
      .offset(offset);

    const data = {
      tasks: tasksList,
      count: totalCount,
      hasMore: offset + tasksList.length < totalCount,
    };
```
🤖 Prompt for AI Agents
In `@packages/mcp/src/tools/tasks/list-tasks/list-tasks.ts` around lines 181 -
214, The pagination currently sets data.count = tasksList.length and
data.hasMore = tasksList.length === limit which is wrong; run a separate COUNT
query using the same where(and(...conditions)) (e.g., const total = await
db.select({total:
sql<number>`count(*)`}).from(tasks).leftJoin(...).where(and(...conditions))) to
get the total matching rows, then set data.count = total and data.hasMore =
offset + tasksList.length < total; keep using the existing tasksList, limit,
offset, db, tasks and conditions symbols so the count query mirrors the page
query.
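The corrected pagination arithmetic can be checked in isolation; this is a minimal sketch with invented names, not the tool's actual code:

```typescript
// Given a total row count from a separate COUNT(*) query, hasMore is
// derived from position in the result set rather than from whether the
// current page happened to come back full.
function pageMeta(total: number, offset: number, pageLength: number) {
  return { count: total, hasMore: offset + pageLength < total };
}
```

The key case is a full last page: with 100 matching rows and a limit of 50, the second page returns 50 rows, yet nothing follows it.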
```ts
        if (task) {
          updatedTasks.push(task);
        }
      }
```
Silent skip on failed update may mask race condition.
If the update().returning() returns no rows (line 138), the task is silently skipped. This could happen if the task was deleted between the existence check (lines 69-79) and the transaction update, masking a TOCTOU race condition.
Consider logging when this occurs or returning a partial failure result indicating which tasks couldn't be updated.
🛡️ Suggested improvement
```diff
       if (task) {
         updatedTasks.push(task);
+      } else {
+        console.warn(`[mcp/update-task] Task ${taskId} was not updated - may have been deleted`);
       }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
        if (task) {
          updatedTasks.push(task);
        } else {
          console.warn(`[mcp/update-task] Task ${taskId} was not updated - may have been deleted`);
        }
```
🤖 Prompt for AI Agents
In `@packages/mcp/src/tools/tasks/update-task/update-task.ts` around lines 138 -
141, The code silently skips tasks when update().returning() yields no rows (the
block that pushes into updatedTasks), which can hide a TOCTOU race where a task
was deleted between the earlier existence check and the transaction; modify the
update flow (the transaction that calls update().returning()) to detect when
returning() is empty and handle it explicitly by either logging a warning/error
with the task id and context or by adding that task id to a returned
partial-failure list (e.g., a failedTaskIds array) so callers can observe which
tasks failed to update instead of silently omitting them; ensure you update any
result object to include the partial failures and use the same unique symbols
updatedTasks and update().returning() when implementing the change.
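One way to surface the partial failures the prompt describes is to collect the ids whose update returned no rows. This is a hedged sketch with invented names (`applyUpdates`, `failedTaskIds`), not the tool's actual structure; `undefined` stands in for an empty `returning()` result:

```typescript
// Hypothetical shape: apply an update per task id and report which ids
// produced no row (e.g. deleted concurrently between the existence check
// and the transaction) instead of silently dropping them.
async function applyUpdates<T>(
  taskIds: string[],
  updateOne: (id: string) => Promise<T | undefined>,
): Promise<{ updated: T[]; failedTaskIds: string[] }> {
  const updated: T[] = [];
  const failedTaskIds: string[] = [];
  for (const id of taskIds) {
    const row = await updateOne(id); // undefined models an empty returning()
    if (row) updated.push(row);
    else failedTaskIds.push(id);
  }
  return { updated, failedTaskIds };
}
```

Returning `failedTaskIds` lets the caller distinguish "updated" from "vanished mid-transaction" without changing the happy path.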
…coded URLs

Remove unused rich attachment functions from slack-blocks, replace hardcoded WEB_APP_URL with env.NEXT_PUBLIC_WEB_URL, strip restating comments across all Slack files, remove isDev local dev bypasses (use ngrok instead), delete unused AgentActionType export, and register Slack secrets in deploy workflows.
node.name is typed as ReactNode (nullable, non-string) in fumadocs. Convert to string before calling .match() and assigning to title.
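The fumadocs fix above amounts to coercing a nullable, non-string `ReactNode` to a string before string methods like `.match()` are called. A sketch of such a coercion (not the actual patch; the helper name is invented):

```typescript
// node.name in fumadocs is typed ReactNode, which may be null/undefined or
// a non-string value, so coerce before assigning to a string title or
// calling string methods such as .match().
function toTitle(name: unknown): string {
  return typeof name === "string" ? name : String(name ?? "");
}
```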
Summary
Features
Slack Agent
Work Objects (Rich Unfurling)
OAuth Integration
Architecture
Test plan
Summary by CodeRabbit
New Features
Bug Fixes
Database
✏️ Tip: You can customize this high-level summary in your review settings.