feat(desktop): redesign environment variables table #1420
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior; use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough

This PR introduces a comprehensive refactor from a repository-centric model to a projects-based architecture. It adds cloud workspace support, environment secrets management with encryption, restructures desktop settings navigation, and propagates encryption keys through deployment workflows. The database schema migration adds new projects, workspaces, sandbox images, and secrets tables while removing the repositories table.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Desktop as Desktop App
    participant ElectronTRPC as Electron TRPC
    participant API as API Server
    participant DB as Database
    participant Crypto as Encryption Service
    User->>Desktop: Click "Connect Cloud Project"
    Desktop->>API: Create cloud project (repoOwner, repoName, slug, repoUrl)
    API->>DB: Insert projects record
    DB-->>API: Return project record
    API-->>Desktop: Return project details
    Desktop->>API: Link to Neon (projectId, neonProjectId)
    API->>DB: Update projects.neonProjectId
    DB-->>API: Confirm update
    API-->>Desktop: Success
    Desktop->>Desktop: Load EnvironmentVariablesList for cloud project
```

```mermaid
sequenceDiagram
    actor User
    participant Desktop as Desktop App
    participant Sheet as AddSecretSheet
    participant API as API Server
    participant Crypto as Encryption Service
    participant DB as Database
    User->>Sheet: Open Add Secret Sheet
    User->>Sheet: Enter key & value pairs
    User->>Sheet: Click Save
    Sheet->>API: Upsert secret (projectId, key, value, sensitive)
    API->>Crypto: Encrypt value with SECRETS_ENCRYPTION_KEY
    Crypto-->>API: Return encrypted_value
    API->>DB: Upsert secrets record (encrypted_value)
    DB-->>API: Confirm insert/update
    API-->>Sheet: Success response
    Sheet->>Sheet: Show success toast
    Sheet->>Desktop: Trigger refresh
    Desktop->>API: Fetch decrypted secrets
    API->>DB: Query secrets for project
    DB-->>API: Return encrypted secrets
    API->>Crypto: Decrypt values
    Crypto-->>API: Return plaintext values
    API-->>Desktop: Return decrypted secrets
    Desktop->>Desktop: Update EnvironmentVariablesList UI
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 3 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/desktop/src/renderer/stores/settings-state.ts (1)
4-7: ⚠️ Potential issue | 🟡 Minor — Stale comment: "workspace" is no longer a dynamic section.
The comment on line 6 still references "project/workspace are dynamic," but `"workspace"` has been removed from the `SettingsSection` type. Update the comment to reflect only `"project"`.

📝 Proposed fix
```diff
 /**
  * Settings sections available in the settings view.
- * General sections are the main categories, project/workspace are dynamic.
+ * General sections are the main categories, project is dynamic.
  */
```
🤖 Fix all issues with AI agents
In @.github/workflows/deploy-preview.yml:
- Around line 678-687: Remove SECRETS_ENCRYPTION_KEY from the vercel deploy
invocations that build static Docs/Marketing sites: locate the run block that
sets VERCEL_URL via the vercel deploy --prebuilt command (the step that
constructs VERCEL_URL) and delete the --env
SECRETS_ENCRYPTION_KEY=$SECRETS_ENCRYPTION_KEY argument there (and the
equivalent argument in the Marketing deploy block). Leave SECRETS_ENCRYPTION_KEY
only on deployments that actually require it (do not remove the top-level
secrets definition), ensuring vercel deploy --prebuilt and VERCEL_URL remain
unchanged otherwise.
- Around line 253-254: The workflow is missing a value for STREAMS_URL (used
later alongside SECRETS_ENCRYPTION_KEY) so add/emit it before the API
build/deploy: construct the URL from the Fly app naming convention (e.g.
https://superset-stream-pr-${{ github.event.pull_request.number }}.fly.dev) and
set STREAMS_URL in the top-level env: block or export it to GITHUB_ENV after the
Fly deploy step that creates the app; ensure the generated value is available to
the job that runs apps/api (apps/api/src/env.ts expects STREAMS_URL as a
required z.string().url()).
In @.github/workflows/deploy-production.yml:
- Around line 504-513: The SECRETS_ENCRYPTION_KEY is being passed into the
production vercel deploy step where Docs/Marketing don't need it; remove the
--env SECRETS_ENCRYPTION_KEY=$SECRETS_ENCRYPTION_KEY entry from the vercel
deploy invocation (the run block containing vercel pull/build/deploy) so the
deploy command no longer injects SECRETS_ENCRYPTION_KEY into the Docs/Marketing
production deployment.
In `@apps/desktop/src/lib/trpc/routers/projects/projects.ts`:
- Around line 1004-1013: The linkToNeon mutation currently updates localDb
without verifying the project exists and allows empty neonProjectId; change the
input schema to neonProjectId: z.string().min(1) and, following the pattern used
by update/close/refreshDefaultBranch, first fetch the project by id (using the
same project lookup logic) and if not found throw a TRPCError with code
'NOT_FOUND'; only then perform the localDb.update(projects).set(...).where(...)
and await the .run() result before returning { success: true } so the procedure
behaves consistently with other id-based mutations.
In
`@apps/desktop/src/renderer/routes/_authenticated/hooks/useNeonProjectLink/useNeonProjectLink.ts`:
- Around line 46-53: The onError handler for linkMutation.mutate is swallowing
errors — modify the onError callback in useNeonProjectLink so it accepts the
error parameter (e.g., (error) => { ... }), keeps the existing cleanup calling
linkingRef.current.delete(local.id), and also surface the failure by logging the
error (console.error or app logger) and notifying the user (use the app's
toast/notification utility or set an error state) so failures aren't silent;
keep the cleanup logic intact and include the error details in the
log/notification for debugging.
In
`@apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/components/EditSecretDialog/EditSecretDialog.tsx`:
- Around line 40-45: The useEffect in EditSecretDialog currently depends on the
whole secret object which can cause in-progress edits to be reset when parent
passes a new secret reference; change the dependency to a stable identifier
(e.g., secret.id or secret.key) so the effect only runs when the actual secret
changes, and keep the existing logic that sets value via
setValue(secret.sensitive ? "" : secret.value) inside the effect that now lists
[open, secret.id] (or [open, secret.key]) instead of [open, secret].
In
`@apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/components/EnvironmentVariablesList/EnvironmentVariablesList.tsx`:
- Around line 72-84: The fetchSecrets callback currently logs errors to console
and only clears loading in finally, so failures are invisible to users and
subsequent refreshes don't show loading; update fetchSecrets to
setIsLoading(true) at the start of the function, catch the error and set a
visible error state (e.g., setFetchError or reuse an existing error state) when
apiTrpcClient.project.secrets.getDecrypted.query fails, and ensure you
clear/setSecrets only on success; display that error state in the
EnvironmentVariablesList UI so users see the failure and use setIsLoading(false)
in finally as you already do.
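The loading/error discipline this prompt asks for can be sketched framework-free. The `setLoading`/`setError`/`setData` callbacks below are illustrative stand-ins for the component's real state setters, not the actual component API:

```typescript
// Sketch: loading shown on every refresh, errors surfaced, data only overwritten on success.
type FetchUi<T> = {
  setLoading(loading: boolean): void;
  setError(message: string | null): void;
  setData(data: T): void;
};

async function fetchSecrets<T>(query: () => Promise<T>, ui: FetchUi<T>): Promise<void> {
  ui.setLoading(true); // show the spinner on every call, not just the first load
  ui.setError(null);
  try {
    ui.setData(await query()); // only set data on success
  } catch (err) {
    // surface the failure instead of only logging to console
    ui.setError(err instanceof Error ? err.message : String(err));
  } finally {
    ui.setLoading(false);
  }
}
```

The same shape maps directly onto React state setters once wired into the component.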
In
`@apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/SecretsSettings.tsx`:
- Around line 185-206: The AddSecretSheet and EditSecretDialog are being passed
an empty string for projectId via project.neonProjectId ?? "" which allows these
components to render (and potentially call APIs) when no cloud project is
linked; update the render guards so both AddSecretSheet and EditSecretDialog
only render when project.neonProjectId is truthy (e.g., replace the current
organizationId checks with organizationId && project.neonProjectId or add an
explicit check) so that projectId is never an empty string, ensuring the
components receive a valid project.neonProjectId before invoking APIs.
- Around line 74-98: The call to linkToNeon.mutate inside
handleCreateCloudProject is fire-and-forget and any error from that mutation is
swallowed; update the code to surface failures by either adding an onError
handler to the linkToNeon mutation definition (attach logging/user-facing error
state in its onError) or await the mutation and wrap it in its own try/catch so
you can log the error and set appropriate UI state (e.g., call
setIsCreatingCloud(false) if needed and show an error toast). Reference
linkToNeon.mutate and handleCreateCloudProject (and keep
apiTrpcClient.project.create.mutate behavior unchanged) when making the change.
In
`@apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/utils/env-file/env-file.ts`:
- Around line 27-32: The current loop that skips to the closing quote using
variables quote, afterEq, lines and index i can run off the end and treat an
unclosed multiline quote as valid; update the logic in the validate function
(and mirror in parseEnvContent) so after the inner while that advances i you
check if i >= lines.length (meaning no closing quote was found) and return a
validation error (or otherwise signal failure) instead of incrementing past the
end and continuing; ensure parseEnvContent uses the same detection to either
throw or return an error for unclosed quotes so values are not silently
truncated.
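The unclosed-quote detection the prompt describes can be sketched as follows. This is a hypothetical parser, not the actual `env-file.ts` code; the function shape and error format are assumptions:

```typescript
type ParseResult =
  | { ok: true; vars: Record<string, string> }
  | { ok: false; error: string };

// Sketch: parse KEY=value lines, supporting quoted multiline values,
// and fail loudly when a multiline quote is never closed.
function parseEnvContent(content: string): ParseResult {
  const vars: Record<string, string> = {};
  const lines = content.split("\n");
  for (let i = 0; i < lines.length; i++) {
    const line = (lines[i] ?? "").trim();
    if (!line || line.startsWith("#")) continue;
    const eq = line.indexOf("=");
    if (eq === -1) return { ok: false, error: `Line ${i + 1}: missing "="` };
    const key = line.slice(0, eq).trim();
    let value = line.slice(eq + 1).trim();
    const quote = value[0];
    if ((quote === '"' || quote === "'") && !(value.length > 1 && value.endsWith(quote))) {
      // Multiline value: consume lines until the closing quote.
      const parts = [value.slice(1)];
      i++;
      while (i < lines.length && !(lines[i] ?? "").trimEnd().endsWith(quote)) {
        parts.push(lines[i] ?? "");
        i++;
      }
      if (i >= lines.length) {
        // No closing quote found: report an error instead of silently truncating.
        return { ok: false, error: `Unclosed ${quote}-quote for "${key}"` };
      }
      const last = (lines[i] ?? "").trimEnd();
      parts.push(last.slice(0, -1));
      value = parts.join("\n");
    } else if (quote === '"' || quote === "'") {
      value = value.slice(1, -1);
    }
    vars[key] = value;
  }
  return { ok: true, vars };
}
```

The key guard is the `i >= lines.length` check after the inner loop: reaching end-of-input while still searching for the closing quote is the "unclosed multiline quote" case the review flags.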
In `@packages/trpc/src/router/project/project.ts`:
- Around line 64-75: The mutation currently calls .set(data) even when data is
empty and returns updated without checking if a row was found; before calling
dbWs.update(projects).set(data) ensure there is at least one updatable field
(e.g., if Object.keys(data).length === 0) and throw a BAD_REQUEST TRPCError like
"No fields to update", and after performing the update check if the returned
updated is undefined and if so throw a NOT_FOUND TRPCError (use the same id and
organizationId context); reference verifyOrgMembership, the local variables
id/organizationId/data, the dbWs.update(projects).set(data)...returning() call,
and the updated result when adding these guards.
- Around line 27-53: The project insert and sandboxImages insert must be atomic
to avoid orphaned projects; wrap both operations in a single transaction using
the DB transaction API (e.g., dbWs.transaction) so both inserts run on the same
transaction context: perform the insert into projects (the existing
dbWs.insert(projects).values(...).returning()) inside the transaction, then
insert into sandboxImages using the transaction handle (use
tx.insert(sandboxImages).values(...)), check the returned project and throw the
same TRPCError if missing, and return the project from the transaction; this
ensures either both rows are committed or neither are.
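To illustrate the all-or-nothing behavior this prompt asks for, here is a toy in-memory transaction. `FakeDb` only mimics the semantics (snapshot plus rollback); the real code would use drizzle's transaction API, and all names here are illustrative:

```typescript
type Row = Record<string, unknown>;

class FakeDb {
  projects: Row[] = [];
  sandboxImages: Row[] = [];

  async transaction<T>(fn: (tx: FakeDb) => Promise<T>): Promise<T> {
    const projectsSnapshot = [...this.projects];
    const imagesSnapshot = [...this.sandboxImages];
    try {
      return await fn(this);
    } catch (err) {
      // Roll back both tables so no orphaned project survives a failed image insert.
      this.projects = projectsSnapshot;
      this.sandboxImages = imagesSnapshot;
      throw err;
    }
  }
}

async function createProjectAtomically(db: FakeDb, project: Row, failImageInsert = false) {
  return db.transaction(async (tx) => {
    tx.projects.push(project);
    if (failImageInsert) throw new Error("sandbox image insert failed");
    tx.sandboxImages.push({ projectId: project.id });
    return project;
  });
}
```

If the second insert throws, the first is rolled back too, which is exactly the orphaned-project scenario the comment wants to rule out.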
In `@packages/trpc/src/router/project/secrets.ts`:
- Around line 97-112: The delete protectedProcedure currently always returns {
success: true } even when no row is deleted; modify the mutation to capture the
result of dbWs.delete(secrets).where(...).returning({id: secrets.id}) (or check
the affected row count from the executed query) and if the returned array is
empty throw a TRPCError with code 'NOT_FOUND' (after verifyOrgMembership).
Ensure you only return { success: true } when a row was actually deleted and add
an import for TRPCError if not present.
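The intended delete behavior can be sketched with an in-memory `Map` standing in for the `secrets` table. `NotFoundError` is a stand-in for `TRPCError({ code: "NOT_FOUND" })`; none of these names come from the actual router:

```typescript
class NotFoundError extends Error {}

type SecretRow = { organizationId: string };

function deleteSecret(
  store: Map<string, SecretRow>,
  id: string,
  organizationId: string,
): { success: true } {
  const row = store.get(id);
  // Zero matching rows (wrong id or wrong org) must surface as NOT_FOUND, not success.
  if (!row || row.organizationId !== organizationId) {
    throw new NotFoundError(`Secret ${id} not found`);
  }
  store.delete(id);
  return { success: true };
}
```

In the real mutation the same effect comes from checking that `.returning({ id: secrets.id })` produced a non-empty array before returning `{ success: true }`.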
- Around line 28-29: Add an explicit check that the provided projectId belongs
to the provided organizationId before performing the mutation or getDecrypted
read: after verifyOrgMembership(ctx.session.user.id, input.organizationId) (in
the secrets create mutation) and at the start of the getDecrypted procedure,
query the projects table (or add a helper like
verifyProjectOrganization(projectId, organizationId)) to confirm that
input.projectId is associated with input.organizationId; if not, throw an
authorization error and halt; this ensures the insert/read cannot be made for a
project in a different org even when the user is a member of the provided
organizationId.
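A minimal sketch of the suggested `verifyProjectOrganization` helper, with a `Map` standing in for the projects-table lookup and `ForbiddenError` for an authorization `TRPCError` (all names illustrative):

```typescript
class ForbiddenError extends Error {}

function verifyProjectOrganization(
  projectOrgs: ReadonlyMap<string, string>, // projectId -> organizationId
  projectId: string,
  organizationId: string,
): void {
  // Unknown project and wrong-org project are rejected identically, so the check
  // does not leak whether a project id exists in another organization.
  if (projectOrgs.get(projectId) !== organizationId) {
    throw new ForbiddenError(
      `Project ${projectId} does not belong to organization ${organizationId}`,
    );
  }
}
```

Calling this after `verifyOrgMembership` closes the gap: membership proves the user belongs to the org, and this check proves the project does too.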
- Around line 47-52: The count check uses db.query.secrets.findMany (assigning
to existing) while the insert uses dbWs, creating a TOCTOU risk; change the read
that enforces MAX_SECRETS_PER_PROJECT to use the same client
(dbWs.query.secrets.findMany or equivalent) so the check and subsequent write
use the same transport, referencing the existing variable name existing, the
constant MAX_SECRETS_PER_PROJECT, and the write code that uses dbWs to keep the
operation atomic from the client-transport perspective (or explicitly document
this as an intentional soft limit if you choose not to change it).
🧹 Nitpick comments (18)
apps/desktop/src/renderer/routes/_authenticated/hooks/useNeonProjectLink/useNeonProjectLink.ts (2)
29-56: `linkMutation` in `useCallback` deps causes unnecessary effect re-fires.
`useMutation` returns a new object reference whenever mutation state changes (e.g., `isPending` flips). Including `linkMutation` in the dependency array means `linkProjects` is recreated on every state transition, re-triggering the `useEffect`. The `linkingRef` guard prevents duplicate mutations, but the churn is wasteful and makes the data flow harder to reason about.

A cleaner approach: store the mutate function in a ref, or extract only `linkMutation.mutate` (which has a stable identity in recent TanStack Query versions) as the dependency.

Suggested fix
```diff
+  const mutateRef = useRef(linkMutation.mutate);
+  mutateRef.current = linkMutation.mutate;
+
   const linkProjects = useCallback(() => {
     if (!localProjects || !cloudProjects) return;
     for (const local of localProjects) {
       if (local.neonProjectId || !local.githubOwner) continue;
       if (linkingRef.current.has(local.id)) continue;
       const repoName = local.mainRepoPath.split("/").pop();
       if (!repoName) continue;
       const match = cloudProjects.find(
         (cloud) =>
           cloud.repoOwner === local.githubOwner &&
           cloud.repoName === repoName,
       );
       if (match) {
         linkingRef.current.add(local.id);
-        linkMutation.mutate(
+        mutateRef.current(
           { id: local.id, neonProjectId: match.id },
           {
             onError: () => {
               linkingRef.current.delete(local.id);
             },
           },
         );
       }
     }
-  }, [localProjects, cloudProjects, linkMutation]);
+  }, [localProjects, cloudProjects]);
```
36-36: Fragile repo name extraction from path.
`local.mainRepoPath.split("/").pop()` works for simple paths but would return an empty string for a trailing-slash path (e.g., `"owner/repo/"`). The guard on line 37 catches the empty-string case, so this isn't a bug, but a `filter(Boolean)` or explicit trim could make intent clearer.

apps/desktop/src/renderer/routes/_authenticated/settings/components/SettingsSidebar/ProjectsSettings.tsx (1)
95-126: Consider: nested `Collapsible` inside `CollapsibleContent` may feel heavy for a single child.

Currently the "Cloud" section nests a `Collapsible` with only one sub-item ("Environment Variables"). This adds visual and interaction overhead for a single link. If more cloud sub-items are planned, this is fine as-is. Otherwise, a flat link would simplify the sidebar.

This is a design observation — no code change needed unless the team prefers simplification.
packages/trpc/src/lib/crypto.ts (2)
7-14: Use the validated `env` object instead of raw `process.env`.

`getKey()` reads `process.env.SECRETS_ENCRYPTION_KEY` directly, bypassing the Zod validation already defined in `../env.ts`. This means the runtime check here (lines 9, 11-12) duplicates what the env schema should guarantee, and a typo or drift between the two won't be caught.

Suggested fix
```diff
+import { env } from "../env";
+
 function getKey(): Buffer {
-  const raw = process.env.SECRETS_ENCRYPTION_KEY;
-  if (!raw) throw new Error("SECRETS_ENCRYPTION_KEY not set");
+  const raw = env.SECRETS_ENCRYPTION_KEY;
   const key = Buffer.from(raw, "base64");
   if (key.length !== 32)
     throw new Error("SECRETS_ENCRYPTION_KEY must be 32 bytes");
   return key;
 }
```
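For context, a self-contained AES-256-GCM secret codec in the style this file implements can be sketched as below. The `iv|tag|ciphertext` wire format and the function bodies are assumptions for illustration, not copied from `crypto.ts`:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const IV_LEN = 12; // GCM-recommended nonce length
const TAG_LEN = 16; // GCM auth tag length

function encryptSecret(plaintext: string, key: Buffer): string {
  const iv = randomBytes(IV_LEN);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Pack iv, auth tag, and ciphertext into one base64 payload (assumed layout).
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("base64");
}

function decryptSecret(payload: string, key: Buffer): string {
  const buf = Buffer.from(payload, "base64");
  const iv = buf.subarray(0, IV_LEN);
  const tag = buf.subarray(IV_LEN, IV_LEN + TAG_LEN);
  const ciphertext = buf.subarray(IV_LEN + TAG_LEN);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  // Buffer.concat avoids the mixed string/Buffer "+" concatenation flagged in this review.
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

Because GCM authenticates, decrypting with the wrong key fails loudly at `final()` rather than returning garbage, which is why the per-row try/catch suggested later in this review matters.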
30-41: Mixed encoding in `decryptSecret` — specify `"utf8"` consistently.

On line 40, `decipher.update(ciphertext)` returns a `Buffer` (no output encoding specified), which is then concatenated with `decipher.final("utf8")` (a string). This relies on implicit `Buffer.toString()` coercion during `+` concatenation. While it works because the default encoding is UTF-8, it's fragile and inconsistent with the explicit `"utf8"` on `final()`.

Suggested fix
```diff
-  return decipher.update(ciphertext) + decipher.final("utf8");
+  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
```

packages/trpc/src/router/project/secrets.ts (1)
132-140: No error handling if `decryptSecret` throws.

If the encryption key rotates or data is corrupted, `decryptSecret` will throw and the entire `getDecrypted` query fails for all secrets. Consider wrapping the decryption in a try/catch per row, returning an error indicator for the affected secret while still serving the rest.

Proposed approach
```diff
 return rows.map((row) => ({
   id: row.id,
   key: row.key,
-  value: row.sensitive ? "" : decryptSecret(row.encryptedValue),
+  value: row.sensitive
+    ? ""
+    : (() => {
+        try {
+          return decryptSecret(row.encryptedValue);
+        } catch (e) {
+          console.error(`[secrets/getDecrypted] Failed to decrypt secret ${row.id}`, e);
+          return "";
+        }
+      })(),
   sensitive: row.sensitive,
   createdAt: row.createdAt,
   updatedAt: row.updatedAt,
   createdBy: row.createdBy ?? null,
 }));
```

packages/db/drizzle/0018_sandbox_start.sql (1)
57-58: Destructive migration: `repositories` table is dropped with data loss.

`DROP TABLE "repositories" CASCADE` permanently removes this table and all its data. Ensure a data migration or backfill plan exists if there's any production data in `repositories` that needs to be preserved or migrated to `projects`.

apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/secrets/page.tsx (1)
23-26: String-based error detection is brittle.
`error.message.includes("not found")` can match unrelated errors. If the server uses `TRPCError` with `code: "NOT_FOUND"`, prefer checking the structured error code instead.

Suggested improvement
```diff
-  } catch (error) {
-    if (error instanceof Error && error.message.includes("not found")) {
-      throw notFound();
-    }
-    throw error;
+  } catch (error) {
+    if (
+      error instanceof Error &&
+      "data" in error &&
+      (error as any).data?.code === "NOT_FOUND"
+    ) {
+      throw notFound();
+    }
+    throw error;
```

Alternatively, check if a
`TRPCClientError` type is available for a type-safe check.

packages/trpc/src/router/workspace/workspace.ts (2)
26-37: `workspace` could be `undefined` if `.returning()` yields no rows.

The destructured `[workspace]` is returned directly without a guard. While unlikely for an `INSERT ... RETURNING`, if it ever happens the caller receives `undefined`. Other routers in this codebase (e.g., organization `create`) have the same pattern, so this may be accepted — but a defensive check would be more robust.

Optional guard
```diff
 const [workspace] = await dbWs
   .insert(workspaces)
   .values({ ... })
   .returning();
+if (!workspace) {
+  throw new TRPCError({
+    code: "INTERNAL_SERVER_ERROR",
+    message: "Failed to create workspace",
+  });
+}
 return workspace;
```
44-54: Delete succeeds silently when workspace doesn't exist.

The mutation returns `{ success: true }` regardless of whether any row was actually deleted. If the caller expects idempotent deletes this is fine; otherwise consider checking the result count or throwing `NOT_FOUND`.

apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/components/EditSecretDialog/EditSecretDialog.tsx (1)
47-68: Consider using tRPC React hooks instead of the raw client for consistency.
`handleSave` uses `apiTrpcClient` directly with manual `isSaving` state. Using `useMutation` from the tRPC React hooks would provide automatic loading/error state and integrate with React Query's cache invalidation, which is the pattern used elsewhere (e.g., `SecretsSettings.tsx` uses hook-based queries).

This is a minor consistency suggestion — the current approach works fine.
packages/trpc/src/router/project/project.ts (1)
77-92: Delete returns success even if no row was deleted.

If the `id + organizationId` combination doesn't match any row, the mutation still returns `{ success: true }`. Consider checking the affected row count or returning the deleted record.

apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/components/EnvironmentVariablesList/components/SecretRow/SecretRow.tsx (2)
48-62: Delete error is logged but not surfaced to the user.

Other mutation flows (add/edit) display `toast.error(...)` on failure, but delete silently catches the error at line 58. Also, `confirm()` is used here while `alert.destructive()` is used in `AddSecretSheet` for similar destructive confirmations — consider aligning the patterns.

🛡️ Proposed fix
```diff
+import { toast } from "@superset/ui/sonner";
+import { alert } from "@superset/ui/atoms/Alert";
 // ...
 const handleDelete = useCallback(async () => {
-  if (!confirm(`Delete environment variable "${secret.key}"?`)) return;
-  setIsDeleting(true);
-  try {
-    await apiTrpcClient.project.secrets.delete.mutate({
-      id: secret.id,
-      organizationId,
-    });
-    onDeleted();
-  } catch (err) {
-    console.error("[secrets/delete] Failed to delete:", err);
-  } finally {
-    setIsDeleting(false);
-  }
+  alert.destructive({
+    title: `Delete "${secret.key}"?`,
+    description: "This environment variable will be permanently removed.",
+    confirmText: "Delete",
+    onConfirm: async () => {
+      setIsDeleting(true);
+      try {
+        await apiTrpcClient.project.secrets.delete.mutate({
+          id: secret.id,
+          organizationId,
+        });
+        onDeleted();
+      } catch (err) {
+        console.error("[secrets/delete] Failed to delete:", err);
+        toast.error("Failed to delete environment variable");
+      } finally {
+        setIsDeleting(false);
+      }
+    },
+  });
 }, [secret.id, secret.key, organizationId, onDeleted]);
```
64-68: `navigator.clipboard.writeText` can reject — add error handling.

While clipboard failures are rare in Electron, wrapping in try/catch avoids an unhandled promise rejection.
🛡️ Proposed fix
```diff
 const handleCopy = useCallback(async () => {
-  await navigator.clipboard.writeText(secret.value);
-  setCopied(true);
-  setTimeout(() => setCopied(false), 1500);
+  try {
+    await navigator.clipboard.writeText(secret.value);
+    setCopied(true);
+    setTimeout(() => setCopied(false), 1500);
+  } catch {
+    console.error("[secrets/copy] Failed to copy to clipboard");
+  }
 }, [secret.value]);
```

apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/components/AddSecretSheet/AddSecretSheet.tsx (2)
148-148: Move `MAX_FILE_SIZE` to module-level.

As per coding guidelines, "Extract hardcoded magic numbers, strings, and enums to named constants at module top instead of leaving them inline in logic."
```diff
+const MAX_FILE_SIZE = 256 * 1024; // 256 KB
+
 export function AddSecretSheet({
```
198-226: Sequential upserts risk partial saves on failure.

If an error occurs on the 3rd of 5 entries, the first two are already persisted but the user sees "Failed to save environment variables" with no indication of which succeeded. This could leave the user confused about the actual state.
Consider either:
- Using a batch/bulk endpoint if available.
- Collecting results and reporting partial success/failure:
```diff
   try {
-    for (const entry of validEntries) {
-      await apiTrpcClient.project.secrets.upsert.mutate({
+    await Promise.all(
+      validEntries.map((entry) =>
+        apiTrpcClient.project.secrets.upsert.mutate({
           projectId,
           organizationId,
           key: entry.key.trim(),
           value: entry.value.trim(),
           sensitive,
-      });
-    }
+        }),
+      ),
+    );
```
`Promise.all` would at least make the operation faster. The partial failure risk remains but is less likely with concurrent execution. A true fix would need a server-side batch endpoint.

packages/db/src/schema/zod.ts (1)
23-27: `workspaceConfigSchema` union lacks a discriminator — consider adding a `type` field.

The union of `localWorkspaceConfigSchema` and `cloudWorkspaceConfigSchema` works because the shapes are distinct, but Zod must try parsing each variant sequentially. If a `type: "local" | "cloud"` discriminator field were added, you could use `z.discriminatedUnion("type", [...])` for more efficient parsing and clearer error messages. This aligns with the `workspaceType` enum already defined in the DB schema.
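For contrast, here is what the discriminated shape buys at the type level: TypeScript narrows on `type` the same way `z.discriminatedUnion` would dispatch at runtime. The field names below are illustrative, not the real schema:

```typescript
// Hypothetical config shapes with an explicit discriminator.
type LocalWorkspaceConfig = { type: "local"; repoPath: string };
type CloudWorkspaceConfig = { type: "cloud"; region: string };
type WorkspaceConfig = LocalWorkspaceConfig | CloudWorkspaceConfig;

function describeWorkspace(cfg: WorkspaceConfig): string {
  // A single tag check picks the variant; no sequential trial parsing needed.
  switch (cfg.type) {
    case "local":
      return `local workspace at ${cfg.repoPath}`;
    case "cloud":
      return `cloud workspace in ${cfg.region}`;
  }
}
```

The same `type` field would double as the dispatch key for `z.discriminatedUnion` on the validation side.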
398-425: Workspaces table stores config as unvalidated jsonb.

The `config` column uses `$type<WorkspaceConfig>()`, which only provides compile-time type hints. Consider documenting that application-layer validation (via `workspaceConfigSchema`) should be applied when reading this data, especially since the config shape may evolve over time and existing rows may not match newer schemas.
```yaml
STREAMS_URL: ${{ env.STREAMS_URL }}
SECRETS_ENCRYPTION_KEY: ${{ secrets.SECRETS_ENCRYPTION_KEY }}
```
STREAMS_URL will be empty — it's never set in the preview workflow.
STREAMS_URL is referenced as ${{ env.STREAMS_URL }} but no step in this workflow defines or exports it to $GITHUB_ENV. The streams preview deploys to Fly.io (Line 145) but its URL is never captured. Since apps/api/src/env.ts validates STREAMS_URL as z.string().url() (required), the API preview deployment will fail env validation at build or runtime.
You likely need to construct the URL from the Fly app name (e.g., https://superset-stream-pr-${PR_NUMBER}.fly.dev) and either add it to the top-level env: block or load it from the streams status artifact.
Suggested fix: add STREAMS_URL to the top-level env block
```diff
 env:
   PR_NUMBER: ${{ github.event.pull_request.number }}
   API_ALIAS: api-pr-${{ github.event.pull_request.number }}-superset.vercel.app
   WEB_ALIAS: web-pr-${{ github.event.pull_request.number }}-superset.vercel.app
   MARKETING_ALIAS: marketing-pr-${{ github.event.pull_request.number }}-superset.vercel.app
   ADMIN_ALIAS: admin-pr-${{ github.event.pull_request.number }}-superset.vercel.app
   DOCS_ALIAS: docs-pr-${{ github.event.pull_request.number }}-superset.vercel.app
   ELECTRIC_URL: https://superset-electric-pr-${{ github.event.pull_request.number }}.fly.dev/v1/shape
+  STREAMS_URL: https://superset-stream-pr-${{ github.event.pull_request.number }}.fly.dev
```

🤖 Prompt for AI Agents
In @.github/workflows/deploy-preview.yml around lines 253-254: the workflow is
missing a value for STREAMS_URL (used later alongside SECRETS_ENCRYPTION_KEY) so
add/emit it before the API build/deploy: construct the URL from the Fly app
naming convention (e.g. https://superset-stream-pr-${{
github.event.pull_request.number }}.fly.dev) and set STREAMS_URL in the
top-level env: block or export it to GITHUB_ENV after the Fly deploy step that
creates the app; ensure the generated value is available to the job that runs
apps/api (apps/api/src/env.ts expects STREAMS_URL as a required
z.string().url()).
```diff
   SECRETS_ENCRYPTION_KEY: ${{ secrets.SECRETS_ENCRYPTION_KEY }}
 run: |
   vercel pull --yes --environment=preview --token=$VERCEL_TOKEN
   vercel build --token=$VERCEL_TOKEN
   VERCEL_URL=$(vercel deploy --prebuilt --token=$VERCEL_TOKEN \
     --env NEXT_PUBLIC_POSTHOG_KEY=$NEXT_PUBLIC_POSTHOG_KEY \
     --env NEXT_PUBLIC_POSTHOG_HOST=$NEXT_PUBLIC_POSTHOG_HOST \
     --env NEXT_PUBLIC_SENTRY_DSN_DOCS=$NEXT_PUBLIC_SENTRY_DSN_DOCS \
-    --env NEXT_PUBLIC_SENTRY_ENVIRONMENT=$NEXT_PUBLIC_SENTRY_ENVIRONMENT)
+    --env NEXT_PUBLIC_SENTRY_ENVIRONMENT=$NEXT_PUBLIC_SENTRY_ENVIRONMENT \
+    --env SECRETS_ENCRYPTION_KEY=$SECRETS_ENCRYPTION_KEY)
```
SECRETS_ENCRYPTION_KEY is unnecessarily propagated to Docs (and Marketing).
The Docs site is a static documentation deployment — it has no server-side secrets management logic. Passing SECRETS_ENCRYPTION_KEY here violates the principle of least privilege. The same applies to the Marketing deployment (Line 483/507).
Consider removing SECRETS_ENCRYPTION_KEY from deployments that don't use it (Docs, Marketing) to reduce the secret's exposure surface.
🤖 Prompt for AI Agents
In @.github/workflows/deploy-preview.yml around lines 678-687: remove
SECRETS_ENCRYPTION_KEY from the vercel deploy invocations that build static
Docs/Marketing sites: locate the run block that sets VERCEL_URL via the vercel
deploy --prebuilt command (the step that constructs VERCEL_URL) and delete the
--env SECRETS_ENCRYPTION_KEY=$SECRETS_ENCRYPTION_KEY argument there (and the
equivalent argument in the Marketing deploy block). Leave SECRETS_ENCRYPTION_KEY
only on deployments that actually require it (do not remove the top-level
secrets definition), ensuring vercel deploy --prebuilt and VERCEL_URL remain
unchanged otherwise.
```diff
   SECRETS_ENCRYPTION_KEY: ${{ secrets.SECRETS_ENCRYPTION_KEY }}
 run: |
   vercel pull --yes --environment=production --token=$VERCEL_TOKEN
   vercel build --prod --token=$VERCEL_TOKEN
   vercel deploy --prod --prebuilt --token=$VERCEL_TOKEN \
     --env NEXT_PUBLIC_POSTHOG_KEY=$NEXT_PUBLIC_POSTHOG_KEY \
     --env NEXT_PUBLIC_POSTHOG_HOST=$NEXT_PUBLIC_POSTHOG_HOST \
     --env NEXT_PUBLIC_SENTRY_DSN_DOCS=$NEXT_PUBLIC_SENTRY_DSN_DOCS \
-    --env NEXT_PUBLIC_SENTRY_ENVIRONMENT=$NEXT_PUBLIC_SENTRY_ENVIRONMENT
+    --env NEXT_PUBLIC_SENTRY_ENVIRONMENT=$NEXT_PUBLIC_SENTRY_ENVIRONMENT \
+    --env SECRETS_ENCRYPTION_KEY=$SECRETS_ENCRYPTION_KEY
```
Same least-privilege concern: SECRETS_ENCRYPTION_KEY on Docs and Marketing in production.
Same issue as flagged in the preview workflow — Docs and Marketing don't need SECRETS_ENCRYPTION_KEY. Remove it from these deployments to minimize secret exposure.
🤖 Prompt for AI Agents
In @.github/workflows/deploy-production.yml around lines 504-513: the
SECRETS_ENCRYPTION_KEY is being passed into the production vercel deploy step
where Docs/Marketing don't need it; remove the --env
SECRETS_ENCRYPTION_KEY=$SECRETS_ENCRYPTION_KEY entry from the vercel deploy
invocation (the run block containing vercel pull/build/deploy) so the deploy
command no longer injects SECRETS_ENCRYPTION_KEY into the Docs/Marketing
production deployment.
```typescript
linkToNeon: publicProcedure
  .input(z.object({ id: z.string(), neonProjectId: z.string() }))
  .mutation(({ input }) => {
    localDb
      .update(projects)
      .set({ neonProjectId: input.neonProjectId })
      .where(eq(projects.id, input.id))
      .run();
    return { success: true };
  }),
```
Missing project existence check — silently succeeds for invalid IDs.
Every other mutation in this router that updates by id (e.g., update, close, refreshDefaultBranch) fetches the project first and throws NOT_FOUND / an error when it doesn't exist. This procedure skips that, so calling linkToNeon with a bogus id returns { success: true } without actually linking anything.
Also, z.string() on neonProjectId permits empty strings — consider adding .min(1).
Proposed fix
```diff
 linkToNeon: publicProcedure
-  .input(z.object({ id: z.string(), neonProjectId: z.string() }))
+  .input(z.object({ id: z.string(), neonProjectId: z.string().min(1) }))
   .mutation(({ input }) => {
+    const project = localDb
+      .select()
+      .from(projects)
+      .where(eq(projects.id, input.id))
+      .get();
+
+    if (!project) {
+      throw new TRPCError({
+        code: "NOT_FOUND",
+        message: `Project ${input.id} not found`,
+      });
+    }
+
     localDb
       .update(projects)
       .set({ neonProjectId: input.neonProjectId })
       .where(eq(projects.id, input.id))
       .run();
     return { success: true };
   }),
```

As per coding guidelines, "Use appropriate TRPCError codes consistently: NOT_FOUND (resource doesn't exist)" and "Follow existing patterns in the codebase and match the codebase style rather than introducing novel patterns".
🤖 Prompt for AI Agents
In `@apps/desktop/src/lib/trpc/routers/projects/projects.ts` around lines 1004 -
1013, The linkToNeon mutation currently updates localDb without verifying the
project exists and allows empty neonProjectId; change the input schema to
neonProjectId: z.string().min(1) and, following the pattern used by
update/close/refreshDefaultBranch, first fetch the project by id (using the same
project lookup logic) and if not found throw a TRPCError with code 'NOT_FOUND';
only then perform the localDb.update(projects).set(...).where(...) and await the
.run() result before returning { success: true } so the procedure behaves
consistently with other id-based mutations.
```ts
linkMutation.mutate(
  { id: local.id, neonProjectId: match.id },
  {
    onError: () => {
      linkingRef.current.delete(local.id);
    },
  },
);
```
Silent error swallowing in onError.
The onError callback removes the project from the linking set but discards the error entirely — no logging, no user feedback. Per coding guidelines, errors should never be swallowed silently.
Suggested fix
```diff
 onError: () => {
+  console.error(
+    `[neon-project-link] Failed to link project ${local.id} to neon project ${match.id}`,
+  );
   linkingRef.current.delete(local.id);
 },
```
🤖 Prompt for AI Agents
In
`@apps/desktop/src/renderer/routes/_authenticated/hooks/useNeonProjectLink/useNeonProjectLink.ts`
around lines 46 - 53, The onError handler for linkMutation.mutate is swallowing
errors — modify the onError callback in useNeonProjectLink so it accepts the
error parameter (e.g., (error) => { ... }), keeps the existing cleanup calling
linkingRef.current.delete(local.id), and also surface the failure by logging the
error (console.error or app logger) and notifying the user (use the app's
toast/notification utility or set an error state) so failures aren't silent;
keep the cleanup logic intact and include the error details in the
log/notification for debugging.
```ts
.mutation(async ({ ctx, input }) => {
  await verifyOrgMembership(ctx.session.user.id, input.organizationId);
  const [project] = await dbWs
    .insert(projects)
    .values({
      organizationId: input.organizationId,
      name: input.name,
      slug: input.slug,
      repoOwner: input.repoOwner,
      repoName: input.repoName,
      repoUrl: input.repoUrl,
      defaultBranch: input.defaultBranch ?? "main",
      githubRepositoryId: input.githubRepositoryId,
    })
    .returning();
  if (!project) {
    throw new TRPCError({
      code: "INTERNAL_SERVER_ERROR",
      message: "Failed to create project",
    });
  }
  await dbWs.insert(sandboxImages).values({
    organizationId: input.organizationId,
    projectId: project.id,
  });
  return project;
}),
```
🛠️ Refactor suggestion | 🟠 Major
Project creation and sandbox image insert are not atomic.
If dbWs.insert(sandboxImages) (line 48) fails after the project is successfully created (line 29-41), the database will contain an orphaned project with no sandbox image entry. Consider wrapping both inserts in a transaction:
🛡️ Proposed fix
```diff
 .mutation(async ({ ctx, input }) => {
   await verifyOrgMembership(ctx.session.user.id, input.organizationId);
-  const [project] = await dbWs
-    .insert(projects)
-    .values({
-      organizationId: input.organizationId,
-      name: input.name,
-      slug: input.slug,
-      repoOwner: input.repoOwner,
-      repoName: input.repoName,
-      repoUrl: input.repoUrl,
-      defaultBranch: input.defaultBranch ?? "main",
-      githubRepositoryId: input.githubRepositoryId,
-    })
-    .returning();
-  if (!project) {
-    throw new TRPCError({
-      code: "INTERNAL_SERVER_ERROR",
-      message: "Failed to create project",
-    });
-  }
-  await dbWs.insert(sandboxImages).values({
-    organizationId: input.organizationId,
-    projectId: project.id,
+  const project = await dbWs.transaction(async (tx) => {
+    const [created] = await tx
+      .insert(projects)
+      .values({
+        organizationId: input.organizationId,
+        name: input.name,
+        slug: input.slug,
+        repoOwner: input.repoOwner,
+        repoName: input.repoName,
+        repoUrl: input.repoUrl,
+        defaultBranch: input.defaultBranch ?? "main",
+        githubRepositoryId: input.githubRepositoryId,
+      })
+      .returning();
+    if (!created) {
+      throw new TRPCError({
+        code: "INTERNAL_SERVER_ERROR",
+        message: "Failed to create project",
+      });
+    }
+    await tx.insert(sandboxImages).values({
+      organizationId: input.organizationId,
+      projectId: created.id,
+    });
+    return created;
   });
   return project;
 }),
```

🤖 Prompt for AI Agents
In `@packages/trpc/src/router/project/project.ts` around lines 27 - 53, The
project insert and sandboxImages insert must be atomic to avoid orphaned
projects; wrap both operations in a single transaction using the DB transaction
API (e.g., dbWs.transaction) so both inserts run on the same transaction
context: perform the insert into projects (the existing
dbWs.insert(projects).values(...).returning()) inside the transaction, then
insert into sandboxImages using the transaction handle (use
tx.insert(sandboxImages).values(...)), check the returned project and throw the
same TRPCError if missing, and return the project from the transaction; this
ensures either both rows are committed or neither are.
```ts
.mutation(async ({ ctx, input }) => {
  await verifyOrgMembership(ctx.session.user.id, input.organizationId);
  const { id, organizationId, ...data } = input;
  const [updated] = await dbWs
    .update(projects)
    .set(data)
    .where(
      and(eq(projects.id, id), eq(projects.organizationId, organizationId)),
    )
    .returning();
  return updated;
}),
```
Update with no fields performs an empty SET and doesn't check for NOT_FOUND.
If the caller sends only id and organizationId (both optional fields omitted), data is {} and .set({}) may produce invalid SQL or a no-op. Additionally, if no row matches the WHERE clause, updated is undefined but is returned without error.
🛡️ Proposed fix
```diff
 .mutation(async ({ ctx, input }) => {
   await verifyOrgMembership(ctx.session.user.id, input.organizationId);
   const { id, organizationId, ...data } = input;
+  if (Object.keys(data).length === 0) {
+    throw new TRPCError({ code: "BAD_REQUEST", message: "No fields to update" });
+  }
   const [updated] = await dbWs
     .update(projects)
     .set(data)
     .where(
       and(eq(projects.id, id), eq(projects.organizationId, organizationId)),
     )
     .returning();
+  if (!updated) {
+    throw new TRPCError({ code: "NOT_FOUND", message: "Project not found" });
+  }
   return updated;
 }),
```
🤖 Prompt for AI Agents
In `@packages/trpc/src/router/project/project.ts` around lines 64 - 75, The
mutation currently calls .set(data) even when data is empty and returns updated
without checking if a row was found; before calling
dbWs.update(projects).set(data) ensure there is at least one updatable field
(e.g., if Object.keys(data).length === 0) and throw a BAD_REQUEST TRPCError like
"No fields to update", and after performing the update check if the returned
updated is undefined and if so throw a NOT_FOUND TRPCError (use the same id and
organizationId context); reference verifyOrgMembership, the local variables
id/organizationId/data, the dbWs.update(projects).set(data)...returning() call,
and the updated result when adding these guards.
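The empty-payload guard reduces to a few lines; a minimal sketch with a plain `Error` standing in for `TRPCError` (the helper name is illustrative, not from the PR):

```typescript
// Throws when nothing remains to update after stripping identifier fields,
// mirroring the BAD_REQUEST guard suggested above.
function assertNonEmptyUpdate<T extends object>(data: T): T {
  if (Object.keys(data).length === 0) {
    throw new Error("BAD_REQUEST: No fields to update");
  }
  return data;
}

// Typical shape: destructure id/organizationId out of the input, then guard
// the remaining fields before handing them to .set(data).
console.log(assertNonEmptyUpdate({ name: "renamed" })); // { name: "renamed" }
```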
```ts
.mutation(async ({ ctx, input }) => {
  await verifyOrgMembership(ctx.session.user.id, input.organizationId);
```
Missing project-organization ownership validation — potential authorization bypass.
verifyOrgMembership confirms the user belongs to the organization, but there's no check that input.projectId actually belongs to input.organizationId. The secrets table has separate FKs for organization_id and project_id without a composite constraint tying them together. A user who is a member of Org B could pass a projectId belonging to Org A with organizationId of Org B, and the insert would succeed — writing secrets scoped to a project they don't own.
The same gap exists in getDecrypted (Line 121). The delete procedure is safe because it filters on secrets.organizationId directly.
Add a check that the project belongs to the given organization before proceeding:
Proposed fix
```diff
 .mutation(async ({ ctx, input }) => {
   await verifyOrgMembership(ctx.session.user.id, input.organizationId);
+
+  const project = await db.query.projects.findFirst({
+    where: and(
+      eq(projects.id, input.projectId),
+      eq(projects.organizationId, input.organizationId),
+    ),
+  });
+  if (!project) {
+    throw new TRPCError({
+      code: "NOT_FOUND",
+      message: "Project not found in organization",
+    });
+  }
```

Apply the same guard to `getDecrypted`.
🤖 Prompt for AI Agents
In `@packages/trpc/src/router/project/secrets.ts` around lines 28 - 29, Add an
explicit check that the provided projectId belongs to the provided
organizationId before performing the mutation or getDecrypted read: after
verifyOrgMembership(ctx.session.user.id, input.organizationId) (in the secrets
create mutation) and at the start of the getDecrypted procedure, query the
projects table (or add a helper like verifyProjectOrganization(projectId,
organizationId)) to confirm that input.projectId is associated with
input.organizationId; if not, throw an authorization error and halt; this
ensures the insert/read cannot be made for a project in a different org even
when the user is a member of the provided organizationId.
```ts
const existing = await db.query.secrets.findMany({
  where: and(
    eq(secrets.projectId, input.projectId),
    eq(secrets.organizationId, input.organizationId),
  ),
});
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's read the secrets.ts file to see the actual code at lines 47-52 and around line 76
head -n 80 packages/trpc/src/router/project/secrets.ts | tail -n +40
```

Repository: superset-sh/superset
Length of output: 1169

🏁 Script executed:

```shell
# Search for db and dbWs definitions/exports
fd --type f --extension ts --extension tsx | xargs rg "export const db(Ws)?" -A 2 | head -n 50
```

Repository: superset-sh/superset
Length of output: 373

🏁 Script executed:

```shell
# Look at the entire secrets.ts file structure (check file size first)
wc -l packages/trpc/src/router/project/secrets.ts
```

Repository: superset-sh/superset
Length of output: 111

🏁 Script executed:

```shell
# Look at the client.ts file to understand what sql is
cat -n packages/db/src/client.ts | head -n 30
```

Repository: superset-sh/superset
Length of output: 779

🏁 Script executed:

```shell
# Search for imports and context
rg "drizzle|drizzleWs" packages/db/src/client.ts -B 5 -A 5
```

Repository: superset-sh/superset
Length of output: 618

🏁 Script executed:

```shell
# Check how MAX_SECRETS_PER_PROJECT is used across the file
rg "MAX_SECRETS_PER_PROJECT" packages/trpc/src/router/project/secrets.ts -B 2 -A 2
```

Repository: superset-sh/superset
Length of output: 473

🏁 Script executed:

```shell
# Search for comments or configuration indicating read-replica setup
rg "read.replica|replica|read-only" packages/db/src/ -i
```

Repository: superset-sh/superset
Length of output: 46

🏁 Script executed:

```shell
# Check how db vs dbWs are used throughout the codebase to understand the pattern
rg "await db\." packages/trpc/src/ | head -20
```

Repository: superset-sh/superset
Length of output: 2195

🏁 Script executed:

```shell
# Check how dbWs is used
rg "await dbWs\." packages/trpc/src/ | head -20
```

Repository: superset-sh/superset
Length of output: 437

🏁 Script executed:

```shell
# Look for any documentation or comments about Neon setup
find . -name "*.md" -o -name "*.txt" | xargs rg "neon|DATABASE_URL" | head -20
```

Repository: superset-sh/superset
Length of output: 1412
Use consistent database client for the count check and write operation.
Line 47 checks existing.length >= MAX_SECRETS_PER_PROJECT using db, while line 76 writes via dbWs. Although both connect to the same database URL, using different database clients for a check-then-write sequence creates a TOCTOU window where another process could insert secrets between the check and the insert, potentially bypassing the limit. Use dbWs for the count check as well, or document if this is intentionally treated as a soft limit. (Note: both db and dbWs can perform reads and writes; they differ in transport mechanism—HTTP vs WebSocket—rather than representing a read-replica/primary architecture.)
🤖 Prompt for AI Agents
In `@packages/trpc/src/router/project/secrets.ts` around lines 47 - 52, The count
check uses db.query.secrets.findMany (assigning to existing) while the insert
uses dbWs, creating a TOCTOU risk; change the read that enforces
MAX_SECRETS_PER_PROJECT to use the same client (dbWs.query.secrets.findMany or
equivalent) so the check and subsequent write use the same transport,
referencing the existing variable name existing, the constant
MAX_SECRETS_PER_PROJECT, and the write code that uses dbWs to keep the operation
atomic from the client-transport perspective (or explicitly document this as an
intentional soft limit if you choose not to change it).
```ts
delete: protectedProcedure
  .input(
    z.object({ id: z.string().uuid(), organizationId: z.string().uuid() }),
  )
  .mutation(async ({ ctx, input }) => {
    await verifyOrgMembership(ctx.session.user.id, input.organizationId);
    await dbWs
      .delete(secrets)
      .where(
        and(
          eq(secrets.id, input.id),
          eq(secrets.organizationId, input.organizationId),
        ),
      );
    return { success: true };
  }),
```
delete silently succeeds when the target secret doesn't exist.
If the id is invalid or already deleted, zero rows are affected but the procedure returns { success: true }. Consider checking the affected row count or using .returning() to confirm deletion, then throwing NOT_FOUND when nothing was deleted.
🤖 Prompt for AI Agents
In `@packages/trpc/src/router/project/secrets.ts` around lines 97 - 112, The
delete protectedProcedure currently always returns { success: true } even when
no row is deleted; modify the mutation to capture the result of
dbWs.delete(secrets).where(...).returning({id: secrets.id}) (or check the
affected row count from the executed query) and if the returned array is empty
throw a TRPCError with code 'NOT_FOUND' (after verifyOrgMembership). Ensure you
only return { success: true } when a row was actually deleted and add an import
for TRPCError if not present.
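The zero-rows check generalizes to any `.returning()` result; a minimal sketch with a plain `Error` standing in for `TRPCError` (the helper name is illustrative):

```typescript
// Throws when a delete/update touched no rows, so a no-op can't masquerade
// as success. `rows` is whatever `.returning()` yielded.
function assertRowsAffected<T>(
  rows: T[],
  message = "NOT_FOUND: nothing deleted",
): T[] {
  if (rows.length === 0) {
    throw new Error(message);
  }
  return rows;
}

// Simulated .returning() result for a successful delete:
console.log(assertRowsAffected([{ id: "s1" }]).length); // 1
```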
beeef7e to 5a93802 (Compare)
5a93802 to 4459711 (Compare)
🧹 Preview Cleanup Complete

The following preview resources have been cleaned up:

Thank you for your contribution! 🎉
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/desktop/src/renderer/stores/settings-state.ts (1)
4-7: ⚠️ Potential issue | 🟡 Minor — Stale comment: still references "workspace".

The doc comment says "project/workspace are dynamic" but workspace has been removed from the union type.

Suggested fix

```diff
 /**
  * Settings sections available in the settings view.
- * General sections are the main categories, project/workspace are dynamic.
+ * General sections are the main categories, project is dynamic.
  */
```
🤖 Fix all issues with AI agents
In `@apps/api/src/env.ts`:
- Line 44: The SECRETS_ENCRYPTION_KEY zod validation is too lax
(z.string().min(1)) even though the key is base64 and must decode to exactly 32
bytes; update the validation for SECRETS_ENCRYPTION_KEY to require the correct
base64 length by using z.string().min(44) (and consider adding a max(44) or a
custom refinement if you want to enforce exact length), and make the identical
change to the other env module that also defines SECRETS_ENCRYPTION_KEY so both
places validate the base64 32-byte key consistently.
In `@apps/desktop/src/renderer/routes/_authenticated/hooks/useNeonProjectLink/useNeonProjectLink.ts`:
- Around line 36-37: The path split logic in useNeonProjectLink (where you
compute repoName via local.mainRepoPath.split("/").pop()) fails on Windows
because it only splits on forward slashes; change the extraction to handle both
separators by splitting on both "/" and "\" (or use a cross-platform approach
like path.basename) when deriving repoName from local.mainRepoPath, update the
code that checks repoName accordingly, and ensure any places referencing
repoName still behave the same after this normalization.
- Around line 29-60: The effect re-runs unnecessarily because linkMutation (from
useMutation) is not stable; change linkProjects to depend on the stable mutate
function instead of the whole linkMutation object: extract the mutate function
(e.g., const { mutate: linkMutate } = linkMutation) and use linkMutate inside
linkProjects (and in its mutate call), then update useCallback and useEffect
dependency arrays to include linkMutate (and localProjects, cloudProjects)
rather than linkMutation; keep linkingRef as-is to avoid duplicate requests.
In `@apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/components/AddSecretSheet/AddSecretSheet.tsx`:
- Around line 198-226: handleSave currently upserts each validEntries item
sequentially so a mid-loop failure leaves earlier secrets saved and later ones
not; change handleSave to run all upserts in parallel with Promise.allSettled
against apiTrpcClient.project.secrets.upsert.mutate for each entry (using
entry.key.trim()/entry.value.trim() and sensitive/projectId/organizationId),
then inspect results to separate successes from failures, show a toast that
summarizes partial success (e.g., N added, M failed) and only call onSaved() and
onOpenChange(false) if all upserts succeeded; if some failed, keep the sheet
open and optionally remove the successfully saved entries from
entries/validEntries so retry only attempts failures.
In `@apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/components/EnvironmentVariablesList/components/SecretRow/SecretRow.tsx`:
- Around line 64-68: The handleCopy callback currently calls
navigator.clipboard.writeText(secret.value) without error handling; wrap the
async clipboard call in a try/catch inside handleCopy (the function defined in
SecretRow.tsx) to catch rejections and avoid unhandled promise rejections, and
on error optionally log or silently handle it (e.g.,
processLogger/console.error) while still setting the copied state only on
success; ensure the dependency on secret.value remains unchanged.
- Around line 48-62: The catch block in handleDelete currently only
console.errors the failure from apiTrpcClient.project.secrets.delete.mutate and
leaves the user without feedback; add a user-visible error by calling
toast.error(...) inside the catch (including context like the secret.key and the
caught error message), and import toast from "@superset/ui/sonner" at the top of
the file; keep the existing setIsDeleting(false) in the finally block and do not
change onDeleted behavior.
In `@packages/trpc/src/router/project/project.ts`:
- Around line 77-92: The delete mutation (the protectedProcedure block that
calls verifyOrgAdmin and dbWs.delete(projects).where(...)) currently always
returns { success: true } even if no row was deleted; update it to check the
deletion result (e.g., use dbWs.delete(projects).where(...).returning() or
inspect the result count) and only return success when a row was actually
removed—otherwise return a failure (or throw a 404 / not-found error) so callers
can distinguish “no-op” from a real deletion.
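The `Promise.allSettled` approach suggested for `AddSecretSheet`'s `handleSave` above can be sketched generically; the types and the fake upsert below are illustrative, not the app's real API:

```typescript
type SecretEntry = { key: string; value: string };

// Runs all upserts in parallel, then partitions entries into saved/failed so
// the caller can report "N added, M failed" and retry only the failures.
async function saveAll(
  entries: SecretEntry[],
  upsert: (entry: SecretEntry) => Promise<void>,
): Promise<{ saved: SecretEntry[]; failed: SecretEntry[] }> {
  const results = await Promise.allSettled(entries.map((entry) => upsert(entry)));
  const saved: SecretEntry[] = [];
  const failed: SecretEntry[] = [];
  results.forEach((result, i) => {
    const entry = entries[i];
    if (!entry) return;
    (result.status === "fulfilled" ? saved : failed).push(entry);
  });
  return { saved, failed };
}
```

A caller would then close the sheet only when `failed.length === 0`, keeping the failed entries editable otherwise.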
🧹 Nitpick comments (8)
apps/desktop/src/renderer/routes/_authenticated/settings/utils/settings-search/settings-search.ts (1)
649-656: Consider adding `"project"` to the keywords for consistency.

Every other item in the "project" section includes `"project"` as a keyword (e.g., lines 584, 591, 608, 629). Without it, a user searching "project" won't see the Environment Variables entry in results.

Suggested fix

```diff
 keywords: [
+  "project",
   "environment",
   "variables",
   "secrets",
   "env",
   "cloud",
   "sandbox",
 ],
```

packages/trpc/src/lib/secrets-validation.ts (1)
21-35: Consider checking length before running the regex.

Currently the regex runs on the full input before the length guard. Swapping the order avoids running a regex match against an arbitrarily long string and gives users the more actionable "too long" error first.

♻️ Suggested reorder

```diff
 export function validateSecretKey(
   key: string,
 ): { valid: true } | { valid: false; error: string } {
   const normalized = key.toUpperCase();
+  if (key.length > MAX_KEY_LENGTH)
+    return {
+      valid: false,
+      error: `Key must be <= ${MAX_KEY_LENGTH} characters`,
+    };
   if (!KEY_PATTERN.test(key))
     return { valid: false, error: "Key must match [A-Za-z_][A-Za-z0-9_]*" };
-  if (key.length > MAX_KEY_LENGTH)
-    return {
-      valid: false,
-      error: `Key must be <= ${MAX_KEY_LENGTH} characters`,
-    };
   if (RESERVED_KEYS.has(normalized))
     return { valid: false, error: `${normalized} is a reserved key` };
   return { valid: true };
 }
```

apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/general/page.tsx (1)
12-20: Manually-constructed query keys may drift from tRPC-generated keys.

The query keys on lines 12-20 are hand-crafted to match what `@trpc/react-query` generates internally. If tRPC changes its key format across versions, these will silently stop matching and the loader won't populate the cache for the component's `useQuery` hooks.

Consider adding a comment noting this coupling, or extracting the key construction into a shared helper.
apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/SecretsSettings.tsx (1)
100-102: `refreshKey` remount causes full loading flash on every save.

Incrementing `refreshKey` forces a complete unmount/remount of `EnvironmentVariablesList`, which resets its internal state (including `isLoading = true`). This means the user sees the "Loading..." placeholder every time they add or edit a secret.

Consider passing a `refetch` callback or lifting the fetch logic up so the list can refresh in-place without a full remount.

Also applies to: 72-72, 117-124
apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/components/EnvironmentVariablesList/components/SecretRow/SecretRow.tsx (1)
48-49: `confirm()` is a native browser dialog — consider using the custom `alert` pattern.

The rest of the UI uses `alert.destructive(...)` from `@superset/ui/atoms/Alert` (as seen in `AddSecretSheet.tsx` line 76). Using `window.confirm()` here is inconsistent and blocks the main thread.

apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/components/AddSecretSheet/AddSecretSheet.tsx (2)
95-105: `updateEntry` value parameter typed `string | boolean` but `SecretEntry` fields are all `string`.

The `boolean` in the union is unnecessary and could mask type errors at call sites. Narrow to `string`.

♻️ Proposed fix

```diff
 const updateEntry = (
   index: number,
   field: keyof SecretEntry,
-  value: string | boolean,
+  value: string,
 ) => {
```
32-35: Module-level mutable counter for entry IDs.
`entryIdCounter` is module-level mutable state that never resets and is shared across all `AddSecretSheet` instances. This works because IDs only need uniqueness for React keys, but it's unconventional. If you ever need deterministic IDs (e.g., for testing), consider `crypto.randomUUID()` or `useId()` instead.

apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/components/EnvironmentVariablesList/EnvironmentVariablesList.tsx (1)
61-163: Data fetching is manual — consider using tRPC's `useQuery` or React Query directly.

`EnvironmentVariablesList` manages its own `secrets` state, `isLoading`, and manual `fetchSecrets` with `useEffect`. This reimplements what React Query (via the tRPC wrapper) provides out of the box — automatic loading/error states, refetching, cache invalidation, and stale-while-revalidate.

Using `electronTrpc.project.secrets.getDecrypted.useQuery(...)` (or the API tRPC equivalent if this must go through the cloud API) would eliminate the manual state management and the multiple issues identified around loading/error handling.
```ts
STRIPE_PRO_YEARLY_PRICE_ID: z.string(),
SLACK_BILLING_WEBHOOK_URL: z.string().url(),
STREAMS_URL: z.string().url(),
SECRETS_ENCRYPTION_KEY: z.string().min(1),
```
🧩 Analysis chain
🏁 Script executed:

```shell
find . -name "crypto.ts" -o -name "crypto.js" | head -20
```

Repository: superset-sh/superset
Length of output: 97

🏁 Script executed:

```shell
rg "SECRETS_ENCRYPTION_KEY" -A 3 -B 3
```

Repository: superset-sh/superset
Length of output: 1498

🏁 Script executed:

```shell
rg "AES.*256.*GCM|createCipheriv|createDecipheriv" -A 2 -B 2 --type ts
```

Repository: superset-sh/superset
Length of output: 2732

🏁 Script executed:

```shell
python3 <<'EOF'
import base64
# 32 bytes as base64
test_bytes = b'x' * 32
encoded = base64.b64encode(test_bytes).decode('ascii')
print(f"32 bytes in base64: {len(encoded)} characters")
print(f"Example: {encoded}")
EOF
```

Repository: superset-sh/superset
Length of output: 151
SECRETS_ENCRYPTION_KEY validation is too permissive and suggested fix has incorrect length.
Since the key is base64-encoded and must decode to exactly 32 bytes, the minimum length should be 44 characters (32 bytes base64-encoded), not 32. A string like "abc" would currently pass min(1) validation but fail at runtime.
Additionally, the same issue exists in packages/trpc/src/env.ts.
Suggested fix

```diff
-SECRETS_ENCRYPTION_KEY: z.string().min(1),
+SECRETS_ENCRYPTION_KEY: z.string().min(44),
```

Apply the same fix to both `apps/api/src/env.ts` and `packages/trpc/src/env.ts`.
🤖 Prompt for AI Agents
In `@apps/api/src/env.ts` at line 44, The SECRETS_ENCRYPTION_KEY zod validation is
too lax (z.string().min(1)) even though the key is base64 and must decode to
exactly 32 bytes; update the validation for SECRETS_ENCRYPTION_KEY to require
the correct base64 length by using z.string().min(44) (and consider adding a
max(44) or a custom refinement if you want to enforce exact length), and make
the identical change to the other env module that also defines
SECRETS_ENCRYPTION_KEY so both places validate the base64 32-byte key
consistently.
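The 44-character figure follows from base64 arithmetic: 32 bytes pad out to ceil(32/3) = 11 four-character groups, i.e. 44 characters. A runnable sketch of an exact check (the helper name is illustrative — the real validation lives in the zod schema):

```typescript
// Accepts only base64 strings that decode to exactly 32 bytes (AES-256 key size).
// Base64 of 32 bytes is always 44 characters, so the cheap length check runs first.
function isValidEncryptionKey(key: string): boolean {
  if (key.length !== 44) return false;
  // Buffer.from silently drops non-base64 characters, so re-check the decoded size.
  return Buffer.from(key, "base64").length === 32;
}

// A well-formed key, like the output of `openssl rand -base64 32`:
const goodKey = Buffer.alloc(32, 7).toString("base64");
console.log(goodKey.length); // 44
```

A zod equivalent could layer `.refine((k) => Buffer.from(k, "base64").length === 32)` on top of `.min(44)` to enforce the exact decoded size rather than just the string length.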
```ts
const linkProjects = useCallback(() => {
  if (!localProjects || !cloudProjects) return;

  for (const local of localProjects) {
    if (local.neonProjectId || !local.githubOwner) continue;
    if (linkingRef.current.has(local.id)) continue;

    const repoName = local.mainRepoPath.split("/").pop();
    if (!repoName) continue;

    const match = cloudProjects.find(
      (cloud) =>
        cloud.repoOwner === local.githubOwner && cloud.repoName === repoName,
    );

    if (match) {
      linkingRef.current.add(local.id);
      linkMutation.mutate(
        { id: local.id, neonProjectId: match.id },
        {
          onError: () => {
            linkingRef.current.delete(local.id);
          },
        },
      );
    }
  }
}, [localProjects, cloudProjects, linkMutation]);

useEffect(() => {
  linkProjects();
}, [linkProjects]);
```
linkMutation in dependency array causes unnecessary re-execution of the effect.
The linkMutation object returned by useMutation is not referentially stable — it gets a new identity whenever mutation state changes (e.g., isLoading, isSuccess). Including it in the useCallback dependency array means linkProjects is recreated on every state change, which re-triggers the useEffect. While linkingRef prevents duplicate mutations for the same project, this still causes redundant iteration over all projects on every mutation state transition.
Use only the stable mutate function reference instead:
Proposed fix
```diff
-const linkProjects = useCallback(() => {
+const mutate = linkMutation.mutate;
+
+const linkProjects = useCallback(() => {
   if (!localProjects || !cloudProjects) return;
   for (const local of localProjects) {
     if (local.neonProjectId || !local.githubOwner) continue;
     if (linkingRef.current.has(local.id)) continue;
     const repoName = local.mainRepoPath.split("/").pop();
     if (!repoName) continue;
     const match = cloudProjects.find(
       (cloud) =>
         cloud.repoOwner === local.githubOwner && cloud.repoName === repoName,
     );
     if (match) {
       linkingRef.current.add(local.id);
-      linkMutation.mutate(
+      mutate(
         { id: local.id, neonProjectId: match.id },
         {
           onError: () => {
             linkingRef.current.delete(local.id);
           },
         },
       );
     }
   }
-}, [localProjects, cloudProjects, linkMutation]);
+}, [localProjects, cloudProjects, mutate]);
```

🤖 Prompt for AI Agents
+ }, [localProjects, cloudProjects, mutate]);

🤖 Prompt for AI Agents
In
`@apps/desktop/src/renderer/routes/_authenticated/hooks/useNeonProjectLink/useNeonProjectLink.ts`
around lines 29 - 60, The effect re-runs unnecessarily because linkMutation
(from useMutation) is not stable; change linkProjects to depend on the stable
mutate function instead of the whole linkMutation object: extract the mutate
function (e.g., const { mutate: linkMutate } = linkMutation) and use linkMutate
inside linkProjects (and in its mutate call), then update useCallback and
useEffect dependency arrays to include linkMutate (and localProjects,
cloudProjects) rather than linkMutation; keep linkingRef as-is to avoid
duplicate requests.
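The identity issue above can be demonstrated outside React with a toy model (not TanStack Query's real implementation, and all names here are illustrative): the wrapper object gets a fresh identity on every state transition, while the underlying function keeps a stable reference — which is why depending on `mutate` alone avoids the churn.

```typescript
// Toy model of a mutation result object: each state change yields a new
// wrapper object, but `mutate` keeps its identity across "renders".
function createMutationModel() {
  const mutate = (vars: unknown) => vars; // stable function reference
  let state = { isPending: false };
  return {
    setPending(p: boolean) {
      state = { isPending: p }; // simulates a mutation state transition
    },
    snapshot() {
      return { ...state, mutate }; // fresh object identity on every call
    },
  };
}
```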
const repoName = local.mainRepoPath.split("/").pop();
if (!repoName) continue;
Path splitting breaks on Windows — backslash separators are not handled.
mainRepoPath on Windows will use backslashes (e.g., C:\Users\dev\my-repo). Splitting on "/" alone will return the entire path as a single element, so repoName will be the full path, never matching a cloud project.
Proposed fix
- const repoName = local.mainRepoPath.split("/").pop();
+ const repoName = local.mainRepoPath.split(/[\\/]/).pop();
🤖 Prompt for AI Agents
In
`@apps/desktop/src/renderer/routes/_authenticated/hooks/useNeonProjectLink/useNeonProjectLink.ts`
around lines 36 - 37, The path split logic in useNeonProjectLink (where you
compute repoName via local.mainRepoPath.split("/").pop()) fails on Windows
because it only splits on forward slashes; change the extraction to handle both
separators by splitting on both "/" and "\" (or use a cross-platform approach
like path.basename) when deriving repoName from local.mainRepoPath, update the
code that checks repoName accordingly, and ensure any places referencing
repoName still behave the same after this normalization.
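A standalone sketch of the cross-platform extraction (the helper name is hypothetical; `filter(Boolean)` additionally tolerates trailing separators, which the one-line regex fix alone does not):

```typescript
// Hypothetical helper: derive the repo name from a local path on any OS.
// Splitting on both "/" and "\" handles POSIX and Windows paths; filtering
// empty segments tolerates trailing separators.
function repoNameFromPath(repoPath: string): string | undefined {
  return repoPath.split(/[\\/]/).filter(Boolean).pop();
}
```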
const handleSave = async () => {
  const validEntries = entries.filter((e) => e.key.trim() && e.value.trim());
  if (validEntries.length === 0) return;

  setIsSaving(true);
  try {
    for (const entry of validEntries) {
      await apiTrpcClient.project.secrets.upsert.mutate({
        projectId,
        organizationId,
        key: entry.key.trim(),
        value: entry.value.trim(),
        sensitive,
      });
    }
    toast.success(
      validEntries.length === 1
        ? `Added ${validEntries[0].key.trim()}`
        : `Added ${validEntries.length} environment variables`,
    );
    onSaved();
    onOpenChange(false);
  } catch (err) {
    console.error("[secrets/upsert] Failed to save:", err);
    toast.error("Failed to save environment variables");
  } finally {
    setIsSaving(false);
  }
};
Partial save on mid-sequence failure — some secrets saved, others lost.
The sequential for loop (lines 204-212) calls upsert one at a time. If the 3rd of 5 entries fails, entries 1-2 are saved, 3-5 are not, and the sheet shows an error toast. However, the sheet closes only on full success (line 219), so the user can retry — and since it's an upsert, re-saving already-saved entries is idempotent. Still, the UX could be confusing: the user retries with all 5 entries visible, but 2 were already committed.
Consider filtering out already-saved entries on retry, or using Promise.allSettled to attempt all and report partial results.
🤖 Prompt for AI Agents
In
`@apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/components/AddSecretSheet/AddSecretSheet.tsx`
around lines 198 - 226, handleSave currently upserts each validEntries item
sequentially so a mid-loop failure leaves earlier secrets saved and later ones
not; change handleSave to run all upserts in parallel with Promise.allSettled
against apiTrpcClient.project.secrets.upsert.mutate for each entry (using
entry.key.trim()/entry.value.trim() and sensitive/projectId/organizationId),
then inspect results to separate successes from failures, show a toast that
summarizes partial success (e.g., N added, M failed) and only call onSaved() and
onOpenChange(false) if all upserts succeeded; if some failed, keep the sheet
open and optionally remove the successfully saved entries from
entries/validEntries so retry only attempts failures.
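The `Promise.allSettled` partitioning described above can be sketched as follows, with a generic `upsert` callback standing in for `apiTrpcClient.project.secrets.upsert.mutate`:

```typescript
type Entry = { key: string; value: string };

// Attempt every upsert in parallel; report which keys saved and which failed
// so the caller can keep the sheet open and retry only the failures.
async function saveAllEntries(
  entries: Entry[],
  upsert: (entry: Entry) => Promise<void>,
): Promise<{ saved: string[]; failed: string[] }> {
  const results = await Promise.allSettled(entries.map((entry) => upsert(entry)));
  const saved: string[] = [];
  const failed: string[] = [];
  results.forEach((result, i) => {
    (result.status === "fulfilled" ? saved : failed).push(entries[i].key);
  });
  return { saved, failed };
}
```

The caller would then invoke `onSaved()` and close the sheet only when `failed.length === 0`, and otherwise prune the `saved` keys from the form state before retrying.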
const handleDelete = useCallback(async () => {
  if (!confirm(`Delete environment variable "${secret.key}"?`)) return;
  setIsDeleting(true);
  try {
    await apiTrpcClient.project.secrets.delete.mutate({
      id: secret.id,
      organizationId,
    });
    onDeleted();
  } catch (err) {
    console.error("[secrets/delete] Failed to delete:", err);
  } finally {
    setIsDeleting(false);
  }
}, [secret.id, secret.key, organizationId, onDeleted]);
Delete failure is logged but not surfaced to the user.
When apiTrpcClient.project.secrets.delete.mutate fails, the error is logged to the console (line 58) but the user sees no feedback — the row just stops showing the deleting state. Add a toast.error(...) to inform the user.
🛡️ Proposed fix
} catch (err) {
console.error("[secrets/delete] Failed to delete:", err);
+ toast.error(`Failed to delete "${secret.key}"`);
 } finally {

(Add `import { toast } from "@superset/ui/sonner";` at the top.)
As per coding guidelines, "Never swallow errors silently; at minimum log errors with context before rethrowing or handling them explicitly."
🤖 Prompt for AI Agents
In
`@apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/components/EnvironmentVariablesList/components/SecretRow/SecretRow.tsx`
around lines 48 - 62, The catch block in handleDelete currently only
console.errors the failure from apiTrpcClient.project.secrets.delete.mutate and
leaves the user without feedback; add a user-visible error by calling
toast.error(...) inside the catch (including context like the secret.key and the
caught error message), and import toast from "@superset/ui/sonner" at the top of
the file; keep the existing setIsDeleting(false) in the finally block and do not
change onDeleted behavior.
const handleCopy = useCallback(async () => {
  await navigator.clipboard.writeText(secret.value);
  setCopied(true);
  setTimeout(() => setCopied(false), 1500);
}, [secret.value]);
navigator.clipboard.writeText can reject — no error handling.
The clipboard API can fail (e.g., in certain Electron contexts or when the document isn't focused). Wrap in try/catch to avoid an unhandled promise rejection.
🛡️ Proposed fix
const handleCopy = useCallback(async () => {
- await navigator.clipboard.writeText(secret.value);
- setCopied(true);
- setTimeout(() => setCopied(false), 1500);
+ try {
+ await navigator.clipboard.writeText(secret.value);
+ setCopied(true);
+ setTimeout(() => setCopied(false), 1500);
+ } catch (err) {
+ console.error("[secrets/copy] Clipboard write failed:", err);
+ }
 }, [secret.value]);

🤖 Prompt for AI Agents
In
`@apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/components/SecretsSettings/components/EnvironmentVariablesList/components/SecretRow/SecretRow.tsx`
around lines 64 - 68, The handleCopy callback currently calls
navigator.clipboard.writeText(secret.value) without error handling; wrap the
async clipboard call in a try/catch inside handleCopy (the function defined in
SecretRow.tsx) to catch rejections and avoid unhandled promise rejections, and
on error optionally log or silently handle it (e.g.,
processLogger/console.error) while still setting the copied state only on
success; ensure the dependency on secret.value remains unchanged.
delete: protectedProcedure
  .input(
    z.object({ id: z.string().uuid(), organizationId: z.string().uuid() }),
  )
  .mutation(async ({ ctx, input }) => {
    await verifyOrgAdmin(ctx.session.user.id, input.organizationId);
    await dbWs
      .delete(projects)
      .where(
        and(
          eq(projects.id, input.id),
          eq(projects.organizationId, input.organizationId),
        ),
      );
    return { success: true };
  }),
Delete silently succeeds when the project doesn't exist.
If no row matches the WHERE clause, the mutation still returns { success: true }, which misleads the caller. Consider checking the deletion result or using .returning() to confirm a row was actually removed.
🛡️ Proposed fix
- await dbWs
+ const [deleted] = await dbWs
.delete(projects)
.where(
and(
eq(projects.id, input.id),
eq(projects.organizationId, input.organizationId),
),
- );
+ )
+ .returning();
+ if (!deleted) {
+ throw new TRPCError({
+ code: "NOT_FOUND",
+ message: "Project not found",
+ });
+ }
   return { success: true };

🤖 Prompt for AI Agents
In `@packages/trpc/src/router/project/project.ts` around lines 77 - 92, The delete
mutation (the protectedProcedure block that calls verifyOrgAdmin and
dbWs.delete(projects).where(...)) currently always returns { success: true }
even if no row was deleted; update it to check the deletion result (e.g., use
dbWs.delete(projects).where(...).returning() or inspect the result count) and
only return success when a row was actually removed—otherwise return a failure
(or throw a 404 / not-found error) so callers can distinguish “no-op” from a
real deletion.
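The pattern generalizes beyond drizzle. Here is a self-contained sketch where `runDelete` is a hypothetical stand-in for the `.delete(...).where(...).returning()` chain, and a plain `Error` stands in for the router's `TRPCError`:

```typescript
// Treat "zero rows deleted" as an error instead of reporting success,
// so callers can distinguish a no-op from a real deletion.
async function deleteProjectChecked(
  runDelete: () => Promise<{ id: string }[]>,
): Promise<{ success: true }> {
  const [deleted] = await runDelete();
  if (!deleted) {
    throw new Error("NOT_FOUND: Project not found"); // TRPCError in the real router
  }
  return { success: true };
}
```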
Force-pushed from 4459711 to 31f751e
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @.github/workflows/deploy-preview.yml:
- Line 590: Remove the unused SECRETS_ENCRYPTION_KEY environment variable from
the Admin app deployment job in the workflow: delete the line setting
SECRETS_ENCRYPTION_KEY: ${{ secrets.SECRETS_ENCRYPTION_KEY }} from the Admin
app's env block (the deploy-preview/admin job), ensuring no code in apps/admin
(e.g., apps/admin/src/env.ts) references it; verify by searching for
SECRETS_ENCRYPTION_KEY and confirm only the API app's secrets router uses it.
In
`@apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/secrets/components/SecretsSettings/components/AddSecretSheet/AddSecretSheet.tsx`:
- Line 148: MAX_FILE_SIZE is declared inside the AddSecretSheet component
causing it to be recreated on every render; move the declaration to module scope
(top of the file) with the other utility functions/constants so it becomes a
single module-level constant (e.g., declare const MAX_FILE_SIZE = 256 * 1024;
alongside existing helpers) and remove the in-component definition so
AddSecretSheet uses the module-level MAX_FILE_SIZE.
In `@packages/trpc/src/router/project/secrets/secrets.ts`:
- Around line 132-140: The per-row mapping in the getDecrypted query uses
decryptSecret(row.encryptedValue) directly so a thrown error on one secret will
abort the whole query; update the rows.map callback (where decryptSecret is
called) to wrap decryption in a try-catch for each row, returning a safe
fallback (e.g., empty string or null) for value and optionally flagging the row
(e.g., add a decryptionError boolean) while still returning the other fields
(id, key, sensitive, createdAt, updatedAt, createdBy) so a single corrupted
secret does not break the entire response.
🧹 Nitpick comments (5)
apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/secrets/components/SecretsSettings/components/EnvironmentVariablesList/components/SecretRow/SecretRow.tsx (1)
64-68: setTimeout not cleared on unmount — potential state update on unmounted component.

If the component unmounts within the 1500 ms window, setCopied(false) fires on a stale component. Consider using a ref-based cleanup or clearing the timeout on unmount.

Proposed fix

+import { useCallback, useRef, useState, useEffect } from "react";
 ...
 export function SecretRow(...) {
+  const copyTimeoutRef = useRef<ReturnType<typeof setTimeout>>();
+
+  useEffect(() => {
+    return () => {
+      if (copyTimeoutRef.current) clearTimeout(copyTimeoutRef.current);
+    };
+  }, []);
+
   const handleCopy = useCallback(async () => {
     await navigator.clipboard.writeText(secret.value);
     setCopied(true);
-    setTimeout(() => setCopied(false), 1500);
+    copyTimeoutRef.current = setTimeout(() => setCopied(false), 1500);
   }, [secret.value]);

apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/secrets/components/SecretsSettings/components/EnvironmentVariablesList/EnvironmentVariablesList.tsx (1)
19-27: The Secret interface is duplicated across components.

This same shape is defined here and implicitly in SecretRow. Consider extracting a shared type (e.g., in a types.ts co-located with the secrets feature) to keep them in sync.

packages/trpc/src/router/project/secrets/utils/crypto.ts (2)
39-40: Inconsistent encoding specification between update and final.

decipher.update(ciphertext) returns a Buffer (no output encoding specified), while decipher.final("utf8") returns a string. The + operator coerces the Buffer via its default .toString() (which is UTF-8), so it works — but it's clearer to be explicit on both calls.

Suggested improvement

- return decipher.update(ciphertext) + decipher.final("utf8");
+ return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
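For context, a full AES-256-GCM roundtrip using the explicit `Buffer.concat` style suggested above can be sketched as follows. The `iv | authTag | ciphertext` layout and the function signatures here are assumptions for illustration — the actual wire format in crypto.ts may differ, and the real module reads the key from `SECRETS_ENCRYPTION_KEY` rather than taking it as an argument.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt UTF-8 plaintext with AES-256-GCM; pack iv | authTag | ciphertext
// into one base64 string (assumed layout, for illustration only).
function encryptSecret(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit nonce, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(plaintext, "utf8"),
    cipher.final(),
  ]);
  const tag = cipher.getAuthTag(); // 16 bytes by default
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}

// Reverse the packing, verify the auth tag, and decode back to UTF-8.
function decryptSecret(encoded: string, key: Buffer): string {
  const buf = Buffer.from(encoded, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ciphertext = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  // Explicit Buffer handling on both calls, per the suggestion above:
  return Buffer.concat([
    decipher.update(ciphertext),
    decipher.final(),
  ]).toString("utf8");
}
```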
7-14: getKey() re-reads and re-allocates the key buffer on every call.

For a hot path (e.g., decrypting many secrets in getDecrypted), consider caching the derived key lazily. Not a correctness issue at current scale, but worth noting.

Optional caching pattern

+let _cachedKey: Buffer | null = null;
+
 function getKey(): Buffer {
+  if (_cachedKey) return _cachedKey;
   const raw = process.env.SECRETS_ENCRYPTION_KEY;
   if (!raw) throw new Error("SECRETS_ENCRYPTION_KEY not set");
   const key = Buffer.from(raw, "base64");
   if (key.length !== 32)
     throw new Error("SECRETS_ENCRYPTION_KEY must be 32 bytes");
+  _cachedKey = key;
   return key;
 }

apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/secrets/components/SecretsSettings/SecretsSettings.tsx (1)
50-58: suggestedMatch triggers recomputation on the entire project object.

The dependency array uses [project, cloudProjects]. Since project is a query result object that may be a new reference on every render, this memo may recompute more often than necessary. Consider narrowing to the specific fields used:

Suggested improvement

 }, [
-  project,
-  cloudProjects,
+  project?.neonProjectId,
+  project?.mainRepoPath,
+  project?.githubOwner,
+  cloudProjects,
 ]);
STRIPE_PRO_MONTHLY_PRICE_ID: ${{ secrets.STRIPE_PRO_MONTHLY_PRICE_ID }}
STRIPE_PRO_YEARLY_PRICE_ID: ${{ secrets.STRIPE_PRO_YEARLY_PRICE_ID }}
SLACK_BILLING_WEBHOOK_URL: ${{ secrets.SLACK_BILLING_WEBHOOK_URL }}
SECRETS_ENCRYPTION_KEY: ${{ secrets.SECRETS_ENCRYPTION_KEY }}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if the admin app references SECRETS_ENCRYPTION_KEY in its source
echo "=== Searching in admin app ==="
rg 'SECRETS_ENCRYPTION_KEY' --type=ts --type=js -g 'apps/admin/**' -l
echo ""
echo "=== Searching in packages ==="
rg 'SECRETS_ENCRYPTION_KEY' --type=ts --type=js -g 'packages/**' -l
echo ""
echo "=== Full search across codebase ==="
rg 'SECRETS_ENCRYPTION_KEY' --type=ts --type=js -lRepository: superset-sh/superset
Length of output: 346
🏁 Script executed:
#!/bin/bash
# Check Admin app's dependencies and imports
echo "=== Admin app package.json ==="
cat apps/admin/package.json | grep -A 20 -B 5 '"dependencies"'
echo ""
echo "=== Admin app env.ts (if exists) ==="
if [ -f apps/admin/src/env.ts ]; then
cat apps/admin/src/env.ts | head -50
else
echo "No env.ts found in admin app"
fi
echo ""
echo "=== Admin app imports of `@superset-sh/trpc` ==="
rg '@superset-sh/trpc|packages/trpc' apps/admin/ --type=ts --type=js -l
echo ""
echo "=== Check deploy-preview.yml around line 590 ==="
sed -n '580,610p' .github/workflows/deploy-preview.ymlRepository: superset-sh/superset
Length of output: 4734
Remove SECRETS_ENCRYPTION_KEY from Admin app deployment environment.
The Admin app's env schema (apps/admin/src/env.ts) does not include SECRETS_ENCRYPTION_KEY, and no code in the Admin app references it. The key is only used by the API app's secrets management router. Passing unused secrets violates the least-privilege principle.
🤖 Prompt for AI Agents
In @.github/workflows/deploy-preview.yml at line 590, Remove the unused
SECRETS_ENCRYPTION_KEY environment variable from the Admin app deployment job in
the workflow: delete the line setting SECRETS_ENCRYPTION_KEY: ${{
secrets.SECRETS_ENCRYPTION_KEY }} from the Admin app's env block (the
deploy-preview/admin job), ensuring no code in apps/admin (e.g.,
apps/admin/src/env.ts) references it; verify by searching for
SECRETS_ENCRYPTION_KEY and confirm only the API app's secrets router uses it.
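Since only the API app consumes this key, it helps to recall its expected shape: a base64-encoded value that decodes to exactly 32 bytes (the AES-256 key size), e.g. generated with `openssl rand -base64 32`. A standalone validator mirroring the getKey() checks discussed earlier (the function name is illustrative):

```typescript
// Validate a SECRETS_ENCRYPTION_KEY candidate: must be set and must decode
// from base64 to exactly 32 bytes.
function validateEncryptionKey(raw: string | undefined): Buffer {
  if (!raw) throw new Error("SECRETS_ENCRYPTION_KEY not set");
  const key = Buffer.from(raw, "base64");
  if (key.length !== 32) {
    throw new Error("SECRETS_ENCRYPTION_KEY must be 32 bytes");
  }
  return key;
}
```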
  });
}, []);

const MAX_FILE_SIZE = 256 * 1024; // 256 KB
🛠️ Refactor suggestion | 🟠 Major
MAX_FILE_SIZE should be a module-level constant.
This constant is defined inside the component body, causing it to be re-created on every render. Move it to module scope alongside the other utility functions.
Proposed fix
+const MAX_FILE_SIZE = 256 * 1024; // 256 KB
+
 export function AddSecretSheet({

-  const MAX_FILE_SIZE = 256 * 1024; // 256 KB
-
   const validateAndReadFile = useCallback(

As per coding guidelines, "Extract hardcoded magic numbers, strings, and enums to named constants at module top instead of leaving them inline in logic."
🤖 Prompt for AI Agents
In
`@apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/secrets/components/SecretsSettings/components/AddSecretSheet/AddSecretSheet.tsx`
at line 148, MAX_FILE_SIZE is declared inside the AddSecretSheet component
causing it to be recreated on every render; move the declaration to module scope
(top of the file) with the other utility functions/constants so it becomes a
single module-level constant (e.g., declare const MAX_FILE_SIZE = 256 * 1024;
alongside existing helpers) and remove the in-component definition so
AddSecretSheet uses the module-level MAX_FILE_SIZE.
return rows.map((row) => ({
  id: row.id,
  key: row.key,
  value: row.sensitive ? "" : decryptSecret(row.encryptedValue),
  sensitive: row.sensitive,
  createdAt: row.createdAt,
  updatedAt: row.updatedAt,
  createdBy: row.createdBy ?? null,
}));
A single corrupted secret breaks the entire getDecrypted query.
If decryptSecret throws on any row (e.g., corrupted ciphertext, key rotation), the unhandled error propagates and the user sees zero secrets. Consider wrapping decryption in a try-catch per row so the remaining secrets are still returned.
Proposed fix
return rows.map((row) => ({
id: row.id,
key: row.key,
- value: row.sensitive ? "" : decryptSecret(row.encryptedValue),
+ value: row.sensitive
+ ? ""
+ : (() => {
+ try {
+ return decryptSecret(row.encryptedValue);
+ } catch (err) {
+ console.error(`[secrets/decrypt] Failed to decrypt key="${row.key}":`, err);
+ return "";
+ }
+ })(),
sensitive: row.sensitive,
createdAt: row.createdAt,
updatedAt: row.updatedAt,
createdBy: row.createdBy ?? null,
 }));

🤖 Prompt for AI Agents
In `@packages/trpc/src/router/project/secrets/secrets.ts` around lines 132 - 140,
The per-row mapping in the getDecrypted query uses
decryptSecret(row.encryptedValue) directly so a thrown error on one secret will
abort the whole query; update the rows.map callback (where decryptSecret is
called) to wrap decryption in a try-catch for each row, returning a safe
fallback (e.g., empty string or null) for value and optionally flagging the row
(e.g., add a decryptionError boolean) while still returning the other fields
(id, key, sensitive, createdAt, updatedAt, createdBy) so a single corrupted
secret does not break the entire response.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In
`@apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/secrets/components/SecretsSettings/SecretsSettings.tsx`:
- Around line 59-66: The useEffect closes over linkToNeon but lists
linkToNeon.mutate as a dependency and lacks an in-flight guard; change the
dependency to linkToNeon (not linkToNeon.mutate) in the useEffect dependency
array and add a guard so the effect only calls linkToNeon.mutate when
suggestedMatch is truthy and the mutation is not already in-flight (e.g., check
linkToNeon.isLoading / linkToNeon.isMutating or equivalent) while still passing
id: projectId and neonProjectId: suggestedMatch.id to linkToNeon.mutate; keep
references to suggestedMatch and projectId in the dependency array.
useEffect(() => {
  if (suggestedMatch) {
    linkToNeon.mutate({
      id: projectId,
      neonProjectId: suggestedMatch.id,
    });
  }
}, [suggestedMatch, linkToNeon.mutate, projectId]);
Incorrect useEffect dependency: linkToNeon.mutate instead of linkToNeon.
Biome correctly flags this: the effect closes over linkToNeon but the dependency list specifies the more-specific linkToNeon.mutate. This can cause stale closure bugs if the mutation object reference updates but .mutate doesn't (or vice-versa). Additionally, there's no guard against re-firing while a mutation is already in-flight.
🔧 Proposed fix
useEffect(() => {
- if (suggestedMatch) {
+ if (suggestedMatch && !linkToNeon.isPending) {
linkToNeon.mutate({
id: projectId,
neonProjectId: suggestedMatch.id,
});
}
+}, [suggestedMatch, linkToNeon, projectId]);
+}, [suggestedMatch, linkToNeon, projectId]);📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| useEffect(() => { | |
| if (suggestedMatch) { | |
| linkToNeon.mutate({ | |
| id: projectId, | |
| neonProjectId: suggestedMatch.id, | |
| }); | |
| } | |
| }, [suggestedMatch, linkToNeon.mutate, projectId]); | |
| useEffect(() => { | |
| if (suggestedMatch && !linkToNeon.isPending) { | |
| linkToNeon.mutate({ | |
| id: projectId, | |
| neonProjectId: suggestedMatch.id, | |
| }); | |
| } | |
| }, [suggestedMatch, linkToNeon, projectId]); |
🧰 Tools
🪛 Biome (2.3.14)
[error] 59-59: This hook specifies a dependency more specific than its captures: linkToNeon.mutate
This capture is more generic than...
...this dependency.
(lint/correctness/useExhaustiveDependencies)
[error] 59-59: This hook does not specify its dependency on linkToNeon.
This dependency is being used here, but is not specified in the hook dependency list.
React relies on hook dependencies to determine when to re-compute Effects.
Failing to specify dependencies can result in Effects not updating correctly when state changes.
These "stale closures" are a common source of surprising bugs.
Unsafe fix: Add the missing dependency to the list.
(lint/correctness/useExhaustiveDependencies)
🤖 Prompt for AI Agents
In
`@apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/cloud/secrets/components/SecretsSettings/SecretsSettings.tsx`
around lines 59 - 66, The useEffect closes over linkToNeon but lists
linkToNeon.mutate as a dependency and lacks an in-flight guard; change the
dependency to linkToNeon (not linkToNeon.mutate) in the useEffect dependency
array and add a guard so the effect only calls linkToNeon.mutate when
suggestedMatch is truthy and the mutation is not already in-flight (e.g., check
linkToNeon.isLoading / linkToNeon.isMutating or equivalent) while still passing
id: projectId and neonProjectId: suggestedMatch.id to linkToNeon.mutate; keep
references to suggestedMatch and projectId in the dependency array.
Move environment variables route from secrets/ to cloud/secrets/, add Cloud collapsible section in sidebar, and add env vars to settings search.
…and creator info

Overhaul the secrets/environment variables UI to match the reference design:
- Add createdByUserId to secrets schema with user relation
- Return creator info (name, avatar) from getDecrypted query
- Redesign SecretRow with circled icons, centered value reveal, date + avatar
- Add sort dropdown (Last Updated, Name, Type) to EnvironmentVariablesList
- Click-to-copy revealed values with hover tooltip
- Show "Empty" for blank values, neutral badge colors for sensitive
- Add toast notifications on edit success/failure
- Fix migration ordering for DROP CONSTRAINT before DROP TABLE CASCADE
…auto-linking in SecretsSettings
Force-pushed from be06d92 to 7eaa4b1
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/desktop/src/renderer/stores/settings-state.ts (1)
4-7: ⚠️ Potential issue | 🟡 Minor

Stale JSDoc: still references "workspace".

The comment on line 6 mentions "project/workspace are dynamic" but workspace has been removed from SettingsSection.

Proposed fix

 /**
  * Settings sections available in the settings view.
- * General sections are the main categories, project/workspace are dynamic.
+ * General sections are the main categories, project is dynamic.
  */
🤖 Fix all issues with AI agents
In `@packages/trpc/src/router/workspace/workspace.ts`:
- Around line 24-38: The mutation currently calls verifyOrgMembership(...) but
does not ensure input.projectId belongs to input.organizationId, allowing
cross-org project references; before inserting via
dbWs.insert(workspaces).values(...).returning(), query the projects table (e.g.,
select from projects where id = input.projectId) and assert its organizationId
matches input.organizationId (throw an unauthorized/error if not), then proceed
with the insert; update the mutation to perform this ownership check using the
same DB client used for workspaces so the check is atomic and uses symbols
verifyOrgMembership, dbWs, workspaces and the projects table/selector.
- Around line 36-37: The variable workspace returned after calling .returning()
can be undefined if the returned array is empty; update the code that uses
workspace (the variable named workspace returned from the query that calls
.returning()) to guard against undefined—e.g., check if workspace is falsy and
either throw a descriptive error or return a safe default value before returning
workspace so callers don't receive undefined; ensure the check is colocated with
the .returning() result handling and use the same variable name workspace in the
guard.
🧹 Nitpick comments (3)
apps/desktop/src/renderer/routes/_authenticated/settings/components/SettingsSidebar/ProjectsSettings.tsx (1)
83-127: Consider extracting the repeated nav-link class logic.

The active/inactive className pattern appears three times (Lines 86–91, 97–101, 113–117) with only minor variations. A small helper would reduce duplication and make future style tweaks easier:

const navLinkClass = (active: boolean) =>
  cn(
    "flex items-center gap-2 px-2 py-1 text-sm w-full text-left rounded-md transition-colors",
    active
      ? "bg-accent text-accent-foreground"
      : "text-muted-foreground hover:bg-accent/50 hover:text-accent-foreground",
  );

The Cloud trigger variant drops the bg-accent when active — that one difference can be handled with an optional parameter or kept inline.

apps/desktop/src/renderer/routes/_authenticated/settings/project/$projectId/components/ProjectSettings/ProjectSettings.tsx (1)
27-27: Rename the SettingsSection component to avoid confusion with the SettingsSection type in settings-state.ts.

The component SettingsSection in ProjectSettings and its barrel export share a name with the type SettingsSection (a string union of app settings sections like "account", "organization", etc.). While TypeScript's separate type/value namespaces prevent compile errors, this naming overlap can cause developer confusion and complicate auto-import suggestions. The type is used throughout settings modules for state management, while the component is a UI panel wrapper. Renaming the component to something like SettingsSectionPanel or SettingsPanelCard would clarify their distinct purposes.

packages/trpc/src/router/workspace/workspace.ts (1)
40-55: delete silently succeeds for non-existent workspaces and also lacks project-org authorization.

Similarly to create, the delete mutation should verify the workspace exists under the correct organization. Currently it returns { success: true } even if no rows were deleted. Consider checking the delete result or using .returning() to confirm a row was actually removed.
.mutation(async ({ ctx, input }) => {
  await verifyOrgMembership(ctx.session.user.id, input.organizationId);
  const [workspace] = await dbWs
    .insert(workspaces)
    .values({
      projectId: input.projectId,
      organizationId: input.organizationId,
      name: input.name,
      type: input.type,
      config: input.config,
      createdByUserId: ctx.session.user.id,
    })
    .returning();
  return workspace;
}),
Missing authorization: projectId is not verified to belong to the given organizationId.
A member of organization A could pass a projectId belonging to organization B. The DB foreign keys validate each column independently (projects.id exists, organizations.id exists) but don't enforce that the project belongs to the specified organization.
Add a check that the project belongs to the organization before inserting, e.g.:
Proposed fix

```diff
 .mutation(async ({ ctx, input }) => {
   await verifyOrgMembership(ctx.session.user.id, input.organizationId);
+
+  const project = await dbWs.query.projects.findFirst({
+    where: and(
+      eq(projects.id, input.projectId),
+      eq(projects.organizationId, input.organizationId),
+    ),
+  });
+  if (!project) {
+    throw new TRPCError({
+      code: "NOT_FOUND",
+      message: "Project not found in this organization",
+    });
+  }
+
   const [workspace] = await dbWs
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
.mutation(async ({ ctx, input }) => {
  await verifyOrgMembership(ctx.session.user.id, input.organizationId);
  const project = await dbWs.query.projects.findFirst({
    where: and(
      eq(projects.id, input.projectId),
      eq(projects.organizationId, input.organizationId),
    ),
  });
  if (!project) {
    throw new TRPCError({
      code: "NOT_FOUND",
      message: "Project not found in this organization",
    });
  }
  const [workspace] = await dbWs
    .insert(workspaces)
    .values({
      projectId: input.projectId,
      organizationId: input.organizationId,
      name: input.name,
      type: input.type,
      config: input.config,
      createdByUserId: ctx.session.user.id,
    })
    .returning();
  return workspace;
}),
```
🤖 Prompt for AI Agents

```text
In packages/trpc/src/router/workspace/workspace.ts around lines 24 - 38, the
mutation currently calls verifyOrgMembership(...) but does not ensure
input.projectId belongs to input.organizationId, allowing cross-org project
references. Before inserting via
dbWs.insert(workspaces).values(...).returning(), query the projects table
(e.g., select from projects where id = input.projectId) and assert its
organizationId matches input.organizationId, throwing an authorization error
if not; then proceed with the insert. Run the ownership check with the same DB
client used for workspaces (dbWs), ideally inside a transaction so the check
and the insert are consistent, reusing verifyOrgMembership, dbWs, workspaces,
and the projects table/selector.
```
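Stripped of Drizzle specifics, the missing check reduces to comparing the project's stored `organizationId` with the one the caller was authorized against. A self-contained sketch with a mocked lookup and illustrative names:

```typescript
type Project = { id: string; organizationId: string };

class ForbiddenError extends Error {}

// Stand-in for `dbWs.query.projects.findFirst(...)`.
const projectsById = new Map<string, Project>([
  ["proj_A", { id: "proj_A", organizationId: "org_A" }],
  ["proj_B", { id: "proj_B", organizationId: "org_B" }],
]);

// Reject any projectId whose owning organization differs from the one the
// caller was authorized against; this is the cross-org gap the review flags.
function assertProjectInOrg(projectId: string, organizationId: string): Project {
  const project = projectsById.get(projectId);
  if (!project || project.organizationId !== organizationId) {
    throw new ForbiddenError("Project not found in this organization");
  }
  return project;
}

const ok = assertProjectInOrg("proj_A", "org_A");
let crossOrgBlocked = false;
try {
  // A member of org_A passing org_B's project must be rejected.
  assertProjectInOrg("proj_B", "org_A");
} catch (e) {
  crossOrgBlocked = e instanceof ForbiddenError;
}
```

Returning the same "not found" message for both a missing project and a cross-org project also avoids leaking which project IDs exist.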
```ts
    .returning();
  return workspace;
```
`workspace` may be `undefined` if `.returning()` yields an empty array.
While unlikely, destructuring from an empty array silently returns `undefined` to the caller. Consider adding a guard.
Proposed fix

```diff
 const [workspace] = await dbWs
   .insert(workspaces)
   .values({ ... })
   .returning();
+if (!workspace) {
+  throw new TRPCError({
+    code: "INTERNAL_SERVER_ERROR",
+    message: "Failed to create workspace",
+  });
+}
 return workspace;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
const [workspace] = await dbWs
  .insert(workspaces)
  .values({ ... })
  .returning();
if (!workspace) {
  throw new TRPCError({
    code: "INTERNAL_SERVER_ERROR",
    message: "Failed to create workspace",
  });
}
return workspace;
```
🤖 Prompt for AI Agents

```text
In packages/trpc/src/router/workspace/workspace.ts around lines 36 - 37, the
workspace variable obtained from the array returned by .returning() can be
undefined if that array is empty. Guard against this where the .returning()
result is handled: if workspace is falsy, throw a descriptive error (or return
a safe default) before returning it, so callers never receive undefined. Keep
the guard colocated with the .returning() result handling and use the same
variable name workspace.
```
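The failure mode reproduces without a database: destructuring the first element of an empty array yields `undefined`. A small sketch of the guard pattern, with hypothetical helper names:

```typescript
// Destructuring an empty array silently produces `undefined`.
const [nothing] = [] as Array<{ id: string }>;

class InternalError extends Error {}

// Guard in the spirit of the proposed fix: fail loudly instead of
// returning `undefined` to the caller.
function requireRow<T>(row: T | undefined, message: string): T {
  if (row === undefined) {
    throw new InternalError(message);
  }
  return row;
}

const created = requireRow({ id: "ws_1" }, "Failed to create workspace");
let guarded = false;
try {
  requireRow(nothing, "Failed to create workspace");
} catch (e) {
  guarded = e instanceof InternalError;
}
```

A shared helper like this keeps every `.returning()` call site honest without repeating the `if (!row) throw` boilerplate.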
Summary
- Add `createdByUserId` column to secrets schema with user relation; return creator info (name, avatar) in the `getDecrypted` query

Test plan
- `bun run lint:fix` — no issues
- `bun run typecheck --filter=@superset/desktop --filter=@superset/trpc` — passes
- `bun test` — 1227 pass, 0 fail
- `drizzle-kit generate` if schema changes need fresh migration

Summary by CodeRabbit
Release Notes
New Features
Refactor
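The secrets flow in this PR encrypts values with a `SECRETS_ENCRYPTION_KEY` before they reach the database, but the diff shown here does not include the cipher. The following is only a plausible sketch using Node's built-in AES-256-GCM; the key format and the `iv:tag:ciphertext` wire encoding are assumptions, not the PR's actual implementation.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Assumption: a 32-byte key, e.g. decoded from process.env.SECRETS_ENCRYPTION_KEY.
const key = randomBytes(32);

// Encrypt to "iv:tag:ciphertext" (base64) so decryption is self-describing.
function encryptSecret(plaintext: string): string {
  const iv = randomBytes(12); // 96-bit nonce, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return [iv, cipher.getAuthTag(), ciphertext].map((b) => b.toString("base64")).join(":");
}

function decryptSecret(encrypted: string): string {
  const [iv, tag, ciphertext] = encrypted.split(":").map((p) => Buffer.from(p, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM authenticates as well as encrypts
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

const roundTrip = decryptSecret(encryptSecret("DATABASE_URL=postgres://example"));
```

With a random nonce per call, encrypting the same value twice yields different ciphertexts, so the `encrypted_value` column alone reveals nothing about duplicate secrets.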