feat(cli): v1 — command surface, distribution pipeline, release channel #3898
saddlepaddle merged 24 commits into main
Conversation
CLI v1 ships its full intended surface (auth/organization/projects/hosts/workspaces/tasks/automations/host/update — 27 commands) along with the backend procedures and frontend changes those depend on. Key bits:

- Cloud: host.list, v2Workspace.list, task.byIdOrSlug, task.list w/ filters, consolidated task.create (drop createFromUi + old all-IDs variant), drop branch from task schemas, automation.create workspace-only mode (derives v2ProjectId), jwtProcedure session-token fallback.
- Web: sign-in middleware (proxy.ts) now stashes path+params in a short-lived cookie before redirecting to /sign-in. Pages call consumePendingAuthParams() to recover them post-auth. Fixes lost query params for /cli/authorize and /oauth/consent.
- Host service: project.list returns repo metadata.
- CLI: drop --api-url (build-time constant), drop SUPERSET_DEVICE/--device, rename ~/superset → ~/.superset (honor SUPERSET_HOME_DIR), rename auth check → auth status, rename devices → hosts, drop host install (out of scope for v1), local-vs-relay routing via resolveHostTarget(), new commands (projects list, automations logs, update).
- cli-framework: agent-mode JSON only triggers on non-empty env values; --quiet now beats agent-mode auto-JSON.
- Renderer + mobile: switched from task.createFromUi → task.create; dropped dead onInsert handlers on the tasks Electric collection.

Verified end-to-end against local API+web (auth login full cookie flow, full task CRUD, hosts/workspaces list, host status with cloud hostName lookup) and against prod for the host-service-loopback paths (auth status, organization list, host status with degraded hostName, automations list, projects list, workspaces create+delete via --host).

Audit doc: plans/cli-v1-audit.md (gitignored).
Expand the CLI distribution build to cover darwin-x64, linux-x64, darwin-arm64, and linux-arm64 — the four targets we want to ship.

- build-dist.ts: derive the node archive name from the target tuple instead of the prior darwin-arm64-vs-linux-x64 ternary; add darwin-x64 and linux-arm64 to TARGET_NATIVE_PACKAGES so the right @libsql/* and @parcel/watcher-* native bindings get bundled.
- build-cli.yml: matrix gains macos-13 (darwin-x64) and ubuntu-24.04-arm (linux-arm64). Each target runs on its native runner so node-pty's install-time compile produces the right .node binary — node-pty's npm tarball ships only macOS/Windows prebuilds, so cross-building from macOS to Linux can't produce a working bundle.
- build-cli.yml: also runs on PRs and main/cli-v1 pushes that touch CLI sources, plus a `--version` / `--help` smoke step so a broken build fails CI immediately rather than at release time.
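The target-tuple derivation can be sketched as follows. This is a hedged illustration, not the real build-dist.ts code: the function name `nodeArchiveFor` and the exact archive naming are assumptions, relying only on the fact that official Node.js archives follow `node-v<version>-<platform>-<arch>.tar.gz` and the target tuples above already match that `<platform>-<arch>` shape.

```typescript
// Hypothetical sketch: derive the Node archive name from the target tuple,
// replacing a hard-coded darwin-arm64-vs-linux-x64 ternary. Names are
// illustrative, not the actual build-dist.ts identifiers.
type Target = "darwin-x64" | "darwin-arm64" | "linux-x64" | "linux-arm64";

function nodeArchiveFor(target: Target, nodeVersion: string): string {
  // Official Node.js archives are named node-v<version>-<platform>-<arch>.tar.gz,
  // so the target tuple slots in directly — no per-target special casing.
  return `node-v${nodeVersion}-${target}.tar.gz`;
}

console.log(nodeArchiveFor("linux-arm64", "22.11.0"));
// node-v22.11.0-linux-arm64.tar.gz
```

Adding a new target then only requires extending the `Target` union (and the native-package map), rather than touching branching logic.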
…th status → whoami

Removes the confusing `host` (verb) vs `hosts` (noun) collision. `host` held only start/stop/status — three local-service controls — while `hosts` listed remote machines in the org. They sounded the same on the command line and meant entirely different things.

- `superset host start` → `superset start`
- `superset host stop` → `superset stop`
- `superset host status` → `superset status`
- `superset auth status` → `superset auth whoami`

The `host` group is gone. `hosts` keeps its meaning (list remote hosts).

The auth rename follows npm/vercel/hcloud — `whoami` reads more naturally for "who am I logged in as" than `status` does, and avoids the implicit asymmetry of having `superset status` mean "host service state" while `superset auth status` means "auth state". Identity now gets its own verb.
The CLI bakes RELAY_URL, CLOUD_API_URL, and SUPERSET_WEB_URL at compile time via cli.config.ts. CI was passing the first two but relying on the default for the third — that works for prod by coincidence, but a default that drifts (e.g., to a localhost placeholder) would silently bake a broken URL into release artifacts. Set it explicitly.

Also extend the smoke step to require() each native addon module through the bundled Node runtime. The earlier --version/--help check doesn't touch better-sqlite3, node-pty, @parcel/watcher, or libsql, so a missing or ABI-mismatched .node binary would only surface when a user ran `superset start`.
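A require()-based smoke check of this shape could look like the sketch below — an assumption about how such a step might be written, not the actual workflow code. `smokeRequire` is a hypothetical helper; the addon names come from the text above.

```typescript
// Hypothetical smoke check: require each bundled native addon through the
// packaged runtime so an ABI-mismatched .node binary fails CI, not the user.
import { createRequire } from "node:module";

// createRequire needs a filename-ish anchor; cwd works for this sketch.
const requireFromCwd = createRequire(process.cwd() + "/");

// Returns the modules that failed to load, with their error messages.
function smokeRequire(moduleNames: string[]): string[] {
  const failures: string[] = [];
  for (const name of moduleNames) {
    try {
      requireFromCwd(name); // loading a native addon triggers its .node ABI check
    } catch (err) {
      failures.push(`${name}: ${err instanceof Error ? err.message : String(err)}`);
    }
  }
  return failures;
}

// In CI this would run against the real addon list inside the bundled runtime:
//   smokeRequire(["better-sqlite3", "node-pty", "@parcel/watcher", "libsql"])
console.log(smokeRequire(["node:path"])); // stdlib loads fine → empty list
```

The key property is that a mere `--version` run never loads these modules, whereas an explicit require() forces each `.node` binary through the runtime's ABI check.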
…secrets/vars

Two consistency fixes:

1. Rename `CLOUD_API_URL` → `SUPERSET_API_URL` to match the existing `SUPERSET_*` prefix used for shared host-service / CLI env vars (e.g. `SUPERSET_HOME_DIR`, `SUPERSET_VERSION`). Touches the CLI, host-service, and the desktop main process; the desktop→host-service spawn handshake passes the new name through.
2. Stop hardcoding the prod URLs in build-cli.yml and ci.yml. Read from the existing `RELAY_URL` repo secret and the new `SUPERSET_API_URL` / `SUPERSET_WEB_URL` repo variables. A staging or self-hosted CI run can now point its builds elsewhere by changing the var, not the workflow.

`RELAY_URL` stays unprefixed for now — renaming it touches Fly runtime config across api / relay / desktop / host-service and is its own PR.
…nnel

Replace the listing-and-filtering update flow with a fixed download URL backed by a rolling `cli-latest` GH release/tag.

Why: the prior code hit `/releases/latest`, which returns the newest published release of any kind across the whole repo. A desktop release shipped after a CLI release would shadow CLI on that endpoint and break `superset update`. Even the listing-with-filter workaround (just shipped) was more complexity than the problem deserves — and it still relies on heuristics that drift over time.

Now: build-cli.yml's release job recreates a `cli-latest` release on every CLI tag, copying the same tarballs plus a `version.txt` carrying the semver. The update command fetches `version.txt` to detect a new version and downloads the matching tarball directly from `/releases/download/cli-latest/`. No filtering, no version guessing, no global-latest endpoint, immune to whatever else lands in the repo.
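The fixed-channel flow described above could be sketched like this. The `/releases/download/cli-latest/` path and the `version.txt` semver file come from the text; the repo slug, tarball naming, and function names are assumptions for illustration.

```typescript
// Sketch of a fixed-channel update check (illustrative, not the real command).
// ORG/REPO is a placeholder, and the tarball name is an assumed convention.
const CHANNEL_BASE = "https://github.com/ORG/REPO/releases/download/cli-latest";

async function latestVersion(): Promise<string> {
  // version.txt carries the bare semver, so one small fetch decides
  // whether an update is needed — no release listing, no filtering.
  const res = await fetch(`${CHANNEL_BASE}/version.txt`);
  if (!res.ok) throw new Error(`version check failed: ${res.status}`);
  return (await res.text()).trim();
}

function tarballUrl(target: string): string {
  // The tag is fixed at cli-latest; only the build target varies.
  return `${CHANNEL_BASE}/superset-${target}.tar.gz`;
}

console.log(tarballUrl("linux-arm64")); // fixed URL under the cli-latest tag
```

Because the tag is stable, the download URL is computable offline; only the version probe touches the network.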
Lets users install a specific CLI version (or roll back) without waiting for the rolling cli-latest channel:

    superset update --to 0.1.2
    superset update --to cli-v0.1.2   # cli-v prefix tolerated
    superset update --to 0.1.0 --check  # dry-run

Pinned installs skip the version-detection step and download directly from /releases/download/cli-v<version>/. Useful for teams that want release-train stability while we ship rapidly.

Named the flag --to (not --version) because cli-framework reserves --version as the global "print CLI version" flag.
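Tolerating the cli-v prefix amounts to a small normalization step, sketched below. The tag shape (`cli-v<semver>`) follows the text; the helper name `normalizePinnedVersion` is a hypothetical.

```typescript
// Minimal sketch: accept "0.1.2" or "cli-v0.1.2" and produce both the bare
// semver and the release tag the download path needs. Name is illustrative.
function normalizePinnedVersion(input: string): { version: string; tag: string } {
  const version = input.startsWith("cli-v") ? input.slice("cli-v".length) : input;
  // Pinned downloads go to /releases/download/<tag>/ — no version probe needed.
  return { version, tag: `cli-v${version}` };
}

console.log(normalizePinnedVersion("cli-v0.1.2")); // → version "0.1.2", tag "cli-v0.1.2"
console.log(normalizePinnedVersion("0.1.2"));      // → version "0.1.2", tag "cli-v0.1.2"
```

Both user spellings converge on the same tag, so the rest of the update path never sees the difference.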
… --to → --version

cli-framework was unconditionally short-circuiting --version / -v, both in the parser and the runner, which meant a command could never declare its own --version option. The CLI's update command had to use --to as a workaround. Two narrow fixes restore the convention:

- parser.ts: only treat `--version` as a built-in shortcut when the command's option map doesn't define one. A command that declares `version: string()` now parses normally, including `--version=0.1.2` and `--version 0.1.2` value forms.
- runner.ts: only print the framework version banner for `--version` when no command resolved. `superset --version` still prints the CLI banner; `superset update --version 0.1.2` routes to the command.

Renames the update flag back to `--version` to match deno upgrade, nvm install, etc.
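The parser-side ownership check can be sketched as below. This is an illustration of the rule the description states, not the real parser.ts internals; the function name and the option-map shape are assumptions (and it also checks the short form, which a command may likewise own).

```typescript
// Sketch: the built-in --version shortcut only fires when the command's
// option map doesn't declare the flag itself. Names are illustrative.
function isBuiltinVersionFlag(
  arg: string,
  optionsByFlag: Map<string, unknown>,
): boolean {
  if (arg !== "--version" && arg !== "-v") return false;
  // A command that declares `version: string()` owns the flag: fall through
  // to normal parsing (including --version=0.1.2 and --version 0.1.2 forms).
  return !optionsByFlag.has("--version") && !optionsByFlag.has("-v");
}

console.log(isBuiltinVersionFlag("--version", new Map())); // true — banner path
console.log(isBuiltinVersionFlag("--version", new Map([["--version", {}]]))); // false — command option
```

The runner-side fix is the same idea one level up: the banner only wins when no command resolved to claim the flag.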
…arer

Better-auth's apiKey plugin reads keys from the x-api-key header. The Authorization: Bearer header is for OAuth JWTs and sessions; API keys sent there get rejected as invalid bearers.

Symptom: `superset auth whoami --api-key sk_live_…` produced the generic "Session expired" error, because the framework's tRPC error handler rewrites any UNAUTHORIZED → "Session expired" on the way out.

Detect by prefix and route API keys to x-api-key. JWT/session tokens keep using Authorization: Bearer.
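The prefix-based routing amounts to one branch when building request headers — a minimal sketch, assuming an `sk_` secret-key prefix and a hypothetical `authHeaders` helper (the actual client code and exact prefix check may differ):

```typescript
// Sketch: keys with the secret-key prefix go to x-api-key (where better-auth's
// apiKey plugin reads them); everything else stays an Authorization bearer.
function authHeaders(credential: string): Record<string, string> {
  return credential.startsWith("sk_")
    ? { "x-api-key": credential }      // API key path
    : { Authorization: `Bearer ${credential}` }; // JWT / session token path
}

console.log(authHeaders("sk_live_abc123"));     // routes to x-api-key
console.log(authHeaders("eyJ_example_token"));  // routes to Authorization: Bearer
```

With the split in one place, a misrouted key fails loudly at header-construction time instead of surfacing as a generic "Session expired" downstream.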
- New apps/docs/content/docs/cli/ product (getting-started, cli-reference, env-vars) with multi-product sidebar switcher. - <Command> + <CommandReturns> + <HumanOutput> + <JsonOutput> MDX components render every entry with the same shape: option/argument tables, fenced ts code blocks for output, alias badge, human-mode hints, --quiet hint. - TOC nests every command group under "Commands" h2. - start/stop/status/update sit at the top level (matches the new CLI surface); auth/organization/projects/hosts/workspaces/tasks/automations follow as groups.
Two related fixes that together get linux-x64 and linux-arm64 building clean across Ubuntu 22.04 / 24.04 and Debian 12:

1. Bun's install-script runner has a cache-materialization race when node-pty falls through to `node-gyp rebuild` on Linux (no Linux prebuilds in the upstream tarball). On the failing path, transitive deps like `tar` and `exponential-backoff` aren't materialized into `node_modules/` by the time the install script needs them, and the whole install aborts. Workaround: skip install scripts on Linux runners (`bun install --frozen --ignore-scripts`) and rebuild the three native addons explicitly afterwards. node-pty rebuilds via `node-gyp` directly to sidestep its `prepare` script, which expects `tsc` on PATH.
2. `build-dist`'s native-binary fixup deleted node-pty's `build/Release/` directory unconditionally, expecting the bundled `prebuilds/<target>/pty.node` to take over. That works on macOS (prebuilds shipped) but not on Linux (no prebuilds; the freshly compiled `build/Release/pty.node` is the only valid binary). Skip the deletion when the target platform is linux.

Verified locally in Docker:

- linux-x64 + linux-arm64 builds + smoke
- linux-x64 binary loads better-sqlite3 / node-pty / @parcel/watcher / libsql cleanly inside ubuntu:22.04, debian:12, ubuntu:24.04
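The fixup rule in item 2 reduces to a platform predicate, sketched here with an assumed name (`shouldDeleteBuildRelease`) rather than the actual build-dist code:

```typescript
// Sketch of item 2's decision rule: keep node-pty's freshly compiled
// build/Release/ on Linux, where no upstream prebuilds exist.
function shouldDeleteBuildRelease(targetPlatform: "darwin" | "linux"): boolean {
  // macOS bundles prebuilds/<target>/pty.node, so build/Release/ is redundant;
  // on Linux the compiled build/Release/pty.node is the only valid binary.
  return targetPlatform !== "linux";
}

console.log(shouldDeleteBuildRelease("darwin")); // true
console.log(shouldDeleteBuildRelease("linux"));  // false
```

Encoding the exception as a predicate also gives an obvious place for a fail-fast check that the Linux binary actually exists before packaging.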
New @superset/mcp-v2 package, fully separate from v1's @superset/mcp.
Mounts at /api/v2/agent/mcp; v1 remains untouched at /api/agent/mcp.
Tools (17, mirroring the new CLI surface):
- tasks: list, get, create, update, delete
- automations: list, get, create, update, delete, pause, resume, run, logs
- workspaces: list, create, delete
- projects: list, hosts: list
Auth resolves via two strict paths (no silent fallthrough):
- API key (sk_*) → auth.api.verifyApiKey, org from metadata.organizationId
- OAuth 2.1 access token → verifyAccessToken with audience including
/api/v2/agent/mcp; org from customAccessTokenClaims claim
Both produce the same McpContext consumed by every tool. Tools wrap
tRPC procedures via the server-side caller (trpc.createCaller) so the
same business logic, integration sync, and slug retry the web/CLI use
applies to MCP-originated mutations.
defineTool() wraps every handler with try/catch that returns
{ content, isError: true } on errors and standardises
{ structuredContent, content } on success. No more v1-style direct DB
calls or _registeredTools private SDK access.
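The defineTool() envelope described above could look roughly like this sketch. The result shape follows the text ({ content, isError: true } on failure, { structuredContent, content } on success); the type names and generic signature are assumptions, not the actual @superset/mcp-v2 API.

```typescript
// Hedged sketch of a defineTool()-style wrapper (illustrative types/names).
type ToolResult = {
  content: { type: "text"; text: string }[];
  structuredContent?: unknown;
  isError?: boolean;
};

function defineTool<I>(handler: (input: I) => Promise<unknown>) {
  return async (input: I): Promise<ToolResult> => {
    try {
      const result = await handler(input);
      // Success: standardised { structuredContent, content } pair.
      return {
        structuredContent: result,
        content: [{ type: "text", text: JSON.stringify(result) }],
      };
    } catch (err) {
      // Failure: MCP-style error envelope instead of a thrown exception.
      const message = err instanceof Error ? err.message : String(err);
      return { content: [{ type: "text", text: message }], isError: true };
    }
  };
}
```

Because every tool goes through one wrapper, error reporting stays uniform across all 17 tools and handlers never need their own try/catch.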
Adds v2Project.list procedure (was missing). Adds /api/v2/agent/mcp to
the OAuth provider's validAudiences.
darwin-x64 has been queued indefinitely on macos-13 runners on every cli-v1 push (likely macOS-minutes billing or plan tier — the jobs were never assigned a runner). Apple Silicon shipped in 2020; Intel Macs are a shrinking population. Drop the target rather than block releases on it.

Homebrew formula renderer: add an `on_arm` block under `on_linux` for linux-arm64 so `brew install` works for Graviton / Asahi / Pi users. Previously the formula only emitted darwin-arm64 + linux-x64 blocks.

When SUPER-515 lands (eliminate native addons → cross-compile from one Linux runner), darwin-x64 can come back trivially.
GitHub's /releases/latest doesn't filter by tag prefix, so a published
non-prerelease cli-v* release would shadow desktop's auto-updater (which
reads /releases/latest/download/latest-{mac,linux}.yml). Marking both the
per-version release and the rolling cli-latest release as --prerelease
keeps them off /releases/latest.
This is a workaround, not a design choice. plans/release-channels-spec.md
covers the proper fix (desktop-latest rolling pointer mirroring cli-latest).
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough: This PR introduces MCP v2 integration and a comprehensive CLI v1 overhaul.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Actionable comments posted: 20
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/host-service/src/terminal/env-strip.ts (1)
16-24: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win — Keep stripping legacy `CLOUD_API_URL` during the transition.

Dropping the old key from the denylist can leak runtime-injected API-endpoint env into PTY sessions in mixed-version setups. Keep both keys until legacy producers are fully gone.
Suggested patch
```diff
 const HOST_SERVICE_RUNTIME_KEYS = new Set([
   "AUTH_TOKEN",
+  "CLOUD_API_URL",
   "SUPERSET_API_URL",
   "DESKTOP_VITE_PORT",
   "HOST_CLIENT_ID",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/host-service/src/terminal/env-strip.ts` around lines 16 - 24, The HOST_SERVICE_RUNTIME_KEYS denylist removed the legacy key and may leak runtime-injected API endpoints into PTY sessions; add "CLOUD_API_URL" back into the HOST_SERVICE_RUNTIME_KEYS Set (alongside AUTH_TOKEN, SUPERSET_API_URL, etc.) so the legacy key remains stripped during the transition and keep it until all legacy producers are removed.
🧹 Nitpick comments (8)
apps/web/src/proxy.ts (1)
13-15: ⚡ Quick win — Centralize pending-auth cookie constants.
`PENDING_COOKIE_NAME`/`TTL` are protocol-level values shared with `pendingAuthRedirect.ts`; keeping them duplicated risks drift and silent auth-restore breakage. Move them to one shared module.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/src/proxy.ts` around lines 13 - 15, Extract PENDING_COOKIE_NAME and PENDING_COOKIE_TTL_SECONDS into a single shared module (e.g., export from a new module like pendingAuthConstants or re-export from an existing shared constants file), replace the local declarations in apps/web/src/proxy.ts (remove the consts) and import the constants where used, and update pendingAuthRedirect.ts to import the same shared symbols instead of its own copies; ensure both modules import the same exported names (PENDING_COOKIE_NAME, PENDING_COOKIE_TTL_SECONDS) and run tests/lint to confirm no duplicate definitions remain.packages/cli/src/commands/update/command.ts (1)
66-92: ⚡ Quick win — Consider adding a timeout to the download fetch.
The `fetch` call at line 67 has no timeout, which could cause the update to hang indefinitely on slow or stalled connections. The health check elsewhere uses a 2-second timeout; a longer timeout (e.g., 5 minutes) would be appropriate here given the file size.

💡 Proposed fix
```diff
 async function downloadAndExtract(url: string, destDir: string): Promise<void> {
-  const response = await fetch(url);
+  const controller = new AbortController();
+  const timeout = setTimeout(() => controller.abort(), 5 * 60 * 1000);
+  const response = await fetch(url, { signal: controller.signal });
+  clearTimeout(timeout);
   if (!response.ok || !response.body) {
     throw new CLIError(`Download failed: ${response.status}`);
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/commands/update/command.ts` around lines 66 - 92, The downloadAndExtract function currently calls fetch without a timeout; add an AbortController, pass its signal into fetch(url, { signal }), and start a timer (e.g., 5 minutes) that calls controller.abort() if exceeded; clear the timer once fetch completes successfully. Ensure you catch an aborted request and rethrow a CLIError with a clear message (e.g., "Download timed out") before proceeding to create the tar process and extraction. Keep changes scoped to downloadAndExtract and the fetch call so response.body handling and the existing pipeline/tar logic remain unchanged.packages/mcp-v2/src/in-memory.ts (2)
15-19: 💤 Low value — Docstring claims "no transport monkey-patching" but the code patches `send`.
clientTransport.sendto injectauthInfo, which is transport patching. Consider updating the docstring to accurately reflect the implementation, or restructure if the SDK provides a cleaner way to pass auth context.📝 Suggested docstring fix
/** * Build an in-memory MCP client/server pair for server-side agent integrations * (the Slack agent, the automations dispatcher, etc.). Auth context is wired - * directly through the SDK's documented `authInfo` option on each request — - * no transport monkey-patching. + * by wrapping the client transport's `send` method to inject `authInfo` on + * each outgoing message. *🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/mcp-v2/src/in-memory.ts` around lines 15 - 19, The docstring is inaccurate: the implementation overrides clientTransport.send to inject authInfo (transport patching). Update the top-level docstring in packages/mcp-v2/src/in-memory.ts to remove the phrase "no transport monkey-patching" and explicitly state that clientTransport.send is patched to attach authInfo (or alternatively, refactor the code to pass authInfo via the SDK if available); reference the overridden method name clientTransport.send and the function that builds the in-memory client/server pair so reviewers can locate and update the docstring or refactor accordingly.
73-83: 💤 Low value — Clarify the dual token values:
`authInfo.token` vs `mcpContext.bearerToken`.
`authInfo.token` is set to `"internal"` while `mcpContext.bearerToken` contains the actual JWT. Downstream handlers accessing `authInfo.token` will see `"internal"`, but those using `getMcpContextFromExtra` will find the real token. If this is intentional (internal clients don't need token validation), a brief comment would help future maintainers understand the design.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/mcp-v2/src/in-memory.ts` around lines 73 - 83, The code sets authInfo.token to the literal "internal" while embedding the real JWT in mcpContext.bearerToken (used by getMcpContextFromExtra), which is confusing; either propagate the real token into authInfo.token or add a clear comment documenting the deliberate split (that internal client flows bypass token validation and use "internal" while downstream handlers should call getMcpContextFromExtra to obtain the JWT). Update the clientTransport.send override (referencing clientTransport.send, authInfo.token, mcpContext.bearerToken, and getMcpContextFromExtra) accordingly so future maintainers understand the intent or so that downstream code sees the actual JWT depending on the chosen approach.plans/release-channels-spec.md (1)
99-108: 💤 Low value — Bootstrap example uses hardcoded
`desktop-v1.7.2` — update before executing. The manual bootstrap command references a specific version that will become stale. When executing step 1, replace
`desktop-v1.7.2` with the actual current desktop release tag. Consider adding a note or using a placeholder like `<current-desktop-tag>` to make this clearer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plans/release-channels-spec.md` around lines 99 - 108, The bootstrap example hardcodes the release tag `desktop-v1.7.2`, which will become stale; update the example in the block that runs gh release download / gh release create so it uses a placeholder like `<current-desktop-tag>` (or documents how to fetch the latest tag) instead of `desktop-v1.7.2`, and clarify that the user must replace that placeholder before running the gh release download / gh release create commands that reference `desktop-v1.7.2` and `desktop-latest`.packages/cli/src/commands/auth/login/command.ts (1)
100-108: ⚡ Quick win — Duplicate
`user.me` API call — reuse the earlier result.
`api.user.me.query()` is called on line 39 to display the user name, and again on line 103 to get the user ID for the return payload. Store the result from the first call and reuse it:

♻️ Proposed fix to avoid duplicate API call
```diff
 try {
-  const user = await api.user.me.query();
+  var user = await api.user.me.query();
   p.log.info(`${user.name} (${user.email})`);
 } catch (error) {
+  user = null;
   p.log.warn(
     `Could not fetch user profile: ${error instanceof Error ? error.message : String(error)}`,
   );
 }
```

Then at line 103:
```diff
 return {
   data: chosen
     ? {
-        userId: (await api.user.me.query()).id,
+        userId: user?.id ?? "unknown",
         organizationId: chosen.id,
         organizationName: chosen.name,
       }
     : { loggedIn: true },
 };
```

Alternatively, hoist the user fetch outside the try/catch if the ID is required for the return value.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/commands/auth/login/command.ts` around lines 100 - 108, The code currently calls api.user.me.query() twice (once earlier to display the user name and again in the return payload), so capture the initial user result into a local variable (e.g., const me = await api.user.me.query()) where it is first awaited and reuse that variable when building the return object (replace the second api.user.me.query().id with me.id); if the ID must be available outside the try/catch, hoist the user fetch to the outer scope before the try/catch and reuse that variable when constructing the returned data for the chosen branch.packages/cli/CLI_SPEC_CURRENT.md (1)
340-350: 💤 Low value — Table formatting issues in options section.
Static analysis flags malformed tables around lines 342-343. The pipe characters in the "Accepted but currently ignored" rows may be causing column count mismatches. This won't affect rendering in most viewers but could cause issues in strict Markdown parsers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/CLI_SPEC_CURRENT.md` around lines 340 - 350, The table rows for the CLI options (e.g., `--status`, `--priority`, `--assignee-me`/`-m`, `--creator-me`, `--search`/`-s`, `--limit`, `--offset`) are being parsed as malformed due to unescaped pipe characters in the Description cell; ensure each row has exactly two pipe separators and escape any literal "|" inside the description (use backslash-escaping or replace with HTML entity `|`), or wrap the entire description in backticks so the Markdown parser treats it as a single cell; also keep the default values (`50`, `0`) clearly inside the description text for `--limit` and `--offset`.apps/docs/src/components/Command/Command.tsx (1)
176-208: ⚡ Quick win — Split these helper components into their own files.
`CommandReturns`, `HumanOutput`, and `JsonOutput` make this a multi-component TSX file. Please move them into their own component folders/files and keep `apps/docs/src/components/Command/index.ts` as the barrel. As per coding guidelines,
`**/*.{tsx,ts}`: "Do not create multi-component files; use one component per file." and `**/*.tsx`: "Use one folder per component with structure: `ComponentName/ComponentName.tsx` + `index.ts` for barrel export."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/docs/src/components/Command/Command.tsx` around lines 176 - 208, Split the multi-component file by moving each component (CommandReturns, HumanOutput, JsonOutput) into its own folder named after the component with a ComponentName.tsx and an index.ts that re-exports the component; remove those components from the original Command.tsx and update the Command folder's index.ts barrel to re-export them (e.g., export { CommandReturns } from './CommandReturns'; export { HumanOutput } from './HumanOutput'; export { JsonOutput } from './JsonOutput'); ensure each new file preserves the original props interfaces and named exports and update any imports across the codebase that referenced the in-file components to import from the Command barrel (index.ts).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/docs/content/docs/cli/env-vars.mdx`:
- Around line 3-14: Add the missing SUPERSET_API_URL environment variable to the
environment variables table: insert a new table row with `SUPERSET_API_URL` as
the Variable and a concise Description like "Base API URL used by the CLI to
contact Superset (e.g. https://superset.example.com); equivalent to `--api-url`"
so it appears alongside `SUPERSET_API_KEY` and `SUPERSET_HOME_DIR`; update the
table near the top of the doc where the other env vars are defined (look for the
markdown table containing `SUPERSET_API_KEY` and `SUPERSET_HOME_DIR`) to keep
the docs in sync with the CLI.
In `@apps/docs/content/docs/cli/getting-started.mdx`:
- Around line 6-8: The sentence "The Superset CLI (`superset`) is a single
static binary" is misleading; update that wording to reflect that `superset` is
provided as the primary CLI binary alongside additional runtime/addon artifacts
produced by the release pipeline. Edit the line containing the phrase "The
Superset CLI (`superset`) is a single static binary" to something like "The
Superset CLI (`superset`) is provided as the primary CLI binary, with additional
runtime and addon artifacts produced by the release pipeline" (or similar
concise phrasing) so install/runtime expectations are accurate.
In
`@apps/docs/src/app/`(docs)/components/Sidebar/components/SidebarContent/SidebarContent.tsx:
- Around line 135-142: When meta.rootPath is undefined, the code currently calls
parseSectionsFromSeparators(tree.children) and may miss separators emitted into
the fallback tree; update the logic in the block that builds the default product
(the branch checking meta.rootPath === undefined) to use tree.fallback?.children
as a fallback source: compute a childrenSource = tree.children ??
tree.fallback?.children (or prefer tree.children but fall back to
tree.fallback?.children) and pass that into parseSectionsFromSeparators and the
firstItem lookup before pushing the product (refer to
parseSectionsFromSeparators, tree.children, tree.fallback, firstItem, and the
products.push call).
In `@apps/web/src/app/utils/pendingAuthRedirect/pendingAuthRedirect.ts`:
- Around line 29-40: The code currently accesses parsed.path without validating
the parsed object shape and only deletes the cookie on JSON parse failure,
leaving stale cookies when the path mismatches; update the logic in
pendingAuthRedirect to validate that the parsed value is a proper
StoredPendingRedirect (has string path and params fields) before accessing
properties, and ensure cookieStore.delete(COOKIE_NAME) is called both when JSON
is invalid and when parsed.path !== expectedPath so stale cookies are cleared;
use the existing StoredPendingRedirect type, COOKIE_NAME, cookieStore.delete,
expectedPath, parsed.path and parsed.params to locate and implement the checks
and deletion.
In `@packages/cli-framework/src/parser.ts`:
- Around line 81-88: The help/version shortcut logic currently only checks
optionsByFlag.has("--help") / has("--version") so short flags declared by a
command (e.g., "-h" or "-v") get overridden; update the guard in the parser
(where arg is compared and options._help/options._version set) to also check
ownership of the short forms by calling optionsByFlag.has("-h") for help and
optionsByFlag.has("-v") for version (i.e., require neither the long nor the
short flag be owned before setting _help/_version), ensuring the parser respects
command-declared "-h"/"-v" flags.
In `@packages/cli/DISTRIBUTION.md`:
- Around line 37-40: The fenced code block containing "superset start" is
missing a language tag (causing MD040); edit the fenced block that surrounds the
literal superset start line and change the opening fence from ``` to include a
language like ```text or ```bash so the block becomes e.g. ```text (or ```bash
if runnable) to satisfy markdownlint.
- Around line 198-200: Update the checklist in DISTRIBUTION.md to remove or mark
as completed the three items for the start/stop/status CLI commands: change the
lines "- [ ] `start` command — spawn host service, pass env, health check", "- [
] `stop` command — read manifest, SIGTERM, cleanup", and "- [ ] `status` command
— read manifest, check PID, health check port" to either be checked ("- [x]") or
remove them entirely so the docs no longer claim those commands are TODO now
that they are shipped.
In `@packages/cli/scripts/build-dist.ts`:
- Around line 252-256: Add a fail-fast validation in
packages/cli/scripts/build-dist.ts to ensure the Linux node-pty binary exists
before creating the tarball: after the step that may skip/expect a compiled
binary (check around the code that handles build/Release/ removal and
packaging), verify that build/Release/pty.node exists on Linux (process.platform
=== 'linux') and throw or exit with an error logged via the existing logger if
missing; this prevents packaging a broken tarball when CI skipped npm rebuild.
In `@packages/cli/src/commands/workspaces/delete/command.ts`:
- Around line 19-40: Wrap each workspace deletion inside a try/catch within the
for (const id of ids) loop so failures don’t hide prior successes: for each id,
resolve hostId (using ctx.api.v2Workspace.getFromHost.query and
resolveHostTarget) and call target.client.workspace.delete.mutate inside try; on
success push id into deleted (existing variable), on error push id and error
info into a failed array and continue; after the loop, if failed.length > 0,
throw or return a summary that includes both deleted and failed arrays (with
error messages) so callers can see which deletes succeeded and which failed.
In `@packages/cli/src/lib/api-client.ts`:
- Around line 24-28: The headers construction currently only treats keys
starting with "sk_live_" as x-api-key; change the condition to detect any secret
key prefix "sk_" instead of just "sk_live_" so all sk_ keys use { "x-api-key":
opts.bearer } rather than a Bearer token; update the check on opts.bearer (used
when building the headers variable) to opts.bearer.startsWith("sk_") and ensure
opts.bearer is a string (guard against null/undefined) before calling
startsWith.
In `@packages/mcp-v2/src/server.ts`:
- Around line 5-10: The public option McpServerOptions.onToolCall is declared
but ignored in createMcpServer (parameter named _options) so either wire it into
the server or remove it; modify createMcpServer to accept options (remove the
underscore), and pass options.onToolCall into the McpServer instance (or set
server.onToolCall = options.onToolCall) so that registered tools invoke the
callback during tool execution (ensure tool registration/execute paths in
McpServer call that callback). If you prefer to keep the API minimal, remove
onToolCall from McpServerOptions and any references instead of passing it
through.
In `@packages/mcp-v2/src/tools/tasks/get.ts`:
- Around line 12-15: The idOrSlug Zod string schema currently allows
whitespace-only values; update the validator to reject those by applying .trim()
(or equivalently refine to check trimmed length) before .min(1) so inputs like "
" fail validation; modify the idOrSlug declaration (the z.string() chain) to
z.string().trim().min(1).describe("Task UUID (e.g. tsk_…) or slug (e.g.
fix-login-bug).") so downstream byIdOrSlug calls never receive whitespace-only
strings.
In `@packages/trpc/src/router/automation/automation.ts`:
- Around line 199-207: When input.v2WorkspaceId is provided you currently call
verifyWorkspaceInOrg and only set v2ProjectId when it's missing; instead
validate that an existing v2ProjectId matches the workspace.projectId to prevent
inconsistent pairs: call verifyWorkspaceInOrg(organizationId,
input.v2WorkspaceId) (as already done), then if v2ProjectId is falsy assign
workspace.projectId, otherwise assert v2ProjectId === workspace.projectId and
throw or return a clear validation error if they differ (referencing
v2ProjectId, input.v2WorkspaceId, verifyWorkspaceInOrg, and
workspace.projectId).
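One way to express that validation, sketched with simplified stand-in types; `Workspace` and `resolveProjectId` are hypothetical names, not the router's actual code:

```typescript
// Simplified stand-in for the workspace returned by verifyWorkspaceInOrg.
interface Workspace {
  projectId: string;
}

function resolveProjectId(
  workspace: Workspace,
  v2ProjectId?: string,
): string {
  // No explicit project: derive it from the verified workspace.
  if (!v2ProjectId) return workspace.projectId;
  // An explicit project must match the workspace's project; otherwise the
  // caller supplied an inconsistent (workspace, project) pair.
  if (v2ProjectId !== workspace.projectId) {
    throw new Error(
      `v2ProjectId ${v2ProjectId} does not match workspace project ${workspace.projectId}`,
    );
  }
  return v2ProjectId;
}
```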
In `@packages/trpc/src/router/task/schema.ts`:
- Line 36: The zod schema for the `limit` field in
packages/trpc/src/router/task/schema.ts currently sets limit:
z.number().int().positive().max(500).default(50), which conflicts with the CLI
spec (CLI_SPEC_TARGET.md) that requires a max of 200; change the schema's max
from 500 to 200 to match the CLI spec (i.e., update the `limit` validator to
.max(200)). Ensure the `limit` field still keeps .int().positive().default(50)
and run tests/validation to confirm no other code assumes 500.
In `@packages/trpc/src/router/task/task.ts`:
- Around line 195-197: The ilike filter uses raw input to build the pattern
(ilike(tasks.title, `%${input.search}%`)), which allows % and _ to act as
wildcards; create and use an escapedSearch value where you first escape
backslashes then replace % -> \% and _ -> \_ so those characters are treated
literally, then build the pattern as `%${escapedSearch}%` and pass that to ilike
(keeping filters.push(ilike(tasks.title, pattern))). Ensure the escape behavior
is consistent with your DB/ORM (use the same escape character in any ESCAPE
clause or ORM option if required).
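A sketch of the escaping helper the comment describes; the order matters (escape the backslash first, then the wildcards), and the name `escapeLikePattern` is an assumption:

```typescript
// Escape LIKE/ILIKE metacharacters so user input matches literally.
// Backslash must be escaped before % and _ (this assumes the default
// "\" ESCAPE character; align with your DB/ORM if it differs).
function escapeLikePattern(input: string): string {
  return input
    .replace(/\\/g, "\\\\")
    .replace(/%/g, "\\%")
    .replace(/_/g, "\\_");
}

// The filter would then use the escaped value, e.g.:
// filters.push(ilike(tasks.title, `%${escapeLikePattern(input.search)}%`));
```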
In `@packages/trpc/src/trpc.ts`:
- Around line 97-99: The empty catch after calling verifyJWT blindly swallows
every error and hides operational failures; update the catch to only handle
expected token-validation failures (e.g., JsonWebTokenError/TokenExpiredError or
a library-specific InvalidToken error returned by verifyJWT) and re-throw or
propagate any other unexpected errors so transient or service issues are not
masked; locate the try around verifyJWT in this file and change the catch to
inspect the thrown error (by instanceof or error.name) and only suppress known
invalid-token cases while rethrowing the rest, optionally adding a brief log
when rethrowing.
In `@plans/cli-distribution-scope.md`:
- Around line 31-32: The plan contains inconsistent CLI command names for
authentication—some examples use "auth status", others "auth check", and one
references "auth whoami"; pick a single canonical command (e.g., "auth check")
and replace all occurrences so examples and acceptance checks match; update
every instance of "auth status" and "auth whoami" to the chosen alias across the
document (references: the examples and checks around the occurrences noted,
e.g., the snippets currently showing "auth status", "auth check", and "auth
whoami") to ensure the release-gate runs are deterministic.
- Around line 42-43: Remove all inline draft notes using // (e.g., the comment
preceding "superset automations list") and clean up spelling/grammar throughout
the document (fix occurrences of "differnt", "undertanding" and similar
misspellings), so the checklist reads as a polished release checklist; ensure no
stray inline comments remain and that each checklist item is written as a clear,
operator-facing sentence rather than a draft note.
In `@plans/cli-v1-audit.md`:
- Around line 276-307: The audit text describes the update flow incorrectly.
Update the CLI-5.1 and Distribution notes to state that the update command now
reads the rolling "cli-latest" GitHub Release (not /releases/latest), and that
the build/install matrix includes linux-arm64 in addition to darwin-arm64 and
linux-x64. Ensure mentions of the command (superset update), the build-time
define (SUPERSET_VERSION / env.VERSION), and the target-detection logic reflect
this behavior (reading cli-latest and supporting linux-arm64), and remove or
correct the outdated statement that only darwin-arm64 and linux-x64 are
detected.
- Around line 215-230: The docs still reference env.CLOUD_API_URL, getApiUrl(),
createApiClient(), and the command "auth status", while the shipped surface
uses SUPERSET_API_URL and the command "auth whoami"; the heading is also
reversed (the removed env var should be CLOUD_API_URL, not SUPERSET_API_URL).
Update this section to state that SUPERSET_API_URL is the live build-time
constant (removing references to env.CLOUD_API_URL), align the documented
getApiUrl() usage notes and createApiClient() signatures with the PR changes,
rename documented command examples and headings from "auth status" to "auth
whoami", and correct the heading to say CLOUD_API_URL was removed (not
SUPERSET_API_URL).
---
Outside diff comments:
In `@packages/host-service/src/terminal/env-strip.ts`:
- Around line 16-24: The HOST_SERVICE_RUNTIME_KEYS denylist removed the legacy
key and may leak runtime-injected API endpoints into PTY sessions; add
"CLOUD_API_URL" back into the HOST_SERVICE_RUNTIME_KEYS Set (alongside
AUTH_TOKEN, SUPERSET_API_URL, etc.) so the legacy key remains stripped during
the transition and keep it until all legacy producers are removed.
---
Nitpick comments:
In `@apps/docs/src/components/Command/Command.tsx`:
- Around line 176-208: Split the multi-component file by moving each component
(CommandReturns, HumanOutput, JsonOutput) into its own folder named after the
component with a ComponentName.tsx and an index.ts that re-exports the
component; remove those components from the original Command.tsx and update the
Command folder's index.ts barrel to re-export them (e.g., export {
CommandReturns } from './CommandReturns'; export { HumanOutput } from
'./HumanOutput'; export { JsonOutput } from './JsonOutput'); ensure each new
file preserves the original props interfaces and named exports and update any
imports across the codebase that referenced the in-file components to import
from the Command barrel (index.ts).
In `@apps/web/src/proxy.ts`:
- Around line 13-15: Extract PENDING_COOKIE_NAME and PENDING_COOKIE_TTL_SECONDS
into a single shared module (e.g., export from a new module like
pendingAuthConstants or re-export from an existing shared constants file),
replace the local declarations in apps/web/src/proxy.ts (remove the consts) and
import the constants where used, and update pendingAuthRedirect.ts to import the
same shared symbols instead of its own copies; ensure both modules import the
same exported names (PENDING_COOKIE_NAME, PENDING_COOKIE_TTL_SECONDS) and run
tests/lint to confirm no duplicate definitions remain.
In `@packages/cli/CLI_SPEC_CURRENT.md`:
- Around line 340-350: The table rows for the CLI options (e.g., `--status`,
`--priority`, `--assignee-me`/`-m`, `--creator-me`, `--search`/`-s`, `--limit`,
`--offset`) are being parsed as malformed due to unescaped pipe characters in
the Description cell; ensure each row has exactly two pipe separators and escape
any literal "|" inside the description (use backslash-escaping or replace it
with the HTML entity `&#124;`), or wrap the entire description in backticks so the
Markdown parser treats it as a single cell; also keep the default values (`50`,
`0`) clearly inside the description text for `--limit` and `--offset`.
In `@packages/cli/src/commands/auth/login/command.ts`:
- Around line 100-108: The code currently calls api.user.me.query() twice (once
earlier to display the user name and again in the return payload), so capture
the initial user result into a local variable (e.g., const me = await
api.user.me.query()) where it is first awaited and reuse that variable when
building the return object (replace the second api.user.me.query().id with
me.id); if the ID must be available outside the try/catch, hoist the user fetch
to the outer scope before the try/catch and reuse that variable when
constructing the returned data for the chosen branch.
In `@packages/cli/src/commands/update/command.ts`:
- Around line 66-92: The downloadAndExtract function currently calls fetch
without a timeout; add an AbortController, pass its signal into fetch(url, {
signal }), and start a timer (e.g., 5 minutes) that calls controller.abort() if
exceeded; clear the timer once fetch completes successfully. Ensure you catch an
aborted request and rethrow a CLIError with a clear message (e.g., "Download
timed out") before proceeding to create the tar process and extraction. Keep
changes scoped to downloadAndExtract and the fetch call so response.body
handling and the existing pipeline/tar logic remain unchanged.
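The timeout pattern can be sketched generically; `withTimeout` and the plain `Error` are stand-ins (the real code would throw `CLIError`), and the 5-minute default mirrors the comment's suggestion:

```typescript
const DOWNLOAD_TIMEOUT_MS = 5 * 60 * 1000;

// Runs an abortable operation (e.g. fetch(url, { signal })) with a deadline.
// On abort, the error is rethrown with a clear message; the real CLI code
// would use CLIError rather than a plain Error.
async function withTimeout<T>(
  run: (signal: AbortSignal) => Promise<T>,
  timeoutMs: number = DOWNLOAD_TIMEOUT_MS,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await run(controller.signal);
  } catch (error) {
    if (error instanceof Error && error.name === "AbortError") {
      throw new Error("Download timed out");
    }
    throw error;
  } finally {
    // Clear the timer once the operation settles so the process can exit.
    clearTimeout(timer);
  }
}

// Usage inside downloadAndExtract (sketch):
// const response = await withTimeout((signal) => fetch(url, { signal }));
```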
In `@packages/mcp-v2/src/in-memory.ts`:
- Around line 15-19: The docstring is inaccurate: the implementation overrides
clientTransport.send to inject authInfo (transport patching). Update the
top-level docstring in packages/mcp-v2/src/in-memory.ts to remove the phrase "no
transport monkey-patching" and explicitly state that clientTransport.send is
patched to attach authInfo (or alternatively, refactor the code to pass authInfo
via the SDK if available); reference the overridden method name
clientTransport.send and the function that builds the in-memory client/server
pair so reviewers can locate and update the docstring or refactor accordingly.
- Around line 73-83: The code sets authInfo.token to the literal "internal"
while embedding the real JWT in mcpContext.bearerToken (used by
getMcpContextFromExtra), which is confusing; either propagate the real token
into authInfo.token or add a clear comment documenting the deliberate split
(that internal client flows bypass token validation and use "internal" while
downstream handlers should call getMcpContextFromExtra to obtain the JWT).
Update the clientTransport.send override (referencing clientTransport.send,
authInfo.token, mcpContext.bearerToken, and getMcpContextFromExtra) accordingly
so future maintainers understand the intent or so that downstream code sees the
actual JWT depending on the chosen approach.
In `@plans/release-channels-spec.md`:
- Around line 99-108: The bootstrap example hardcodes the release tag
`desktop-v1.7.2`, which will become stale; update the example in the block that
runs gh release download / gh release create so it uses a placeholder like
`<current-desktop-tag>` (or documents how to fetch the latest tag) instead of
`desktop-v1.7.2`, and clarify that the user must replace that placeholder before
running the gh release download / gh release create commands that reference
`desktop-v1.7.2` and `desktop-latest`.
💤 Files with no reviewable changes (8)
- apps/desktop/src/renderer/routes/_authenticated/providers/CollectionsProvider/collections.ts
- packages/cli/src/commands/devices/meta.ts
- packages/cli/src/commands/host/meta.ts
- packages/cli/src/lib/command.ts
- apps/mobile/lib/collections/collections.ts
- packages/cli/src/commands/auth/whoami/command.ts
- packages/cli/src/commands/devices/list/command.ts
- packages/cli/src/commands/host/install/command.ts
description: Complete reference for environment variables that control Superset CLI behavior.
---

The Superset CLI supports the following environment variables to control
its behavior. Set them in your shell before running `superset`, or
export them from your shell profile (`~/.zshrc`, `~/.bashrc`) or your
CI environment to apply them across every session.

| Variable | Description |
| --- | --- |
| `SUPERSET_API_KEY` | API key (`sk_live_…` / `sk_test_…`) used in place of stored OAuth login. Equivalent to `--api-key`. |
| `SUPERSET_HOME_DIR` | Directory the CLI uses for `config.json` and the host service tree. Default: `~/.superset`. Shared with the desktop app. |
“Complete reference” is missing SUPERSET_API_URL.
This page currently omits SUPERSET_API_URL, which is now part of the CLI env surface in this PR. Please add it to avoid drift between docs and runtime behavior.
Suggested doc patch

 | Variable | Description |
 | --- | --- |
 | `SUPERSET_API_KEY` | API key (`sk_live_…` / `sk_test_…`) used in place of stored OAuth login. Equivalent to `--api-key`. |
+| `SUPERSET_API_URL` | Base URL for Superset API requests. Default: `https://api.superset.sh`. |
 | `SUPERSET_HOME_DIR` | Directory the CLI uses for `config.json` and the host service tree. Default: `~/.superset`. Shared with the desktop app. |

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
description: Complete reference for environment variables that control Superset CLI behavior.
---

The Superset CLI supports the following environment variables to control
its behavior. Set them in your shell before running `superset`, or
export them from your shell profile (`~/.zshrc`, `~/.bashrc`) or your
CI environment to apply them across every session.

| Variable | Description |
| --- | --- |
| `SUPERSET_API_KEY` | API key (`sk_live_…` / `sk_test_…`) used in place of stored OAuth login. Equivalent to `--api-key`. |
| `SUPERSET_API_URL` | Base URL for Superset API requests. Default: `https://api.superset.sh`. |
| `SUPERSET_HOME_DIR` | Directory the CLI uses for `config.json` and the host service tree. Default: `~/.superset`. Shared with the desktop app. |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/docs/content/docs/cli/env-vars.mdx` around lines 3 - 14, Add the missing
SUPERSET_API_URL environment variable to the environment variables table: insert
a new table row with `SUPERSET_API_URL` as the Variable and a concise
Description such as "Base URL for Superset API requests. Default:
`https://api.superset.sh`." (note that the `--api-url` flag was removed in this
PR, so the description should not reference it), so it appears alongside
`SUPERSET_API_KEY` and `SUPERSET_HOME_DIR`; update the markdown table near the
top of the doc (the one containing `SUPERSET_API_KEY` and `SUPERSET_HOME_DIR`)
to keep the docs in sync with the CLI.
The Superset CLI (`superset`) is a single static binary that drives the same
backend as the desktop app. Use it to manage workspaces, tasks, automations,
and the local host service from your terminal or from CI.
Clarify packaging wording to match actual distribution artifacts.
Line 6 says the CLI is “a single static binary,” but the release pipeline includes additional runtime/addon artifacts. Please adjust wording to avoid misleading install/runtime expectations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/docs/content/docs/cli/getting-started.mdx` around lines 6 - 8, The
sentence "The Superset CLI (`superset`) is a single static binary" is
misleading; update that wording to reflect that `superset` is provided as the
primary CLI binary alongside additional runtime/addon artifacts produced by the
release pipeline. Edit the line containing the phrase "The Superset CLI
(`superset`) is a single static binary" to something like "The Superset CLI
(`superset`) is provided as the primary CLI binary, with additional runtime and
addon artifacts produced by the release pipeline" (or similar concise phrasing)
so install/runtime expectations are accurate.
if (meta.rootPath === undefined) {
  const sections = parseSectionsFromSeparators(tree.children);
  const firstItem = sections[0]?.items[0];
  products.push({
    ...meta,
    url: firstItem?.href ?? "/",
    sections,
  });
Fall back to pageTree.fallback for the default docs product.
Line 136 only parses tree.children, even though the rest of buildProducts() already treats tree.fallback?.children as a valid source of truth. If the docs separators/pages are emitted into the fallback tree, the "docs" product ends up with no sections and the sidebar goes blank on non-CLI pages.
Suggested fix
if (meta.rootPath === undefined) {
- const sections = parseSectionsFromSeparators(tree.children);
+ const primarySections = parseSectionsFromSeparators(tree.children);
+ const sections =
+ primarySections.length > 0
+ ? primarySections
+ : parseSectionsFromSeparators(tree.fallback?.children ?? []);
const firstItem = sections[0]?.items[0];
products.push({
   ...meta,

📝 Committable suggestion
if (meta.rootPath === undefined) {
  const primarySections = parseSectionsFromSeparators(tree.children);
  const sections =
    primarySections.length > 0
      ? primarySections
      : parseSectionsFromSeparators(tree.fallback?.children ?? []);
  const firstItem = sections[0]?.items[0];
  products.push({
    ...meta,
    url: firstItem?.href ?? "/",
    sections,
  });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/docs/src/app/(docs)/components/Sidebar/components/SidebarContent/SidebarContent.tsx`
around lines 135 - 142, When meta.rootPath is undefined, the code currently
calls parseSectionsFromSeparators(tree.children) and may miss separators emitted
into the fallback tree; update the logic in the block that builds the default
product (the branch checking meta.rootPath === undefined) to use
tree.fallback?.children as a fallback source: compute a childrenSource =
tree.children ?? tree.fallback?.children (or prefer tree.children but fall back
to tree.fallback?.children) and pass that into parseSectionsFromSeparators and
the firstItem lookup before pushing the product (refer to
parseSectionsFromSeparators, tree.children, tree.fallback, firstItem, and the
products.push call).
let parsed: StoredPendingRedirect;
try {
  parsed = JSON.parse(cookie.value) as StoredPendingRedirect;
} catch {
  cookieStore.delete(COOKIE_NAME);
  return null;
}

if (parsed.path !== expectedPath) return null;

cookieStore.delete(COOKIE_NAME);
return parsed.params;
Validate parsed cookie payload and clear on path mismatch.
Line 37 leaves stale cookies behind, and unvalidated parsed JSON can still crash on property access (parsed.path) if payload shape is wrong.
Suggested fix
export async function consumePendingAuthParams(
expectedPath: string,
): Promise<Record<string, string> | null> {
const cookieStore = await cookies();
const cookie = cookieStore.get(COOKIE_NAME);
if (!cookie) return null;
- let parsed: StoredPendingRedirect;
+ let parsed: unknown;
try {
- parsed = JSON.parse(cookie.value) as StoredPendingRedirect;
+ parsed = JSON.parse(cookie.value);
} catch {
cookieStore.delete(COOKIE_NAME);
return null;
}
- if (parsed.path !== expectedPath) return null;
+ if (
+ !parsed ||
+ typeof parsed !== "object" ||
+ !("path" in parsed) ||
+ !("params" in parsed) ||
+ typeof (parsed as { path: unknown }).path !== "string" ||
+ typeof (parsed as { params: unknown }).params !== "object" ||
+ (parsed as { params: unknown }).params === null ||
+ !Object.values((parsed as { params: Record<string, unknown> }).params).every(
+ (v) => typeof v === "string",
+ )
+ ) {
+ cookieStore.delete(COOKIE_NAME);
+ return null;
+ }
+
+ const safeParsed = parsed as StoredPendingRedirect;
+ if (safeParsed.path !== expectedPath) {
+ cookieStore.delete(COOKIE_NAME);
+ return null;
+ }
cookieStore.delete(COOKIE_NAME);
- return parsed.params;
+ return safeParsed.params;
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/web/src/app/utils/pendingAuthRedirect/pendingAuthRedirect.ts` around
lines 29 - 40, The code currently accesses parsed.path without validating the
parsed object shape and only deletes the cookie on JSON parse failure, leaving
stale cookies when the path mismatches; update the logic in pendingAuthRedirect
to validate that the parsed value is a proper StoredPendingRedirect (has string
path and params fields) before accessing properties, and ensure
cookieStore.delete(COOKIE_NAME) is called both when JSON is invalid and when
parsed.path !== expectedPath so stale cookies are cleared; use the existing
StoredPendingRedirect type, COOKIE_NAME, cookieStore.delete, expectedPath,
parsed.path and parsed.params to locate and implement the checks and deletion.
if ((arg === "--help" || arg === "-h") && !optionsByFlag.has("--help")) {
  options._help = true;
  continue;
}
if (arg === "--version" || arg === "-v") {
if (
  (arg === "--version" || arg === "-v") &&
  !optionsByFlag.has("--version")
) {
Respect short-flag ownership for help/version shortcuts
At Line 81 and Line 86, the guard only checks --help / --version. If a command declares -h or -v (without the long form), parsing still short-circuits to _help / _version and skips the command’s own option.
Suggested fix
- if ((arg === "--help" || arg === "-h") && !optionsByFlag.has("--help")) {
+ const hasCustomHelp =
+ optionsByFlag.has("--help") || optionsByFlag.has("-h");
+ if ((arg === "--help" || arg === "-h") && !hasCustomHelp) {
options._help = true;
continue;
}
- if (
- (arg === "--version" || arg === "-v") &&
- !optionsByFlag.has("--version")
- ) {
+ const hasCustomVersion =
+ optionsByFlag.has("--version") || optionsByFlag.has("-v");
+ if ((arg === "--version" || arg === "-v") && !hasCustomVersion) {
options._version = true;
continue;
}

📝 Committable suggestion
const hasCustomHelp =
  optionsByFlag.has("--help") || optionsByFlag.has("-h");
if ((arg === "--help" || arg === "-h") && !hasCustomHelp) {
  options._help = true;
  continue;
}
const hasCustomVersion =
  optionsByFlag.has("--version") || optionsByFlag.has("-v");
if ((arg === "--version" || arg === "-v") && !hasCustomVersion) {
  options._version = true;
  continue;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/cli-framework/src/parser.ts` around lines 81 - 88, The help/version
shortcut logic currently only checks optionsByFlag.has("--help") /
has("--version") so short flags declared by a command (e.g., "-h" or "-v") get
overridden; update the guard in the parser (where arg is compared and
options._help/options._version set) to also check ownership of the short forms
by calling optionsByFlag.has("-h") for help and optionsByFlag.has("-v") for
version (i.e., require neither the long nor the short flag be owned before
setting _help/_version), ensuring the parser respects command-declared "-h"/"-v"
flags.
} catch {
  // Fall through to session-token resolution.
}
Empty catch swallows all JWT verification errors — consider narrowing.
The catch block silently swallows any error from verifyJWT, including unexpected failures (network issues, service unavailable, malformed responses). This could mask operational problems and silently fall back to session auth when the JWT was actually valid but verification failed transiently.
Consider catching only expected "invalid token" errors and re-throwing unexpected failures:
🛡️ Proposed fix to narrow error handling
- } catch {
- // Fall through to session-token resolution.
- }
+ } catch (error) {
+ // Only fall through for invalid/expired token errors.
+ // Re-throw unexpected failures (network, service errors) so they surface.
+ const isInvalidToken =
+ error instanceof Error &&
+ (error.message.includes("invalid") ||
+ error.message.includes("expired") ||
+ error.message.includes("malformed"));
+ if (!isInvalidToken) {
+ throw error;
+ }
+ // Fall through to session-token resolution.
+ }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/trpc/src/trpc.ts` around lines 97 - 99, The empty catch after
calling verifyJWT blindly swallows every error and hides operational failures;
update the catch to only handle expected token-validation failures (e.g.,
JsonWebTokenError/TokenExpiredError or a library-specific InvalidToken error
returned by verifyJWT) and re-throw or propagate any other unexpected errors so
transient or service issues are not masked; locate the try around verifyJWT in
this file and change the catch to inspect the thrown error (by instanceof or
error.name) and only suppress known invalid-token cases while rethrowing the
rest, optionally adding a brief log when rethrowing.
superset auth status
Unify the auth command name across the plan and acceptance checks.
Line 31 uses auth status, while Lines 204 and 226 use auth check, and Line 84 discusses auth whoami. Pick one final command/alias contract and make all command examples consistent so release-gate runs are deterministic.
Also applies to: 84-85, 204-205, 226-227
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plans/cli-distribution-scope.md` around lines 31 - 32, The plan contains
inconsistent CLI command names for authentication—some examples use "auth
status", others "auth check", and one references "auth whoami"; pick a single
canonical command (e.g., "auth check") and replace all occurrences so examples
and acceptance checks match; update every instance of "auth status" and "auth
whoami" to the chosen alias across the document (references: the examples and
checks around the occurrences noted, e.g., the snippets currently showing "auth
status", "auth check", and "auth whoami") to ensure the release-gate runs are
deterministic.
// Can we validate that it's ergonomic for an agent to read and update the markdown content for an automation? it'll be a common action probably
superset automations list
Remove inline // notes and fix spelling before using this as a release checklist.
The inline comments (for example on Lines 42, 63, 83, 86, 89, 100, 112) read like draft notes and include typos (differnt, undertanding), which makes the checklist ambiguous for operators.
Also applies to: 63-64, 83-90, 100-113
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plans/cli-distribution-scope.md` around lines 42 - 43, Remove all inline
draft notes using // (e.g., the comment preceding "superset automations list")
and clean up spelling/grammar throughout the document (fix occurrences of
"differnt", "undertanding" and similar misspellings), so the checklist reads as
a polished release checklist; ensure no stray inline comments remain and that
each checklist item is written as a clear, operator-facing sentence rather than
a draft note.
### CLI-3.3 — Drop `--api-url` flag, `apiUrl` config, `SUPERSET_API_URL` env — ✅ fixed

`env.CLOUD_API_URL` (build-time constant) is now the sole source.
`getApiUrl(config)` → `getApiUrl()`. `config.apiUrl` deleted.
`createApiClient(config, opts)` → `createApiClient(opts)`.

### CLI-3.4 — `device` → `host` terminology — ✅ fixed

`--device` flags renamed to `--host` (automations create/update). Global
`--device` option removed from `cli.config.ts`. `DeviceConfig`,
`readDeviceConfig`, `ctx.deviceId` all removed. `SUPERSET_DEVICE` env var
gone.

### CLI-3.5 — `auth check` → `auth status` — ✅ fixed

Directory renamed. Output dropped `apiUrl` field (per spec).
Update these renamed symbols to the PR’s final surface.
This section still documents env.CLOUD_API_URL as the live constant and auth status as the renamed command, but this PR’s shipped surface uses SUPERSET_API_URL and auth whoami. The heading at Line 215 is backwards too: the removed env is CLOUD_API_URL, not SUPERSET_API_URL.
Suggested wording
-### CLI-3.3 — Drop `--api-url` flag, `apiUrl` config, `SUPERSET_API_URL` env — ✅ fixed
+### CLI-3.3 — Drop `--api-url` flag, `apiUrl` config, `CLOUD_API_URL` env — ✅ fixed
-`env.CLOUD_API_URL` (build-time constant) is now the sole source.
+`env.SUPERSET_API_URL` (build-time constant) is now the sole source.
`getApiUrl(config)` → `getApiUrl()`. `config.apiUrl` deleted.
`createApiClient(config, opts)` → `createApiClient(opts)`.
-### CLI-3.5 — `auth check` → `auth status` — ✅ fixed
+### CLI-3.5 — `auth status` → `auth whoami` — ✅ fixed
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
### CLI-3.3 — Drop `--api-url` flag, `apiUrl` config, `CLOUD_API_URL` env — ✅ fixed

`env.SUPERSET_API_URL` (build-time constant) is now the sole source.
`getApiUrl(config)` → `getApiUrl()`. `config.apiUrl` deleted.
`createApiClient(config, opts)` → `createApiClient(opts)`.

### CLI-3.4 — `device` → `host` terminology — ✅ fixed

`--device` flags renamed to `--host` (automations create/update). Global
`--device` option removed from `cli.config.ts`. `DeviceConfig`,
`readDeviceConfig`, `ctx.deviceId` all removed. `SUPERSET_DEVICE` env var
gone.

### CLI-3.5 — `auth status` → `auth whoami` — ✅ fixed

Directory renamed. Output dropped `apiUrl` field (per spec).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plans/cli-v1-audit.md` around lines 215 - 230, The docs still reference
env.CLOUD_API_URL, getApiUrl(), createApiClient(), and the command "auth status"
while the shipped surface uses SUPERSET_API_URL and the command "auth whoami";
also the heading is reversed (the removed env should be CLOUD_API_URL not
SUPERSET_API_URL). Update the text in this section to: state that
SUPERSET_API_URL is the live build-time constant (remove references to
env.CLOUD_API_URL), keep the getApiUrl() and createApiClient() signature
notes aligned with the PR changes, rename
any documented command examples and headings from "auth status" to "auth
whoami", and correct the heading to say CLOUD_API_URL was removed (not
SUPERSET_API_URL).
### CLI-5.1 — `superset update` mechanism — ✅ fixed

New `superset update` command at
`packages/cli/src/commands/update/command.ts`:

- Detects target (`darwin-arm64`, `linux-x64`).
- Fetches latest `cli-v*` release from GitHub
  (`/repos/superset-sh/superset/releases/latest`).
- Downloads matching `superset-<target>.tar.gz` asset.
- Extracts to a tempdir; verifies the new layout has `bin/superset`.
- Atomic-replaces the install root: rename current → `.bak`, move new
  in, on failure roll back; on success delete `.bak`.
- `--check` flag prints version comparison without installing.
- `--force` re-installs even when on the latest version.
- Refuses to run from a dev build (`SUPERSET_VERSION="0.0.0-dev"`).
- Build-time `SUPERSET_VERSION` define added to cli.config.ts; exposed
  via `env.VERSION`.
- Install root is `dirname(dirname(process.execPath))` matching the
  build-dist layout (`bin/superset`, `lib/`, `share/migrations/`).

Caveats explicitly out of scope: signature/checksum verification (covered
by CLI-5.3 below), Homebrew-installed binaries (CLI-5.2 — those should
update via `brew upgrade`).

### CLI-5.2 — Distribution channels — design committed

- **GitHub release tarballs**: ✅ canonical channel. `build-cli.yml`
  produces them; `superset update` consumes them.
- **Homebrew**: ✅ secondary. `bump-homebrew.yml` already wired up; users
  install via `brew install superset/tap/superset` and update via
  `brew upgrade`. `superset update` on a brew-installed binary should
  detect that and tell the user to use brew (future CLI tweak).
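The target-detection bullet above can be sketched as follows. `detectTarget` is a hypothetical helper name, and the supported set reflects the four-target build matrix this PR ships (darwin-x64/arm64, linux-x64/arm64) rather than the two targets the audit text lists:

```typescript
// Illustrative target-tuple detection for the release-asset name.
// Defaults read from the Node process; both parameters exist so the
// mapping is testable in isolation.
function detectTarget(
  platform: string = process.platform,
  arch: string = process.arch,
): string {
  const supported = new Set([
    "darwin-arm64",
    "darwin-x64",
    "linux-x64",
    "linux-arm64",
  ]);
  const target = `${platform}-${arch}`;
  if (!supported.has(target)) {
    throw new Error(`unsupported target: ${target}`);
  }
  return target; // asset name would be superset-<target>.tar.gz
}
```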
Bring the update-channel notes in line with the shipped distribution flow.
This section still says superset update uses /releases/latest and only detects darwin-arm64 + linux-x64. The PR summary says the CLI now reads the rolling cli-latest GitHub Release specifically to avoid /releases/latest, and the build matrix includes linux-arm64 too. As written, the audit documents the old mechanism.
Suggested wording
-- Detects target (`darwin-arm64`, `linux-x64`).
-- Fetches latest `cli-v*` release from GitHub
- (`/repos/superset-sh/superset/releases/latest`).
+- Detects target (`darwin-arm64`, `linux-x64`, `linux-arm64`).
+- Reads the rolling `cli-latest` GitHub Release directly instead of
+ `/releases/latest`.
-- **GitHub release tarballs**: ✅ canonical channel. `build-cli.yml`
- produces them; `superset update` consumes them.
+- **GitHub release tarballs**: ✅ canonical channel. `build-cli.yml`
+ publishes the rolling `cli-latest` release that `superset update`
+  consumes directly.
🧰 Tools
🪛 LanguageTool
[style] ~307-~307: Consider using the more polite verb “ask” (“tell” implies ordering/instructing someone).
Context: ...stalled binary should detect that and tell the user to use brew (future CLI tweak)...
(TELL_ASK)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plans/cli-v1-audit.md` around lines 276 - 307, The audit text incorrectly
describes the update flow; update the CLI-5.1 and Distribution notes to state
that the update command now reads the rolling "cli-latest" GitHub Release (not
/releases/latest) and that the build/install matrix includes linux-arm64 in
addition to darwin-arm64 and linux-x64; ensure mentions of the command (superset
update) and build-time define (SUPERSET_VERSION / env.VERSION) and the
target-detection logic reflect the new behavior (reading cli-latest and
supporting linux-arm64) and remove or correct the outdated statement about only
detecting darwin-arm64 + linux-x64.
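The atomic-replace step from CLI-5.1 (rename current → `.bak`, move new in, roll back on failure, drop `.bak` on success) can be sketched like this; paths and the `atomicReplace` name are illustrative, not the CLI's actual code:

```typescript
import {
  existsSync,
  mkdirSync,
  mkdtempSync,
  readFileSync,
  renameSync,
  rmSync,
  writeFileSync,
} from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Rename-based swap: rename() is atomic on the same filesystem, so at
// every point there is either a working install or a restorable backup.
function atomicReplace(installRoot: string, stagedRoot: string): void {
  const backup = `${installRoot}.bak`;
  renameSync(installRoot, backup); // current → .bak
  try {
    renameSync(stagedRoot, installRoot); // staged layout becomes live
  } catch (err) {
    renameSync(backup, installRoot); // restore the old install on failure
    throw err;
  }
  rmSync(backup, { recursive: true, force: true }); // success: drop .bak
}
```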
Actionable comments posted: 3
♻️ Duplicate comments (1)
packages/trpc/src/router/automation/automation.ts (1)
191-199: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Validate `v2ProjectId` against the provided workspace before insert.

When both IDs are sent, Line 196 only backfills missing `v2ProjectId`; it does not reject mismatches. That allows inconsistent project/workspace ownership metadata to be stored.

Suggested fix

  if (input.v2WorkspaceId) {
    const workspace = await verifyWorkspaceInOrg(
      organizationId,
      input.v2WorkspaceId,
    );
+   if (v2ProjectId && v2ProjectId !== workspace.projectId) {
+     throw new TRPCError({
+       code: "BAD_REQUEST",
+       message:
+         "v2ProjectId must match the project of the provided v2WorkspaceId",
+     });
+   }
    if (!v2ProjectId) {
      v2ProjectId = workspace.projectId;
    }
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/trpc/src/router/automation/automation.ts` around lines 191 - 199, When input.v2WorkspaceId is provided, currently verifyWorkspaceInOrg(...) is used only to backfill v2ProjectId; instead, after obtaining workspace.projectId compare it to the supplied v2ProjectId and reject if they differ. In the block that calls verifyWorkspaceInOrg(organizationId, input.v2WorkspaceId) (the code handling input.v2WorkspaceId and v2ProjectId), if v2ProjectId is already set and v2ProjectId !== workspace.projectId throw an appropriate error (e.g., validation/authorization error) so mismatched workspace/project pairs cannot be stored.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/mcp-v2/src/auth.ts`:
- Around line 131-136: Normalize the apiUrl once at the top of the function
before building the JWKS/issuer/audience values: trim any trailing slash from
the incoming apiUrl (e.g., create normalizedApiUrl = apiUrl.replace(/\/+$/, ''))
and then use normalizedApiUrl when constructing jwksUrl
(`${normalizedApiUrl}/api/auth/jwks`), issuer (normalizedApiUrl) and audience
entries (normalizedApiUrl, `${normalizedApiUrl}/`,
`${normalizedApiUrl}/api/v2/agent/mcp`) so verifyAccessToken(...) uses
consistent, slash-normalized values; update the code around the
verifyAccessToken(...) call to reference the normalized variable instead of raw
apiUrl.
In `@packages/mcp-v2/src/define-tool.ts`:
- Around line 69-82: The success telemetry is emitted before the response is
serialized, so if successResult(data) throws (e.g., JSON.stringify fails) the
call is logged as both success and failure; change the flow to call
successResult(result) first to validate/produce the response, then emit the
success telemetry (the existing telemetry emit routine), and finally return the
response; locate the usage that currently emits telemetry then calls
successResult and reorder them so successResult runs before emitting telemetry
(ensure any thrown errors still trigger the existing failure telemetry path).
In `@packages/mcp-v2/src/tools/automations/create.ts`:
- Around line 38-47: Add a local precheck that throws a clear, immediate error
when neither v2ProjectId nor v2WorkspaceId is provided before calling the API:
in packages/mcp-v2/src/tools/automations/create.ts locate the function that
reads the parsed input (referencing the v2ProjectId and v2WorkspaceId symbols)
and add an if-check like `if (!v2ProjectId && !v2WorkspaceId) throw new
Error("Provide either v2ProjectId or v2WorkspaceId")` so callers fail fast at
the tool level; apply the same precheck in the other analogous handler block
that processes the same inputs (the block covering the other occurrence of
v2ProjectId/v2WorkspaceId).
---
Duplicate comments:
In `@packages/trpc/src/router/automation/automation.ts`:
- Around line 191-199: When input.v2WorkspaceId is provided, currently
verifyWorkspaceInOrg(...) is used only to backfill v2ProjectId; instead, after
obtaining workspace.projectId compare it to the supplied v2ProjectId and reject
if they differ. In the block that calls verifyWorkspaceInOrg(organizationId,
input.v2WorkspaceId) (the code handling input.v2WorkspaceId and v2ProjectId), if
v2ProjectId is already set and v2ProjectId !== workspace.projectId throw an
appropriate error (e.g., validation/authorization error) so mismatched
workspace/project pairs cannot be stored.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6433d02a-940d-4daf-99dd-27cb82834e7b
📒 Files selected for processing (22)
- apps/api/src/app/api/v2/agent/[transport]/route.ts
- apps/desktop/src/renderer/routes/_authenticated/_dashboard/automations/$automationId/components/AutomationBody/AutomationBody.tsx
- apps/docs/content/docs/cli/cli-reference.mdx
- packages/cli/src/commands/automations/get/command.ts
- packages/cli/src/commands/automations/prompt/command.ts
- packages/cli/src/commands/automations/update/command.ts
- packages/mcp-v2/src/auth.ts
- packages/mcp-v2/src/define-tool.ts
- packages/mcp-v2/src/in-memory.ts
- packages/mcp-v2/src/index.ts
- packages/mcp-v2/src/server.ts
- packages/mcp-v2/src/tools/automations/create.ts
- packages/mcp-v2/src/tools/automations/get.ts
- packages/mcp-v2/src/tools/automations/get_prompt.ts
- packages/mcp-v2/src/tools/automations/list.ts
- packages/mcp-v2/src/tools/automations/set_prompt.ts
- packages/mcp-v2/src/tools/automations/update.ts
- packages/mcp-v2/src/tools/register.ts
- packages/mcp-v2/src/tools/tasks/create.ts
- packages/mcp-v2/src/tools/tasks/update.ts
- packages/trpc/src/router/automation/automation.ts
- packages/trpc/src/router/automation/schema.ts
✅ Files skipped from review due to trivial changes (2)
- packages/mcp-v2/src/tools/tasks/create.ts
- apps/docs/content/docs/cli/cli-reference.mdx
🚧 Files skipped from review as they are similar to previous changes (9)
- packages/mcp-v2/src/tools/automations/list.ts
- packages/mcp-v2/src/tools/tasks/update.ts
- packages/mcp-v2/src/index.ts
- packages/mcp-v2/src/tools/register.ts
- packages/mcp-v2/src/server.ts
- packages/cli/src/commands/automations/update/command.ts
- packages/mcp-v2/src/in-memory.ts
- packages/trpc/src/router/automation/schema.ts
- packages/mcp-v2/src/tools/automations/update.ts
payload = (await verifyAccessToken(token, {
  jwksUrl: `${apiUrl}/api/auth/jwks`,
  verifyOptions: {
    issuer: apiUrl,
    audience: [apiUrl, `${apiUrl}/`, `${apiUrl}/api/v2/agent/mcp`],
  },
🧩 Analysis chain

Verification scripts executed against superset-sh/superset (output collapsed):

# First, let me explore the file structure and locate the auth.ts file
find . -type f -name "auth.ts" | head -20

# Read the auth.ts file to see the current implementation
cat -n packages/mcp-v2/src/auth.ts

# Check how apiUrl is used in the entire file and related functions
rg "apiUrl" packages/mcp-v2/src/auth.ts -n -A 2

# Search for calls to resolveMcpContext to understand how apiUrl is provided
rg "resolveMcpContext" packages/mcp-v2/src/ -B 2 -A 2

# Check where apiUrl might come from - likely environment variables
rg "apiUrl|API_URL" packages/mcp-v2/src/ -n | head -30

# Check if similar pattern exists in other auth.ts files (e.g., packages/mcp/src/auth.ts)
cat -n packages/mcp/src/auth.ts | head -200

# Check other auth.ts files for similar patterns
rg "verifyAccessToken|jwksUrl|issuer" packages/mcp/src/auth.ts -n -B 2 -A 2

# Check how resolveMcpContext is called - where does apiUrl come from?
rg "resolveMcpContext" packages/mcp-v2/src/ -B 3 -A 3 --type ts

# Look for environment variable or configuration usage
rg "process.env|getEnv|config" packages/mcp-v2/src/ -n | grep -i url

# Search across entire codebase for resolveMcpContext usage
rg "resolveMcpContext" --type ts --type tsx -B 5 -A 5

# Check the mcp-v2 package structure to find usage
git ls-files packages/mcp-v2 | head -20

# Look for actual service/handler files that would call resolveMcpContext
find packages/mcp-v2 -type f \( -name "*.ts" -o -name "*.tsx" \) | xargs ls -la

# Search for resolveMcpContext calls across the codebase
rg "resolveMcpContext" --type ts -B 5 -A 5

# Look at server.ts which likely uses resolveMcpContext
cat -n packages/mcp-v2/src/server.ts

# Check context-utils.ts as it might be related to context handling
cat -n packages/mcp-v2/src/context-utils.ts

# Examine the actual usage in the API route
cat -n apps/api/src/app/api/v2/agent/[transport]/route.ts | head -80

# Check if NEXT_PUBLIC_API_URL is documented anywhere with potential trailing slash issues
rg "NEXT_PUBLIC_API_URL" --type ts -B 2 -A 2 | head -40

# Verify that the suggested fix handles all apiUrl usages correctly
rg "apiUrl" packages/mcp-v2/src/auth.ts -n
Normalize apiUrl before building JWKS/issuer/audience values.
Using apiUrl raw can break OAuth verification when the env has a trailing slash. If NEXT_PUBLIC_API_URL=https://api.example.com/, the issuer becomes https://api.example.com/ (with slash), causing JWT verification to fail when the token's issuer claim is https://api.example.com (without slash). Additionally, jwksUrl becomes https://api.example.com//api/auth/jwks (double slash). Normalize once at function entry and reuse.
💡 Suggested fix
export async function resolveMcpContext(
req: Request,
apiUrl: string,
): Promise<McpContext> {
+ const normalizedApiUrl = apiUrl.replace(/\/+$/, "");
const token = extractBearer(req);
if (!token) {
throw new McpUnauthorizedError("Missing bearer token");
}
@@
} else if (looksLikeJwt(token)) {
({ userId, organizationId, clientLabel } = await resolveOAuth(
token,
- apiUrl,
+ normalizedApiUrl,
));
source = "oauth";
} else {
throw new McpUnauthorizedError("Unrecognized token format");
} async function resolveOAuth(
token: string,
apiUrl: string,
): Promise<{
@@
payload = (await verifyAccessToken(token, {
jwksUrl: `${apiUrl}/api/auth/jwks`,
verifyOptions: {
issuer: apiUrl,
audience: [apiUrl, `${apiUrl}/`, `${apiUrl}/api/v2/agent/mcp`],
},
})) as Record<string, unknown>;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/mcp-v2/src/auth.ts` around lines 131 - 136, Normalize the apiUrl
once at the top of the function before building the JWKS/issuer/audience values:
trim any trailing slash from the incoming apiUrl (e.g., create normalizedApiUrl
= apiUrl.replace(/\/+$/, '')) and then use normalizedApiUrl when constructing
jwksUrl (`${normalizedApiUrl}/api/auth/jwks`), issuer (normalizedApiUrl) and
audience entries (normalizedApiUrl, `${normalizedApiUrl}/`,
`${normalizedApiUrl}/api/v2/agent/mcp`) so verifyAccessToken(...) uses
consistent, slash-normalized values; update the code around the
verifyAccessToken(...) call to reference the normalized variable instead of raw
apiUrl.
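A minimal sketch of the normalization this comment asks for; `buildVerifyOptions` is a hypothetical helper mirroring the reviewed snippet's JWKS/issuer/audience shape, not the real auth.ts:

```typescript
// Strip trailing slashes once, then derive every URL-based value from
// the normalized base so issuer/audience/jwksUrl stay consistent even
// when the env var ends in "/".
function normalizeApiUrl(apiUrl: string): string {
  return apiUrl.replace(/\/+$/, "");
}

function buildVerifyOptions(apiUrl: string) {
  const base = normalizeApiUrl(apiUrl);
  return {
    jwksUrl: `${base}/api/auth/jwks`,
    issuer: base,
    audience: [base, `${base}/`, `${base}/api/v2/agent/mcp`],
  };
}
```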
function successResult(data: unknown): CallToolResult {
  return {
    structuredContent:
      data && typeof data === "object" && !Array.isArray(data)
        ? (data as Record<string, unknown>)
        : { result: data },
    content: [
      {
        type: "text" as const,
        text: JSON.stringify(data, null, 2),
      },
    ],
  };
}
🧩 Analysis chain

Verification scripts executed (output collapsed):

# First, find and read the file to understand the structure
fd -t f "define-tool.ts" packages/

# Get the file size first to determine if we need to read in sections
wc -l packages/mcp-v2/src/define-tool.ts

# Read the entire file to see the context around lines 69-82 and 119-132
cat -n packages/mcp-v2/src/define-tool.ts
Move response serialization before success telemetry emission.
successResult(result) can throw on line 131 if JSON.stringify encounters BigInt, circular references, or non-serializable data. Currently, success telemetry is emitted (lines 122–130) before the serialization attempt (line 131), causing a single invocation to be logged as both success and failure if successResult() throws.
Call successResult() first to validate serialization succeeds, then emit success telemetry, then return the response.
Suggested fix
const startedAt = Date.now();
try {
const result = await def.handler(args, ctx);
+ const response = successResult(result);
emitToolCall(server, {
toolName: def.name,
userId: ctx.userId,
organizationId: ctx.organizationId,
source: ctx.source,
clientLabel: ctx.clientLabel,
durationMs: Date.now() - startedAt,
success: true,
});
- return successResult(result);
+ return response;
  } catch (e) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/mcp-v2/src/define-tool.ts` around lines 69 - 82, The success
telemetry is emitted before the response is serialized, so if
successResult(data) throws (e.g., JSON.stringify fails) the call is logged as
both success and failure; change the flow to call successResult(result) first to
validate/produce the response, then emit the success telemetry (the existing
telemetry emit routine), and finally return the response; locate the usage that
currently emits telemetry then calls successResult and reorder them so
successResult runs before emitting telemetry (ensure any thrown errors still
trigger the existing failure telemetry path).
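The reordering matters because JSON.stringify can itself throw (BigInt values, circular references). A self-contained sketch with hypothetical names — not the real define-tool.ts internals — shows the failure mode and the fix:

```typescript
// Serialize-then-emit: the response is built first, so a serialization
// failure falls into the catch and is logged exactly once, as a failure.
type Telemetry = { success: boolean };

function runTool(
  handler: () => unknown,
  emit: (t: Telemetry) => void,
): string {
  try {
    const result = handler();
    const response = JSON.stringify(result, null, 2); // may throw
    emit({ success: true }); // only after serialization succeeded
    return response;
  } catch (e) {
    emit({ success: false });
    throw e;
  }
}
```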
v2ProjectId: z
  .string()
  .uuid()
  .optional()
  .describe("Project UUID. Provide this OR v2WorkspaceId."),
v2WorkspaceId: z
  .string()
  .uuid()
  .nullish()
  .describe("Workspace UUID to reuse. Provide this OR v2ProjectId."),
Enforce required targeting IDs in the tool before calling the API.
The tool description requires at least one of v2ProjectId or v2WorkspaceId, but this is not checked locally. Add a precheck so callers get an immediate, tool-level error instead of a downstream API validation failure.
Suggested fix
handler: async (input, ctx) => {
+ if (!input.v2ProjectId && !input.v2WorkspaceId) {
+ throw new Error("Provide v2ProjectId or v2WorkspaceId.");
+ }
const caller = createMcpCaller(ctx);
return caller.automation.create(input);
 },
Also applies to: 69-72
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/mcp-v2/src/tools/automations/create.ts` around lines 38 - 47, Add a
local precheck that throws a clear, immediate error when neither v2ProjectId nor
v2WorkspaceId is provided before calling the API: in
packages/mcp-v2/src/tools/automations/create.ts locate the function that reads
the parsed input (referencing the v2ProjectId and v2WorkspaceId symbols) and add
an if-check like `if (!v2ProjectId && !v2WorkspaceId) throw new Error("Provide
either v2ProjectId or v2WorkspaceId")` so callers fail fast at the tool level;
apply the same precheck in the other analogous handler block that processes the
same inputs (the block covering the other occurrence of
v2ProjectId/v2WorkspaceId).
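The suggested precheck as a standalone sketch; the input shape mirrors the zod fields above, and `assertTargeting` is a hypothetical name rather than the tool's actual code:

```typescript
// Fail fast at the tool layer instead of relying on downstream API
// validation: exactly the "at least one targeting ID" rule from the
// schema descriptions.
interface AutomationTarget {
  v2ProjectId?: string;
  v2WorkspaceId?: string | null;
}

function assertTargeting(input: AutomationTarget): void {
  if (!input.v2ProjectId && !input.v2WorkspaceId) {
    throw new Error("Provide either v2ProjectId or v2WorkspaceId");
  }
}
```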
30aed8b to
311f1d9
… ci.yml

Mirrors the desktop pattern:

- build-cli.yml — reusable workflow_call. Takes a JSON `targets` matrix input and runs the full distribution build (Node runtime + native modules + tarball + smoke test). No triggers of its own.
- release-cli.yml — fires on cli-v* tag push (and workflow_dispatch as manual escape hatch). Calls build-cli.yml with the full 3-target matrix, then runs the release + cli-latest rolling-pointer steps. Release job is gated to tag pushes, so workflow_dispatch builds the matrix without publishing.
- ci.yml — adds a build-cli job alongside the existing desktop build job. Cheap source-level build (bun turbo run build --filter=@superset/cli) on every PR/main push, matching how the desktop build job works.
🧹 Nitpick comments (1)
.github/workflows/build-cli.yml (1)
36-48: 💤 Low value

Consider making the node-pty path resolution more robust.

The glob `node_modules/.bun/node-pty@*/node_modules/node-pty` assumes a single version match. While `--frozen` should ensure this, if the glob matches zero or multiple directories, `ls -d` will fail or return multiple paths, potentially causing `cd` to fail silently or unpredictably.

♻️ Optional: Add explicit single-match validation
   bun install --frozen --ignore-scripts
-  PTY_DIR=$(ls -d node_modules/.bun/node-pty@*/node_modules/node-pty)
+  PTY_DIR=$(ls -d node_modules/.bun/node-pty@*/node_modules/node-pty 2>/dev/null | head -n1)
+  if [[ -z "$PTY_DIR" ]]; then
+    echo "::error::node-pty directory not found"
+    exit 1
+  fi
   (cd "$PTY_DIR" && npx --yes node-gyp rebuild)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/build-cli.yml around lines 36 - 48, The PTY_DIR resolution using the glob "node_modules/.bun/node-pty@*/node_modules/node-pty" (assigned to PTY_DIR) can return zero or multiple matches causing unpredictable behavior; update the install step to explicitly check the glob results immediately after evaluating the glob (the PTY_DIR assignment and subsequent (cd "$PTY_DIR" && npx --yes node-gyp rebuild)) and fail fast with a clear error if there are no matches or more than one match, otherwise continue with the single matched path; ensure the check emits a helpful error message and exits non‑zero so the workflow cannot proceed with an ambiguous PTY_DIR.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In @.github/workflows/build-cli.yml:
- Around line 36-48: The PTY_DIR resolution using the glob
"node_modules/.bun/node-pty@*/node_modules/node-pty" (assigned to PTY_DIR) can
return zero or multiple matches causing unpredictable behavior; update the
install step to explicitly check the glob results immediately after evaluating
the glob (the PTY_DIR assignment and subsequent (cd "$PTY_DIR" && npx --yes
node-gyp rebuild)) and fail fast with a clear error if there are no matches or
more than one match, otherwise continue with the single matched path; ensure the
check emits a helpful error message and exits non‑zero so the workflow cannot
proceed with an ambiguous PTY_DIR.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ad33a3ae-acce-47c3-86b3-497f00167f76
📒 Files selected for processing (3)
- .github/workflows/build-cli.yml
- .github/workflows/ci.yml
- .github/workflows/release-cli.yml
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/ci.yml
# Conflicts:
#	apps/web/src/app/cli/authorize/page.tsx
#	apps/web/src/app/oauth/consent/page.tsx
#	apps/web/src/proxy.ts
#	packages/trpc/src/router/automation/automation.ts
#	packages/trpc/src/router/automation/schema.ts
#	packages/trpc/src/router/task/task.ts
#	packages/trpc/src/trpc.ts
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/db/src/schema/auth.ts`:
- Around line 190-191: The schema currently defines subjectType as
text("subject_type") which allows invalid OIDC values; change this to enforce
only "public" or "pairwise" by adding a DB-level CHECK or enum and adding
application-level validation where clients are created/updated. Replace the
free-text column definition (subjectType / "subject_type") with a constrained
type (Postgres ENUM or text with CHECK subject_type IN ('public','pairwise')) in
the schema/migration, and update the client creation/update validation logic
(e.g., the functions that accept client metadata such as
createClient/updateClient or the repository that inserts into the auth table) to
reject anything other than "public" or "pairwise" before persisting. Ensure
tests are updated to cover rejection of invalid subject_type values.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 65ce2f62-f7bc-4a01-a70d-73275beaf415
📒 Files selected for processing (4)
- apps/web/src/app/cli/authorize/page.tsx
- packages/db/src/schema/auth.ts
- packages/db/src/schema/schema.ts
- packages/trpc/src/router/automation/automation.ts
✅ Files skipped from review due to trivial changes (1)
- apps/web/src/app/cli/authorize/page.tsx
requirePKCE: boolean("require_pkce"),
subjectType: text("subject_type"),
🧩 Analysis chain

Verification scripts executed (output collapsed):

# First, let's look at the auth.ts schema file around lines 190-191
sed -n '185,195p' packages/db/src/schema/auth.ts

# Check if migration files exist for this schema change
ls -la packages/db/drizzle/ | head -20

# Look for recent migration files that might correspond to this change
find packages/db/drizzle -type f -name "*.sql" | sort -r | head -10

# Check the most recent oauth-related migration
cat packages/db/drizzle/0030_better_auth_1_5_upgrade.sql

# Check earlier oauth migration
cat packages/db/drizzle/0014_better_auth_mcp_oauth.sql

# Check the entire auth.ts file to see patterns and enums
wc -l packages/db/src/schema/auth.ts

# Look for enum usage in auth.ts
rg "enum\(|boolean\(|text\(" packages/db/src/schema/auth.ts | head -30

# Read the entire auth.ts file to understand the full context
cat -n packages/db/src/schema/auth.ts

# Check if there are any migration files that mention subjectType
grep -r "subject_type" packages/db/drizzle/

# Check if there are any other constraint examples in the migrations or schema
grep -r "CHECK\|CONSTRAINT" packages/db/drizzle/*.sql | head -20
Constrain subjectType to valid OIDC values ("public" or "pairwise").
subject_type is standards-defined client metadata per OpenID Connect, which only allows public and pairwise as valid values. Leaving this as unconstrained text allows invalid client metadata to be persisted. Add a PostgreSQL CHECK constraint or application-level validation to enforce the allowed values.
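The constraint the comment asks for can live at either layer. A minimal application-level sketch of the validation, assuming the allowed set from the OIDC client-metadata spec; `parseSubjectType` is a hypothetical helper name, not from this codebase, and the SQL `CHECK` alternative is shown in the comment:

```typescript
// DB-level alternative (sketch, not a generated drizzle migration):
//   ALTER TABLE "oauth_clients"
//     ADD CONSTRAINT "oauth_clients_subject_type_check"
//     CHECK ("subject_type" IN ('public', 'pairwise'));

const SUBJECT_TYPES = ["public", "pairwise"] as const;
type SubjectType = (typeof SUBJECT_TYPES)[number];

// Hypothetical helper to run wherever client metadata is created/updated.
function parseSubjectType(value: string | null | undefined): SubjectType | null {
  // The column is nullable, so absent metadata passes through unchanged.
  if (value == null) return null;
  if ((SUBJECT_TYPES as readonly string[]).includes(value)) {
    return value as SubjectType;
  }
  throw new Error(
    `invalid subject_type "${value}"; expected one of: ${SUBJECT_TYPES.join(", ")}`,
  );
}
```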
🧹 Preview Cleanup Complete
The following preview resources have been cleaned up:
Thank you for your contribution! 🎉
- workspaces create: requires --host <id> OR --local (no implicit default).
- workspaces list/delete: optional --host or --local filter.
- projects list: switched from per-host to cloud v2Project.list (projects
are organization-scoped; the host concept lives on workspaces).
New helpers in lib/host-target/resolveHostFlags.ts:
- resolveHostFilter({host, local}): returns hostId | undefined; errors
when both flags set.
- requireHostTarget({host, local}): same but errors when neither set.
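A self-contained sketch of the two helpers as described above; the `CLIError` shape and `getHostId` body are stand-ins for the real lib/host-target utilities:

```typescript
// Stand-in for the CLI's error type (message + hint).
class CLIError extends Error {
  constructor(message: string, public hint?: string) {
    super(message);
  }
}

// Assumed to read this machine's host id from local config.
function getHostId(): string {
  return "local-host-id";
}

interface HostFlags {
  host?: string;
  local?: boolean;
}

// Optional filter: undefined means "no host filter".
function resolveHostFilter(flags: HostFlags): string | undefined {
  if (flags.host && flags.local) {
    throw new CLIError(
      "Pass either --host or --local, not both",
      "Use --local for this machine, --host <id> for a specific host.",
    );
  }
  if (flags.local) return getHostId();
  return flags.host ?? undefined;
}

// Required target: same resolution, but errors when neither flag is set.
function requireHostTarget(flags: HostFlags): string {
  const hostId = resolveHostFilter(flags);
  if (hostId === undefined) {
    throw new CLIError("Target host required", "Pass --host <id> or --local.");
  }
  return hostId;
}
```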
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/cli/src/lib/host-target/resolveHostFlags.ts`:
- Around line 17-25: In resolveHostFilter, explicitly reject an empty --host
value before resolving or checking mutual exclusion: if flags.host is defined
but is an empty string (e.g., flags.host.trim() === ""), throw a CLIError
indicating an invalid/empty host id; then keep the existing mutual-exclusion
check between flags.host and flags.local and the existing return logic
(getHostId() when flags.local, otherwise return flags.host). This ensures --host
'' is validated consistently and yields a deterministic CLI error.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f5646d7a-af0d-48cd-b0e9-df7d672979e9
📒 Files selected for processing (6)
- packages/cli/src/commands/projects/list/command.ts
- packages/cli/src/commands/workspaces/create/command.ts
- packages/cli/src/commands/workspaces/delete/command.ts
- packages/cli/src/commands/workspaces/list/command.ts
- packages/cli/src/lib/host-target/index.ts
- packages/cli/src/lib/host-target/resolveHostFlags.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/cli/src/commands/projects/list/command.ts
export function resolveHostFilter(flags: HostFlags): string | undefined {
  if (flags.host && flags.local) {
    throw new CLIError(
      "Pass either --host or --local, not both",
      "Use --local for this machine, --host <id> for a specific host.",
    );
  }
  if (flags.local) return getHostId();
  return flags.host ?? undefined;
Reject empty --host values before resolving.
These checks are truthy-based, so --host '' slips past the mutual-exclusion guard and then behaves differently per caller: resolveHostFilter() can return "", requireHostTarget() turns it into “Target host required”, and --host '' --local silently prefers --local. Validating presence explicitly and rejecting blank host IDs here gives callers one deterministic CLI error.
Suggested fix
export function resolveHostFilter(flags: HostFlags): string | undefined {
- if (flags.host && flags.local) {
+ const host = flags.host?.trim();
+
+ if (host !== undefined && host.length === 0) {
+ throw new CLIError(
+ "Host id cannot be empty",
+ "Use --local for this machine, or --host <id> for a specific host.",
+ );
+ }
+
+ if (host !== undefined && flags.local) {
throw new CLIError(
"Pass either --host or --local, not both",
"Use --local for this machine, --host <id> for a specific host.",
);
}
if (flags.local) return getHostId();
- return flags.host ?? undefined;
+ return host;
}
Package has no test files; `bun test` exited 1 with "No tests found", which failed the turbo `test` task on every CI run. Removing the script makes turbo skip the package. Re-add when real tests land.
Adds four nullable columns that @better-auth/oauth-provider and @better-auth/stripe write to at runtime:
- oauth_clients.subject_type, oauth_clients.require_pkce
- subscriptions.billing_interval, subscriptions.stripe_schedule_id
Without these, the next Stripe webhook (subscription created/updated/canceled) and any OAuth dynamic-client-registration request will throw "column does not exist". Apply before the next API deploy.
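A migration sketch for those four columns; column types are inferred from the better-auth plugins' usage and should be verified against the generated drizzle migration:

```sql
-- Hypothetical migration sketch; actual types/order may differ.
ALTER TABLE "oauth_clients" ADD COLUMN "subject_type" text;
ALTER TABLE "oauth_clients" ADD COLUMN "require_pkce" boolean;
ALTER TABLE "subscriptions" ADD COLUMN "billing_interval" text;
ALTER TABLE "subscriptions" ADD COLUMN "stripe_schedule_id" text;
```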
Recorded as integrated via -s ours after batch PRs #455-#464.
Taken via individual PRs:
- PR 1 (#455): v2 polish first-half safe set (9 commits)
- PR 2 (#456): v2/host-service polish mid-stretch (12 commits)
- PR 3 (#457): sidebar polish + jwt API (5 commits)
- PR 4 (#458): host-service tRPC retry/cache/timeout (3 commits)
- PR 5 (#459): v2 diff pane / file pane polish (2 commits)
- PR 7 (#462): host-service v2 canonical workspace.create + attachment store (PR1 superset-sh#3893 + PR2 superset-sh#3916)
- PR 11 (#463): agents API + onboarding (7 commits + 1 cleanup)
- PR 12 (#464): v1→v2 import flow rewrite (11 commits + 2 follow-ups)
- PR 13 (#460): host-service shell env probe + typo (2 commits)
- PR 16 (#461): marketplace 19 themes (1 commit)
Skipped / deferred (recorded as integrated for behind=0):
- PR 6: CLI v1 launch (superset-sh#3898 + 30+ CLI/SDK followups) — defer to dedicated migration
- PR 9: v2 PR3 (superset-sh#3940) + revert (superset-sh#4017) — net-zero pair
- PR 10: pty-daemon (superset-sh#3896, superset-sh#3971, superset-sh#4054) — fork keeps its terminal-host
- PR 14: Slack MCP-v2 (superset-sh#4197, superset-sh#4208) — depends on mcp-v2/sdk divergence
- PR 15: onboarding remaining (superset-sh#4115, superset-sh#4125, superset-sh#4214, superset-sh#4213, superset-sh#4222, superset-sh#4225) — depends on fork's deleted setup pages
Behind: 0 after this merge.
Summary
Ships Superset CLI v1: a small, reliable CLI for authenticated cloud workflows + local host-service lifecycle. 15 commits stretching from initial command-surface work through native build matrix, rolling `cli-latest` release, brew tap automation, and the `--prerelease` workaround.
Plan docs included
Test plan
Out of scope (deferred)
Summary by cubic
Ships Superset CLI v1 with a stable command surface, native builds for darwin-arm64, linux-x64, and linux-arm64, and a rolling update channel. Adds the @superset/mcp-v2 agent API and full CLI docs.

New Features
- `superset` commands: auth/organization/projects/hosts/workspaces/tasks/automations plus top-level `start`, `stop`, `status`, and `update`.
- `projects list`, `hosts list`, `automations prompt`, and `automations pause|resume`; `workspaces create|delete|list` target hosts; tasks support `byIdOrSlug` lookup and filtered `list`.
- `superset update` reads rolling `cli-latest`; supports `--version <semver>` and `--check`; smoke tests verify native addons in CI; Homebrew tap auto-bumps and includes linux-arm64; releases marked `--prerelease`.
- `@superset/mcp-v2` at `/api/v2/agent/mcp` with tools for tasks, automations, workspaces, projects, and hosts; OAuth audience expanded; API keys via `x-api-key`.
- Sign-in flow preserves query params for `/cli/authorize` and `/oauth/consent`. Docs: new CLI product (getting-started, cli-reference, env-vars) with a structured `Command` component.

Migration
- `CLOUD_API_URL` renamed to `SUPERSET_API_URL` across desktop, host-service, and CLI.
- Config dir is `~/.superset` (shared with desktop); honors `SUPERSET_HOME_DIR`. API URL is build-time (`SUPERSET_API_URL`); `--api-url` removed.
- `host start|stop|status` → `start|stop|status`; `auth status` → `auth whoami`; `devices` → `hosts`. Use `automations prompt` to read/write prompts.
- Tasks looked up via `task.byIdOrSlug`; unified `task.create` (desktop/mobile updated from `createFromUi`); `task.branch` removed.
- Use `--api-key` or `SUPERSET_API_KEY` for API key auth; `superset update --version <x.y.z>` for pinned installs.

Written for commit abaa3e3.
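The `~/.superset` + `SUPERSET_HOME_DIR` resolution described in the migration notes can be sketched as follows; the function name is illustrative, not from the CLI source:

```typescript
import * as os from "node:os";
import * as path from "node:path";

// Resolve the CLI's home directory: SUPERSET_HOME_DIR wins when set,
// otherwise fall back to ~/.superset (shared with the desktop app).
function resolveSupersetHome(
  env: Record<string, string | undefined> = process.env,
): string {
  const override = env.SUPERSET_HOME_DIR?.trim();
  if (override) return path.resolve(override);
  return path.join(os.homedir(), ".superset");
}
```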
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
CLI Updates
- Renamed `superset host` to `superset start/stop/status`
- Renamed `--device` to `--host` across commands
- Removed `devices list` and `host install` commands

Documentation