
fix(desktop): unblock AI branch/workspace naming for OAuth-only users + dev placeholders#3580

Merged
AviPeltz merged 9 commits into main from fix/desktop-small-model-oauth-fallback
Apr 21, 2026

Conversation

@AviPeltz
Collaborator

@AviPeltz AviPeltz commented Apr 20, 2026

Summary

getSmallModel (used by the branch-name and workspace-rename AI flows) only resolved Anthropic credentials from the apikey:anthropic slot of mastracode's auth.json. That caused two failure modes:

  1. OAuth-only users (the default Claude Code login) had no api-key entry, so getSmallModel silently returned null. Workspace creation always fell back to a slugified prompt and showed "A prompt-based title was used because model naming was unavailable".
  2. A stale dev ANTHROPIC_API_KEY (e.g. the ANTHROPIC_API_KEY=dummy placeholder in .env) made it worse: the env path took precedence, the bad key 401'd, and the same fallback toast hid the real reason.

This was a regression from #3517 (small-model bypassed mastracode to avoid bundling onnxruntime-node, but dropped OAuth support in the process).

What changed

  • OAuth resolution in packages/chat/src/server/shared/small-model/anthropic-oauth.ts — reads Anthropic OAuth from auth.json, refreshes via direct POST to console.anthropic.com/v1/oauth/token when expired (no mastracode import — preserves the bundling fix from refactor(desktop): upgrade mastracode + simplify small-model naming #3517). Atomic write-back via temp file + rename.
  • Resolution order in get-small-model.ts: env api-key → stored api-key → OAuth (new) → OpenAI.
  • Key validation: Anthropic keys must start with sk-ant-api, OpenAI keys with sk-. Dev placeholders like dummy no longer mask the real auth path.
  • Async signature: getSmallModel is now async (OAuth resolution may need a network call to refresh the token). Updated all 3 production call sites + tests.
  • Workspace title prompt: now says "20 characters or less" so sidebar entries stay short.
  • Diagnostic logs: [get-small-model] using Anthropic OAuth / [anthropic-oauth] refresh returned 401: … etc. — future regressions surface in the dev terminal instead of producing a silent fallback toast.
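
The resolution order and validators above can be sketched roughly as follows. This is an illustrative, self-contained sketch: `resolveSmallModelAuth` and its inputs are hypothetical names, and the real `getSmallModel` returns an AI SDK model instance rather than a tagged union.

```typescript
// Illustrative sketch of the credential resolution order described above.
// Names here are hypothetical, not the actual exports of get-small-model.ts.

type SmallModelAuth =
  | { kind: "anthropic-api-key"; apiKey: string }
  | { kind: "anthropic-oauth"; accessToken: string }
  | { kind: "openai-api-key"; apiKey: string }
  | null;

const MIN_API_KEY_LENGTH = 30;

// Mirrors the PR's validation rules: prefix check plus a length floor,
// so placeholders like "dummy" or a bare "sk-ant-api" fall through.
function isAnthropicApiKey(key: string): boolean {
  return key.startsWith("sk-ant-api") && key.length >= MIN_API_KEY_LENGTH;
}

function isOpenAIApiKey(key: string): boolean {
  return key.startsWith("sk-") && key.length >= MIN_API_KEY_LENGTH;
}

function resolveSmallModelAuth(
  env: Record<string, string | undefined>,
  stored: { anthropicKey?: string; openaiKey?: string },
  oauthAccessToken: string | null,
): SmallModelAuth {
  // 1. env api-key (invalid placeholders no longer mask the real auth path)
  const envKey = env.ANTHROPIC_API_KEY;
  if (envKey && isAnthropicApiKey(envKey)) {
    return { kind: "anthropic-api-key", apiKey: envKey };
  }
  // 2. stored api-key from auth.json
  if (stored.anthropicKey && isAnthropicApiKey(stored.anthropicKey)) {
    return { kind: "anthropic-api-key", apiKey: stored.anthropicKey };
  }
  // 3. OAuth (new in this PR)
  if (oauthAccessToken) {
    return { kind: "anthropic-oauth", accessToken: oauthAccessToken };
  }
  // 4. OpenAI fallback
  const openaiKey = env.OPENAI_API_KEY ?? stored.openaiKey;
  if (openaiKey && isOpenAIApiKey(openaiKey)) {
    return { kind: "openai-api-key", apiKey: openaiKey };
  }
  return null; // caller warns and naming falls back to a slugified prompt
}
```

With ANTHROPIC_API_KEY=dummy in the environment and an OAuth token present, this resolves to the OAuth branch instead of sending the placeholder to the API.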

Test plan

  • bun run typecheck — 25/25 packages pass
  • bun test apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-name.test.ts — 6/6 pass
  • Verified end-to-end with the reporter's actual auth.json (OAuth-only) + ANTHROPIC_API_KEY=dummy env: [get-small-model] using Anthropic OAuth → live API call returns a generated title
  • Reviewer with an api-key-only setup confirms env api-key path still wins
  • Reviewer signed in via Claude OAuth confirms branch/workspace naming works without setting any env vars
  • Workspace title sidebar entries stay ≤~25 chars (LLM may overshoot 20 by a few)

Summary by cubic

Unblocks model-based branch and workspace naming for OAuth-only users. getSmallModel is now async and resolves Anthropic via env → stored key → OAuth, then falls back to OpenAI; both naming prompts ask for 20 characters or less and return only the name.

  • Bug Fixes

    • Use mastracode auth storage to resolve Anthropic OAuth and refresh tokens; keep the no-credentials warning.
    • Stricter key validation: Anthropic must be sk-ant-api… and ≥30 chars; OpenAI must be sk-… and ≥30; ignore placeholders and OAuth tokens sent as API keys.
    • Resolve OpenAI keys from both apikey:openai-codex and legacy apikey:openai, preferring apikey:<provider> over the main slot for both providers.
    • Branch and workspace naming now await getSmallModel; prompts include “20 characters or less. Return ONLY the name.”
  • Refactors

    • Replaced custom OAuth/file I/O with createAuthStorage().getApiKey from mastracode; memoized storage instance.
    • Moved provider-id constants to packages/chat/src/server/shared/auth-provider-ids.ts; added tests for key validators and updated call sites/tests for async getSmallModel.

Written for commit d067412. Summary will update on new commits.

Summary by CodeRabbit

  • New Features

    • Added Anthropic OAuth support for resolving small-model credentials and shared provider identifiers.
  • Improvements

    • Workspace and branch name generation now awaits async model resolution.
    • Workspace title generation tightened to 20-character concise titles and returns only the title.
    • Stricter Anthropic/OpenAI API-key validation and clearer credential-selection logging when no usable credentials found.
  • Tests

    • New tests covering OAuth handling, token refresh/persistence, API-key validators, and name-generation scenarios.

… + dev placeholders

`getSmallModel` only resolved Anthropic credentials from `apikey:anthropic`
in mastracode's auth.json. Users authed via Claude OAuth (the default Code
flow) had no api-key entry, so it returned `null` silently and naming
always fell back to a slugified prompt. A stale/dummy `ANTHROPIC_API_KEY`
in `.env` (e.g. the `=dummy` placeholder) made it worse: the env path was
preferred, the bad key 401'd, and the fallback toast hid the real reason.

Resolution order is now: env api-key → stored api-key → OAuth → openai.
Anthropic api-keys are validated to start with `sk-ant-api` and openai
keys with `sk-` so dev placeholders fall through to OAuth instead of
being sent to the API. OAuth is read directly from auth.json with refresh
done via direct HTTP to console.anthropic.com — no mastracode import (it
pulls in onnxruntime-node and broke electron-vite bundling, see #3517).

Also cap the workspace auto-rename prompt at "20 characters or less" so
sidebar titles stay short.

Diagnostic logs (`[get-small-model] using …` / refresh failures) are kept
so future regressions surface in the dev terminal instead of silently
producing the fallback toast.
@coderabbitai
Contributor

coderabbitai Bot commented Apr 20, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

getSmallModel() was made asynchronous, adding Anthropic OAuth refresh/persistence (auth.json), API-key validators, and shared provider IDs. Call sites now await the model. New tests cover OAuth parsing/refresh, persistence edge cases, and API-key validators.

Changes

Cohort / File(s) Summary
Anthropic OAuth module & tests
packages/chat/src/server/shared/small-model/anthropic-oauth.ts, packages/chat/src/server/shared/small-model/anthropic-oauth.test.ts
New OAuth resolver: reads auth.json, validates oauth entries, refreshes Anthropic tokens, caches and atomically persists updates; comprehensive tests for parsing, refresh, persistence, timeouts, and error paths.
Small-model resolution, validators & tests
packages/chat/src/server/shared/small-model/get-small-model.ts, packages/chat/src/server/shared/small-model/get-small-model.test.ts
Converted getSmallModel to async; added isAnthropicApiKey/isOpenAIApiKey; multi-provider key resolution (env + stored + OAuth); added warnings and tests for key validators.
Auth provider IDs shared
packages/chat/src/server/shared/auth-provider-ids.ts, packages/chat/src/server/desktop/auth/provider-ids.ts
Introduced shared constants ANTHROPIC_AUTH_PROVIDER_ID, OPENAI_AUTH_PROVIDER_ID, and OPENAI_AUTH_PROVIDER_IDS; desktop module now re-exports from shared file.
Call sites updated to await model
apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-branch-name.ts, packages/host-service/src/trpc/router/workspace-creation/utils/ai-branch-name.ts
Changed callers to await getSmallModel() to accommodate async resolution (no signature changes to callers).
Workspace naming changes & tests
apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-name.ts, apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-name.test.ts
Await small model in name generation; tightened generation instructions to ≤20-character plain-title-only output; tests updated to use async mocks (mockResolvedValue*) and adjusted expectations.

Sequence Diagram

sequenceDiagram
    participant Caller as Caller (workspace utils)
    participant GSM as getSmallModel()
    participant FS as FileSystem (auth.json)
    participant OAuth as Anthropic OAuth
    participant OpenAI as OpenAI Env/Store

    Caller->>GSM: await getSmallModel()
    GSM->>GSM: check Anthropic API key (env/store)
    alt Anthropic API key valid
        GSM-->>Caller: return Anthropic client (API key)
    else no Anthropic API key
        GSM->>FS: read auth.json
        FS-->>GSM: auth.json or error
        GSM->>GSM: validate OAuth entry & expiry
        alt OAuth present & fresh
            GSM-->>Caller: return Anthropic client (oauth.access)
        else OAuth missing or expired
            GSM->>OAuth: POST refresh (refresh_token, client_id)
            OAuth-->>GSM: response (access_token, refresh_token, expires_in)
            GSM->>FS: write updated auth.json (atomic temp -> rename)
            FS-->>GSM: write result
            GSM-->>Caller: return Anthropic client (new access token)
        end
    end
    alt No Anthropic credentials found
        GSM->>OpenAI: validate OpenAI API key (env/store)
        alt OpenAI key valid
            GSM-->>Caller: return OpenAI client
        else
            GSM-->>Caller: return null (warn)
        end
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 I nibbled tokens from auth.json tonight,
I hopped and refreshed until access was bright,
Async paws tapping, validators in tow,
Keys trimmed to size and ready to go,
A tiny rabbit, credentials aglow. ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 38.89%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Title check: ✅ Passed. The title accurately reflects the main changes: unblocking AI branch/workspace naming for OAuth-only users and handling dev placeholder API keys.
  • Description check: ✅ Passed. The PR description is comprehensive and follows the template structure, including a clear summary, related-issues context, type of change (bug fix), testing details, and additional notes.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@greptile-apps

greptile-apps Bot commented Apr 20, 2026

Greptile Summary

This PR fixes two failure modes that silently prevented AI-powered branch/workspace naming for OAuth-only Claude Code users: getSmallModel previously only read API keys from auth.json, returning null for OAuth-only logins, and stale dev placeholder values in the Anthropic env var would shadow the real auth path and trigger a misleading fallback toast.

The fix introduces a new anthropic-oauth.ts module that reads the OAuth token from auth.json, refreshes it when expired via a direct HTTPS call to console.anthropic.com (avoiding the onnxruntime-node bundling issue), and writes the new token back atomically. getSmallModel is made async and the credential resolution order becomes: env API key → stored API key → OAuth → OpenAI. Key-format validation (sk-ant-api / sk-) now filters out dev placeholder values like "dummy".
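
A minimal sketch of such a refresh call, including the request timeout reviewers asked for. The endpoint path and response fields follow the descriptions in this PR; `refreshAnthropicToken` and its parameters are illustrative names, not the module's actual exports.

```typescript
// Hedged sketch of an OAuth token refresh with a timeout; names illustrative.
interface RefreshedTokens {
  access_token: string;
  refresh_token: string;
  expires_in: number;
}

async function refreshAnthropicToken(
  refreshToken: string,
  clientId: string,
  timeoutMs = 5_000,
): Promise<RefreshedTokens | null> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch("https://console.anthropic.com/v1/oauth/token", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({
        grant_type: "refresh_token",
        refresh_token: refreshToken,
        client_id: clientId,
      }),
      signal: controller.signal,
    });
    if (!response.ok) return null; // e.g. 401 on a revoked refresh token
    return (await response.json()) as RefreshedTokens;
  } catch {
    return null; // network failure or timeout abort; caller falls back
  } finally {
    clearTimeout(timer);
  }
}
```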

Key concerns:

  • The test assertion for the updated workspace-namer instructions string ("You generate concise workspace titles.") is stale after the production string was extended to include the 20-character constraint — the toHaveBeenCalledWith check will fail.
  • The atomic write in writeAnthropicEntry places the temp file in tmpdir() (often /tmp on Linux) before renaming to ~/.local/share/mastracode/auth.json. When these paths span different filesystems, renameSync throws EXDEV, silently preventing token persistence for Linux users and forcing a network refresh on every call.

Confidence Score: 3/5

Two targeted fixes needed before merge: the stale test assertion will cause CI failures, and the cross-device rename bug silently breaks token persistence for Linux users.

The core OAuth logic and credential resolution order are sound and well-structured. However, the stale toHaveBeenCalledWith assertion in the test file is a concrete failure that contradicts the PR's 6/6 pass claim and will break CI. The EXDEV rename issue silently degrades the Linux experience by forcing a token refresh on every invocation. Both are straightforward to fix — once resolved, the PR is ready to merge.

Files needing fixes: ai-name.test.ts (stale instructions assertion) and anthropic-oauth.ts (writeAnthropicEntry cross-device rename).

Important Files Changed

  • packages/chat/src/server/shared/small-model/anthropic-oauth.ts: New file implementing OAuth token resolution and refresh for Anthropic; contains a cross-device rename bug on Linux (tmpdir vs home filesystem) and duplicates getAuthJsonPath from get-small-model.ts.
  • apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-name.test.ts: Mock updated to async/Promise for getSmallModel, but the toHaveBeenCalledWith assertion for the instructions string is stale; it expects the old single-sentence value rather than the updated "20 characters or less…" string.
  • packages/chat/src/server/shared/small-model/get-small-model.ts: Correctly converts getSmallModel to async, adds key-format validation to filter dev placeholders, and threads OAuth resolution between the API-key and OpenAI fallbacks; reads auth.json twice per call when falling through to OAuth.
  • apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-name.ts: Correctly awaits getSmallModel and updates the workspace-namer instructions to request 20-character titles; no issues.
  • apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-branch-name.ts: One-line change to await the now-async getSmallModel; correct and complete.
  • packages/host-service/src/trpc/router/workspace-creation/utils/ai-branch-name.ts: One-line change to await the now-async getSmallModel; correct and complete.

Sequence Diagram

sequenceDiagram
    participant Caller as ai-name / ai-branch-name
    participant GSM as getSmallModel()
    participant Env as process.env
    participant AuthJSON as auth.json
    participant OAuth as getAnthropicOAuthCredential()
    participant Remote as console.anthropic.com

    Caller->>GSM: await getSmallModel()
    GSM->>AuthJSON: readAuthData()
    GSM->>Env: ANTHROPIC_API_KEY
    alt valid sk-ant-api key in env or stored apikey slot
        GSM-->>Caller: createAnthropic(apiKey)(haiku)
    else no valid Anthropic API key
        GSM->>OAuth: await getAnthropicOAuthCredential()
        OAuth->>AuthJSON: readAuthJson() [second read]
        alt token not expired
            OAuth-->>GSM: { accessToken }
        else token expired
            OAuth->>Remote: POST /v1/oauth/token (refresh)
            Remote-->>OAuth: { access_token, refresh_token, expires_in }
            OAuth->>AuthJSON: writeAnthropicEntry() [atomic rename]
            OAuth-->>GSM: { accessToken }
        end
        alt OAuth credential found
            GSM-->>Caller: createAnthropic(authToken)(haiku)
        else no OAuth
            GSM->>Env: OPENAI_API_KEY
            alt valid sk- key
                GSM-->>Caller: createOpenAI(apiKey).chat(gpt-4o-mini)
            else no credentials
                GSM-->>Caller: null (fallback toast)
            end
        end
    end

Comments Outside Diff (1)

  1. apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-name.test.ts, line 114-121 (link)

    P1 Stale test assertion for updated instructions string

    The toHaveBeenCalledWith assertion on line 119 still expects the old instructions string "You generate concise workspace titles.", but ai-name.ts was updated to pass "You generate concise workspace titles. 20 characters or less. Return ONLY the title, nothing else.". This assertion will fail because Bun's toHaveBeenCalledWith performs strict deep equality on the call arguments.

    Suggested fix:
```suggestion
	expect(generateTitleFromMessageMock).toHaveBeenCalledWith({
		message: "hey boss how are you",
		agentModel: { id: "test-model" },
		agentId: "workspace-namer",
		agentName: "Workspace Namer",
		instructions:
			"You generate concise workspace titles. 20 characters or less. Return ONLY the title, nothing else.",
		tracingContext: { surface: "workspace-auto-name" },
	});
```


---

This is a comment left during a code review.
Path: packages/chat/src/server/shared/small-model/anthropic-oauth.ts
Line: 84-89

Comment:
**Cross-device `renameSync` will fail on Linux**

The temp file is written to `tmpdir()` (typically `/tmp`) and then renamed to the auth.json path (typically `~/.local/share/mastracode/auth.json`). On Linux, `/tmp` is commonly a `tmpfs` on a separate virtual filesystem from the user's home partition. A cross-device `renameSync` throws `EXDEV: cross-device link not permitted`, which means the token will **never be persisted** for many Linux users — every `getSmallModel()` call that hits an expired token will repeat the network refresh.

The standard fix for atomic writes is to create the temp file in the **same directory** as the destination:

```ts
const dir = join(path, "..");
const tmpPath = join(
  dir,
  `mastracode-auth-${process.pid}-${Date.now()}.json`,
);
writeFileSync(tmpPath, serialized, { mode: 0o600 });
renameSync(tmpPath, path);
```

Using a sibling temp file guarantees the rename is always same-filesystem.

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: packages/chat/src/server/shared/small-model/anthropic-oauth.ts
Line: 35-46

Comment:
**`getAuthJsonPath` duplicated between `anthropic-oauth.ts` and `get-small-model.ts`**

Both files contain an identical `getAuthJsonPath()` implementation. As a result, `getSmallModel()` reads `auth.json` once in `readAuthData()` and then again inside `getAnthropicOAuthCredential()` via `readAuthJson()`. This means the file is read twice per `getSmallModel()` call when falling through to OAuth.

Consider exporting `getAuthJsonPath` (or a shared `readAuthJson`) from `anthropic-oauth.ts` and importing it in `get-small-model.ts`, or passing the already-parsed `authData` into `getAnthropicOAuthCredential` to avoid the double read.

How can I resolve this? If you propose a fix, please make it concise.

Reviews (1): Last reviewed commit: "fix(desktop): unblock AI branch/workspac..."

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-name.test.ts (1)

104-121: ⚠️ Potential issue | 🔴 Critical

Update the expected workspace-namer instructions.

Production now passes the new 20-character instruction string, but this assertion still expects the old prompt, so this test will fail.

🐛 Proposed test fix
 		expect(generateTitleFromMessageMock).toHaveBeenCalledWith({
 			message: "hey boss how are you",
 			agentModel: { id: "test-model" },
 			agentId: "workspace-namer",
 			agentName: "Workspace Namer",
-			instructions: "You generate concise workspace titles.",
+			instructions:
+				"You generate concise workspace titles. 20 characters or less. Return ONLY the title, nothing else.",
 			tracingContext: { surface: "workspace-auto-name" },
 		});
🧹 Nitpick comments (1)
packages/chat/src/server/shared/small-model/anthropic-oauth.ts (1)

35-56: Share the auth.json path/read helpers.

getAuthJsonPath and auth.json parsing now exist here and in get-small-model.ts; extracting a small local helper avoids drift in credential resolution. As per coding guidelines, If a utility is used once, nest it under the parent's directory. If used 2+ times, promote it to the highest shared parent's directory or top-level components/.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/chat/src/server/shared/small-model/anthropic-oauth.ts`:
- Around line 97-105: The OAuth refresh fetch to ANTHROPIC_OAUTH_TOKEN_URL can
hang; modify the token-refreshing logic (the block that constructs response
using fetch with refresh_token, refreshToken, and CLAUDE_CODE_OAUTH_CLIENT_ID)
to use an AbortController with a short timeout (e.g., a few seconds), aborting
the request on timeout and clearing the timer, then handle the abort by
returning null instead of awaiting indefinitely; ensure the AbortController
signal is passed to fetch and that the timer is cleaned up to avoid leaks.
- Around line 84-89: The temp file is written to the system tmpdir then renamed,
which can fail with EXDEV across filesystems and also fails if the target
directory doesn't exist; change the logic in anthropic-oauth.ts to create the
target directory (use dirname(path) and mkdirSync(..., { recursive: true })),
create the temporary file in the same directory as the target (e.g., tmpPath =
join(dirname(path), `.${basename(path)}.tmp-${process.pid}-${Date.now()}`)),
writeFileSync(tmpPath, serialized, { mode: 0o600 }), then atomically
renameSync(tmpPath, path); this ensures the rename is on the same filesystem and
the directory exists before the rename.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: da5cab65-ed9a-477c-9aa6-46a82f96cd24

📥 Commits

Reviewing files that changed from the base of the PR and between 1f2c093 and b0f8577.

📒 Files selected for processing (6)
  • apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-branch-name.ts
  • apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-name.test.ts
  • apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-name.ts
  • packages/chat/src/server/shared/small-model/anthropic-oauth.ts
  • packages/chat/src/server/shared/small-model/get-small-model.ts
  • packages/host-service/src/trpc/router/workspace-creation/utils/ai-branch-name.ts

@github-actions
Contributor

github-actions Bot commented Apr 20, 2026

🚀 Preview Deployment

🔗 Preview Links

  • Neon Database (Neon): View Branch
  • Fly.io Electric (Fly.io): View App
  • Vercel API (Vercel): Open Preview
  • Vercel Web (Vercel): Open Preview
  • Vercel Marketing (Vercel): Open Preview
  • Vercel Admin (Vercel): Open Preview
  • Vercel Docs (Vercel): Open Preview

Preview updates automatically with new commits

- Fix pre-existing OpenAI provider-id mismatch: mastracode stores keys at
  apikey:openai-codex, not apikey:openai. Settings-saved OpenAI keys now
  reach the small-model path.
- Add length floor (30 chars) to api-key validation. Catches placeholders
  shorter than the prefix check would (e.g. bare "sk-ant-api").
- Drop success-path console.log calls. Keep the no-credentials warn so the
  silent-fallback regression stays visible if it ever returns.
- Tighten writeAnthropicEntry comment to acknowledge the cross-provider
  race window and document the (rare) impact + escape hatch.
- Hard-cap workspace auto-rename output at 20 chars server-side. Prompt
  asks for 20, but LLMs miscount; this guarantees the contract.
- Add provenance comment for CLAUDE_CODE_OAUTH_CLIENT_ID.
- Drop unused AnthropicOAuthHeaders type export; const is `as const` so
  callers get the same type inference.
- Change getSmallModel return type from Promise<unknown | null> to
  Promise<unknown> (null is assignable to unknown).
- Add unit tests:
  - isAnthropicApiKey / isOpenAIApiKey validation (rejects "dummy",
    rejects OAuth tokens sent as api keys, accepts real key shapes).
  - isOAuthEntry shape validation.
  - 20-char truncation behaviour in generateWorkspaceNameFromPrompt.
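
The server-side hard cap described in the commit list above can be sketched as follows (`capWorkspaceTitle` is an illustrative name, not the PR's actual function):

```typescript
// Illustrative sketch of a server-side length cap: the prompt asks the LLM
// for 20 characters, but models miscount, so the contract is enforced here.
function capWorkspaceTitle(title: string, maxLength = 20): string {
  const trimmed = title.trim();
  if (trimmed.length <= maxLength) return trimmed;
  return trimmed.slice(0, maxLength).trimEnd();
}
```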
Contributor

@cubic-dev-ai cubic-dev-ai Bot left a comment


2 issues found across 6 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="packages/chat/src/server/shared/small-model/anthropic-oauth.ts">

<violation number="1" location="packages/chat/src/server/shared/small-model/anthropic-oauth.ts:84">
P1: `renameSync` across filesystems throws `EXDEV` on Linux. The temp file is written to `tmpdir()` (often a `tmpfs` mount at `/tmp`) but renamed into the user's home directory — a different filesystem. This silently prevents token persistence, causing a network refresh on every call.

Write the temp file in the same directory as the target so the rename stays on one filesystem.</violation>
</file>

<file name="apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-name.ts">

<violation number="1" location="apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-name.ts:46">
P2: The 20-character limit is only requested in the LLM prompt, not enforced in code. If the model returns a longer title, it will still be persisted.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (2)
packages/chat/src/server/shared/small-model/get-small-model.ts (2)

99-101: Optional: exclude sk-ant- prefixes from isOpenAIApiKey.

sk-ant-api… and sk-ant-oat… both satisfy startsWith("sk-") && length >= 30, so an Anthropic key accidentally placed in OPENAI_API_KEY (or in the apikey:openai-codex slot) will pass validation and then 401 at OpenAI — which is exactly the failure mode this PR set out to prevent on the Anthropic side. A one-line guard keeps the two validators symmetric.

♻️ Proposed refinement
 export function isOpenAIApiKey(key: string): boolean {
-	return key.startsWith("sk-") && key.length >= MIN_API_KEY_LENGTH;
+	return (
+		key.startsWith("sk-") &&
+		!key.startsWith("sk-ant-") &&
+		key.length >= MIN_API_KEY_LENGTH
+	);
 }

And a matching test case in get-small-model.test.ts:

 	it("rejects values without the sk- prefix", () => {
 		expect(isOpenAIApiKey("api-key-aaaaaaaaaaaaaaaaaaaaaaaaaaaaa")).toBe(false);
 	});
+
+	it("rejects Anthropic-shaped keys placed in the OpenAI slot", () => {
+		expect(
+			isOpenAIApiKey("sk-ant-api03-aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"),
+		).toBe(false);
+		expect(
+			isOpenAIApiKey("sk-ant-oat-aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"),
+		).toBe(false);
+	});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chat/src/server/shared/small-model/get-small-model.ts` around lines
99 - 101, The isOpenAIApiKey function wrongly accepts Anthropic-style keys
because it only checks startsWith("sk-") and MIN_API_KEY_LENGTH; update
isOpenAIApiKey to reject Anthropic prefixes by adding a guard that returns false
for keys starting with "sk-ant-" (or any other known Anthropic prefixes like
"sk-ant-oat-"), while still using MIN_API_KEY_LENGTH, and add a unit test in
get-small-model.test.ts to assert that "sk-ant-..." keys do NOT pass
isOpenAIApiKey.

122-159: Resolution flow and diagnostic log look good.

Order (env/stored anthropic API key → OAuth → env/stored openai) matches the JSDoc, and the console.warn on the empty path surfaces auth.json presence and env-var set/unset without logging secret values — exactly what was needed to debug the OAuth-only and placeholder cases called out in the PR description.

One tiny nit (optional): the return type could be tightened from Promise<unknown> to Promise<LanguageModelV2 | null>. The latest releases of @ai-sdk/anthropic (v3.0.70) and @ai-sdk/openai (v3.0.53) export LanguageModelV2, which would eliminate the need for callers to cast. Happy to defer if you want to keep the signature loose while the AI-SDK types stabilize.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/chat/src/server/shared/small-model/get-small-model.ts` around lines
122 - 159, The getSmallModel function currently returns Promise<unknown>; update
its signature to Promise<LanguageModelV2 | null> and import the LanguageModelV2
type from the `@ai-sdk` packages so callers don't need to cast; keep the existing
null return on no-credentials and ensure the function still returns the result
of createAnthropic(...) and createOpenAI(...). Reference: update the return type
on getSmallModel and add the LanguageModelV2 import alongside existing uses of
createAnthropic and createOpenAI.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 00d356e5-813b-48fd-902e-989effef0324

📥 Commits

Reviewing files that changed from the base of the PR and between b0f8577 and 5854b62.

📒 Files selected for processing (6)
  • apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-name.test.ts
  • apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-name.ts
  • packages/chat/src/server/shared/small-model/anthropic-oauth.test.ts
  • packages/chat/src/server/shared/small-model/anthropic-oauth.ts
  • packages/chat/src/server/shared/small-model/get-small-model.test.ts
  • packages/chat/src/server/shared/small-model/get-small-model.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-name.ts
  • apps/desktop/src/lib/trpc/routers/workspaces/utils/ai-name.test.ts
  • packages/chat/src/server/shared/small-model/anthropic-oauth.ts

Covers the previously-untested paths in anthropic-oauth.ts:
- happy path: valid non-expired token returned without refresh
- refresh: expired token + 200 → new entry persisted with refreshed access,
  refresh, and future expiry
- refresh: response without refresh_token → original refresh token preserved
- failure: 4xx, missing access_token, fetch throw → all return null without
  writing
- cross-provider preservation: refreshing anthropic does not clobber the
  openai-codex slot (the race-window concern from the self-review)
- Fix data-loss in writeAnthropicEntry on parse failure: distinguish
  "file missing" from "parse error" in readAuthJson; refuse to overwrite
  auth.json when existing content is unparseable (would otherwise wipe
  every non-anthropic provider slot).
- Cache the in-memory refreshed entry so we don't hammer Anthropic's OAuth
  endpoint when persistence keeps failing (read-only home dir, full disk).
- Move provider-id constants to packages/chat/src/server/shared/
  auth-provider-ids.ts; desktop re-exports for back-compat. getSmallModel
  now iterates OPENAI_AUTH_PROVIDER_IDS (covers both `openai-codex` and
  legacy `openai` slots) instead of hardcoding one.
- Drop unused AuthJsonOAuthEntry export; only isOAuthEntry is consumed
  externally.
- Comment the fs-mock isolation constraint in anthropic-oauth.test.ts.
- Drop the workspace-title server-side .slice(0, 20). The prompt still
  says "20 characters or less"; LLMs miscount but the soft hint is good
  enough — hard truncation produced ugly mid-word cuts.
- New tests:
  - data-loss prevention: corrupt auth.json → no write
  - expires_in default path (3600s) when response omits it
  - leeway boundary: token expiring within REFRESH_LEEWAY_MS triggers refresh
  - cache reuse: write failure once doesn't cause a second refresh fetch
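The leeway boundary that the new tests exercise reduces to a comparison like this (the REFRESH_LEEWAY_MS value here is an assumption, not the PR's actual constant):

```typescript
// Sketch: refresh when the token expires within the leeway window, so a
// token that is technically still valid but about to lapse is not used
// for an inference request that might outlive it.
const REFRESH_LEEWAY_MS = 60_000; // assumed value

export function needsRefresh(expiresAtMs: number, nowMs = Date.now()): boolean {
	return expiresAtMs - nowMs <= REFRESH_LEEWAY_MS;
}
```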

@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 5 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="packages/chat/src/server/shared/small-model/auth-storage-io.ts">

<violation number="1" location="packages/chat/src/server/shared/small-model/auth-storage-io.ts:47">
P2: Validate that parsed `auth.json` is a plain object before returning `kind: "ok"`; otherwise downstream writes can throw on `null`/non-object JSON.</violation>
</file>
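A minimal sketch of the suggested guard (the result shape and names are illustrative):

```typescript
type ReadAuthResult =
	| { kind: "ok"; data: Record<string, unknown> }
	| { kind: "missing" }
	| { kind: "corrupt" };

// Sketch: treat null, arrays, and primitives as corrupt — not "ok" — so
// downstream writes never spread a non-object into the merged auth.json.
export function classifyAuthJson(raw: string | null): ReadAuthResult {
	if (raw === null) return { kind: "missing" };
	try {
		const parsed: unknown = JSON.parse(raw);
		if (typeof parsed === "object" && parsed !== null && !Array.isArray(parsed)) {
			return { kind: "ok", data: parsed as Record<string, unknown> };
		}
		return { kind: "corrupt" };
	} catch {
		return { kind: "corrupt" };
	}
}
```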

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

@AviPeltz AviPeltz requested a review from Kitenite April 20, 2026 19:30
The custom anthropic-oauth.ts + auth-storage-io.ts that this PR added were
justified by a constraint that doesn't actually exist: mastracode is already
externalized via runtime-dependencies.ts (so electron-vite doesn't try to
bundle it) AND is already imported by 10+ other places in the codebase
(chat-service, host-service's resolveAnthropicCredential, etc). There's no
bundle-size or memory cost to importing it from get-small-model too.

Mastracode's createAuthStorage().getApiKey(providerId) handles:
- OAuth refresh via the Claude Code endpoint
- Atomic write-back to auth.json (EXDEV-safe)
- Refresh token rotation
- All the auth.json schema details

We were reimplementing all of it. ~830 lines deleted across 4 files;
~170-line get-small-model.ts replaces it.

What we keep:
- isAnthropicApiKey / isOpenAIApiKey validators (still needed for the
  ANTHROPIC_API_KEY=dummy fallthrough — the actual original bug)
- ANTHROPIC_OAUTH_HEADERS constant (these go on the inference request,
  not the OAuth flow — mastracode doesn't help here)
- auth-provider-ids.ts shared module
- Memoized AuthStorage instance (matches chat-service.ts pattern)

Verified: typecheck 25/25, full chat suite 123/123, ai-name 6/6, live OAuth
smoke test still produces a real branch name with ANTHROPIC_API_KEY=dummy
in the environment.
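The memoized-instance pattern mentioned above looks roughly like this (createAuthStorage is stubbed here for illustration; the real one comes from mastracode):

```typescript
// Sketch: lazy singleton so repeated getSmallModel calls share one
// AuthStorage instead of re-creating it on every naming request.
type AuthStorage = { getApiKey(providerId: string): Promise<string | null> };

let instanceCount = 0; // for the sketch only, to show memoization works
const createAuthStorage = (): AuthStorage => {
	instanceCount += 1;
	return { getApiKey: async () => null };
};

let cached: AuthStorage | undefined;
export function getAuthStorage(): AuthStorage {
	cached ??= createAuthStorage();
	return cached;
}
```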
…model

If a user logs in with OAuth after saving an API key via Settings, the
main `anthropic` (or `openai-codex`) slot flips to OAuth and the stored
API key in `apikey:<provider>` becomes unreachable from the small-model
path — even though the user explicitly added it.

Check `getStoredApiKey()` first for both Anthropic and OpenAI before
falling back to the main slot. Matches the precedence the previous
custom implementation had and aligns with the docstring.
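The precedence this commit restores can be sketched as follows (names and shapes are illustrative; the real logic lives in get-small-model.ts):

```typescript
// Sketch: env key → stored key → OAuth. A placeholder such as
// ANTHROPIC_API_KEY=dummy fails validation and falls through instead
// of masking a working auth path with a 401.
const isAnthropicApiKey = (k: string): boolean => k.startsWith("sk-ant-api");

export async function resolveAnthropicCredential(
	envKey: string | undefined,
	getStoredApiKey: () => Promise<string | null>,
	getOAuthToken: () => Promise<string | null>,
): Promise<{ source: "env" | "stored" | "oauth"; key: string } | null> {
	if (envKey && isAnthropicApiKey(envKey)) return { source: "env", key: envKey };
	const stored = await getStoredApiKey();
	if (stored && isAnthropicApiKey(stored)) return { source: "stored", key: stored };
	const oauth = await getOAuthToken();
	if (oauth) return { source: "oauth", key: oauth };
	return null;
}
```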
Branch names show up in compact UI surfaces (sidebar, tabs, status). The
existing "2-4 words" guidance still leaves room for a 60-char branch when
the words are long. Add an explicit char hint to keep them tight.

Soft prompt only — no server-side truncation. Same approach as workspace
title prompt.
@AviPeltz AviPeltz merged commit ae6cf14 into main Apr 21, 2026
13 of 14 checks passed
MocA-Love added a commit to MocA-Love/superset that referenced this pull request Apr 23, 2026
Because upstream superset-sh#3580 consolidated getSmallModelCandidates() into the async getSmallModel():

- call-small-model.ts: rewritten from iterating a candidate list to delegating to getSmallModel()
- index.ts: removed the getSmallModelCandidates/SmallModelCandidate/SmallModelProviderId
  re-exports; added isAnthropicApiKey/isOpenAIApiKey
- runtime.ts: added await to the getSmallModel() call
- runtime.test.ts: updated mocks to Promise.resolve
MocA-Love added a commit to MocA-Love/superset that referenced this pull request Apr 23, 2026
Restore INCEPTION, which was dropped when upstream superset-sh#3580 moved
the constants to shared/auth-provider-ids.ts; inception.ts and
chat-service.ts still reference it.