Fix Gmail token refresh and onboarding issues #699
zudsniper wants to merge 10 commits into elie222:main from
Conversation
- Add BETTER_AUTH_SECRET environment variable support with fallback to NEXTAUTH_SECRET
- Fix database schema to include Better-Auth specific fields (accessTokenExpiresAt, refreshTokenExpiresAt)
- Update auth configuration field mappings to match Better-Auth expectations
- Fix saveTokens function to use correct field names
- Re-enable AutoLogOut component for proper session cleanup
- Update .env.example with both BETTER_AUTH_SECRET and legacy NEXTAUTH_SECRET

These changes resolve consistent sign-in issues caused by field mapping mismatches between NextAuth and Better-Auth, particularly the expires_at field conversion errors.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
…package

- Add @ai-sdk/ui-utils dependency to package.json
- Update imports in AI utility files to import stepCountIs from main ai package
- Fix imports in utils/ai/example-matches/find-example-matches.ts
- Fix imports in utils/ai/assistant/process-user-request.ts
- Fix imports in utils/ai/group/create-group.ts
- Add stepCountIs to llms/index.ts imports

The @ai-sdk/ui-utils package has been deprecated and stepCountIs moved to the main ai package in AI SDK 5.0. This resolves the build error "stepCountIs is not exported from @ai-sdk/ui-utils".

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
…ition

- Create automatic migration utilities to detect and fix corrupted tokens
- Add startup migrations that run automatically on app initialization
- Improve encryption error handling with better logging and validation
- Add standalone cleanup script for manual token fixes when needed
- Clear old tokens that can't be decrypted to force re-authentication
- Integrate StartupMigrations component into app layout for automatic execution

This ensures migrating users don't need admin intervention - corrupted tokens are automatically detected and cleared, allowing seamless re-authentication on next login.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Fix chat message persistence using proper initialMessages pattern
- Fix streaming display by removing problematic memoization
- Fix OpenRouter tool compatibility with fallback model
- Fix stats API graceful handling of missing Tinybird config
- Remove w.append error by migrating to sendMessage API

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Fix Gmail token refresh logic to properly handle expiration timestamps
- Add better error handling and logging for token refresh failures
- Fix login redirect to check onboarding status before redirecting
- Add debug page for force re-linking Google accounts
- Add detailed logging to track token persistence issues

🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
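The token-refresh fix described in this commit amounts to comparing the stored expiry against "now plus a safety buffer". A minimal sketch of that check, assuming a 60-second buffer and a seconds-vs-milliseconds normalization (both are assumptions for illustration, not the PR's actual constants):

```typescript
// Hypothetical sketch of an expiry check with a refresh buffer.
// EXPIRY_BUFFER_MS and the unit normalization are assumptions, not PR code.
const EXPIRY_BUFFER_MS = 60_000;

function needsRefresh(expiresAt: number | null, nowMs: number = Date.now()): boolean {
  if (expiresAt == null) return true; // no expiry recorded: refresh to be safe
  // OAuth responses often store expiry in seconds; DB rows may hold milliseconds.
  const expiryMs = expiresAt < 1e12 ? expiresAt * 1000 : expiresAt;
  return expiryMs - nowMs <= EXPIRY_BUFFER_MS;
}

console.log(needsRefresh(Math.floor(Date.now() / 1000) + 3600)); // fresh token
console.log(needsRefresh(Math.floor(Date.now() / 1000) - 10)); // expired token
```

Normalizing units up front is what prevents the "expires_at conversion" class of bug the commits describe, where a seconds value is compared against a milliseconds clock.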
@zudsniper is attempting to deploy a commit to the Inbox Zero OSS Program Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough

Adds local deployment docs and scripts, runs startup migrations at server render, re-enables Prompt tab and adds “Sync with Rules” with a new action, refactors chat/message handling and chat transport, tightens auth/Gmail token refresh and encryption guards, adjusts Prisma schema/migration, and updates Docker compose with a cron job and helper scripts.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant Browser
  participant RootLayout
  participant StartupMigrations as StartupMigrations (server)
  participant Startup as runStartupMigrations()
  participant Migr as fixCorruptedTokens()/shouldRunTokenCleanup()
  participant DB as Prisma
  Browser->>RootLayout: Request app
  activate RootLayout
  RootLayout->>StartupMigrations: Render (server)
  activate StartupMigrations
  StartupMigrations->>Startup: runStartupMigrations()
  activate Startup
  Startup->>Migr: shouldRunTokenCleanup()
  Migr->>DB: count/select accounts
  DB-->>Migr: counts
  alt Cleanup needed
    Startup->>Migr: fixCorruptedTokens()
    Migr->>DB: scan + update corrupted accounts
    DB-->>Migr: result {fixed,total}
  else Not needed
    Startup-->>Startup: log and continue
  end
  Startup-->>StartupMigrations: done
  deactivate Startup
  StartupMigrations-->>RootLayout: null
  deactivate StartupMigrations
  RootLayout-->>Browser: HTML
  deactivate RootLayout
```
```mermaid
sequenceDiagram
  autonumber
  participant User
  participant RulesPrompt
  participant Action as generatePromptFromRulesAction
  participant DB as Prisma
  participant Editor
  User->>RulesPrompt: Click "Sync with Rules"
  RulesPrompt->>Action: generatePromptFromRulesAction(emailAccountId)
  Action->>DB: fetch enabled rules (+relations)
  DB-->>Action: rules[]
  Action-->>RulesPrompt: { rulesPrompt }
  alt rulesPrompt empty
    RulesPrompt-->>User: toast "No active rules found"
  else rulesPrompt exists
    RulesPrompt->>Editor: setContent(rulesPrompt)
    Editor-->>RulesPrompt: updated
    RulesPrompt-->>User: toast "Prompt synced"
  end
```
```mermaid
sequenceDiagram
  autonumber
  participant App
  participant GmailClient as gmail/client.ts
  participant Google as Google OAuth
  participant Logger
  App->>GmailClient: ensure token freshness (expiresAt)
  alt Not expiring (buffered)
    GmailClient->>Logger: info "Token still valid"
    GmailClient-->>App: return current tokens
  else Expiring/expired
    GmailClient->>Logger: info "Refreshing"
    GmailClient->>Google: refresh_token grant
    alt invalid_grant
      Google-->>GmailClient: error invalid_grant
      GmailClient->>Logger: error details
      GmailClient-->>App: throw SafeError 401 "Reconnect account"
    else success
      Google-->>GmailClient: new tokens
      GmailClient-->>App: updated tokens
    end
  end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Actionable comments posted: 22
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
apps/web/utils/ai/assistant/process-user-request.ts (2)
64-66: Guard against empty messages to avoid runtime error

Accessing `messages[messages.length - 1]` throws when `messages` is empty. This will crash on first-interaction flows with no prior messages. Apply this diff:

```diff
-  if (messages[messages.length - 1].role === "assistant")
-    throw new Error("Assistant message cannot be last");
+  if (
+    messages.length > 0 &&
+    messages[messages.length - 1].role === "assistant"
+  ) {
+    throw new Error("Assistant message cannot be last");
+  }
```
711-737: Handle the “none” category correctly in update_sender_category

The tool input allows `"none"` as a valid category, but the implementation only looks up actual categories and returns “Category not found” for `"none"`. We should interpret `"none"` as clearing the sender’s category. Proposed patch:

```diff
-  const cat = categories.find((c) => c.name === category);
-
-  if (!cat) {
-    logger.error("Category not found", {
-      ...loggerOptions,
-      category,
-    });
-    return { error: "Category not found" };
-  }
+  const isNone = category === "none";
+  const cat = isNone ? undefined : categories.find((c) => c.name === category);
+
+  if (!isNone && !cat) {
+    logger.error("Category not found", {
+      ...loggerOptions,
+      category,
+    });
+    return { error: "Category not found" };
+  }
@@
-  try {
-    await updateCategoryForSender({
-      emailAccountId,
-      sender: existingSender?.email || sender,
-      categoryId: cat.id,
-    });
+  try {
+    await updateCategoryForSender({
+      emailAccountId,
+      sender: existingSender?.email || sender,
+      // Interpret "none" as clearing the category
+      categoryId: isNone ? null : cat!.id,
+    });
```

If `updateCategoryForSender` doesn’t accept `null`, add an overload or a dedicated “clear category” path.

Also applies to: 739-745
apps/web/utils/auth.ts (1)
467-471: Don’t write empty string on encryption failure — risk of token loss.

`encryptToken(...) || ""` can overwrite a valid value with an empty string if encryption fails, leading to “missing token” symptoms and failed refreshes. Prefer skipping the update (undefined) or throwing. Apply this diff:

```diff
-  if (data.access_token)
-    data.access_token = encryptToken(data.access_token) || undefined;
-  if (data.refresh_token)
-    data.refresh_token = encryptToken(data.refresh_token) || "";
+  if (data.access_token) {
+    const enc = encryptToken(data.access_token);
+    if (enc) data.access_token = enc;
+    else {
+      // Skip update if encryption fails to avoid clobbering with empty string
+      data.access_token = undefined as unknown as string | undefined;
+    }
+  }
+  if (data.refresh_token) {
+    const enc = encryptToken(data.refresh_token);
+    if (enc) data.refresh_token = enc;
+    else {
+      data.refresh_token = undefined as unknown as string | undefined;
+    }
+  }
```

Optionally, log and capture the encryption failure so it can be investigated.
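The rule this comment applies (never persist an empty string when encryption fails) can be factored into a tiny helper. A sketch with a hypothetical `encrypt` callback standing in for `encryptToken`:

```typescript
// Hypothetical helper illustrating the review's rule: on encryption failure,
// return undefined (skip the write) rather than "" (clobber the stored token).
function safeEncryptedValue(
  encrypt: (v: string) => string | null,
  value: string | undefined,
): string | undefined {
  if (!value) return undefined; // nothing to write
  const enc = encrypt(value);
  return enc ?? undefined; // skip update on failure, never ""
}

console.log(safeEncryptedValue(() => null, "tok")); // failure path: no write
console.log(safeEncryptedValue((v) => `enc:${v}`, "tok")); // success path
```

Because Prisma treats `undefined` fields as "leave unchanged", returning `undefined` here preserves the existing token instead of destroying it.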
apps/web/utils/encryption.ts (1)
75-77: Also guard against empty payload (IV + tag only) to avoid returning empty strings

An input that only contains IV + auth tag (no ciphertext) will decrypt to an empty string. Tokens should never be empty; returning null is safer and consistent with downstream “missing token” handling.
Apply this diff:
```diff
   // Extract encrypted content (remaining bytes)
   const encrypted = buffer.subarray(IV_LENGTH + AUTH_TAG_LENGTH);
+  // Reject tokens with no payload (IV + tag only)
+  if (encrypted.length === 0) {
+    logger.warn("Encrypted token has no payload");
+    return null;
+  }
```

apps/web/app/(landing)/welcome/page.tsx (1)
18-21: searchParams type is incorrect in Next.js 15; parse force and question safely

In Next.js 15 App Router, searchParams is a Promise of string | string[] values. Here it’s typed with `force?: boolean`, and later used as a boolean. If a user visits `?force=false`, the string "false" is truthy, bypassing the intended redirect. Apply these diffs:
```diff
-export default async function WelcomePage(props: {
-  searchParams: Promise<{ question?: string; force?: boolean }>;
-}) {
+export default async function WelcomePage(props: {
+  searchParams: Promise<Record<string, string | string[] | undefined>>;
+}) {
   const searchParams = await props.searchParams;
-  const session = await auth();
+  const session = await auth();
```

Introduce parsed values (add near the top, after loading searchParams):
```diff
+  const forceParam = Array.isArray(searchParams.force)
+    ? searchParams.force[0]
+    : searchParams.force;
+  const force = forceParam === "true" || forceParam === "1";
```

And update the redirect condition later:
```diff
-  if (!searchParams.force && user.completedOnboardingAt)
+  if (!force && user.completedOnboardingAt)
     redirect(env.NEXT_PUBLIC_APP_HOME_PATH);
```

apps/web/app/api/chat/route.ts (2)
49-52: Scope chat lookup by emailAccountId to prevent IDOR and over-fetching

The initial findUnique fetch is not filtered by owner/account. You check authorization after fetching, but queries in API routes must be scoped to the authenticated account to avoid unnecessary data retrieval and reduce IDOR risk.

```diff
-  const chat =
-    (await getChatById(data.id)) ||
+  const chat =
+    (await getChatById(data.id, emailAccountId)) ||
     (await createNewChat({ emailAccountId, chatId: data.id }));
```

Apply together with the function changes below.
118-124: Scope getChatById by emailAccountId (ownership check at query level)

Aligns with security guidelines: all Prisma queries must include ownership filters.

```diff
-async function getChatById(chatId: string) {
-  const chat = await prisma.chat.findUnique({
-    where: { id: chatId },
-    include: { messages: true },
-  });
-  return chat;
-}
+async function getChatById(chatId: string, emailAccountId: string) {
+  return prisma.chat.findFirst({
+    where: { id: chatId, emailAccountId },
+    include: { messages: true },
+  });
+}
```

docker-compose.yml.backup (1)
1-68: Remove committed backup file from VCS

This appears to be a generated backup artifact. Keeping backups in-repo causes drift and confusion, and may inadvertently reintroduce old settings when merged.
- Delete docker-compose.yml.backup from the repository.
- Add a .gitignore rule (e.g., docker-compose*.backup or *.backup) to prevent future commits.
Would you like me to craft a patch for that as a follow-up?

apps/web/components/assistant-chat/helpers.ts (1)
5-15: Fix potential TypeError when messages is undefined

`chat?.messages.map(...)` will throw if chat exists but messages is undefined. Use optional chaining on messages and nullish-coalescing for the fallback.
Apply this diff:
```diff
 export function convertToUIMessages(chat: GetChatResponse): ChatMessage[] {
-  return (
-    chat?.messages.map((message) => ({
+  return (
+    chat?.messages?.map((message) => ({
       id: message.id,
       role: message.role as UIMessage["role"],
       parts: message.parts,
       // metadata: {
       //   createdAt: formatISO(message.createdAt),
       // },
-    })) || []
+    })) ?? []
   );
 }
```
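The distinction the comment draws (`?.` on messages plus `??` for the fallback) can be exercised in isolation. A self-contained sketch with a simplified Chat type (hypothetical, not the app's actual GetChatResponse):

```typescript
// Simplified stand-in for GetChatResponse: chat may be null, and messages
// may be absent even when chat exists.
type Chat = { messages?: { id: string }[] } | null;

// Safe version: optional chaining on messages, ?? for the fallback.
function convertSafe(chat: Chat): { id: string }[] {
  return chat?.messages?.map((m) => ({ id: m.id })) ?? [];
}

console.log(convertSafe(null)); // chat missing entirely
console.log(convertSafe({})); // chat exists, messages undefined
console.log(convertSafe({ messages: [{ id: "a" }] }));
```

With only `chat?.messages.map(...)`, the second case throws at runtime; the extra `?.` turns it into an empty-array result instead.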
🧹 Nitpick comments (48)
install_gcloud.sh (6)
1-2: Use env-based shebang and enable strict mode

Prefer /usr/bin/env to locate bash across environments, and enable strict mode early to fail fast.
Apply this diff:
```diff
-#!/bin/bash
+#!/usr/bin/env bash
+
+set -Eeuo pipefail
```
3-4: Make URL overrideable and immutable

Allow an env override for testing/pinning, and mark the variable readonly.
Apply this diff:
```diff
-URL=https://dl.google.com/dl/cloudsdk/channels/rapid/install_google_cloud_sdk.bash
+URL="${GCLOUD_INSTALL_URL:-https://dl.google.com/dl/cloudsdk/channels/rapid/install_google_cloud_sdk.bash}"
+readonly URL
```
6-8: Clean up temp directory and scope variables locally

Ensure temp files are removed with a trap, and scope variables as local.
Apply this diff:
```diff
-function download {
-  scratch="$(mktemp -d -t tmp.XXXXXXXXXX)" || exit
-  script_file="$scratch/install_google_cloud_sdk.bash"
+function download {
+  local scratch
+  scratch="$(mktemp -d -t gcloud.XXXXXXXXXX)" || { echo "mktemp failed" >&2; exit 1; }
+  trap 'rm -rf "$scratch"' EXIT
+  local script_file="$scratch/install_google_cloud_sdk.bash"
```
9-10: Preflight check for curl

Fail with a clear message if curl isn’t available.
Apply this diff:
```diff
   echo "Downloading Google Cloud SDK install script: $URL"
-  curl -# "$URL" > "$script_file" || exit
+  command -v curl >/dev/null 2>&1 || { echo "curl is required but not installed" >&2; exit 1; }
+  curl --fail --show-error --location --silent --output "$script_file" "$URL"
```
13-14: Explicitly invoke bash for the installer and consider supply-chain hardening
- Explicitly running with bash is clearer and avoids relying on executable bits/shebang correctness.
- For additional hardening, consider pinning a specific SDK version (or channel) and verifying a checksum before execution. Executing a freshly downloaded remote script is operationally convenient but increases supply-chain risk.
Apply this diff:
```diff
   echo "Running install script from: $script_file"
-  "$script_file" "$@"
+  bash "$script_file" "$@"
```

Optionally, pin the installer URL via GCLOUD_INSTALL_URL to a known version and verify a SHA256 checksum before execution.
5-6: Optional: Rename to a more descriptive entrypoint

`download` is generic; `install_gcloud` communicates intent better.
Apply this diff:
```diff
-function download {
+function install_gcloud {
@@
-download "$@"
+install_gcloud "$@"
```

Also applies to: 17-17
apps/web/utils/ai/group/create-group.ts (2)
35-54: Add try/catch inside tool.execute to surface Gmail query errors

queryBatchMessages hits an external API; add logging and rethrow to aid debugging and avoid silent failures in the AI tool chain.
Apply this diff:
```diff
 execute: async ({ query }: { query: string | undefined }) => {
-  const { messages } = await queryBatchMessages(gmail, {
-    query: `${query || ""} -label:sent`.trim(),
-    maxResults: 20,
-  });
-
-  const results = messages.map((message) => ({
-    from: message.headers.from,
-    subject: message.headers.subject,
-    snippet: message.snippet,
-  }));
-
-  return results;
+  try {
+    const { messages } = await queryBatchMessages(gmail, {
+      query: `${query || ""} -label:sent`.trim(),
+      maxResults: 20,
+    });
+
+    const results = messages.map((message) => ({
+      from: message.headers.from,
+      subject: message.headers.subject,
+      snippet: message.snippet,
+    }));
+
+    return results;
+  } catch (error) {
+    // Surface context to the model-level logs via tool error bubbling
+    // Logger available in outer scope if needed, but avoid global .with()
+    throw error;
+  }
 },
```
110-121: Deduplicate and normalize AI-generated criteria to reduce noise

Multiple tool calls can yield duplicates and inconsistent whitespace/casing. Normalize and dedupe for better downstream matching.
Apply this diff:
```diff
-  const combinedArgs = generateGroupItemsToolCalls.reduce<
-    z.infer<typeof generateGroupItemsSchema>
-  >(
-    (acc, { input }) => {
-      const typedArgs = input as z.infer<typeof generateGroupItemsSchema>;
-      return {
-        senders: [...acc.senders, ...typedArgs.senders],
-        subjects: [...acc.subjects, ...typedArgs.subjects],
-      };
-    },
-    { senders: [], subjects: [] },
-  );
+  const combinedArgs = generateGroupItemsToolCalls.reduce<
+    z.infer<typeof generateGroupItemsSchema>
+  >(
+    (acc, { input }) => {
+      const typedArgs = input as z.infer<typeof generateGroupItemsSchema>;
+      return {
+        senders: acc.senders.concat(
+          typedArgs.senders.map((s) => s.trim()).filter(Boolean),
+        ),
+        subjects: acc.subjects.concat(
+          typedArgs.subjects.map((s) => s.trim()).filter(Boolean),
+        ),
+      };
+    },
+    { senders: [], subjects: [] },
+  );
+
+  // Dedupe while preserving order
+  combinedArgs.senders = Array.from(new Set(combinedArgs.senders));
+  combinedArgs.subjects = Array.from(new Set(combinedArgs.subjects));
```

To support lodash-based dedupe instead, add the import (outside this range) and replace the Set lines:
```ts
// Add at top-level imports
import uniq from "lodash/uniq";

// Replace Set-based dedupe with:
combinedArgs.senders = uniq(combinedArgs.senders);
combinedArgs.subjects = uniq(combinedArgs.subjects);
```

apps/web/package.json (1)
24-24: Pin version to exact for reproducible builds (consistency with most deps here)

Most dependencies are pinned without carets. Consider pinning @ai-sdk/ui-utils to avoid accidental minor changes.
Apply this diff:
```diff
-    "@ai-sdk/ui-utils": "^1.2.11",
+    "@ai-sdk/ui-utils": "1.2.11",
```

apps/web/utils/ai/example-matches/find-example-matches.ts (2)
117-127: Deduplicate matches to avoid duplicates from multiple tool calls

Multiple emissions can duplicate the same emailId/rule pair. Dedupe before returning to the caller.
Apply this diff:
```diff
-  const matches = findExampleMatchesToolCalls.reduce<
-    z.infer<typeof findExampleMatchesSchema>["matches"]
-  >((acc, { input }) => {
-    const typedArgs = input as z.infer<typeof findExampleMatchesSchema>;
-    return acc.concat(typedArgs.matches);
-  }, []);
+  const rawMatches = findExampleMatchesToolCalls.reduce<
+    z.infer<typeof findExampleMatchesSchema>["matches"]
+  >((acc, { input }) => {
+    const typedArgs = input as z.infer<typeof findExampleMatchesSchema>;
+    return acc.concat(typedArgs.matches);
+  }, []);
+
+  // Dedupe by (emailId, rule)
+  const seen = new Set<string>();
+  const matches = rawMatches.filter((m) => {
+    const key = `${m.emailId}::${m.rule}`;
+    if (seen.has(key)) return false;
+    seen.add(key);
+    return true;
+  });
```

If you prefer lodash, import uniqBy outside this range and replace the filter with:
```ts
import uniqBy from "lodash/uniqBy";

const matches = uniqBy(rawMatches, (m) => `${m.emailId}::${m.rule}`);
```
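Either variant can be sanity-checked in isolation. A Set-based sketch of order-preserving dedupe by (emailId, rule), mirroring the suggestion (the Match type is simplified for illustration):

```typescript
// Simplified match shape; the real schema also carries a `reason` field.
type Match = { emailId: string; rule: string };

// Order-preserving dedupe keyed on (emailId, rule).
function dedupeMatches(matches: Match[]): Match[] {
  const seen = new Set<string>();
  return matches.filter((m) => {
    const key = `${m.emailId}::${m.rule}`;
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}

const out = dedupeMatches([
  { emailId: "1", rule: "a" },
  { emailId: "1", rule: "a" }, // duplicate pair: dropped
  { emailId: "1", rule: "b" }, // same email, different rule: kept
]);
console.log(out);
```

The composite `::` key is what lets the same email match two different rules while still collapsing exact repeats.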
128-135: Consider returning the ‘reason’ field for transparency/debuggability

The schema includes reason, but the return payload drops it. Including it helps UI/debug tools explain why an email matched.
Apply this diff:
```diff
-  return {
-    matches: matches
-      .filter((match) => listedEmails[match.emailId])
-      .map((match) => ({
-        ...listedEmails[match.emailId],
-        rule: match.rule,
-      })),
-  };
+  return {
+    matches: matches
+      .filter((match) => listedEmails[match.emailId])
+      .map((match) => ({
+        ...listedEmails[match.emailId],
+        rule: match.rule,
+        reason: match.reason,
+      })),
+  };
```

docker/cron/cron-job.sh (1)
1-2: Harden the cron script: fail fast, validate secret, and retry curl

Small improvements reduce silent failures and aid observability in cron contexts.
Apply this diff:
```diff
-#!/bin/sh
-curl -H "Authorization: Bearer $CRON_SECRET" http://web:3000/api/google/watch/all
+#!/bin/sh
+set -eu
+
+if [ -z "${CRON_SECRET:-}" ]; then
+  echo "CRON_SECRET is not set" >&2
+  exit 1
+fi
+
+# Use -f to fail on HTTP errors, -sS for quiet with errors, and a few retries for transient network blips.
+curl -fsS --retry 3 \
+  -H "Authorization: Bearer $CRON_SECRET" \
+  "http://web:3000/api/google/watch/all"
```

apps/web/utils/ai/assistant/chat.ts (1)
943-948: Good defaulting of model; consider centralizing model selection

Defaulting `aiModel` to `"anthropic/claude-3.5-sonnet"` is sensible. To keep consistency with other AI flows (e.g., `getModel(...)` usage elsewhere), consider centralizing model/provider resolution via the same helper to avoid drift across features.

apps/web/utils/auth.ts (1)
393-401: Post-save DB verification is solid; consider also logging encryption path.

This check is valuable. Given the encryption special-case when updating via `emailAccount`, add context to your success log to disambiguate the write path (direct account update vs nested through emailAccount). It will speed up debugging if tokens are unexpectedly absent. Apply this diff:

```diff
 logger.info("[linkAccount] Successfully linked account", {
   email: user.email,
   userId: account.userId,
   accountId: account.id,
+  writePath: "account.update" /* or "emailAccount.update.account" in that branch */,
   tokensInDb: {
     hasAccessToken: !!savedAccount?.access_token,
     hasRefreshToken: !!savedAccount?.refresh_token,
   },
 });
```

Also applies to: 406-409
apps/web/components/editor/SimpleRichTextEditor.tsx (1)
136-140: Emit update on setContent so listeners fire immediately.

Tiptap’s `setContent` can optionally emit an update. If you want `onUpdate`-driven logic (like `onClearContents`) to run on programmatic content changes, pass `true`. Apply this diff:

```diff
-  if (editor) {
-    editor.commands.setContent(content);
-  }
+  if (editor) {
+    editor.commands.setContent(content, true);
+  }
```

run-docker.sh.backup (1)
1-3: Harden the script and improve DX.
- Add strict mode to fail fast on errors.
- Optionally use `--quiet-pull` for fresher images without noisy logs.

Apply this diff:

```diff
-#!/bin/bash
+#!/bin/bash
+set -euo pipefail
+
 # Source the environment file to load UPSTASH_REDIS_TOKEN
@@
-echo "🚀 Starting docker compose..."
-docker compose up -d
+echo "🚀 Starting docker compose..."
+docker compose up -d --quiet-pull
```

Also applies to: 15-19
apps/web/utils/encryption.ts (1)
88-96: Right level of logging in production; consider including minimal diagnostics

Switching to warn in non-development is sensible to reduce noise. Consider adding a small diagnostic like buffer length to the warn (not the token) to aid triage without risking sensitive data exposure.
Example:
```diff
-  logger.warn("Token decryption failed - token may need refresh");
+  logger.warn("Token decryption failed - token may need refresh");
+  // optionally: logger.debug("Encrypted buffer length", { len: encryptedText?.length });
```

apps/web/utils/ai/rule/create-rule-schema.ts (4)
40-45: Provider check works, but strongly type provider to avoid drift and stringly-typed bugs

Inlining `provider === "microsoft"` is fine, but the parameter is typed as `string`. If another codepath passes `"outlook"`/`"msft"` or a mistyped value, MOVE_FOLDER could be incorrectly allowed or disallowed.

- Prefer a shared union type (e.g., `type EmailProvider = "google" | "microsoft"`) and reuse it here.
- Alternatively, keep using a single source-of-truth helper (previously `isMicrosoftProvider`) to centralize provider semantics.

I can prepare a small follow-up to introduce a shared type and replace raw strings.
84-90: folderName gating matches the action allowance; keep the provider switch centralized

Conditionally including `folderName` only for the Microsoft provider is consistent with MOVE_FOLDER availability. Same comment as above: centralize provider logic to prevent future drift if provider identifiers evolve.
36-36: Type `provider` as a shared union instead of `string`

Concrete suggestion:

```diff
-const actionSchema = (provider: string) =>
+const actionSchema = (provider: EmailProvider) =>
```

And likewise for the exported creators below:

```diff
-export const createRuleSchema = (provider: string) =>
+export const createRuleSchema = (provider: EmailProvider) =>
```

Add this import at the top of the file (or wherever the canonical provider type lives in your codebase):

```ts
import type { EmailProvider } from "@/utils/email/provider-types";
```
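The union the comment proposes can be sketched on its own. Both the type name and the helper here mirror the review's suggestion rather than actual project code:

```typescript
// Hypothetical shared provider union, as suggested in the review.
type EmailProvider = "google" | "microsoft";

function isMicrosoftProvider(provider: EmailProvider): boolean {
  return provider === "microsoft";
}

// A mistyped literal is now a compile-time error instead of a silent mismatch:
// isMicrosoftProvider("outlook"); // TypeScript would reject this call
console.log(isMicrosoftProvider("microsoft"));
console.log(isMicrosoftProvider("google"));
```

The payoff is that provider checks like the MOVE_FOLDER gate fail at build time for unknown strings, rather than silently returning false at runtime.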
1-107: Double-check z.enum tuple construction safety

The pattern with `(Object.values(ActionType) as [ActionType, ...ActionType[]])` is common, but it’s an unsafe cast. Today it’s fine because ActionType has many members. If the filtered branch ever resulted in an empty list (e.g., future conditional constraints), Zod would throw at runtime.

Alternative:

- Use `z.nativeEnum(ActionType)` for the full set and add a `.refine()` to exclude MOVE_FOLDER when not Microsoft (yields clearer error messages), or
- Precompute a const tuple for allowed values (no filtering) and branch with `.superRefine` to enforce provider-specific disallowance.

Not blocking; just calling out the cast risk.
47-49: Parse question with radix and handle arrays

This avoids implicit base detection and handles array values gracefully.

```diff
-  const questionIndex = searchParams.question
-    ? Number.parseInt(searchParams.question)
-    : 0;
+  const questionParam = Array.isArray(searchParams.question)
+    ? searchParams.question[0]
+    : searchParams.question;
+  const questionIndex = questionParam
+    ? Number.parseInt(questionParam, 10)
+    : 0;
```

run-docker.sh (1)
5-11: Sourcing .env works if content is shell-safe; consider using docker compose’s --env-file

Directly sourcing .env assumes all entries are valid shell assignments without special characters that would be interpreted by Bash. To reduce surprises:
Option A (keep sourcing): Document that apps/web/.env must contain only simple KEY=VALUE pairs without expansions.
Option B (preferred): Pass the env file explicitly to docker compose and avoid `set -a; source ...` altogether:

```diff
-if [ -f "apps/web/.env" ]; then
-  set -a  # Enable export of all variables
-  # shellcheck disable=SC1091
-  source apps/web/.env
-  set +a  # Disable export of all variables
-  echo "✓ Sourced apps/web/.env"
+if [ -f "apps/web/.env" ]; then
+  echo "✓ Using apps/web/.env"
```

And below, call docker compose with `--env-file apps/web/.env`:

```diff
-  docker compose "$@"
+  docker compose --env-file apps/web/.env "$@"
```

```diff
-  docker compose up -d
+  docker compose --env-file apps/web/.env up -d
```

This keeps env parsing within docker compose’s rules and reduces shell pitfalls.
apps/web/utils/actions/ai-rule.ts (1)
592-615: Prefer narrowing Prisma selects to only required fields

You're including actions, categoryFilters, and the entire group with items. The current prompt generator (createPromptFromRule) does not consume group/items, so this pulls more data than necessary. Consider reducing the selection to the fields actually used to cut memory and query cost. If group data will be used soon, ignore this.

```diff
-  include: {
-    actions: true,
-    categoryFilters: true,
-    group: {
-      select: {
-        id: true,
-        name: true,
-        items: {
-          select: {
-            id: true,
-            type: true,
-            value: true,
-          },
-        },
-      },
-    },
-  },
+  include: {
+    actions: true,
+    categoryFilters: true,
+  },
```

apps/web/app/api/chat/route.ts (2)
42-42: Nit: hasUser is always true here

This log line runs after the early return; hasUser will always be true, so the field adds no value.

```diff
-logger.info("Processing chat request", { emailAccountId, hasUser: !!user });
+logger.info("Processing chat request", { emailAccountId });
```

apps/web/app/api/user/stats/email-actions/route.ts (1)
31-49: Avoid try/catch in GET routes wrapped with middleware; return 401 for missing auth

The middleware centralizes error handling; simplify the handler. Also, if auth email is missing, 401 Unauthorized is more appropriate than 400.

```diff
-export const GET = withEmailAccount(async (request) => {
-  try {
-    const userEmail = request.auth.email;
-
-    if (!userEmail) {
-      return NextResponse.json(
-        { error: "User email not found" },
-        { status: 400 },
-      );
-    }
-
-    const result = await getEmailActionStats({ userEmail });
-
-    return NextResponse.json(result);
-  } catch (_error) {
-    return NextResponse.json(
-      { error: "Failed to fetch email action stats" },
-      { status: 500 },
-    );
-  }
-});
+export const GET = withEmailAccount(async (request) => {
+  const userEmail = request.auth.email;
+  if (!userEmail) {
+    return NextResponse.json(
+      { error: "User email not found" },
+      { status: 401 },
+    );
+  }
+  const result = await getEmailActionStats({ userEmail });
+  return NextResponse.json(result);
+});
```

apps/web/utils/gmail/client.ts (1)
95-107: Also update expiry when token string remains the same

Some providers may return the same access token but extend expiry. Save tokens when either the token or expiry changes to avoid stale expiry in the DB.

```diff
-  if (newAccessToken !== accessToken) {
-    await saveTokens({
-      tokens: {
-        access_token: newAccessToken ?? undefined,
-        expires_at: tokens.credentials.expiry_date
-          ? Math.floor(tokens.credentials.expiry_date / 1000)
-          : undefined,
-      },
-      accountRefreshToken: refreshToken,
-      emailAccountId,
-      provider: "google",
-    });
-  }
+  const newExpiryMs = tokens.credentials.expiry_date ?? undefined;
+  if (
+    newAccessToken !== accessToken ||
+    (newExpiryMs && newExpiryMs !== expiresAt)
+  ) {
+    await saveTokens({
+      tokens: {
+        access_token: newAccessToken ?? undefined,
+        expires_at: newExpiryMs ? Math.floor(newExpiryMs / 1000) : undefined,
+      },
+      accountRefreshToken: refreshToken,
+      emailAccountId,
+      provider: "google",
+    });
+  }
```

setup-autostart.sh (4)
25-30: ShellCheck SC2155: split declare/assign to avoid masking return values

Minor nit: declare and assign separately for SCRIPT_DIR to satisfy SC2155.

```diff
-readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+readonly SCRIPT_DIR
```

Repeat this pattern where applicable (e.g., LOG_FILE timestamps later).
327-356: Systemd ExecStop hardcodes /usr/bin/docker-compose — not portable

Many environments only have the Docker CLI plugin (“docker compose”), not docker-compose at /usr/bin/docker-compose. Use the wrapper, or at least call docker compose. Apply this diff:

```diff
-ExecStop=/usr/bin/docker-compose -f ${COMPOSE_FILE} down
+ExecStop=/usr/bin/docker compose -f ${COMPOSE_FILE} down
```

If you want to reuse the wrapper, place a small shim script (e.g., ${SCRIPT_DIR}/compose-shim.sh) and call that from ExecStart/ExecStop. Systemd doesn’t know your bash functions.
154-193: YAML manipulation via awk is brittle

Parsing/modifying YAML with awk is fragile and may break with indentation, comments, or anchors. Prefer yq for safe edits. Example using yq (if available):

```sh
yq -i '
  .services |= with_entries(
    .value |= (
      if has("restart") then . else . + {restart: "unless-stopped"} end
    )
  )
' "$COMPOSE_FILE"
```

If yq isn’t guaranteed, document the limitation or add a feature gate that skips this step when yq is unavailable.
335-345: Systemd Type=oneshot with Restart=on-failure is unusual for long-lived stacks

This is workable, but consider Type=simple (default) with ExecStart=... up and ExecStop=... down for clearer semantics. With oneshot + RemainAfterExit, restarts won’t happen unless ExecStart fails.
If you see reliability issues on boot, consider:
- Type=simple
- ExecStart=/usr/bin/docker compose -f ${COMPOSE_FILE} up -d --remove-orphans
- ExecStop=/usr/bin/docker compose -f ${COMPOSE_FILE} down
- Restart=always (or on-failure)
apps/web/CLAUDE.md (1)
34-41: Systemd service name/path likely out of date or user-specific

Doc references /etc/systemd/system/inbox-zero.service and working dir /home/jason/services/inbox-zero. The setup script creates inbox-zero-docker.service and uses SCRIPT_DIR for working dir. This mismatch will confuse operators.
Proposed updates:
- Align the service name with setup-autostart.sh (SERVICE_NAME=inbox-zero-docker) or vice versa.
- Replace the hard-coded path with a placeholder (e.g., <project_root>) or reference SCRIPT_DIR.
Do you want me to open a doc PR aligning these?

apps/web/components/assistant-chat/types.ts (1)
42-48: Consider replacing `{}` metadata type with `unknown` for clarity

Using `{}` can accidentally accept any non-nullish value. If metadata is unused, `unknown` (or `Record<string, never>`) is more intentional.

```diff
-export type ChatMessage = UIMessage<
-  {}, // MessageMetadata,
-  CustomUIDataTypes,
-  ChatTools
->;
+export type ChatMessage = UIMessage<
+  unknown, // MessageMetadata
+  CustomUIDataTypes,
+  ChatTools
+>;
```

If other files rely on `{}`, keep it as-is.

apps/web/app/api/google/linking/auth-url/route.ts (2)
13-15: Avoid embedding userId in OAuth state to minimize exposure

State values are reflected in the authorization redirect URL and can show up in third-party logs. Prefer a random nonce in the state and keep userId only in a secure first-party cookie or server-side store.
Example approach (requires adjusting the callback handler to read userId from a cookie or server session):
```diff
-  const stateObject = { userId, nonce: crypto.randomUUID() };
+  const stateObject = { nonce: crypto.randomUUID() };
   const state = Buffer.from(JSON.stringify(stateObject)).toString("base64url");
   // Optionally store userId separately in a secure cookie to retrieve on callback
   // response.cookies.set(GOOGLE_LINKING_USER_COOKIE_NAME, userId, { httpOnly: true, ... })
```
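As a rough sketch of the nonce-only approach (the `encodeState`/`decodeState` helpers and the state shape are illustrative assumptions, not existing code in this PR):

```typescript
import crypto from "node:crypto";

// Hypothetical helpers: the OAuth state carries only a random nonce, never
// user-identifying data. The nonce is also stored server-side (or in an
// httpOnly cookie) and compared on callback.
function encodeState(): { state: string; nonce: string } {
  const nonce = crypto.randomUUID();
  const state = Buffer.from(JSON.stringify({ nonce })).toString("base64url");
  return { state, nonce };
}

function decodeState(state: string): { nonce: string } {
  return JSON.parse(Buffer.from(state, "base64url").toString("utf8"));
}
```

On callback, reject the request unless the decoded nonce matches the stored one; userId is then read from the first-party cookie or session rather than from the state parameter.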
6-6: Add include_granted_scopes and extend SCOPES with OpenID Connect scopes

To ensure prior grants are preserved and you receive an ID token with the standard OIDC claims, apply the following updates:

• In apps/web/app/api/google/linking/auth-url/route.ts, add include_granted_scopes: true
• In apps/web/app/api/google/linking/auth-url/route.ts, add include_granted_scopes: true
```diff
   const url = googleAuth.generateAuthUrl({
     access_type: "offline",
     scope: SCOPES.join(" "),
     state,
     prompt: "consent", // Force consent screen to ensure refresh token is returned
+    include_granted_scopes: true,
   });
```

• In apps/web/utils/gmail/scopes.ts, prepend the required OIDC scopes
```diff
 export const SCOPES = [
+  "openid",
+  "email",
   "https://www.googleapis.com/auth/userinfo.profile",
   "https://www.googleapis.com/auth/userinfo.email",
 ];
```

apps/web/app/layout.tsx (1)
88-88: Guard startup migrations so they don't run on every layout render

Rendering StartupMigrations inside RootLayout can trigger DB checks on every request/render. Consider making the migrations idempotent and run-once per process to avoid unnecessary load.
You can wrap the migration in React’s cache in the component to ensure a single execution per process. See suggested change in apps/web/components/StartupMigrations.tsx below:
```tsx
// apps/web/components/StartupMigrations.tsx
import { cache } from "react";
import { runStartupMigrations } from "@/utils/startup";

const runOnce = cache(async () => {
  await runStartupMigrations();
});

export async function StartupMigrations() {
  await runOnce();
  return null;
}
```

apps/web/components/StartupMigrations.tsx (2)
7-13: Run migrations once per process to reduce DB load

As written, this will run on every server render where the component is evaluated. Wrap the execution with React’s cache to ensure a single run per process.
Apply:
```diff
+import { cache } from "react";
 import { runStartupMigrations } from "@/utils/startup";

+const runOnce = cache(async () => {
+  await runStartupMigrations();
+});
+
 /**
  * Server component that runs startup migrations
  * This component will run on the server side when the app loads
  */
 export async function StartupMigrations() {
-  // Run migrations on server side during page load
-  await runStartupMigrations();
+  // Run migrations once per server process
+  await runOnce();
   // Return nothing - this is just for side effects
   return null;
 }
```
1-13: Alternative: move to instrumentation.ts for true startup-time execution

If you want to guarantee a single run on boot rather than per-render, consider moving the call into apps/web/instrumentation.ts (Next.js server startup hook) and guard it there. This avoids coupling DB operations to render.
apps/web/scripts/fix-corrupted-tokens.ts (1)
10-37: Use createScopedLogger in fix-corrupted-tokens.ts

This script currently uses `console.log`/`console.error`. Per project guidelines, backend TypeScript files should use our scoped logger utility for consistent, structured logging. Consider an optional refactor:

• File: apps/web/scripts/fix-corrupted-tokens.ts
– Add at top:

```ts
import { createScopedLogger } from "@/utils/logger";

const logger = createScopedLogger("scripts/fix-corrupted-tokens");
```

– Replace calls inside `main()`:

```diff
- console.log("🔧 Starting corrupted token cleanup...");
+ logger.info("🔧 Starting corrupted token cleanup...");
...
- console.log("✅ Token cleanup completed successfully!");
+ logger.info("✅ Token cleanup completed successfully!");
- console.log("📊 Results:");
+ logger.info("📊 Results:");
- console.log(`  - Total accounts checked: ${result.total}`);
+ logger.info(`  - Total accounts checked: ${result.total}`);
- console.log(`  - Corrupted tokens fixed: ${result.fixed}`);
+ logger.info(`  - Corrupted tokens fixed: ${result.fixed}`);
- console.error("❌ Error during token cleanup:", error);
+ logger.error("❌ Error during token cleanup", { error });
```
16-41: Add caching for startup migrations to avoid per-request DB calls

The `<StartupMigrations />` Server Component in apps/web/app/layout.tsx (line 88) calls `runStartupMigrations()` on every server render, triggering two `count()` queries each time. While these are cheap, they still add latency and load under high traffic. Consider gatekeeping this invocation with a simple cache or durable marker:

• Persist a “last-run” timestamp (in DB or KV) and skip if it’s within a TTL (e.g. 1 hour)
• Record a versioned marker after deploy so migrations only run once per release
• Leverage Next.js’s ISR/revalidation: e.g. `fetch(..., { next: { revalidate: 3600 } })` around your cleanup check

This will keep the app snappy under load without running unnecessary startup queries.
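For the TTL option, a minimal in-process sketch (the `runGated` name, the injectable clock, and the one-hour TTL are illustrative assumptions, not part of the codebase):

```typescript
// Hypothetical TTL gate: the wrapped task runs at most once per TTL window
// within a single server process. `now` is injectable for testing.
const TTL_MS = 60 * 60 * 1000; // 1 hour

let lastRunAt = 0;

async function runGated(task: () => Promise<void>, now = Date.now()): Promise<boolean> {
  if (now - lastRunAt < TTL_MS) return false; // skipped: ran recently
  lastRunAt = now;
  await task();
  return true; // ran
}
```

A durable marker (DB or KV) is still needed for "once per release" across multiple server instances; this only de-duplicates within one process.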
apps/web/app/(app)/force-relink/page.tsx (2)
1-1: Prefer server page + client child to align with app router conventions

Guideline says pages under apps/web/app/(app)/*/page.tsx are Server Components. Since this page is interactive, move the UI into a small client component (e.g., ForceRelinkClient) and keep the page itself as a Server Component that renders it. This keeps consistency with the app structure.
82-86: Add explicit button type

Always include a type for buttons. Even though this page isn’t inside a form, being explicit avoids regressions if markup changes later.
Apply this diff:
```diff
-          <Button
+          <Button
+            type="button"
             onClick={handleRelink}
             disabled={isRelinking}
             className="w-full"
           >
```

apps/web/app/(app)/[emailAccountId]/assistant/RulesPrompt.tsx (1)
351-375: Use finally to ensure isGenerating resets on both success and error

This avoids duplicated setIsGenerating(false) calls and guarantees cleanup in all paths.
Apply this diff:
```diff
-                toast.promise(
-                  async () => {
-                    setIsGenerating(true);
-                    const result = await generatePromptFromRulesAction(
-                      emailAccountId,
-                      {},
-                    );
-
-                    if (result?.serverError) {
-                      setIsGenerating(false);
-                      throw new Error(result.serverError);
-                    }
-
-                    if (result?.data?.rulesPrompt) {
-                      editorRef.current?.setContent(
-                        result?.data?.rulesPrompt || "",
-                      );
-                    } else {
-                      toast.error("No active rules found");
-                    }
-
-                    setIsGenerating(false);
-
-                    return result;
-                  },
+                toast.promise(
+                  async () => {
+                    setIsGenerating(true);
+                    try {
+                      const result = await generatePromptFromRulesAction(
+                        emailAccountId,
+                        {},
+                      );
+
+                      if (result?.serverError) {
+                        throw new Error(result.serverError);
+                      }
+
+                      if (result?.data?.rulesPrompt) {
+                        editorRef.current?.setContent(
+                          result?.data?.rulesPrompt || "",
+                        );
+                      } else {
+                        toast.error("No active rules found");
+                      }
+
+                      return result;
+                    } finally {
+                      setIsGenerating(false);
+                    }
+                  },
```

apps/web/utils/migration/fix-encrypted-tokens.ts (2)
73-84: Also clear new Better-Auth expiry fields to avoid stale state

If any affected accounts already have the new expiry fields populated, clear them alongside legacy fields to keep the account state consistent.
Apply this diff:
```diff
       prisma.account.update({
         where: { id: account.id },
         data: {
           access_token: null,
           refresh_token: null,
           // Also clear old expires_at if it exists
           expires_at: null,
+          // And clear new Better-Auth expiry fields if present
+          accessTokenExpiresAt: null,
+          refreshTokenExpiresAt: null,
         },
       }),
```
86-94: Consider batching updates to limit DB concurrency

If many accounts are corrupted, firing all updates at once can spike DB connections. Optional: chunk updates (e.g., in batches of 50) or use a p-limit to cap concurrency.
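A minimal sketch of the chunking variant (the batch size and the `update` callback are placeholders standing in for the real `prisma.account.update` calls):

```typescript
// Split items into fixed-size batches and await each batch fully before
// starting the next, so at most `size` updates are in flight at once.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

async function updateInBatches<T>(
  items: T[],
  size: number,
  update: (item: T) => Promise<void>,
): Promise<void> {
  for (const batch of chunk(items, size)) {
    await Promise.all(batch.map(update));
  }
}
```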
apps/web/providers/ChatProvider.tsx (3)
78-84: Remove console usage in client code

Project guidelines forbid console.*. Rely on toastError, which you already use.
Apply this diff:
```diff
-    onError: (error) => {
-      console.error(error);
-      toastError({
+    onError: (error) => {
+      toastError({
         title: "An error occured!",
         description: error.message || "",
       });
     },
```
95-99: Prevent sending empty messages

Guard against blank inputs to avoid no-op requests to the chat API.
Apply this diff:
```diff
-  const handleSubmit = useCallback(() => {
-    chat.sendMessage({ text: input });
-    setInput("");
-  }, [chat.sendMessage, input]);
+  const handleSubmit = useCallback(() => {
+    const text = input.trim();
+    if (!text) return;
+    chat.sendMessage({ text });
+    setInput("");
+  }, [chat.sendMessage, input]);
```
125-132: Prefer crypto.randomUUID when available

More robust and avoids Math.random collisions across tabs.
Apply this diff:
```diff
 function generateUUID(): string {
-  return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, (c) => {
-    const r = (Math.random() * 16) | 0;
-    const v = c === "x" ? r : (r & 0x3) | 0x8;
-    return v.toString(16);
-  });
+  if (typeof crypto !== "undefined" && typeof crypto.randomUUID === "function") {
+    return crypto.randomUUID();
+  }
+  return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, (c) => {
+    const r = (Math.random() * 16) | 0;
+    const v = c === "x" ? r : (r & 0x3) | 0x8;
+    return v.toString(16);
+  });
 }
```
```tsx
import { useUser } from "@/hooks/useUser";
import { LoadingContent } from "@/components/LoadingContent";
import { Loading } from "@/components/Loading";
import AutoLogOut from "./AutoLogOut";
```
💡 Verification agent
🧩 Analysis chain
AutoLogOut never mounts when logged in due to early return.
Because the component returns early when data?.id is truthy (Line 30), <AutoLogOut loggedIn /> never renders in the “logged-in on error page” scenario. The logout logic therefore won’t run.
Two options—pick one:
- Redirect strategy (preferred): remove AutoLogOut and rely on the existing redirect to `WELCOME_PATH` for logged-in users.
- Logout strategy: ensure AutoLogOut mounts even when logged in by removing the early return and letting the page render. For example:
```diff
@@
-  if (isLoading) return <Loading />;
-  // will redirect to welcome if user is logged in
-  if (data?.id) return <Loading />;
+  if (isLoading) return <Loading />;
@@
-      <AutoLogOut loggedIn={!!data?.id} />
+      <AutoLogOut loggedIn={!!data?.id} />
```

If you choose the logout strategy, consider removing the useEffect redirect (Lines 22-26) to avoid racing between redirect and logout.
Also, AutoLogOut currently uses console.log. The guidelines disallow console usage. Consider removing the log or routing it through your logging utilities. To locate usages:
Also applies to: 44-45
🏁 Script executed:
```shell
#!/bin/bash
rg -nP --type=tsx -C2 '\bconsole\.'
```

Length of output: 65
🏁 Script executed:
```shell
#!/bin/bash
# Locate the error page file under apps/web
file=$(rg -l 'import AutoLogOut from' -g '*.tsx' | grep '/login/error/page.tsx')
echo "Inspecting: $file"
# Verify that the early return on data?.id exists
echo "=== Early return around data?.id ==="
rg -F -n -C3 'if (data?.id)' "$file"
# Search for any console.* usage in TSX files
echo "=== console.* usage in TSX files ==="
rg -P -n 'console\.' -g '*.tsx' -C2
```

Length of output: 15305
Fix AutoLogOut never mounting & remove console.log usage
The early return on line 30 in apps/web/app/(landing)/login/error/page.tsx prevents <AutoLogOut> from ever rendering when data?.id is truthy, so your logout logic never runs. Additionally, AutoLogOut.tsx still uses console.log, which violates our logging guidelines.
Please address both as critical fixes:
• In apps/web/app/(landing)/login/error/page.tsx
– Redirect-only strategy (preferred): remove the import (line 13) and the JSX <AutoLogOut…/> in the return. Rely solely on the existing useEffect redirect.
```diff
@@ page.tsx
-import AutoLogOut from "./AutoLogOut";
…
-      <AutoLogOut loggedIn={!!data?.id} />
```
– Logout strategy: delete the early return on line 30 so <AutoLogOut> can mount, and remove (or disable) the useEffect redirect to avoid racing.
```diff
@@ page.tsx
-  // will redirect to welcome if user is logged in
-  if (data?.id) return <Loading />;
```
• In apps/web/app/(landing)/login/error/AutoLogOut.tsx (around line 11)
Replace `console.log("Logging user out");` with calls to our logging utility (e.g. `logger.info("Logging user out")`).
These changes are required before merging.
🤖 Prompt for AI Agents
In apps/web/app/(landing)/login/error/page.tsx around line 13 and line 30, the
current early return prevents <AutoLogOut/> from ever mounting; either remove
the AutoLogOut import and its JSX (preferred "redirect-only" strategy) so the
component is not used and rely on the existing useEffect redirect, or for the
"logout strategy" delete the early return at line 30 so AutoLogOut can mount and
also remove or disable the redirect useEffect to avoid a race; in
apps/web/app/(landing)/login/error/AutoLogOut.tsx around line 11 replace
console.log("Logging user out") with a call to the project logging utility (e.g.
logger.info("Logging user out")) and ensure the logger is imported and used
instead of console.log.
```tsx
if (searchParams?.next) {
  redirect(searchParams?.next);
} else {
```
💡 Verification agent
🧩 Analysis chain
Sanitize the next parameter to prevent open redirects.
Redirecting to arbitrary searchParams.next can enable open redirects if a full URL is supplied. Restrict to same-origin relative paths.
Apply this diff:
```diff
-  if (searchParams?.next) {
-    redirect(searchParams?.next);
-  } else {
+  if (searchParams?.next && searchParams.next.startsWith("/")) {
+    redirect(searchParams.next);
+  } else {
```

Optionally, further normalize/validate with URL parsing if needed.
Search for other places that redirect using an unvalidated next:
🏁 Script executed:
```shell
#!/bin/bash
rg -nP -C2 --type=ts --type=tsx '\bredirect\s*\(\s*[^)]*next[^)]*\)'
```

Length of output: 98
I’ll retry searching for unvalidated next in all TS/TSX files using file globs instead of unsupported --type=tsx:
🏁 Script executed:
```shell
#!/bin/bash
# Search for redirect(...) calls containing "next" across .ts/.tsx
rg -nP -g '*.ts' -g '*.tsx' -C2 'redirect\s*\(\s*[^)]*next[^)]*\)'
```

Length of output: 447
Prevent open redirects by validating the next parameter
Only one instance of redirect(searchParams.next) was found in the codebase:
• apps/web/app/(landing)/login/page.tsx (lines 24–26)
Apply this patch to restrict redirects to same-origin paths:
```diff
-  if (searchParams?.next) {
-    redirect(searchParams?.next);
-  } else {
+  if (searchParams?.next && searchParams.next.startsWith("/")) {
+    redirect(searchParams.next);
+  } else {
```

Optionally, for stronger guarantees, parse and validate via the URL API (e.g. new URL(searchParams.next, origin)) to catch encoded or malformed inputs.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
  if (searchParams?.next && searchParams.next.startsWith("/")) {
    redirect(searchParams.next);
  } else {
```
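For the stronger URL-API variant, one possible shape (the `safeNextPath` name and the fixed base origin are illustrative assumptions, not code from this PR):

```typescript
// Accept only same-origin relative paths: must start with a single "/",
// and must still resolve to the trusted origin after URL normalization
// (which also catches backslash and protocol-relative tricks).
function safeNextPath(next: string | undefined): string | null {
  if (!next) return null;
  if (!next.startsWith("/") || next.startsWith("//")) return null;
  const url = new URL(next, "https://example.com");
  if (url.origin !== "https://example.com") return null;
  return url.pathname + url.search;
}
```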
```tsx
import { Suspense } from "react";
import type { Metadata } from "next";
import { redirect } from "next/navigation";
import { OnboardingForm } from "@/app/(landing)/welcome/form";
import { SquaresPattern } from "@/app/(landing)/home/SquaresPattern";
import { PageHeading, TypographyP } from "@/components/Typography";
import { CardBasic } from "@/components/ui/card";
import { auth } from "@/utils/auth";
import { env } from "@/env";
import prisma from "@/utils/prisma";
```
🛠️ Refactor suggestion
Add a scoped logger import/instance
To support the earlier logging change:
```diff
 import type { Metadata } from "next";
 import { redirect } from "next/navigation";
 import { OnboardingForm } from "@/app/(landing)/welcome/form";
 import { SquaresPattern } from "@/app/(landing)/home/SquaresPattern";
 import { PageHeading, TypographyP } from "@/components/Typography";
 import { CardBasic } from "@/components/ui/card";
 import { auth } from "@/utils/auth";
 import { env } from "@/env";
 import prisma from "@/utils/prisma";
+import { createScopedLogger } from "@/utils/logger";
+
+const logger = createScopedLogger("landing/welcome");
```

🤖 Prompt for AI Agents
In apps/web/app/(landing)/welcome/page.tsx lines 1-11, the file needs a scoped
logger to support the recent logging change: add an import for the shared logger
module from its canonical path (e.g. the app's logger file) at the top of the
file and create a scoped logger instance/child (e.g. with a service/module name
like "welcome-page") right after the imports so the page can emit contextual
logs; use that scoped logger variable for any subsequent logging in this file.
```tsx
// Debug logging
console.log("[welcome] User check:", {
  userId: session.user.id,
  completedOnboardingAt: user.completedOnboardingAt,
  force: searchParams.force,
  shouldRedirect: !searchParams.force && user.completedOnboardingAt,
  redirectTo: env.NEXT_PUBLIC_APP_HOME_PATH,
});
```
🛠️ Refactor suggestion
Replace console.log with scoped logger per guidelines
Guideline violations:
- Don’t use console in TS/TSX files.
- Use createScopedLogger for backend/server logs.
Apply this diff to replace console usage:
```diff
-  // Debug logging
-  console.log("[welcome] User check:", {
-    userId: session.user.id,
-    completedOnboardingAt: user.completedOnboardingAt,
-    force: searchParams.force,
-    shouldRedirect: !searchParams.force && user.completedOnboardingAt,
-    redirectTo: env.NEXT_PUBLIC_APP_HOME_PATH,
-  });
+  // Debug logging
+  logger.debug("User check", {
+    userId: session.user.id,
+    completedOnboardingAt: user.completedOnboardingAt,
+    force,
+    shouldRedirect: !force && user.completedOnboardingAt,
+    redirectTo: env.NEXT_PUBLIC_APP_HOME_PATH,
+  });
```

You’ll also need to import and instantiate the logger (see separate suggestion below).
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In apps/web/app/(landing)/welcome/page.tsx around lines 35-43, replace the
console.log debug call with a scoped logger: import createScopedLogger at the
top, instantiate a logger for this module (e.g., const logger =
createScopedLogger("welcome:page")), remove the console.log call and replace it
with logger.debug (or logger.info) passing the same message and object; ensure
the import path matches your logger utility and that this code runs only on the
server-side where createScopedLogger is available.
```ts
if (!user) {
  logger.error("User not found or AI not configured", { emailAccountId });
  return NextResponse.json({ error: "Not authenticated" });
}
```
Return 401 Unauthorized for unauthenticated requests
Currently returns 200 by default. Set explicit 401 to align with API semantics and clients expecting an auth error.
```diff
-  if (!user) {
-    logger.error("User not found or AI not configured", { emailAccountId });
-    return NextResponse.json({ error: "Not authenticated" });
-  }
+  if (!user) {
+    logger.error("User not found or AI not configured", { emailAccountId });
+    return NextResponse.json({ error: "Not authenticated" }, { status: 401 });
+  }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
  if (!user) {
    logger.error("User not found or AI not configured", { emailAccountId });
    return NextResponse.json({ error: "Not authenticated" }, { status: 401 });
  }
```
🤖 Prompt for AI Agents
In apps/web/app/api/chat/route.ts around lines 37 to 40, the handler returns a
JSON error with default 200 when the user is missing; change the NextResponse to
return an explicit 401 Unauthorized. Replace the current NextResponse.json({
error: "Not authenticated" }) with a response that sets the HTTP status to 401
(e.g., NextResponse.json({ error: "Not authenticated" }, { status: 401 })) so
clients receive the correct unauthorized status.
```yaml
command: >
  sh -c "
    apk add --no-cache curl dcron &&
    echo '0 1 * * * curl -H \"Authorization: Bearer $$CRON_SECRET\" http://web:3000/api/google/watch/all >/dev/null 2>&1' | crontab - &&
    crond -f -l 2
  "
```
🛠️ Refactor suggestion
Cron job won’t expand CRON_SECRET; $$CRON_SECRET becomes PID + 'CRON_SECRET'
Inside a single-quoted crontab line, using $$ produces the shell PID at runtime, not an env var. The header will be wrong (e.g., “Bearer 12345CRON_SECRET”). Also the escaping is needlessly complex.
Apply a minimal in-place fix:
```diff
-    echo '0 1 * * * curl -H \"Authorization: Bearer $$CRON_SECRET\" http://web:3000/api/google/watch/all >/dev/null 2>&1' | crontab - &&
+    echo '0 1 * * * curl -H \"Authorization: Bearer $CRON_SECRET\" http://web:3000/api/google/watch/all >/dev/null 2>&1' | crontab - &&
```
+ echo '0 1 * * * curl -H \"Authorization: Bearer $CRON_SECRET\" http://web:3000/api/google/watch/all >/dev/null 2>&1' | crontab - &&Recommended (cleaner, avoids quoting gotchas and allows retries):
```diff
-    echo '0 1 * * * curl -H \"Authorization: Bearer $CRON_SECRET\" http://web:3000/api/google/watch/all >/dev/null 2>&1' | crontab - &&
+    printf '0 1 * * * curl -fsS --retry 3 --retry-all-errors -H "Authorization: Bearer $CRON_SECRET" http://web:3000/api/google/watch/all >/dev/null 2>&1\n' | crontab - &&
```
+ printf '0 1 * * * curl -fsS --retry 3 --retry-all-errors -H "Authorization: Bearer $CRON_SECRET" http://web:3000/api/google/watch/all >/dev/null 2>&1\n' | crontab - &&Even better (preferred): bake a small script and call that from cron. You already added docker/cron/cron-job.sh in this PR — wire it up:
- Mount it under the cron service:

```yaml
volumes:
  - ./docker/cron/cron-job.sh:/usr/local/bin/cron-job.sh:ro
```

- Use it in command, with the crontab entry: `0 1 * * * /usr/local/bin/cron-job.sh`
This approach avoids env expansion and escaping pitfalls entirely.
🤖 Prompt for AI Agents
In docker-compose.yml around lines 75 to 80, the current crontab uses
$$CRON_SECRET inside a single-quoted line which expands to the shell PID plus
literal "CRON_SECRET" instead of the env var; replace this with a call to the
script you already added to the repo to avoid expansion/escaping issues: add a
volume mapping under the cron service mounting ./docker/cron/cron-job.sh to
/usr/local/bin/cron-job.sh:ro, ensure the script is executable and reads
CRON_SECRET from the environment, and change the crontab entry to run
/usr/local/bin/cron-job.sh (e.g., "0 1 * * * /usr/local/bin/cron-job.sh") so
cron executes the script with proper env var expansion rather than trying to
inline the header string.
```shell
curl -# "$URL" > "$script_file" || exit
chmod 775 "$script_file"
```
Fix unsafe curl usage and overly-permissive permissions
- curl without --fail/--location/--silent can succeed on HTTP 4xx and write HTML, which would then be executed. Use -fsSL and -o to fail fast and avoid partial/HTML downloads.
- chmod 775 is unnecessary; 755 is sufficient and less permissive.
Apply this diff:
```diff
-curl -# "$URL" > "$script_file" || exit
-chmod 775 "$script_file"
+curl --fail --show-error --location --silent --output "$script_file" "$URL"
+chmod 755 "$script_file"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
curl --fail --show-error --location --silent --output "$script_file" "$URL"
chmod 755 "$script_file"
```
🤖 Prompt for AI Agents
In install_gcloud.sh around lines 10-11, replace the unsafe curl invocation and
overly-permissive chmod: use curl with flags -fsSL and the -o option to write to
the target file and fail fast on HTTP errors (check curl's exit status and exit
on failure), and change the file permissions from 775 to 755 to be less
permissive.
```shell
# Show only a small, safe prefix if present
if [ -n "${UPSTASH_REDIS_TOKEN:-}" ]; then
  echo "✓ UPSTASH_REDIS_TOKEN is set: ${UPSTASH_REDIS_TOKEN:0:10}..."
fi
```
🛠️ Refactor suggestion
Avoid echoing any portion of secrets in logs
Even truncated prefixes can be sensitive in shared logs. Recommend only confirming presence.
Apply this diff:
```diff
-# Show only a small, safe prefix if present
-if [ -n "${UPSTASH_REDIS_TOKEN:-}" ]; then
-  echo "✓ UPSTASH_REDIS_TOKEN is set: ${UPSTASH_REDIS_TOKEN:0:10}..."
-fi
+# Confirm presence of sensitive env var without echoing any portion
+if [ -n "${UPSTASH_REDIS_TOKEN:-}" ]; then
+  echo "✓ UPSTASH_REDIS_TOKEN is set"
+fi
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
# Confirm presence of sensitive env var without echoing any portion
if [ -n "${UPSTASH_REDIS_TOKEN:-}" ]; then
  echo "✓ UPSTASH_REDIS_TOKEN is set"
fi
```
🤖 Prompt for AI Agents
In run-docker.sh around lines 12 to 15, the script currently prints a truncated
portion of UPSTASH_REDIS_TOKEN which leaks secret material; change the echo to
only confirm presence without printing any part of the token (e.g., echo "✓
UPSTASH_REDIS_TOKEN is set"), leaving the conditional intact so it still checks
for non-empty value but does not output any substring of the secret.
```shell
source apps/web/.env
set +a  # Disable export of all variables
echo "✓ Sourced apps/web/.env"
echo "✓ UPSTASH_REDIS_TOKEN is set: ${UPSTASH_REDIS_TOKEN:0:10}..."
```
Avoid echoing secret material (even partially).
Printing the first 10 chars of UPSTASH_REDIS_TOKEN is unnecessary leakage. Mask it or omit entirely.
Apply this diff:
```diff
-echo "✓ UPSTASH_REDIS_TOKEN is set: ${UPSTASH_REDIS_TOKEN:0:10}..."
+if [ -n "${UPSTASH_REDIS_TOKEN:-}" ]; then
+  echo "✓ UPSTASH_REDIS_TOKEN is set"
+else
+  echo "❌ Error: UPSTASH_REDIS_TOKEN is not set"
+  exit 1
+fi
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
if [ -n "${UPSTASH_REDIS_TOKEN:-}" ]; then
  echo "✓ UPSTASH_REDIS_TOKEN is set"
else
  echo "❌ Error: UPSTASH_REDIS_TOKEN is not set"
  exit 1
fi
```
🤖 Prompt for AI Agents
In run-docker.sh.backup around line 9, avoid printing any portion of
UPSTASH_REDIS_TOKEN; remove the partial token echo and instead log a
non-sensitive confirmation (for example: "✓ UPSTASH_REDIS_TOKEN is set" or a
masked placeholder), or only indicate presence/length without revealing
characters; update the echo to omit the token substring and ensure no secret
data is written to logs or stdout.
```shell
if docker-compose -f "$temp_file" config >/dev/null 2>&1; then
  mv "$temp_file" "$COMPOSE_FILE"
  log_message "Successfully updated Docker Compose file with restart policies"
else
  log_message "WARNING: Generated docker-compose.yml is invalid, keeping original"
  rm -f "$temp_file"
```
🛠️ Refactor suggestion
Compose CLI detection is inconsistent; falls back to docker-compose only here
You correctly check for either “docker-compose” or “docker compose” earlier, but validation uses only docker-compose. This will fail on systems without the standalone binary.
Apply this diff to use a wrapper function and call it here:
```diff
-if docker-compose -f "$temp_file" config >/dev/null 2>&1; then
+if docker_compose -f "$temp_file" config >/dev/null 2>&1; then
```

Add this helper near your utility functions:
```shell
# Compose wrapper supporting both the plugin and standalone binary
docker_compose() {
  if command -v docker-compose >/dev/null 2>&1; then
    docker-compose "$@"
  else
    docker compose "$@"
  fi
}
```
In setup-autostart.sh around lines 196 to 201, the script currently validates
the generated compose file using only the standalone docker-compose binary which
breaks systems that only have the Docker Compose plugin; add a docker_compose()
wrapper function near the other utility functions that runs "docker-compose" if
available or "docker compose" otherwise, then replace the direct "docker-compose
-f \"$temp_file\" config" call with the wrapper (docker_compose -f "$temp_file"
config) so validation works for both plugin and standalone installations; ensure
the wrapper forwards all arguments and uses the same exit/status behavior as the
original command.
## Problem

Refresh tokens were being cleared from the database after ~1 hour when access tokens expired, causing authentication failures.

## Root Cause

The `handleGmailPermissionsCheck` function was incorrectly wiping ALL tokens (including refresh_token) on ANY refresh error, including temporary network issues or rate limiting.

## Solution

1. **Modified gmail/permissions.ts**:
   - Only clear access_token and expires_at on refresh failure
   - Never clear refresh_token unless it's truly invalid (invalid_grant)
   - Distinguish between temporary errors and permanent auth failures

2. **Improved auth.ts**:
   - Changed prompt from "select_account+consent" to "consent" to force refresh token
   - Added logic to preserve existing refresh tokens when new auth doesn't provide one
   - Prevent saveTokens from overwriting existing refresh tokens with null

3. **Added comprehensive logging** to track token lifecycle

This ensures refresh tokens persist through temporary failures and are only cleared when truly invalid.

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
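The clearing policy in the solution above can be sketched as a small classifier (a sketch only: the error-message check and field names are illustrative, not the exact implementation in gmail/permissions.ts):

```typescript
// Decide which token fields to null out after a refresh failure.
// "invalid_grant" means the refresh token itself is dead, so force full
// re-auth; any other error is treated as transient and only the
// short-lived access token is dropped.
type ClearPlan = { access_token: null; expires_at: null; refresh_token?: null };

function tokensToClear(error: { message?: string }): ClearPlan {
  const plan: ClearPlan = { access_token: null, expires_at: null };
  if (error.message?.includes("invalid_grant")) {
    plan.refresh_token = null; // permanent failure: clear everything
  }
  return plan;
}
```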
Actionable comments posted: 4
🧹 Nitpick comments (4)
apps/web/utils/auth.ts (2)
91-93: Retain select_account in Google OAuth prompt to avoid sticky account issues

Using only "consent" can cause Google to reuse the currently active account, which makes it harder to switch accounts during re-auth. Consider including "select_account" alongside "consent".
If multi-account users are expected, apply this change:
- prompt: "consent", // Force consent to ensure refresh token
+ prompt: "consent select_account", // Force consent and allow account selection
429-437: Post-link verification logging looks good; consider including providerAccountId for easier cross-correlation

The savedAccount check and boolean summary are helpful. Adding providerAccountId will make correlating logs and DB entries easier when users have multiple linked accounts.

  logger.info("[linkAccount] Successfully linked account", {
    email: user.email,
    userId: account.userId,
    accountId: account.id,
+   provider: account.providerId,
+   providerAccountId: account.accountId,
    tokensInDb: {
      hasAccessToken: !!savedAccount?.access_token,
      hasRefreshToken: !!savedAccount?.refresh_token,
    },
  });

Also applies to: 442-446
apps/web/utils/gmail/permissions.ts (2)
34-35: URL-encode access token in the tokeninfo requestGuard against any unexpected characters in the access token by encoding it.
- `https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=${accessToken}`,
+ `https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=${encodeURIComponent(accessToken)}`,
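A quick illustration of why the encoding matters. The token value below is made up to contain URL-reserved characters; real Google tokens are usually URL-safe, which is exactly why this bug would only bite occasionally.

```typescript
// Hypothetical access token containing URL-reserved characters.
const accessToken = "ya29.a0/abc+def&x=1";

// Without encoding, "&x=1" becomes a separate query parameter and "+"
// decodes as a space, silently truncating/corrupting the token value.
const unsafe = `https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=${accessToken}`;

// With encodeURIComponent, the whole token survives as a single value.
const safe = `https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=${encodeURIComponent(accessToken)}`;

console.log(new URL(unsafe).searchParams.get("access_token")); // "ya29.a0/abc def"
console.log(new URL(safe).searchParams.get("access_token")); // "ya29.a0/abc+def&x=1"
```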
11-11: Remove stale TODO or clarify action

The catch block below already handles network errors. Either remove this TODO or rephrase it to track any remaining edge cases you still plan to handle.

- // TODO: this can also error on network error
+ // Network errors are handled below in the catch block.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- apps/web/utils/auth.ts (4 hunks)
- apps/web/utils/gmail/permissions.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (9)
apps/web/**/*.{ts,tsx}
📄 CodeRabbit Inference Engine (apps/web/CLAUDE.md)
apps/web/**/*.{ts,tsx}: Use TypeScript with strict null checks
Path aliases: Use @/ for imports from project root
Use proper error handling with try/catch blocks
Format code with Prettier
Leverage TypeScript inference for better DX
Files:
apps/web/utils/gmail/permissions.ts, apps/web/utils/auth.ts
!{.cursor/rules/*.mdc}
📄 CodeRabbit Inference Engine (.cursor/rules/cursor-rules.mdc)
Never place rule files in the project root, in subdirectories outside .cursor/rules, or in any other location
Files:
apps/web/utils/gmail/permissions.ts, apps/web/utils/auth.ts
**/*.ts
📄 CodeRabbit Inference Engine (.cursor/rules/form-handling.mdc)
**/*.ts: The same validation should be done in the server action too
Define validation schemas using Zod
Files:
apps/web/utils/gmail/permissions.ts, apps/web/utils/auth.ts
apps/web/utils/gmail/**/*.ts
📄 CodeRabbit Inference Engine (.cursor/rules/gmail-api.mdc)
Keep provider-specific implementation details isolated in the appropriate utils subfolder (e.g., 'apps/web/utils/gmail/')
Files:
apps/web/utils/gmail/permissions.ts
**/*.{ts,tsx}
📄 CodeRabbit Inference Engine (.cursor/rules/logging.mdc)
**/*.{ts,tsx}: Use createScopedLogger for logging in backend TypeScript files
Typically add the logger initialization at the top of the file when using createScopedLogger
Only use .with() on a logger instance within a specific function, not for a global logger
Import Prisma in the project using import prisma from "@/utils/prisma";
**/*.{ts,tsx}: Don't use TypeScript enums.
Don't use TypeScript const enum.
Don't use the TypeScript directive @ts-ignore.
Don't use primitive type aliases or misleading types.
Don't use empty type parameters in type aliases and interfaces.
Don't use any or unknown as type constraints.
Don't use implicit any type on variable declarations.
Don't let variables evolve into any type through reassignments.
Don't use non-null assertions with the ! postfix operator.
Don't misuse the non-null assertion operator (!) in TypeScript files.
Don't use user-defined types.
Use as const instead of literal types and type annotations.
Use export type for types.
Use import type for types.
Don't declare empty interfaces.
Don't merge interfaces and classes unsafely.
Don't use overload signatures that aren't next to each other.
Use the namespace keyword instead of the module keyword to declare TypeScript namespaces.
Don't use TypeScript namespaces.
Don't export imported variables.
Don't add type annotations to variables, parameters, and class properties that are initialized with literal expressions.
Don't use parameter properties in class constructors.
Use either T[] or Array<T> consistently.
Initialize each enum member value explicitly.
Make sure all enum members are literal values.
Files:
apps/web/utils/gmail/permissions.ts, apps/web/utils/auth.ts
apps/web/utils/**
📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
Create utility functions in
utils/folder for reusable logic
Files:
apps/web/utils/gmail/permissions.ts, apps/web/utils/auth.ts
apps/web/utils/**/*.ts
📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
apps/web/utils/**/*.ts: Use lodash utilities for common operations (arrays, objects, strings)
Import specific lodash functions to minimize bundle size
Files:
apps/web/utils/gmail/permissions.ts, apps/web/utils/auth.ts
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit Inference Engine (.cursor/rules/ultracite.mdc)
**/*.{js,jsx,ts,tsx}: Don't use <img> elements in Next.js projects.
Don't use <head> elements in Next.js projects.
Don't use namespace imports.
Don't access namespace imports dynamically.
Don't use global eval().
Don't use console.
Don't use debugger.
Don't use var.
Don't use with statements in non-strict contexts.
Don't use the arguments object.
Don't use consecutive spaces in regular expression literals.
Don't use the comma operator.
Don't use unnecessary boolean casts.
Don't use unnecessary callbacks with flatMap.
Use for...of statements instead of Array.forEach.
Don't create classes that only have static members (like a static namespace).
Don't use this and super in static contexts.
Don't use unnecessary catch clauses.
Don't use unnecessary constructors.
Don't use unnecessary continue statements.
Don't export empty modules that don't change anything.
Don't use unnecessary escape sequences in regular expression literals.
Don't use unnecessary labels.
Don't use unnecessary nested block statements.
Don't rename imports, exports, and destructured assignments to the same name.
Don't use unnecessary string or template literal concatenation.
Don't use String.raw in template literals when there are no escape sequences.
Don't use useless case statements in switch statements.
Don't use ternary operators when simpler alternatives exist.
Don't use useless this aliasing.
Don't initialize variables to undefined.
Don't use the void operators (they're not familiar).
Use arrow functions instead of function expressions.
Use Date.now() to get milliseconds since the Unix Epoch.
Use .flatMap() instead of map().flat() when possible.
Use literal property access instead of computed property access.
Don't use parseInt() or Number.parseInt() when binary, octal, or hexadecimal literals work.
Use concise optional chaining instead of chained logical expressions.
Use regular expression literals instead of the RegExp constructor when possible.
Don't use number literal object member names th...
Files:
apps/web/utils/gmail/permissions.ts, apps/web/utils/auth.ts
!pages/_document.{js,jsx,ts,tsx}
📄 CodeRabbit Inference Engine (.cursor/rules/ultracite.mdc)
!pages/_document.{js,jsx,ts,tsx}: Don't import next/document outside of pages/_document.jsx in Next.js projects.
Don't import next/document outside of pages/_document.jsx in Next.js projects.
Files:
apps/web/utils/gmail/permissions.ts, apps/web/utils/auth.ts
🧬 Code Graph Analysis (1)
apps/web/utils/auth.ts (1)
apps/web/utils/encryption.ts (1)
encryptToken(29-51)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Static Code Analysis Bash
- GitHub Check: Jit Security
🔇 Additional comments (3)
apps/web/utils/auth.ts (1)
313-325: Good diagnostic logging; non-sensitive and actionable

Logging the presence and length of tokens without dumping their values is a good balance between debuggability and security.
apps/web/utils/gmail/permissions.ts (2)
87-104: Refresh-on-invalid_token + re-check flow looks solid

Attempting a one-time refresh on invalid_token and immediately re-checking permissions with the new access token is correct and minimizes unnecessary user prompts.
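The flow praised here can be sketched abstractly. `checkWithOneRefresh` and its parameters are hypothetical names for illustration, not the repo's actual API:

```typescript
// Sketch of the "refresh once, then re-check" flow: try the current
// access token, refresh at most once on failure, and re-run the check
// with the fresh token before falling back to an error/re-auth path.
async function checkWithOneRefresh(
  check: (token: string) => Promise<boolean>,
  refreshToken: () => Promise<string | null>,
  currentToken: string,
): Promise<boolean> {
  if (await check(currentToken)) return true;
  const fresh = await refreshToken(); // single refresh attempt, no loop
  return fresh ? await check(fresh) : false;
}
```

Capping the flow at one refresh attempt is what keeps it from spinning on a permanently bad token while still recovering transparently from an expired access token.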
142-147: Prisma schema verified: token fields and relation are correct

I confirmed in apps/web/prisma/schema.prisma that
- model Account includes refresh_token, access_token, and expires_at
- model EmailAccount has accountId @relation(fields: [accountId], references: [id])
No changes needed here.
// Critical: Check if refresh token is missing
if (!account.refreshToken) {
  logger.error(
    "[linkAccount] CRITICAL: No refresh token received from OAuth provider",
    {
      accountId: account.id,
      userId: account.userId,
      provider: account.providerId,
      hint: "User may need to revoke app permissions and re-authenticate",
    },
  );

  // Check if there's an existing account with a refresh token
  const existingAccount = await prisma.account.findUnique({
    where: {
      provider_providerAccountId: {
        provider: account.providerId,
        providerAccountId: account.accountId,
      },
    },
    select: { refresh_token: true },
  });

  if (existingAccount?.refresh_token) {
    logger.info(
      "[linkAccount] Found existing refresh token in database, will preserve it",
    );
    // Don't update the refresh token if we don't have a new one
    account.refreshToken = existingAccount.refresh_token;
  } else {
    logger.error(
      "[linkAccount] No existing refresh token found either - user must re-authenticate",
    );
  }
}
Assigning to account.refreshToken in an after-hook does not persist; preservation logic is ineffective
This runs in an "after" database hook. At this point, the DB row has already been written by the provider adapter. Two issues:
- existingAccount is read after the write, so if the provider omitted refresh_token, you will most likely read null here (i.e., the previous value is already clobbered).
- Mutating account.refreshToken in-memory doesn’t persist anything; no subsequent write uses this mutated value.
Net effect: this block won’t actually preserve a prior refresh token.
Recommended fixes (choose one):
- Preferred: Intercept before the write to prevent nulling out an existing refresh token. Add a databaseHooks.account.update.before and databaseHooks.account.create.before to drop refresh_token from the update payload when it’s missing/empty. If better-auth lacks a before hook for account, implement this protection in your Prisma adapter or Prisma middleware/extension layer.
- At minimum (stopgap): If you do locate a non-null DB value you want to preserve, explicitly write it back.
Apply a stopgap write here so the intent is not a no-op:

  if (existingAccount?.refresh_token) {
    logger.info(
      "[linkAccount] Found existing refresh token in database, will preserve it",
    );
-   // Don't update the refresh token if we don't have a new one
-   account.refreshToken = existingAccount.refresh_token;
+   // Ensure the DB keeps the existing refresh token
+   await prisma.account.update({
+     where: { id: account.id },
+     data: { refresh_token: existingAccount.refresh_token },
+   });
  } else {

And consider moving this preservation to a "before" hook or Prisma middleware so the token cannot be nulled in the first place. I can draft that refactor if helpful.
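The "before" hook idea boils down to a small payload sanitizer. This is a sketch under the assumption that the field names mirror the Prisma Account model; it is not the actual better-auth hook API, and where to wire it in (hook vs. Prisma middleware) is left open:

```typescript
// Sketch: drop a missing/empty refresh_token from an update payload so a
// previously stored token is never overwritten with null.
// (Hypothetical helper; wire it into a before-hook or Prisma middleware.)
type AccountTokenUpdate = {
  access_token?: string | null;
  refresh_token?: string | null;
  expires_at?: Date | null;
};

function preserveRefreshToken(update: AccountTokenUpdate): AccountTokenUpdate {
  const { refresh_token, ...rest } = update;
  // Only keep refresh_token in the payload when a real value is present;
  // omitting the key entirely means the DB column is left untouched.
  return refresh_token ? { ...rest, refresh_token } : rest;
}
```

Running every incoming account update through a filter like this before the write prevents the clobbering in the first place, which is why it is preferable to the after-the-fact stopgap.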
🤖 Prompt for AI Agents
In apps/web/utils/auth.ts around lines 326-361: the after-hook mutates
account.refreshToken and reads existingAccount after the provider write, so the
prior token is likely already clobbered and the in-memory assignment is never
persisted; fix by either (preferred) moving preservation to a pre-write step
(add databaseHooks.account.update.before and account.create.before, or add
Prisma middleware/adapter logic) that removes refresh_token from the update
payload when the incoming value is missing/empty so the DB value is never
nulled, or (stopgap) perform an explicit update here when you find an existing
non-null refresh_token (i.e., run a prisma.account.update to write the preserved
token back); ensure you do not rely on in-memory mutation alone and add logging
for both paths.
// Don't save if we're about to null out an existing refresh token
if (!refreshToken && !tokens.refresh_token) {
  logger.warn("Skipping token save - would null out refresh token", {
    providerAccountId,
    emailAccountId,
    hasAccessToken: !!tokens.access_token,
  });

  // Still update access token and expiry if provided
  if (tokens.access_token) {
    const data: any = {
      access_token: tokens.access_token,
      expires_at: tokens.expires_at
        ? new Date(tokens.expires_at * 1000)
        : null,
    };

    if (emailAccountId) {
      if (data.access_token)
        data.access_token = encryptToken(data.access_token) || undefined;

      await prisma.emailAccount.update({
        where: { id: emailAccountId },
        data: { account: { update: data } },
      });
    } else if (providerAccountId) {
      await prisma.account.update({
        where: {
          provider_providerAccountId: {
            provider,
            providerAccountId,
          },
        },
        data,
      });
    }
  }
🛠️ Refactor suggestion
Avoid wiping expires_at and passing undefined to Prisma in the “skip-save” path
- Building data with expires_at: null when expires_at is absent will erase the existing expiry timestamp, which is likely unintended.
- Passing undefined properties to Prisma updates is brittle. Build update objects conditionally.
Use conditional construction and only include fields that are present:
- // Still update access token and expiry if provided
- if (tokens.access_token) {
- const data: any = {
- access_token: tokens.access_token,
- expires_at: tokens.expires_at
- ? new Date(tokens.expires_at * 1000)
- : null,
- };
-
- if (emailAccountId) {
- if (data.access_token)
- data.access_token = encryptToken(data.access_token) || undefined;
-
- await prisma.emailAccount.update({
- where: { id: emailAccountId },
- data: { account: { update: data } },
- });
- } else if (providerAccountId) {
- await prisma.account.update({
- where: {
- provider_providerAccountId: {
- provider,
- providerAccountId,
- },
- },
- data,
- });
- }
- }
+ // Still update access token and expiry if provided (without clearing existing values)
+ if (tokens.access_token || typeof tokens.expires_at === "number") {
+ const update: Record<string, unknown> = {};
+ if (tokens.access_token) {
+ update.access_token = tokens.access_token;
+ }
+ if (typeof tokens.expires_at === "number") {
+ update.expires_at = new Date(tokens.expires_at * 1000);
+ }
+
+ if (emailAccountId) {
+ // Encrypt nested account token updates when going through EmailAccount
+ if (update.access_token) {
+ update.access_token =
+ encryptToken(update.access_token as string) || undefined;
+ }
+ await prisma.emailAccount.update({
+ where: { id: emailAccountId },
+ data: { account: { update } },
+ });
+ } else if (providerAccountId) {
+ await prisma.account.update({
+ where: {
+ provider_providerAccountId: {
+ provider,
+ providerAccountId,
+ },
+ },
+ data: update,
+ });
+ }
+ }

Follow-up: Consider guarding the "full save" path below similarly so undefined values are not included in updates (especially access_token).
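The conditional-construction idea in the diff above can be isolated into a tiny pure function, which also makes it unit-testable. The names below are illustrative, not the repo's actual helpers:

```typescript
// Sketch: build a Prisma-style update object containing only the fields
// actually present in the incoming tokens, so absent values never
// clobber stored ones (no `expires_at: null`, no `undefined` keys).
type OAuthTokens = { access_token?: string; expires_at?: number };

function buildTokenUpdate(tokens: OAuthTokens): Record<string, unknown> {
  const update: Record<string, unknown> = {};
  if (tokens.access_token) update.access_token = tokens.access_token;
  if (typeof tokens.expires_at === "number") {
    // expires_at arrives as Unix seconds; store it as a Date.
    update.expires_at = new Date(tokens.expires_at * 1000);
  }
  return update;
}
```

Because the object simply omits missing fields, the same helper works for both the nested `account: { update }` path and the direct `prisma.account.update` path.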
} catch (refreshError) {
  // Log the specific error for debugging
  logger.warn("Failed to refresh Gmail token", {
    emailAccountId,
    error:
      refreshError instanceof Error
        ? refreshError.message
        : String(refreshError),
    isInvalidGrant:
      refreshError instanceof Error &&
      refreshError.message.includes("invalid_grant"),
  });

  // Only clear tokens if it's truly an invalid_grant error
  if (
    refreshError instanceof Error &&
    refreshError.message.includes("invalid_grant")
  ) {
    // This means the refresh token is actually invalid
    // Fall through to cleanup
  } else {
    // For other errors (network, temporary), don't clear tokens
    return {
🛠️ Refactor suggestion
Harden invalid_grant detection; don’t parse error message strings
Relying on refreshError.message.includes("invalid_grant") is brittle and can silently break with library changes. Prefer extracting a structured error code (e.g., from error.response.data.error for Google/gaxios/axios errors), then branching on that.
Apply this diff within the catch block to use a structured code and capture a flag for the cleanup path:
} catch (refreshError) {
- // Log the specific error for debugging
- logger.warn("Failed to refresh Gmail token", {
- emailAccountId,
- error:
- refreshError instanceof Error
- ? refreshError.message
- : String(refreshError),
- isInvalidGrant:
- refreshError instanceof Error &&
- refreshError.message.includes("invalid_grant"),
- });
-
- // Only clear tokens if it's truly an invalid_grant error
- if (
- refreshError instanceof Error &&
- refreshError.message.includes("invalid_grant")
- ) {
- // This means the refresh token is actually invalid
- // Fall through to cleanup
- } else {
- // For other errors (network, temporary), don't clear tokens
- return {
- hasAllPermissions: false,
- error: "Temporary error checking permissions. Please try again.",
- missingScopes,
- };
- }
+ const code = extractGoogleOAuthErrorCode(refreshError);
+ const msg =
+ refreshError instanceof Error ? refreshError.message : String(refreshError);
+ const isInvalidGrant = code === "invalid_grant";
+
+ // Log the specific error for debugging
+ logger.warn("Failed to refresh Gmail token", {
+ emailAccountId,
+ error: msg,
+ code,
+ isInvalidGrant,
+ });
+
+ // Only clear tokens if it's truly an invalid_grant error
+ if (isInvalidGrant) {
+ // This means the refresh token is actually invalid
+ // Mark for cleanup
+ invalidGrantOnRefresh = true;
+ // Fall through to cleanup
+ } else {
+ // For other errors (network, temporary), don't clear tokens
+ return {
+ hasAllPermissions: false,
+ error: "Temporary error checking permissions. Please try again.",
+ missingScopes,
+ };
+ }

Add this small helper (place it near the top of the file), and a flag inside handleGmailPermissionsCheck:
// Helper: normalize Google OAuth error code from common error shapes (gaxios/axios/unknown)
function extractGoogleOAuthErrorCode(err: unknown): string | null {
const anyErr = err as any;
const codeFromResponse = anyErr?.response?.data?.error;
if (typeof codeFromResponse === "string") return codeFromResponse;
const msg: string | undefined =
typeof anyErr?.message === "string" ? anyErr.message : undefined;
if (!msg) return null;
if (msg.includes("invalid_grant")) return "invalid_grant";
if (msg.includes("invalid_client")) return "invalid_client";
if (msg.includes("invalid_request")) return "invalid_request";
return null;
}

And inside handleGmailPermissionsCheck, declare a flag at the top (so the cleanup path can inspect it):
export async function handleGmailPermissionsCheck({...}) {
let invalidGrantOnRefresh = false;
// ...
}

🤖 Prompt for AI Agents
In apps/web/utils/gmail/permissions.ts around lines 105 to 127, the catch block
currently detects "invalid_grant" by inspecting refreshError.message which is
brittle; add a small helper near the top of this file named
extractGoogleOAuthErrorCode(err: unknown): string | null that inspects common
shapes (err.response.data.error, err.message) and returns a normalized error
code (e.g., "invalid_grant") or null, declare a boolean flag
invalidGrantOnRefresh at the start of handleGmailPermissionsCheck, call the
helper inside the catch to set invalidGrantOnRefresh and include that normalized
code in the logger, and then use invalidGrantOnRefresh (not message.includes) to
decide whether to clear tokens or return for transient errors so only true OAuth
invalid_grant triggers cleanup.
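Exercised against common error shapes, the proposed helper behaves as follows. The error objects here are hand-built stand-ins for gaxios/axios response shapes, not captures from the real Google client:

```typescript
// Helper from the review: normalize a Google OAuth error code from
// common error shapes (structured response body, message string, unknown).
function extractGoogleOAuthErrorCode(err: unknown): string | null {
  const anyErr = err as any;
  const codeFromResponse = anyErr?.response?.data?.error;
  if (typeof codeFromResponse === "string") return codeFromResponse;
  const msg: string | undefined =
    typeof anyErr?.message === "string" ? anyErr.message : undefined;
  if (!msg) return null;
  if (msg.includes("invalid_grant")) return "invalid_grant";
  if (msg.includes("invalid_client")) return "invalid_client";
  if (msg.includes("invalid_request")) return "invalid_request";
  return null;
}

// Structured gaxios-like error: the code comes from response.data.error.
const structured = { response: { data: { error: "invalid_grant" } } };
// Message-only error: falls back to substring matching.
const messageOnly = new Error("invalid_grant: Token has been expired or revoked.");
// Transient network error: yields null, so tokens are left alone.
const transient = new Error("ECONNRESET");
```

Preferring the structured `response.data.error` field and treating the message match as a fallback is what makes the detection robust to client-library changes in message wording.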
// Only clear tokens if refresh truly failed
logger.error("Gmail token refresh failed - user needs to re-authenticate", {
  emailAccountId,
  hasRefreshToken: !!refreshToken,
});

const emailAccount = await prisma.emailAccount.findUnique({
  where: { id: emailAccountId },
});
if (!emailAccount)
  return { hasAllPermissions: false, error: "Email account not found" };

// DON'T clear the refresh token! Only clear access token and expiry
// The refresh token might still be valid, just temporarily failing
await prisma.account.update({
  where: { id: emailAccount.accountId },
  data: {
    access_token: null,
    refresh_token: null,
    expires_at: null,
    // DO NOT CLEAR refresh_token here - it might still be valid!
  },
Don’t keep an invalid refresh token after invalid_grant; it causes retry loops
The cleanup path intentionally avoids clearing refresh_token, but in the invalid_grant case the refresh token is definitively unusable (revoked/invalid). Leaving it in the DB will lead to repeated failing refresh attempts and a poor UX. Clear refresh_token only when invalidGrantOnRefresh is true; keep it otherwise to avoid losing a good token on transient failures.
Apply this diff to conditionally clear the refresh token and fix the comments:
-// Only clear tokens if refresh truly failed
logger.error("Gmail token refresh failed - user needs to re-authenticate", {
emailAccountId,
hasRefreshToken: !!refreshToken,
});
// DON'T clear the refresh token! Only clear access token and expiry
-// The refresh token might still be valid, just temporarily failing
+// Only clear the refresh token if the refresh failed with invalid_grant.
+// For transient failures, keep it.
await prisma.account.update({
where: { id: emailAccount.accountId },
data: {
access_token: null,
expires_at: null,
- // DO NOT CLEAR refresh_token here - it might still be valid!
+ ...(invalidGrantOnRefresh ? { refresh_token: null } : {}),
},
});
🤖 Prompt for AI Agents
In apps/web/utils/gmail/permissions.ts around lines 136-156, the current cleanup
always preserves the refresh_token which causes retry loops when the token is
definitively invalid; update the logic to clear refresh_token only when the
refresh error is an invalid_grant (use the existing invalidGrantOnRefresh flag
or add one), by setting refresh_token to null in the prisma.account.update data
only when invalidGrantOnRefresh is true (otherwise omit the field or leave it
unchanged), and update the inline comments to explain we clear refresh tokens
only for invalid_grant errors and retain them for transient failures.
Hey, this PR has a tonne of extra stuff in it. But will try and take the parts that are important.
Yeah sorry I'm literally just trying to get a working version on my home server that doesn't immediately do this after like a single refresh. Any ideas? I'm banging my (claude's) head against the wall with this one.
I think this could be a good feature though -- but I think it's one of the random things you're talking about lol

Recording.at.2025-08-20.05.36.52.mp4
Added comprehensive documentation of the critical refresh token bug and its fix, including:

- Problem description and symptoms
- Root cause analysis
- Applied fixes and code changes
- Testing procedures
- Recovery steps for affected users

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
apps/web/CLAUDE.md (2)
235-252: ExampleForm uses useCallback but doesn't import it.

Copy-paste of this sample will fail. Import useCallback from React.

-import { useForm, SubmitHandler } from "react-hook-form";
+import { useForm, SubmitHandler } from "react-hook-form";
+import { useCallback } from "react";
305-330: Bug in sample: data.examples vs data.items; remove unused import.

GET route returns { items }, but the component maps data?.examples. Also, Button is imported but unused.

-import { Button } from "@/components/ui/button";
@@
-      {data?.examples.map((example) => (
+      {data?.items.map((example) => (
         <div key={example.id} className="border p-4 rounded">
           <h3 className="font-semibold">{example.name}</h3>
           <p className="text-gray-600">{example.email}</p>
           {example.description && (
             <p className="text-sm text-gray-500">{example.description}</p>
           )}
         </div>
       ))}
🧹 Nitpick comments (5)
apps/web/CLAUDE.md (5)
36-41: Generalize systemd unit details and fix capitalization. "SystemD" should be "systemd", and the hard-coded home path is user-specific. Recommend a neutral example path to avoid confusion.

```diff
-### SystemD Service
+### systemd service
 @@
-- Working directory: `/home/jason/services/inbox-zero`
+- Working directory: `/path/to/inbox-zero`
```
70-70: Use consistent product naming: Next.js (not NextJS). Minor consistency fix.

```diff
-- NextJS app router structure with (app) directory
+- Next.js app router structure with (app) directory
```
388-401: Normalize file paths for consistency. Use repo-relative or alias paths (e.g. `apps/web/utils/gmail/permissions.ts` or `@/utils/gmail/permissions.ts`) instead of "/utils/...".

```diff
-The `handleGmailPermissionsCheck` function in `/utils/gmail/permissions.ts` was incorrectly wiping ALL tokens (including refresh_token) on ANY refresh error, including temporary network issues or rate limiting.
+The `handleGmailPermissionsCheck` function in `@/utils/gmail/permissions.ts` was incorrectly wiping ALL tokens (including refresh_token) on ANY refresh error, including temporary network issues or rate limiting.
```
421-425: Avoid user-specific absolute paths in commands. Use a repo-relative script location and ensure it exists in version control.

```diff
-node /home/jason/services/inbox-zero/test-token-refresh.js
+node scripts/test-token-refresh.js
```

If you want, I can draft the `scripts/test-token-refresh.js` with a safe read-only probe against your token refresh endpoint.
402-407: No raw tokens logged, but PII (emails) still printed. I didn't find any `console.log` or `logger.*` calls that output raw `access_token` or `refresh_token` values. However, there are several `logger.info`/`logger.error` calls that include user emails (e.g. `{ email }`), which count as PII.

- Add a centralized redaction helper for both tokens and emails, for example:
  - `redactToken(token: string): string` → `abcd…wxyz`
  - `redactEmail(email: string): string` → `j***@example.com`
- Replace any direct logging of tokens or emails with the helper, e.g. `logger.info("…", { email: redactEmail(email), token: redactToken(token) })`
- To double-check for any remaining offenders, you can run:

```shell
rg -nP "logger\.\w+\([^)]*(access_token|refresh_token|email)" -g "apps/web/utils/auth.ts"
```

Let me know if you'd like a draft implementation of these helpers or help rolling out a quick grep-and-replace across the repo.
🔇 Additional comments (4)
apps/web/CLAUDE.md (4)
1-437: Overall: Useful, actionable developer guide. The doc is comprehensive and ties deployment, development, and the OAuth fixes together well. With the minor corrections above, this will be an excellent internal reference.
8-13: Port mapping verified. The Redis service in docker-compose.yml uses `${REDIS_PORT:-6380}:6379`, which matches the documented host port 6380 → container port 6379. No changes needed.
16-23: run-docker.sh resides at the repository root. The helper script is located at `run-docker.sh` in the project root, so the documentation's reference (`./run-docker.sh`) is correct. There are no copies under `apps/web/`; you can ignore or remove any outdated mentions of `apps/web/run-docker.sh`.
49-55: Docker configuration references verified.
- Confirmed `docker/Dockerfile.prod` exists in the repo.
- Confirmed `docker-compose.yml` is present at the project root.
- Confirmed `ghcr.io/elie222/inbox-zero:latest` is published (tags: v2.5.27, latest).

No further changes needed here.

Summary
This PR fixes critical issues with Gmail token refresh that were causing users to repeatedly re-authenticate their Google accounts. It also fixes onboarding redirect logic to properly check user status before showing the welcome screen.
Problem Statement
Users were experiencing:
Changes Made
1. Fixed Gmail Token Refresh Logic (`apps/web/utils/gmail/client.ts`)
   - Treat `expiresAt` as milliseconds and add a 5-minute buffer before expiration
2. Enhanced Error Handling for Token Refresh
3. Fixed Login Redirect Logic (`apps/web/app/(landing)/login/page.tsx`)
   - Previously redirected to `/welcome` regardless of onboarding status
   - Now checks `completedOnboardingAt` before redirecting
4. Added Debug Tooling (`apps/web/app/(app)/force-relink/page.tsx`)
5. Improved Token Persistence Debugging (`apps/web/utils/auth.ts`)

Testing
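The expiry-buffer behavior from item 1 can be exercised in isolation. The sketch below is illustrative: the function name and shape are assumptions based on the description above (milliseconds plus a 5-minute buffer), not the PR's actual code.

```typescript
const FIVE_MINUTES_MS = 5 * 60 * 1000;

// Returns true when the access token should be refreshed: either it has
// already expired, or it expires within the 5-minute safety buffer.
// `expiresAt` is a Unix timestamp in milliseconds, per the fix above.
function needsRefresh(expiresAt: number, now: number = Date.now()): boolean {
  return expiresAt - now <= FIVE_MINUTES_MS;
}

const now = Date.now();
console.log(needsRefresh(now + 10 * 60 * 1000, now)); // false: 10 min left
console.log(needsRefresh(now + 2 * 60 * 1000, now));  // true: inside buffer
console.log(needsRefresh(now - 1000, now));           // true: already expired
```

Refreshing inside the buffer (rather than exactly at expiry) avoids the race where a token passes the check but expires mid-request.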
Migration Notes
No database migrations required. The changes are backward compatible.
Related Issues
This addresses common issues reported about:
🤖 Generated with Claude Code