

Add upstream to my fork #684

Closed
zudsniper wants to merge 3 commits into elie222:main from zudsniper:main

Conversation

@zudsniper commented Aug 14, 2025

Note: UPSTASH_TOKEN is hardcoded here as a temporary workaround because it kept causing trouble.

Summary by CodeRabbit

  • New Features

    • Automatic startup checks and migrations to keep accounts healthy.
    • Daily background sync job to maintain Google watch subscriptions.
  • Bug Fixes

    • Improved handling of corrupted/expired tokens with safer decryption and automatic cleanup; some users may need to re‑authenticate.
    • More reliable login error flow with automatic sign‑out when already logged in.
  • Chores

    • Support for a new authentication secret environment variable with legacy fallback.
    • Docker utilities for easier setup and auto‑start; sample environment updates.

zudsniper and others added 3 commits August 13, 2025 16:09
- Add BETTER_AUTH_SECRET environment variable support with fallback to NEXTAUTH_SECRET
- Fix database schema to include Better-Auth specific fields (accessTokenExpiresAt, refreshTokenExpiresAt)
- Update auth configuration field mappings to match Better-Auth expectations
- Fix saveTokens function to use correct field names
- Re-enable AutoLogOut component for proper session cleanup
- Update .env.example with both BETTER_AUTH_SECRET and legacy NEXTAUTH_SECRET

These changes resolve consistent sign-in issues caused by field mapping mismatches
between NextAuth and Better-Auth, particularly the expires_at field conversion errors.
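The fallback described in the first bullet can be sketched like this (a minimal sketch; the function name `getAuthSecret` and the error message are illustrative, not the PR's actual code):

```typescript
// Illustrative sketch of the BETTER_AUTH_SECRET fallback described above:
// prefer the new variable and fall back to the legacy NEXTAUTH_SECRET.
function getAuthSecret(env: Record<string, string | undefined>): string {
  const secret = env.BETTER_AUTH_SECRET ?? env.NEXTAUTH_SECRET;
  if (!secret) {
    throw new Error(
      "Missing auth secret: set BETTER_AUTH_SECRET (or legacy NEXTAUTH_SECRET)",
    );
  }
  return secret;
}
```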

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
…package

- Add @ai-sdk/ui-utils dependency to package.json
- Update imports in AI utility files to import stepCountIs from main ai package
- Fix imports in utils/ai/example-matches/find-example-matches.ts
- Fix imports in utils/ai/assistant/process-user-request.ts
- Fix imports in utils/ai/group/create-group.ts
- Add stepCountIs to llms/index.ts imports

The @ai-sdk/ui-utils package has been deprecated and stepCountIs moved to the main ai package in AI SDK 5.0.
This resolves the build error "stepCountIs is not exported from @ai-sdk/ui-utils".
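After this change the import comes from the main package: `import { stepCountIs } from "ai";`. For illustration, here is a rough stand-in for what the helper provides, a stop condition that fires once the step count reaches a limit (this re-implementation is a sketch, not the library's code):

```typescript
// Sketch of the behavior of AI SDK 5's stepCountIs (illustrative only).
// It returns a predicate usable as a stopWhen condition: stop the
// multi-step tool-calling loop once `max` steps have completed.
function stepCountIs(max: number): (opts: { steps: unknown[] }) => boolean {
  return ({ steps }) => steps.length >= max;
}

const stopAfterFive = stepCountIs(5);
```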

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
…ition

- Create automatic migration utilities to detect and fix corrupted tokens
- Add startup migrations that run automatically on app initialization
- Improve encryption error handling with better logging and validation
- Add standalone cleanup script for manual token fixes when needed
- Clear old tokens that can't be decrypted to force re-authentication
- Integrate StartupMigrations component into app layout for automatic execution

This ensures migrating users don't need admin intervention - corrupted tokens are
automatically detected and cleared, allowing seamless re-authentication on next login.
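The detect-and-clear flow described above can be sketched like this (the function and field names are assumptions based on the PR summary, not the actual implementation):

```typescript
// Hypothetical sketch of the corrupted-token cleanup described above.
// A token counts as "corrupted" if decryption fails; such tokens are
// cleared so the user re-authenticates on next login instead of hitting
// decrypt errors.
type Account = { id: string; access_token: string | null };

function cleanupCorruptedTokens(
  accounts: Account[],
  tryDecrypt: (token: string) => string | null,
): { fixed: number; total: number } {
  let fixed = 0;
  for (const account of accounts) {
    if (account.access_token && tryDecrypt(account.access_token) === null) {
      account.access_token = null; // clear; forces re-authentication
      fixed++;
    }
  }
  return { fixed, total: accounts.length };
}
```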

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
vercel bot commented Aug 14, 2025

@zudsniper is attempting to deploy a commit to the Inbox Zero OSS Program Team on Vercel.

A member of the Team first needs to authorize it.

CLAassistant commented Aug 14, 2025

CLA assistant check
All committers have signed the CLA.

coderabbitai bot (Contributor) commented Aug 14, 2025

Walkthrough

Adds startup-time token cleanup with new migration utilities and a server component. Updates auth secret sourcing and token fields. Adjusts chat API usage and client flow. Introduces environment/schema changes, a Prisma migration, and multiple Ops additions (Docker cron, autostart, scripts). Minor import reorderings and a login error page tweak.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Env & Config**<br>`apps/web/.env.example`, `apps/web/env.ts`, `apps/web/next.config.ts`, `apps/web/package.json` | Added BETTER_AUTH_SECRET; made NEXTAUTH_SECRET optional; TypeScript build errors ignored; added dependency @ai-sdk/ui-utils. |
| **Auth & Token Migration**<br>`apps/web/utils/auth.ts`, `apps/web/utils/encryption.ts`, `apps/web/utils/migration/fix-encrypted-tokens.ts`, `apps/web/utils/startup.ts`, `apps/web/components/StartupMigrations.tsx`, `apps/web/app/layout.tsx`, `apps/web/scripts/fix-corrupted-tokens.ts` | Multi-source secret resolution; improved decrypt validation/logging; new routines to detect/fix corrupted tokens and decide cleanup; startup runner and server component to execute migrations; CLI script to repair tokens. |
| **Database Schema**<br>`apps/web/prisma/schema.prisma`, `apps/web/prisma/migrations/20250623222304_/migration.sql` | Account: removed default on expires_at; added accessTokenExpiresAt, refreshTokenExpiresAt. Migration drops EmailAccount.digestScheduleId (FK, index, column). |
| **Chat Flow**<br>`apps/web/app/api/chat/route.ts`, `apps/web/providers/ChatProvider.tsx`, `apps/web/utils/ai/...` (assistant/process-user-request.ts, example-matches/find-example-matches.ts, group/create-group.ts, llms/index.ts) | Server: switch to convertToCoreMessages and UIMessage from @ai-sdk/ui-utils. Client: move to api-based chat config with experimental_prepareRequestBody; use chat.append. Other files: import reorders only. |
| **Login UI**<br>`apps/web/app/(landing)/login/error/page.tsx` | Use AutoLogOut with loggedIn derived from data.id. |
| **Docker & Ops**<br>`docker-compose.yml`, `docker-compose.yml.backup`, `docker/cron/cron-job.sh`, `run-docker.sh`, `run-docker.sh.backup`, `setup-autostart.sh` | Added cron service to call API daily; hard-coded Redis token in compose; backup compose file; curl cron script; Docker runner scripts; comprehensive autostart setup (docker restart, cron, systemd) with test/remove modes. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Request as Layout Render
  participant Comp as StartupMigrations (server)
  participant Start as runStartupMigrations()
  participant Check as shouldRunTokenCleanup()
  participant Fix as fixCorruptedTokens()

  Request->>Comp: mount
  Comp->>Start: await runStartupMigrations()
  Start->>Check: evaluate counts
  alt Cleanup needed
    Start->>Fix: fixCorruptedTokens()
    Fix-->>Start: { fixed, total, ids? }
  else Not needed
    Start-->>Comp: noop
  end
  Comp-->>Request: null
```
```mermaid
sequenceDiagram
  participant UI as ChatProvider (client)
  participant API as /api/chat
  participant Core as convertToCoreMessages
  participant Svc as aiProcessAssistantChat

  UI->>UI: experimental_prepareRequestBody({id, messages, requestBody})
  UI->>API: POST /api/chat { id, message, ... }
  API->>Core: convertToCoreMessages(uiMessages)
  API->>Svc: aiProcessAssistantChat(messages=core)
  Svc-->>API: response
  API-->>UI: stream/response
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Poem

A rabbit taps the startup drum,
Tokens cleaned, migrations hum.
Secrets shuffled, keys align,
Chat now speaks in core design.
Cron at dawn will make a call—
Burrows tidy, servers tall.
Hippity hop, ship it all! 🐇✨


jit-ci bot commented Aug 14, 2025

Hi, I’m Jit, a friendly security platform designed to help developers build secure applications from day zero with an MVS (Minimal viable security) mindset.

In case there are security findings, they will be communicated to you as a comment inside the PR.

Hope you’ll enjoy using Jit.

Questions? Comments? Want to learn more? Get in touch with us.

zudsniper (Author) commented

Wait, I meant to open this PR in the other direction. Sorry.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 23

🔭 Outside diff range comments (7)
apps/web/utils/ai/example-matches/find-example-matches.ts (1)

69-93: Add error handling inside listEmails tool execute.

External Gmail calls can fail; surface actionable errors to the agent instead of throwing unhandled exceptions.

Apply this diff:

   const listEmailsTool = (gmail: gmail_v1.Gmail) => ({
     description: "List email messages. Returns max 20 results.",
     parameters: z.object({
       query: z.string().optional().describe("Optional Gmail search query."),
     }),
     execute: async ({ query }: { query: string | undefined }) => {
-      const { messages } = await queryBatchMessages(gmail, {
-        query: `${query || ""} -label:sent`.trim(),
-        maxResults: 20,
-      });
+      try {
+        const { messages } = await queryBatchMessages(gmail, {
+          query: `${query || ""} -label:sent`.trim(),
+          maxResults: 20,
+        });
 
-      const results = messages.map((message) => ({
-        emailId: message.id,
-        from: message.headers.from,
-        subject: message.headers.subject,
-        snippet: message.snippet,
-      }));
+        const results = messages.map((message) => ({
+          emailId: message.id,
+          from: message.headers.from,
+          subject: message.headers.subject,
+          snippet: message.snippet,
+        }));
 
-      for (const result of results) {
-        listedEmails[result.emailId] = result;
-      }
+        for (const result of results) {
+          listedEmails[result.emailId] = result;
+        }
 
-      return results;
+        return results;
+      } catch (err) {
+        const message = err instanceof Error ? err.message : String(err);
+        throw new Error(`Failed to list emails: ${message}`);
+      }
     },
   });
apps/web/utils/ai/group/create-group.ts (1)

41-54: Guard Gmail API errors inside listEmails tool.

Wrap the external call to avoid unhandled promise rejections and provide useful error context.

Apply this diff:

   execute: async ({ query }: { query: string | undefined }) => {
-    const { messages } = await queryBatchMessages(gmail, {
-      query: `${query || ""} -label:sent`.trim(),
-      maxResults: 20,
-    });
+    try {
+      const { messages } = await queryBatchMessages(gmail, {
+        query: `${query || ""} -label:sent`.trim(),
+        maxResults: 20,
+      });
 
-    const results = messages.map((message) => ({
-      from: message.headers.from,
-      subject: message.headers.subject,
-      snippet: message.snippet,
-    }));
+      const results = messages.map((message) => ({
+        from: message.headers.from,
+        subject: message.headers.subject,
+        snippet: message.snippet,
+      }));
 
-    return results;
+      return results;
+    } catch (err) {
+      const message = err instanceof Error ? err.message : String(err);
+      throw new Error(`Failed to list emails: ${message}`);
+    }
   },
apps/web/utils/ai/assistant/process-user-request.ts (3)

64-66: Fix potential crash when messages is empty.

Accessing messages[messages.length - 1] without a length guard can throw. Also keep the “assistant cannot be last” logic.

Apply this diff:

-  if (messages[messages.length - 1].role === "assistant")
+  if (messages.length === 0)
+    throw new Error("No messages provided");
+  if (messages[messages.length - 1].role === "assistant")
     throw new Error("Assistant message cannot be last");

146-159: Avoid logging full message contents; redact sensitive data.

PII-heavy payloads in trace logs violate least-privilege. Log metadata and sizes instead.

Apply this diff:

-  logger.trace("Input", { allMessages });
+  logger.trace("Input", {
+    messageCount: allMessages.length,
+    roles: allMessages.map((m) => m.role),
+    approxChars: allMessages.reduce((n, m) => n + String(m.content).length, 0),
+  });

706-717: Handle "none" by clearing the sender's category (don't return "Category not found")

The tool schema includes "none" but the current implementation looks up a category name and returns "Category not found". I verified updateCategoryForSender is typed to accept a string (non-nullable) in apps/web/utils/categorize/senders/categorize.ts, so passing null currently requires a cast. Recommend explicitly supporting clearing the category and making the API types correct.

Files to change

  • apps/web/utils/ai/assistant/process-user-request.ts — handle category === "none" before finding the category and call updateCategoryForSender with categoryId: null.
  • apps/web/utils/categorize/senders/categorize.ts — update updateCategoryForSender signature to accept categoryId: string | null.

Suggested diffs

  1. process-user-request.ts (inside getUpdateCategoryTool.execute)
  • Add handling for "none" before finding cat:
+    // Support clearing category
+    if (category === "none") {
+      try {
+        await updateCategoryForSender({
+          emailAccountId,
+          sender: existingSender?.email || sender,
+          categoryId: null,
+        });
+        return { success: true };
+      } catch (error) {
+        const message = error instanceof Error ? error.message : String(error);
+        logger.error("Error while clearing category for sender", {
+          ...loggerOptions,
+          sender,
+          error: message,
+        });
+        return { error: "Failed to clear category for sender", message };
+      }
+    }
  2. categorize/senders/categorize.ts
  • Make updateCategoryForSender accept null safely:
-export async function updateCategoryForSender({
-  emailAccountId,
-  sender,
-  categoryId,
-}: {
-  emailAccountId: string;
-  sender: string;
-  categoryId: string;
-}) {
+export async function updateCategoryForSender({
+  emailAccountId,
+  sender,
+  categoryId,
+}: {
+  emailAccountId: string;
+  sender: string;
+  categoryId: string | null;
+}) {

Notes

  • Prisma schema allows categoryId to be null (Newsletter.categoryId is optional), and other code already sets categoryId: null for bulk operations — changing the function signature to string | null is safe and avoids casting hacks.
  • No other call sites need behavioral changes; they will still pass strings. This keeps callers type-safe and allows the assistant tool to clear categories cleanly.
apps/web/utils/llms/index.ts (1)

137-144: createGenerateObject doesn’t pass a model; usage logging may be wrong

Unlike createGenerateText, this wrapper never injects modelOptions.model. If callers don’t explicitly provide a model, generateObject may fail or use a default. Also, you log usage using modelOptions, which can be inconsistent with the actual model used.

Align with createGenerateText by injecting the selected model.

Apply this diff:

-      const result = await generateObject(
-        {
-          ...options,
-          ...commonOptions,
-        },
-        ...restArgs,
-      );
+      const result = await generateObject(
+        {
+          ...options,
+          ...commonOptions,
+          model: modelOptions.model,
+        },
+        ...restArgs,
+      );
apps/web/providers/ChatProvider.tsx (1)

65-71: Replace console.error with scoped logger

Project guidelines prohibit console usage. Use createScopedLogger.

Apply this diff:

+import { createScopedLogger } from "@/utils/logger";

And define a scoped logger (near the top-level, once):

-const ChatContext = createContext<ChatContextType | undefined>(undefined);
+const ChatContext = createContext<ChatContextType | undefined>(undefined);
+const logger = createScopedLogger("chat-provider");

Then update onError:

-    onError: (error) => {
-      console.error(error);
+    onError: (error) => {
+      logger.error("AI chat error", { error });
       toastError({
         title: "An error occured!",
         description: error.message || "",
       });
     },
🧹 Nitpick comments (23)
apps/web/utils/ai/example-matches/find-example-matches.ts (1)

121-136: Deduplicate matches to avoid duplicate rows when the tool is called multiple times.

Multiple tool invocations can return the same emailId; dedupe before returning.

Apply this diff:

   const matches = findExampleMatchesToolCalls.reduce<
     z.infer<typeof findExampleMatchesSchema>["matches"]
   >((acc, { input }) => {
     const typedArgs = input as z.infer<typeof findExampleMatchesSchema>;
     return acc.concat(typedArgs.matches);
   }, []);
 
   return {
-    matches: matches
-      .filter((match) => listedEmails[match.emailId])
-      .map((match) => ({
-        ...listedEmails[match.emailId],
-        rule: match.rule,
-      })),
+    matches: Array.from(
+      new Map(
+        matches
+          .filter((match) => listedEmails[match.emailId])
+          .map((match) => [
+            match.emailId,
+            { ...listedEmails[match.emailId], rule: match.rule },
+          ]),
+      ).values(),
+    ),
   };
apps/web/utils/ai/group/create-group.ts (1)

110-121: Avoid accumulating duplicates in senders/subjects.

Repeated tool calls can create duplicates; dedupe for stable, predictable output. Prefer lodash per guidelines for utils/.

Apply this diff:

-  const combinedArgs = generateGroupItemsToolCalls.reduce<
-    z.infer<typeof generateGroupItemsSchema>
-  >(
-    (acc, { input }) => {
-      const typedArgs = input as z.infer<typeof generateGroupItemsSchema>;
-      return {
-        senders: [...acc.senders, ...typedArgs.senders],
-        subjects: [...acc.subjects, ...typedArgs.subjects],
-      };
-    },
-    { senders: [], subjects: [] },
-  );
+  const combinedArgs = generateGroupItemsToolCalls.reduce<
+    z.infer<typeof generateGroupItemsSchema>
+  >(
+    (acc, { input }) => {
+      const typedArgs = input as z.infer<typeof generateGroupItemsSchema>;
+      return {
+        senders: Array.from(new Set([...acc.senders, ...typedArgs.senders])),
+        subjects: Array.from(new Set([...acc.subjects, ...typedArgs.subjects])),
+      };
+    },
+    { senders: [], subjects: [] },
+  );

If you'd rather use lodash:

  • import { uniq } from "lodash"
  • replace Array.from(new Set(...)) with uniq([...])
apps/web/utils/ai/assistant/process-user-request.ts (1)

256-271: Align static conditions update with description (include body).

Description lists body as editable static condition; include it in the update call.

Apply this diff:

       return updateRule(ruleName, {
         from: staticConditions?.from,
         to: staticConditions?.to,
         subject: staticConditions?.subject,
+        body: staticConditions?.body,
       });
apps/web/app/(landing)/login/error/page.tsx (1)

27-31: Avoid duplicate loading/redirect handling; rely on LoadingContent consistently

You’re short-circuiting on isLoading and data?.id while also passing the same state to LoadingContent. This duplication makes flow harder to reason about and can cause UI flicker. Let LoadingContent handle loading/error/display and keep the redirect in the useEffect.

Apply this diff to remove the redundant early returns:

-  if (isLoading) return <Loading />;
-  // will redirect to /welcome if user is logged in
-  if (data?.id) return <Loading />;
+  // Rely on LoadingContent for consistent loading/error handling
apps/web/.env.example (1)

4-5: Fix dotenv formatting for new secrets (align with linter)

The new lines have spaces around “=” and inline comments after empty values. This breaks common dotenv parsers and is flagged by the linter. Move comments to their own lines and quote placeholder values.

Apply this diff:

- BETTER_AUTH_SECRET= # Generate a random secret here: https://generate-secret.vercel.app/32
- NEXTAUTH_SECRET= # Legacy support - Generate a random secret here: https://generate-secret.vercel.app/32
+# Generate a random secret here: https://generate-secret.vercel.app/32
+BETTER_AUTH_SECRET="replace_me"
+# Legacy support - Generate a random secret here: https://generate-secret.vercel.app/32
+NEXTAUTH_SECRET="replace_me"

Lint references:

  • [SpaceCharacter]: The line has spaces around equal sign
  • [ValueWithoutQuotes]: This value needs to be surrounded in quotes
apps/web/utils/encryption.ts (1)

88-96: Log byte length (not hex string length) and reduce sensitive error payloads in non-dev.

Minor improvement: in dev logs, prefer byte length over hex string length to avoid confusion. Current field name suggests characters, not bytes.

Apply this diff:

-    if (process.env.NODE_ENV === "development") {
-      logger.error("Decryption failed", {
-        error,
-        encryptedLength: encryptedText?.length,
-      });
+    if (process.env.NODE_ENV === "development") {
+      logger.error("Decryption failed", {
+        error,
+        encryptedHexLength: encryptedText?.length,
+        encryptedByteLength: Math.floor((encryptedText?.length ?? 0) / 2),
+      });
     } else {
       logger.warn("Token decryption failed - token may need refresh");
     }
run-docker.sh (1)

5-19: Prefer docker compose --env-file over sourcing .env to reduce risk and side effects.

Sourcing an .env file executes it in the current shell, which carries parsing/escaping pitfalls and executes any shell content. Using the Compose-native --env-file keeps parsing rules scoped to Compose and avoids exporting all variables into your shell environment.

Proposed rewrite:

-# Source the environment file to load variables for Docker
-if [ -f "apps/web/.env" ]; then
-    set -a  # Enable export of all variables
-    # shellcheck disable=SC1091
-    source apps/web/.env
-    set +a  # Disable export of all variables
-    echo "✓ Sourced apps/web/.env"
-    # Acknowledge presence without revealing any part of the value
-    if [ -n "${UPSTASH_REDIS_TOKEN:-}" ]; then
-      echo "✓ UPSTASH_REDIS_TOKEN is set"
-    fi
-else
-    echo "❌ Error: apps/web/.env file not found"
-    exit 1
-fi
+ENV_FILE="apps/web/.env"
+if [ ! -f "$ENV_FILE" ]; then
+  echo "❌ Error: $ENV_FILE file not found"
+  exit 1
+fi
+echo "✓ Using env file: $ENV_FILE"
 
 # If arguments are provided, pass them through to docker compose directly.
 # Otherwise, default to `up -d`.
 if [ "$#" -gt 0 ]; then
   echo "🚀 Running: docker compose $*"
-  docker compose "$@"
+  docker compose --env-file "$ENV_FILE" "$@"
 else
   echo "🚀 Starting docker compose (default): docker compose up -d"
-  docker compose up -d
+  docker compose --env-file "$ENV_FILE" up -d
 fi

If you need help migrating docker-compose.yml to rely on env_file in services instead, I can draft that too.

Also applies to: 21-29

docker-compose.yml.backup (2)

11-24: Avoid sharing a single volume between Postgres and Redis.

Using the same named volume for two data stores makes backups/restores and cleanups error-prone and couples lifecycles unnecessarily.

Apply a split into dedicated volumes:

   db:
     image: postgres
     restart: always
     container_name: inbox-zero
     environment:
       - POSTGRES_USER=${POSTGRES_USER:-postgres}
       - POSTGRES_DB=${POSTGRES_DB:-inboxzero}
       - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-password}
     volumes:
-      - database-data:/var/lib/postgresql/data/
+      - postgres-data:/var/lib/postgresql/data/
@@
   redis:
     image: redis
     ports:
       - ${REDIS_PORT:-6380}:6379
     volumes:
-      - database-data:/data
+      - redis-data:/data
@@
 volumes:
-  database-data:
+  postgres-data:
+  redis-data:

13-14: Limit DB exposure to localhost during development.

To reduce accidental network exposure, bind Postgres to loopback only.

-      - ${POSTGRES_PORT:-5432}:5432
+      - 127.0.0.1:${POSTGRES_PORT:-5432}:5432
docker-compose.yml (1)

64-65: Pin image versions for reproducibility.

Avoid “latest” in production-like flows; pin to a known tag (or digest).

Examples:

  • alpine:3.19 (cron)
  • hiett/serverless-redis-http:
  • ghcr.io/elie222/inbox-zero:

Also applies to: 41-41, 30-30

run-docker.sh.backup (1)

3-13: Optional: guard .env sourcing to avoid exporting unintended values.

If apps/web/.env contains untrusted content, using set -a may export too much. Prefer explicit export of required vars (UPSTASH_REDIS_TOKEN, CRON_SECRET, etc.), or use an allowlist loader.
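An allowlist loader along those lines might look like this (a sketch; the key names and file path are taken from the review's examples):

```shell
# Sketch: export only an explicit allowlist of keys from apps/web/.env,
# instead of sourcing the whole file with set -a.
ENV_FILE="apps/web/.env"
ALLOWED_KEYS="UPSTASH_REDIS_TOKEN CRON_SECRET"

for key in $ALLOWED_KEYS; do
  # Take the first matching line and everything after the first '='.
  value=$(grep -E "^${key}=" "$ENV_FILE" 2>/dev/null | head -n1 | cut -d= -f2-)
  if [ -n "$value" ]; then
    export "$key=$value"
  fi
done
```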

setup-autostart.sh (4)

26-26: ShellCheck SC2155: split declaration/assignment for SCRIPT_DIR.

Avoid masking return values from command substitutions.

-readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+readonly SCRIPT_DIR="$script_dir"

65-69: ShellCheck SC2155: split declaration/assignment for timestamp.

Minor linter fix.

-    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
+    local timestamp
+    timestamp=$(date '+%Y-%m-%d %H:%M:%S')

389-399: Test/Remove flows: guard systemctl usage on non-systemd hosts.

test_configuration/remove_configuration call systemctl unconditionally when present; on non-systemd hosts (e.g., macOS) systemctl won’t exist. You already gate with command_exists in setup; mirror that guard here to avoid noisy errors.
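A minimal version of that guard, mirroring the `command_exists` pattern already used in setup (a sketch, since the script's exact helper isn't shown here):

```shell
# Sketch: run systemd steps only when systemctl exists, so the test/remove
# flows stay quiet on macOS and other non-systemd hosts.
if command -v systemctl >/dev/null 2>&1; then
  echo "systemd detected: managing unit"
else
  echo "no systemctl on this host: skipping systemd steps"
fi
```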


136-145: Committing .backup files to VCS is risky.

The script creates docker-compose.yml.backup; the repo already contains a *.backup compose. It’s easy to commit these by accident and drift config. Add to .gitignore or use a temp path under SCRIPT_DIR/.cache.

apps/web/utils/llms/index.ts (3)

218-223: Guard optional onFinish before Promise.all

onFinish is optional; passing undefined into Promise.all is fine but noisy and weakly typed. Guard it to avoid accidental type/logic hazards.

Apply this diff:

-      try {
-        await Promise.all([usagePromise, finishPromise]);
+      try {
+        const promises: Promise<unknown>[] = [usagePromise];
+        if (finishPromise) promises.push(finishPromise);
+        await Promise.all(promises);

81-88: Avoid logging raw tool call inputs (potential PII)

Even at trace level, logging raw tool inputs risks leaking PII/secrets. Log a truncated preview or metadata instead.

Apply this diff:

-      if (args[0].tools) {
-        const toolCallInput = result.toolCalls?.[0]?.input;
-        logger.trace("Result", {
-          label,
-          result: toolCallInput,
-        });
-      }
+      if (args[0].tools) {
+        const toolCallInput = result.toolCalls?.[0]?.input;
+        const preview =
+          toolCallInput != null
+            ? String(
+                typeof toolCallInput === "string"
+                  ? toolCallInput
+                  : JSON.stringify(toolCallInput),
+              ).slice(0, MAX_LOG_LENGTH)
+            : undefined;
+        logger.trace("Result", {
+          label,
+          toolCallInputPreview: preview,
+        });
+      }

303-344: Consider replacing custom withRetry with p-retry

You’re already using p-retry elsewhere (per the note). Consolidating on p-retry reduces maintenance surface and provides tried-and-tested backoff/abort ergonomics.

If you want, I can draft a p-retry wrapper that preserves your retryIf/maxRetries/delayMs semantics.
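For reference, here are the semantics such a wrapper must preserve, as a dependency-free sketch (p-retry's actual option names differ, e.g. `retries`, `minTimeout`, and `shouldRetry`):

```typescript
// Dependency-free sketch of the retry semantics a p-retry wrapper must
// keep: retry while retryIf(error) is true, up to maxRetries extra
// attempts, waiting delayMs between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  opts: { retryIf: (e: unknown) => boolean; maxRetries: number; delayMs: number },
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (!opts.retryIf(error) || attempt >= opts.maxRetries) throw error;
      await new Promise((resolve) => setTimeout(resolve, opts.delayMs));
    }
  }
}
```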

apps/web/scripts/fix-corrupted-tokens.ts (1)

8-39: Use scoped logger instead of console; verify TS path alias works under tsx

I recommend switching console.* to a scoped logger and ensuring the "@/..." path alias resolves when running this script. My verification did not find a tsconfig "paths" mapping for '@/*' nor a "tsx" entry in apps/web/package.json, so please confirm the path mapping or use relative imports / tsconfig-paths when running.

Files to check / change:

  • apps/web/scripts/fix-corrupted-tokens.ts — replace console usage and add logger import/initialization.
  • Verify tsconfig.json (repo root or apps/web/tsconfig.json) contains a "paths" mapping for "@/*".
  • Verify apps/web/package.json has tsx in deps/devDeps or run the script with tsconfig-paths/register.

Apply this diff:

 import { fixCorruptedTokens } from "@/utils/migration/fix-encrypted-tokens";
+import { createScopedLogger } from "@/utils/logger";
 
+const logger = createScopedLogger("scripts:fix-corrupted-tokens");
+
 async function main() {
-  console.log("🔧 Starting corrupted token cleanup...");
+  logger.info("🔧 Starting corrupted token cleanup...");
 
   try {
     const result = await fixCorruptedTokens();
 
-    console.log("✅ Token cleanup completed successfully!");
-    console.log("📊 Results:");
-    console.log(`   - Total accounts checked: ${result.total}`);
-    console.log(`   - Corrupted tokens fixed: ${result.fixed}`);
+    logger.info("✅ Token cleanup completed successfully!");
+    logger.info("📊 Results:");
+    logger.info(`   - Total accounts checked: ${result.total}`);
+    logger.info(`   - Corrupted tokens fixed: ${result.fixed}`);
 
     if (result.fixed > 0) {
-      console.log(
+      logger.warn(
         "\n⚠️  Users with affected accounts will need to re-authenticate on their next login.",
       );
-      console.log(
+      logger.info(
         "   This will happen automatically - no action required from users.",
       );
     } else {
-      console.log("🎉 No corrupted tokens found - all accounts are healthy!");
+      logger.info("🎉 No corrupted tokens found - all accounts are healthy!");
     }
 
     process.exit(0);
   } catch (error) {
-    console.error("❌ Error during token cleanup:", error);
+    logger.error("❌ Error during token cleanup:", { error });
     process.exit(1);
   }
 }
apps/web/providers/ChatProvider.tsx (1)

78-91: Guard empty submissions

Prevent appending empty messages.

Apply this diff:

-  const handleSubmit = useCallback(() => {
+  const handleSubmit = useCallback(() => {
+    if (!input.trim()) return;
     chat.append({
       role: "user",
       parts: [
         {
           type: "text",
           text: input,
         },
       ],
     });
 
     setInput("");
-  }, [chat.append, input]);
+  }, [chat.append, input]);
apps/web/utils/migration/fix-encrypted-tokens.ts (3)

21-27: Remove unused field from select to reduce payload

userId is selected but never used. Trim the select to reduce data transfer and memory.

       select: {
         id: true,
-        userId: true,
         provider: true,
         access_token: true,
         refresh_token: true,
       },

106-109: Comment says “Only runs once per deployment” but no enforcement is implemented

If you truly want a once-per-deploy (or once-ever) run, persist a run marker (e.g., in a settings/config table or KV) keyed by a version, and short-circuit when present. I can help wire a minimal SystemSetting model and helper.
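
A hypothetical sketch of that run-marker idea, with an in-memory Map standing in for the real settings table or KV store (the `MIGRATION_VERSION` key and helper name are illustrative, not from this repo):

```typescript
// Hypothetical sketch: persist a run marker keyed by a version string so the
// cleanup runs at most once per deployment. A Map stands in for a
// SystemSetting table or KV store; bump MIGRATION_VERSION to force a re-run.
const MIGRATION_VERSION = "fix-corrupted-tokens@1"; // example version key
const MARKER_KEY = "migration:last-run";

function runMigrationOnce(
  store: Map<string, string>,
  migrate: () => void,
): boolean {
  // Short-circuit when this version has already run.
  if (store.get(MARKER_KEY) === MIGRATION_VERSION) return false;
  migrate();
  store.set(MARKER_KEY, MIGRATION_VERSION); // persist marker after success
  return true;
}
```

Swapping the Map for a Prisma-backed table would give the once-ever behavior; keeping the version in the key gives once-per-deploy.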


16-28: Optional: Batch-scan accounts to reduce memory/DB load on large datasets

findMany without pagination loads all matching rows into memory and may overwhelm the DB if the table is large. Consider a cursor-based loop with take to process in batches.

Example approach:

const take = 500;
let cursor: string | undefined;
let total = 0;

while (true) {
  const batch = await prisma.account.findMany({
    where: { OR: [{ access_token: { not: null } }, { refresh_token: { not: null } }] },
    select: { id: true, provider: true, access_token: true, refresh_token: true },
    take,
    ...(cursor ? { skip: 1, cursor: { id: cursor } } : {}),
    orderBy: { id: "asc" },
  });

  if (batch.length === 0) break;
  total += batch.length;

  // process batch...
  cursor = batch[batch.length - 1]!.id;
}

Comment on lines +4 to +5
BETTER_AUTH_SECRET= # Generate a random secret here: https://generate-secret.vercel.app/32
NEXTAUTH_SECRET= # Legacy support - Generate a random secret here: https://generate-secret.vercel.app/32

⚠️ Potential issue

Ensure encryption envs match env.ts expectations

env.ts requires EMAIL_ENCRYPT_SECRET and EMAIL_ENCRYPT_SALT, but this example file lists GOOGLE_ENCRYPT_* and MICROSOFT_ENCRYPT_*. Add the required EMAIL_ENCRYPT_* envs here to avoid runtime validation failures.

Proposed insertion near the “Gmail” or general auth section:

+# Email encryption (required by apps/web/env.ts)
+EMAIL_ENCRYPT_SECRET= # openssl rand -hex 32
+EMAIL_ENCRYPT_SALT=   # openssl rand -hex 16

Follow-up: If the system actually expects provider-specific secrets (GOOGLE_/MICROSOFT_), either keep both sets or align env.ts to match the provider-specific keys. I can generate a follow-up PR once you confirm the intended keys.

🧰 Tools
🪛 dotenv-linter (3.3.0)

[warning] 4-4: [SpaceCharacter] The line has spaces around equal sign

(SpaceCharacter)


[warning] 4-4: [ValueWithoutQuotes] This value needs to be surrounded in quotes

(ValueWithoutQuotes)


[warning] 5-5: [SpaceCharacter] The line has spaces around equal sign

(SpaceCharacter)


[warning] 5-5: [ValueWithoutQuotes] This value needs to be surrounded in quotes

(ValueWithoutQuotes)

🤖 Prompt for AI Agents
In apps/web/.env.example around lines 4-5, the example file is missing the
EMAIL_ENCRYPT_SECRET and EMAIL_ENCRYPT_SALT environment variables that env.ts
validates; add EMAIL_ENCRYPT_SECRET and EMAIL_ENCRYPT_SALT (with placeholder
comments to generate random values) near the Gmail/auth section so runtime
validation passes, and either retain the existing GOOGLE_/MICROSOFT_
provider-specific keys or duplicate/alias them as needed to keep provider
expectations and env.ts aligned.

import { GlobalProviders } from "@/providers/GlobalProviders";
import { UTM } from "@/app/utm";
import { startupImage } from "@/app/startup-image";
import { StartupMigrations } from "@/components/StartupMigrations";

💡 Verification agent

🧩 Analysis chain

Avoid running startup migrations in the layout render path.

Running migrations during RootLayout render can add latency per request, trigger multiple concurrent runs under load, and entangle UI with ops logic. Prefer Next.js instrumentation.ts to run once per server boot.

Apply this change:

  • Remove the import and component usage in layout.tsx:
-import { StartupMigrations } from "@/components/StartupMigrations";
@@
-        <StartupMigrations />
  • Create apps/web/instrumentation.ts:
// apps/web/instrumentation.ts
import { runStartupMigrations } from "@/utils/startup";

export async function register() {
  await runStartupMigrations();
}

Ensure runStartupMigrations is idempotent and protected by a distributed lock (e.g., Postgres advisory lock or Redis lock) to prevent concurrent execution across replicas.

Also applies to: 88-88


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Searching for StartupMigrations usages..."
rg -n --hidden --no-ignore --smart-case "StartupMigrations" -S -C3 || true

echo
echo "Searching for runStartupMigrations usages/definitions..."
rg -n --hidden --no-ignore --smart-case "runStartupMigrations" -S -C3 || true

echo
echo "Looking for apps/web/app/layout.tsx file..."
rg -n --hidden --no-ignore --smart-case --glob "apps/web/**" "layout.tsx" -S -C3 || true

# If apps/web/app/layout.tsx exists, print it
if [ -f "apps/web/app/layout.tsx" ]; then
  echo
  echo "----- apps/web/app/layout.tsx -----"
  sed -n '1,240p' apps/web/app/layout.tsx || true
else
  echo
  echo "apps/web/app/layout.tsx not found at that exact path. Searching for any layout.tsx..."
  rg -n --hidden --no-ignore --smart-case "layout.tsx" -S -C3 || true
fi

echo
echo "Searching for instrumentation.ts files..."
rg -n --hidden --no-ignore --smart-case "instrumentation.ts" -S -C3 || true

echo
echo "Searching for utils/startup or similar startup utilities..."
rg -n --hidden --no-ignore --smart-case "startup" -S -C3 || true

Length of output: 20456


Move startup migrations out of RootLayout — run once on server boot via instrumentation.register

Verified: apps/web/app/layout.tsx imports and renders StartupMigrations (calls runStartupMigrations in apps/web/utils/startup.ts). No apps/web/instrumentation.ts exists. Running migrations in RootLayout causes per-request latency and concurrent-run risk.

Files to change:

  • apps/web/app/layout.tsx — remove import and component usage (currently at import line ~15 and render line ~88).
  • apps/web/components/StartupMigrations.tsx — server component that currently calls runStartupMigrations(); delete or leave unused.
  • apps/web/utils/startup.ts — keep runStartupMigrations here; make sure it’s idempotent and protected by a distributed lock.
  • Add apps/web/instrumentation.ts — call runStartupMigrations from register() so it runs once per server boot.

Apply these edits:

Remove from apps/web/app/layout.tsx:

-import { StartupMigrations } from "@/components/StartupMigrations";
@@
-        <StartupMigrations />

Add apps/web/instrumentation.ts:

// apps/web/instrumentation.ts
import { runStartupMigrations } from "@/utils/startup";

export async function register() {
  await runStartupMigrations();
}

Ensure runStartupMigrations is idempotent and guarded by a distributed lock (Postgres advisory lock or Redis lock) to prevent concurrent execution across replicas.
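
A rough sketch of that lock guard, not the repo's actual implementation: `Query` abstracts the DB client (a real version would unwrap the boolean from the row returned by e.g. `prisma.$queryRaw`), and the lock key is an arbitrary example value.

```typescript
// Hedged sketch: guard startup migrations with a Postgres advisory lock so
// only one replica migrates at a time. `Query` is an injected stand-in for a
// real client; it returns the boolean result of the lock call directly.
type Query = (sql: string) => Promise<boolean>;

const MIGRATION_LOCK_KEY = 723_001; // arbitrary app-wide lock key (example)

async function withMigrationLock(
  query: Query,
  migrate: () => Promise<void>,
): Promise<boolean> {
  // pg_try_advisory_lock returns immediately: true only if we won the lock.
  const acquired = await query(
    `SELECT pg_try_advisory_lock(${MIGRATION_LOCK_KEY})`,
  );
  if (!acquired) return false; // another replica is already migrating
  try {
    await migrate();
    return true;
  } finally {
    // Always release, even if the migration throws.
    await query(`SELECT pg_advisory_unlock(${MIGRATION_LOCK_KEY})`);
  }
}
```

Calling this from `register()` in instrumentation.ts keeps the per-boot trigger and the cross-replica guard in one place.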

🤖 Prompt for AI Agents
In apps/web/app/layout.tsx around line 15 and line 88, remove the import of
StartupMigrations and the component usage so migrations are no longer run
per-request in RootLayout; create a new file apps/web/instrumentation.ts that
exports an async register() which calls runStartupMigrations() from
apps/web/utils/startup.ts so migrations run once at server boot; ensure
runStartupMigrations in apps/web/utils/startup.ts is idempotent and protected by
a distributed lock (e.g., Postgres advisory lock or a Redis lock) to prevent
concurrent execution across replicas; delete or leave
apps/web/components/StartupMigrations.tsx unused after removing its usage.

Comment on lines +7 to +13
export async function StartupMigrations() {
  // Run migrations on server side during page load
  await runStartupMigrations();

  // Return nothing - this is just for side effects
  return null;
}

🛠️ Refactor suggestion

Running migrations on every layout render will add latency and DB load

Because this server component is rendered with the root layout, it’ll execute on most requests. Prefer a once-per-process trigger (or Next.js instrumentation.ts) to avoid per-request overhead.

Apply this diff to execute once per server instance:

 import { runStartupMigrations } from "@/utils/startup";
 
 /**
  * Server component that runs startup migrations
  * This component will run on the server side when the app loads
  */
 export async function StartupMigrations() {
-  // Run migrations on server side during page load
-  await runStartupMigrations();
+  // Ensure migrations run only once per server instance
+  // Module scope guarantees persistence across requests in the same process
+  let startupMigrationsPromise:
+    | Promise<void>
+    | null
+    | undefined = (globalThis as unknown as { __IZ_STARTUP_MIGRATIONS_PROMISE?: Promise<void> })
+      .__IZ_STARTUP_MIGRATIONS_PROMISE;
+  if (!startupMigrationsPromise) {
+    startupMigrationsPromise = runStartupMigrations();
+    (globalThis as unknown as { __IZ_STARTUP_MIGRATIONS_PROMISE?: Promise<void> }).__IZ_STARTUP_MIGRATIONS_PROMISE =
+      startupMigrationsPromise;
+  }
+  await startupMigrationsPromise;
 
   // Return nothing - this is just for side effects
   return null;
 }

Alternatively, move this into apps/web/instrumentation.ts to run once at server start. I can provide the instrumentation.ts variant if you prefer.


Comment on lines +21 to +22
NEXTAUTH_SECRET: z.string().min(1).optional(), // Legacy support for migration
BETTER_AUTH_SECRET: z.string().min(1).optional(),

💡 Verification agent

🧩 Analysis chain

Scan for accidental hardcoded provider tokens

Given the sensitive nature of auth and queue/redis providers, ensure no Upstash/QStash secrets are hardcoded in source files.

Use this secret-scan to flag suspicious assignments (excludes .env files):


🏁 Script executed:

#!/bin/bash
# Grep for potentially hardcoded UPSTASH/QSTASH tokens in code (not env files)
rg -n -C1 -P '(UPSTASH|QSTASH)[A-Z0-9_]*\s*[:=]\s*["\'][^"\']{8,}["\']' -g '!**/.env*' -g '!**/node_modules/**'

Length of output: 116


🏁 Script executed:

#!/bin/bash
set -euo pipefail
echo "Searching for provider names (upstash/qstash)..."
rg -n -C1 -P "(?i)\b(upstash|qstash)\b" -g '!**/.env*' -g '!**/node_modules/**' || echo "No matches for provider names."

echo "Searching for secret/token/key assignments (>=8 chars)..."
rg -n -C1 -P "(?i)\b(secret|token|key)\b\s*[:=]\s*['\"][A-Za-z0-9_\-+/=]{8,}['\"]" -g '!**/.env*' -g '!**/node_modules/**' || echo "No matches for secret assignments."

echo "Searching for long string literals (>=20 chars)..."
rg -n -C1 -P "['\"][A-Za-z0-9_\-+/=]{20,}['\"]" -g '!**/.env*' -g '!**/node_modules/**' || echo "No long string literals found."

Length of output: 240467


Hardcoded Upstash/QStash tokens found — please remove & rotate immediately

Scan results show committed token literals in the repo. Actionable locations:

  • docker-compose.yml
    • SRH_TOKEN (line ~35) — hardcoded long token literal present.
    • UPSTASH_REDIS_TOKEN (line ~61) — hardcoded long token literal present (same value).
  • docker/Dockerfile.prod
    • Several build-time envs set to dummy secrets (NEXTAUTH_SECRET, UPSTASH_REDIS_TOKEN, QSTASH_TOKEN, QSTASH_CURRENT_SIGNING_KEY, QSTASH_NEXT_SIGNING_KEY). Confirm these are intentionally dummy; if not, remove.
  • apps/web/env.ts (lines 21–22)
    • NEXTAUTH_SECRET / BETTER_AUTH_SECRET are Zod schema entries (no hardcoded secret here).

Recommended fixes (concise):

  • Remove the hardcoded tokens from docker-compose.yml and use environment variables / secrets manager instead.
  • Immediately rotate any tokens that were committed to source control.
  • If dummy values are needed for builds, use build ARGs or placeholder values that cannot be confused with real secrets.
  • Add a secret-scan check to CI / pre-commit to prevent future commits.
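
For illustration only, a naive shape such a check could take; real setups should use a dedicated scanner (e.g. gitleaks), and the patterns below are example heuristics, not a complete rule set:

```typescript
// Illustrative sketch of a pre-commit secret heuristic. The patterns are
// examples only: one targets Upstash/QStash-style assignments, one catches
// generic quoted secret/token/key literals.
const SECRET_PATTERNS: RegExp[] = [
  /(UPSTASH|QSTASH)[A-Z0-9_]*\s*[:=]\s*["'][^"']{8,}["']/,
  /\b(secret|token|api[_-]?key)\b\s*[:=]\s*["'][A-Za-z0-9+\/=_-]{16,}["']/i,
];

function findSecretLines(source: string): number[] {
  // Return 1-based line numbers that look like hardcoded credentials.
  const hits: number[] = [];
  source.split("\n").forEach((line, i) => {
    if (SECRET_PATTERNS.some((p) => p.test(line))) hits.push(i + 1);
  });
  return hits;
}
```

Wiring this (or a real scanner) into CI as a blocking step is what prevents the docker-compose.yml situation above from recurring.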
🤖 Prompt for AI Agents
In apps/web/env.ts around lines 21 to 22, the reviewer flagged hardcoded
Upstash/QStash tokens in the repo and listed these Zod schema entries
(NEXTAUTH_SECRET / BETTER_AUTH_SECRET); ensure you are not assigning any literal
default values in this file—keep these schema entries as optional validators
only and do not populate them with secrets. Remove any accidental hardcoded
strings, confirm the app reads secrets from process.env or a secret manager at
runtime, and if builds require placeholders use non-sensitive dummy values (or
build ARGs) that cannot be mistaken for real tokens; separately rotate any real
tokens found in docker-compose.yml or Dockerfile.prod and replace them with env
vars or secret references.

Comment on lines +18 to +20
typescript: {
ignoreBuildErrors: true,
},

⚠️ Potential issue

Don’t ship with TypeScript errors suppressed in production

typescript.ignoreBuildErrors: true masks real issues and undermines type safety (violates “Use TypeScript with strict null checks”). Gate it by environment so production builds fail on type errors.

Apply this diff to only ignore in non-production:

   typescript: {
-    ignoreBuildErrors: true,
+    ignoreBuildErrors: env.NODE_ENV !== "production",
   },

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In apps/web/next.config.ts around lines 18 to 20, the config currently sets
typescript.ignoreBuildErrors: true which suppresses TypeScript errors in all
environments; change it so ignoreBuildErrors is true only for non-production
builds (e.g., NODE_ENV !== 'production') and false (or omitted) for production.
Update the config to read the environment and set ignoreBuildErrors accordingly,
ensuring production builds fail on type errors.

Comment on lines +1 to +2
#!/bin/bash


🛠️ Refactor suggestion

Harden the script: fail fast, run from script dir, and respect compose variants.

Add strict mode and run from the script directory so relative paths resolve consistently. Also bail out if docker compose fails.

 #!/bin/bash
+set -euo pipefail
+
+# Move to repo root (script directory)
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+cd "$SCRIPT_DIR"
@@
 echo "🚀 Starting docker compose..."
 docker compose up -d
 echo ""
 echo "✅ Docker compose started successfully"

If you need to support both docker compose and docker-compose:

# outside the diff – helper to select the available command
if docker compose version >/dev/null 2>&1; then
  DC='docker compose'
elif command -v docker-compose >/dev/null 2>&1; then
  DC='docker-compose'
else
  echo "❌ Error: Docker Compose not found"
  exit 1
fi
$DC up -d

Also applies to: 15-19

🤖 Prompt for AI Agents
In run-docker.sh.backup (lines 1-2, and also apply changes at lines 15-19),
enable strict mode and fail-fast (set -euo pipefail and optionally IFS), change
working directory to the script's directory so relative paths resolve
consistently, detect and select the available Docker Compose command (prefer
`docker compose` but fall back to `docker-compose`, exiting with an error if
neither exists), use the selected command for the compose up command and check
its exit status so the script bails if compose fails, and ensure all subsequent
compose invocations use the chosen command.

Comment on lines +8 to +9
echo "✓ Sourced apps/web/.env"
echo "✓ UPSTASH_REDIS_TOKEN is set: ${UPSTASH_REDIS_TOKEN:0:10}..."

🛠️ Refactor suggestion

⚠️ Potential issue

Don’t print even partial secret values.

Printing the first 10 chars of UPSTASH_REDIS_TOKEN is a leakage vector in logs and CI.

-    echo "✓ UPSTASH_REDIS_TOKEN is set: ${UPSTASH_REDIS_TOKEN:0:10}..."
+    if [[ -n "${UPSTASH_REDIS_TOKEN:-}" ]]; then
+      echo "✓ UPSTASH_REDIS_TOKEN is set"
+    else
+      echo "❌ Error: UPSTASH_REDIS_TOKEN is not set"
+      exit 1
+    fi
🤖 Prompt for AI Agents
In run-docker.sh.backup around lines 8 to 9, do not print any part of
UPSTASH_REDIS_TOKEN to logs; remove the partial-token interpolation and instead
check presence and echo a non-secret confirmation (for example: "✓
UPSTASH_REDIS_TOKEN is set" or "✓ UPSTASH_REDIS_TOKEN is not set") without
including or slicing the variable value, ensuring no secret characters are
output.

Comment on lines +152 to +193
# Add restart policy to services that don't have one
# This is a bit complex but ensures we don't duplicate existing restart policies
awk '
/^services:/ { in_services = 1 }
/^[a-zA-Z]/ && !/^services:/ && !/^  / { in_services = 0 }
/^  [a-zA-Z]/ && in_services {
  in_service = 1
  service_line = $0
  has_restart = 0
}
/^    restart:/ && in_service { has_restart = 1 }
/^  [a-zA-Z]/ && in_service && NR > 1 && prev_in_service {
  if (!prev_has_restart && prev_service_line != "") {
    print prev_service_line
    print "    restart: unless-stopped"
  }
  prev_service_line = service_line
  prev_has_restart = has_restart
  prev_in_service = in_service
  has_restart = 0
}
/^[^ ]/ && !in_services && prev_in_service {
  if (!prev_has_restart && prev_service_line != "") {
    print prev_service_line
    print "    restart: unless-stopped"
  }
  prev_in_service = 0
}
END {
  if (prev_in_service && !prev_has_restart && prev_service_line != "") {
    print prev_service_line
    print "    restart: unless-stopped"
  }
}
{
  if (!in_service || has_restart || (in_service && !/^  [a-zA-Z]/)) {
    print $0
  }
  prev_in_service = in_service
  if (!/^  [a-zA-Z]/) in_service = 0
}
' "$COMPOSE_FILE" > "$temp_file"

🛠️ Refactor suggestion

Brittle YAML mutation via awk; prefer a safer strategy.

The awk transformation risks corrupting docker-compose.yml (indentation/anchors/complex service shapes). It’s easy to miss edge cases.

Safer alternatives:

  • Prefer committing restart: unless-stopped per service directly (source of truth in VCS).
  • If dynamic, use yq to surgically set services.*.restart.
  • As a last resort, detect services lacking restart and emit a warning instead of mutating the file.

If you still want in-place changes without yq, at least limit scope to known services and insert after service headers you control; validate with docker compose config.
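
For the first option, committing the policy directly, the compose file would simply carry it per service (service names below are illustrative):

```yaml
# docker-compose.yml (illustrative service names)
services:
  web:
    image: inbox-zero-web
    restart: unless-stopped
  redis:
    image: redis:7
    restart: unless-stopped
```

With the policy versioned in the compose file itself, the awk mutation step and its post-hoc validation become unnecessary.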

Comment on lines +196 to +201
if docker-compose -f "$temp_file" config >/dev/null 2>&1; then
mv "$temp_file" "$COMPOSE_FILE"
log_message "Successfully updated Docker Compose file with restart policies"
else
log_message "WARNING: Generated docker-compose.yml is invalid, keeping original"
rm -f "$temp_file"

🛠️ Refactor suggestion

Prefer “docker compose” over “docker-compose” for v2.

You already detect Compose v2 in validate_environment; use it consistently.

-    if docker-compose -f "$temp_file" config >/dev/null 2>&1; then
+    if docker compose -f "$temp_file" config >/dev/null 2>&1; then
🤖 Prompt for AI Agents
In setup-autostart.sh around lines 196 to 201, the script uses the legacy
"docker-compose" command even though validate_environment already detects
Compose v2; update this block to invoke the same compose command that
validate_environment set (use the shared variable used for detection, e.g.
$COMPOSE_CMD or $DOCKER_COMPOSE) so that when v2 is detected it runs "docker
compose config" and when v1 it runs "docker-compose config"; ensure you only
change the command invocation and keep the success/failure handling identical.

Comment on lines +334 to +345
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=${SCRIPT_DIR}
ExecStart=${RUN_SCRIPT}
ExecStop=/usr/bin/docker-compose -f ${COMPOSE_FILE} down
TimeoutStartSec=0
Restart=on-failure
RestartSec=5s
User=${USER}
Group=${USER}


🛠️ Refactor suggestion

Systemd ExecStop uses docker-compose path; align with Compose v2.

Hard-coding /usr/bin/docker-compose will fail on systems with only docker compose. Use docker compose and let systemd resolve docker via PATH, or inject the chosen binary.

 ExecStart=${RUN_SCRIPT}
-ExecStop=/usr/bin/docker-compose -f ${COMPOSE_FILE} down
+ExecStop=/usr/bin/docker compose -f ${COMPOSE_FILE} down

Optional: create an Environment=DOCKER_COMPOSE="docker compose" and use it in ExecStop for portability.

🤖 Prompt for AI Agents
In setup-autostart.sh around lines 334 to 345, the unit hard-codes ExecStop to
/usr/bin/docker-compose which breaks systems using Compose v2; replace the
hard-coded path with a PATH-resolvable command or an environment variable and
update ExecStop accordingly. Add a line like Environment=DOCKER_COMPOSE="docker
compose" (or Environment=DOCKER_COMPOSE="docker-compose" if you prefer) in the
[Service] section and change ExecStop to use ${DOCKER_COMPOSE} -f
${COMPOSE_FILE} down so systemd will resolve the correct compose binary at
runtime and the file works on both Compose v1 and v2 installations.

@elie222
Owner

elie222 commented Aug 16, 2025

not sure what this is. feel free to open an issue if there's something we should change around the token

@elie222 elie222 closed this Aug 16, 2025