feat(marketing): add GEO-optimized blog posts and comparison pages (#1569)
Add content targeting key AI coding tool comparison queries to improve Generative Engine Optimization (GEO) for Superset.
📝 Walkthrough

Adds extensive marketing content (multiple MDX blog and comparison pages), a new Compare hub page, two plaintext LLM routes, sitemap and category updates, and small blog type/metadata adjustments; changes are content, route handlers, and minor type/metadata surface additions.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: 2 passed ✅, 1 failed (warning) ❌
Actionable comments posted: 9
🧹 Nitpick comments (1)
apps/marketing/content/compare/best-ai-coding-agents-2026.mdx (1)
173-176: Style: three consecutive sentences starting with "For" — consider varying the openers.

The static analysis tool flags this: "For parallel agent execution… For single-agent depth… For inline completions…" — three consecutive sentences share the same opener. A minor reword improves readability for a GEO-optimized page where prose quality signals content authority.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/marketing/content/compare/best-ai-coding-agents-2026.mdx` around lines 173 - 176, The three consecutive sentences beginning "For parallel agent execution, Superset...", "For single-agent depth, Claude Code and Cursor lead.", and "For inline completions, GitHub Copilot is the standard." should be reworded to avoid the repeated opener; rewrite at least one or two of these lines to vary the sentence starts (e.g., "Superset leads for parallel agent execution," "Claude Code and Cursor excel at single-agent depth," or "Inline completions are dominated by GitHub Copilot") while preserving the original comparisons and intent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/marketing/content/blog/parallel-coding-agents-guide.mdx`:
- Around line 7-10: The frontmatter key relatedSlugs contains references to
posts that are not included in this PR — specifically the entries
"roadmap-to-100-agents" and "git-worktrees-history-deep-dive" — so either remove
those missing slugs from relatedSlugs or add the corresponding posts to this PR;
locate the relatedSlugs array in the article (the relatedSlugs block) and either
delete the two missing slug strings or replace them with valid existing post
slugs (or add the missing post files to the branch) so the list only contains
verifiable posts.
- Around line 83-93: The script uses eval "$AGENT_CMD" which is a
shell-injection risk; either remove eval and invoke the command safely (e.g.,
parse AGENT_CMD into an array and exec it using "${CMD[@]}" or accept
command+args separately so you can run them without eval) or, if you must accept
compound commands, add an explicit comment warning about injection and
sanitize/validate TASK_NAME and AGENT_CMD before use; update references around
TASK_NAME, AGENT_CMD, BRANCH, WORKTREE_DIR and replace the eval usage
accordingly.
In `@apps/marketing/content/blog/working-with-worktrees-in-superset.mdx`:
- Around line 48-52: Update the diagram and accompanying text to clarify that
.git in the worktree (shown under add-signup-validation) is a plain text
gitfile, not a directory; replace the ambiguous ".git # Points back to main
repo" with something like ".git # gitfile pointing to main repo's
.git/worktrees/<name>" so readers understand it's a file referencing the main
repo and that the full project checkout is a separate directory entry.
In `@apps/marketing/content/compare/best-ai-coding-agents-2026.mdx`:
- Line 67: Update the Windsurf pricing cell in the comparison table row that
currently reads "| **Windsurf** | Agent (IDE) | Sequential (Cascade flows) |
Windsurf IDE | No | Free, Pro $15/mo, Ultra $60/mo |" to reflect the correct
tiers: "Free, Pro $15/mo, Teams $30/mo, Enterprise $60/mo" (replace the "Ultra
$60/mo" label with "Enterprise $60/mo" and add the missing "Teams $30/mo" tier).
In `@apps/marketing/content/compare/superset-vs-claude-code.mdx`:
- Line 32: Update the table cell that currently reads "Source-available (not
open source)" for Claude Code to "Proprietary"; locate the Markdown table row
where the comparator column contains "**Open source** | Yes (Apache 2.0) |
Source-available (not open source)" and replace the Claude Code license string
with "Proprietary" so the row becomes "**Open source** | Yes (Apache 2.0) |
Proprietary".
In `@apps/marketing/content/compare/superset-vs-codex.mdx`:
- Line 28: Update the table cell that currently lists "o3, o4-mini, GPT-4.1"
(the AI approach row in the superset-vs-codex table) to clarify that Codex CLI
can use any model available in the OpenAI Responses API via "codex -m <model>",
that the actual default is "codex-mini-latest" (not o3/o4-mini), and explicitly
add GPT-5-Codex variants as supported examples so the table doesn't imply the
listed models are the only or primary options.
In `@apps/marketing/content/compare/superset-vs-devin.mdx`:
- Line 30: Update the Pricing row that currently begins with "| **Pricing**" in
the "At a Glance" table to list Devin's multiple tiers (e.g., "Core $20/mo
(ACU-based) · Teams $500/mo · Enterprise (custom)") and then revise the
Recommendation and FAQ sections that reference Devin pricing to acknowledge both
the $20/month Core plan and the $500/month Team plan (adjust the copy to mention
Core $20/mo ACU-based entry and Teams $500/mo rather than implying only $500).
Ensure all references to "Devin pricing" in the "At a Glance" table,
Recommendation paragraph, and FAQ reflect the two entry points and ACU-based
billing for Core.
In `@apps/marketing/content/compare/superset-vs-windsurf.mdx`:
- Line 29: Update the Pricing comparison row string "| **Pricing** | Free tier +
Pro $20/seat/mo | Free tier (limited), Pro $15/mo, Ultra $60/mo, Teams
$35/seat/mo |" to reflect Windsurf's actual tiers: change "Ultra" to
"Enterprise", update Teams price to "$30/user/mo", and replace the Free/Pro
descriptions with "Free (25 credits/mo)" and "Pro $15/mo (500 credits)". Ensure
the final cell reads something like "Free (25 credits/mo), Pro $15/mo (500
credits), Teams $30/user/mo, Enterprise $60/user/mo".
In `@apps/marketing/src/app/compare/page.tsx`:
- Around line 72-100: The page can be blank when getComparisonPages() returns
items with unrecognized types because the current fallback only checks
pages.length === 0; update the fallback to check whether there are no recognized
items by replacing the final condition with (roundups.length + oneVsOne.length)
=== 0 so the "No comparisons yet." message displays when both roundups and
oneVsOne are empty, or alternatively ensure you filter/partition pages into
roundups and oneVsOne (the existing roundups and oneVsOne arrays) and render a
fallback when their combined length is zero; reference the roundups, oneVsOne,
pages arrays and CompareCard mapping to locate where to change the condition.
```yaml
relatedSlugs:
  - working-with-worktrees-in-superset
  - roadmap-to-100-agents
  - git-worktrees-history-deep-dive
```
Same missing relatedSlugs concern as the companion article.
roadmap-to-100-agents and git-worktrees-history-deep-dive are referenced but not included in this PR. Please see the verification script in the companion file review above to confirm whether these posts already exist.
```bash
#!/bin/bash
TASK_NAME="$1"
AGENT_CMD="$2"
BRANCH="agent/$TASK_NAME"
WORKTREE_DIR="../$(basename $(pwd))-$TASK_NAME"

git worktree add "$WORKTREE_DIR" -b "$BRANCH"
cd "$WORKTREE_DIR"
eval "$AGENT_CMD"
```
Example script uses eval — document the risk or use a safer form.
eval "$AGENT_CMD" on line 92 is a classic shell injection vector. Even in illustrative code, readers commonly cargo-cult shell snippets directly into CI scripts or wrapper tools. A reader who feeds untrusted input into AGENT_CMD would get arbitrary command execution. Consider replacing eval with direct command invocation or adding an explicit warning:
📝 Suggested safer alternative

```diff
-eval "$AGENT_CMD"
+# Run the agent command directly (avoid eval for untrusted input)
+$AGENT_CMD
```

Or add a comment if eval is intentionally retained for compound commands:

```diff
+# WARNING: eval executes arbitrary shell code — ensure AGENT_CMD is trusted
 eval "$AGENT_CMD"
```
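As a hedged sketch of the safer form the review suggests (the wrapper and names here are illustrative, not code from the PR): accepting the command and its arguments as separate positional parameters lets a script invoke them via `"$@"`, so the shell never re-parses the input string and metacharacters cannot inject extra commands.

```shell
#!/usr/bin/env bash
# Illustrative only — a wrapper that takes the agent command as separate
# argv entries ("run_agent <task> <cmd> [args...]") instead of one string.
run_agent() {
  local task="$1"; shift
  # "$@" expands each argument verbatim; unlike eval, the shell does not
  # re-parse the input, so semicolons and other metacharacters stay literal.
  "$@"
}

# The semicolon is printed by echo, not executed as a command separator:
run_agent demo echo 'hello; rm -rf /tmp/x'   # → hello; rm -rf /tmp/x
```

This is the same trade-off the review describes: `"$@"` loses the ability to accept compound commands, which is exactly what makes it safe.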
```
~/.superset/worktrees/my-project/
└── add-signup-validation/       # Agent's isolated worktree
    ├── .git                     # Points back to main repo
    └── (full project checkout)
```
Minor technical inaccuracy: .git in a worktree is a file, not a directory.
In a linked worktree, .git is a plain text file (a "gitfile") containing the path back to the main repo's .git/worktrees/<name> entry — it is not a directory. The diagram currently renders it ambiguously alongside (full project checkout), which may mislead readers who inspect their worktrees and find a file where they expect a directory.
📝 Suggested wording fix

```diff
-    ├── .git                     # Points back to main repo
+    ├── .git                     # gitfile — points to main repo's .git/worktrees/
```
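The gitfile claim is easy to verify locally. A minimal sketch (throwaway repo; all paths and names are illustrative), assuming `git` is installed:

```shell
# Create a scratch repo plus a linked worktree, then inspect .git.
tmp=$(mktemp -d)
cd "$tmp"
git init -q main
cd main
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "init"
git worktree add -q ../wt -b demo-branch

cd ../wt
# In a linked worktree, .git is a one-line plain file (a "gitfile"):
[ -f .git ] && echo ".git is a plain file"
cat .git   # gitdir: <tmp>/main/.git/worktrees/wt
```

The `cat` output shows the `gitdir:` pointer back to the main repo's `.git/worktrees/<name>` entry, which is what the suggested wording describes.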
```markdown
| **Claude Code** | Agent (CLI) | Single session | None | Source-available | Max $20-200/mo or API usage |
| **Codex CLI** | Agent (CLI) | Single session (+ cloud Codex) | None | Yes (Apache 2.0) | Free + API usage |
| **GitHub Copilot** | Agent (extension) | Single thread (+ cloud Coding Agent) | VS Code, JetBrains, etc. | No | Free, Pro $10/mo, Business $19/user/mo |
| **Windsurf** | Agent (IDE) | Sequential (Cascade flows) | Windsurf IDE | No | Free, Pro $15/mo, Ultra $60/mo |
```
Windsurf pricing tiers are incorrect — "Ultra" plan doesn't exist.
The comparison table lists Windsurf as "Free, Pro $15/mo, Ultra $60/mo," but the actual plan structure is different. Windsurf has plans: Pro at $15.00 per month, Teams at $30.00 per user per month, Enterprise at $60.00 per user per month. There is no "Ultra" tier — the $60 plan is called Enterprise — and the $30 Teams plan is missing entirely from the table. Incorrect competitor pricing in a published comparison page directly undermines credibility.
📝 Suggested fix

```diff
-| **Windsurf** | Agent (IDE) | Sequential (Cascade flows) | Windsurf IDE | No | Free, Pro $15/mo, Ultra $60/mo |
+| **Windsurf** | Agent (IDE) | Sequential (Cascade flows) | Windsurf IDE | No | Free, Pro $15/mo, Teams $30/user/mo, Enterprise $60/user/mo |
```
```markdown
| **Parallelism** | Core feature — many agents on separate branches simultaneously | Single session; requires separate terminal tabs for parallelism |
| **Isolation** | Automatic Git worktree per task — agents cannot conflict | Shares your working directory — manual worktree setup needed |
| **Pricing** | Free tier + Pro $20/seat/mo | Free tier (limited), Pro $20/mo via Anthropic Max or API usage |
| **Open source** | Yes (Apache 2.0) | Source-available (not open source) |
```
🧩 Analysis chain
🌐 Web query:
Claude Code licensing source-available open source 2025 2026
💡 Result:
- Claude Code (the Anthropic coding client/CLI) is not open source and not “source-available” in the OSI/FOSS sense. Anthropic’s own docs say Claude Code use is governed by Anthropic Consumer Terms (Free/Pro/Max) or Commercial Terms (Team/Enterprise/API), i.e., a commercial/proprietary license rather than an open-source license. [1]
- Anthropic has treated Claude Code’s client code as closed/obfuscated and under restrictive terms. Reporting in 2025 described Claude Code as tied to Anthropic’s commercial license, with obfuscated source, and noted Anthropic issued a DMCA takedown after someone de-obfuscated and reposted the code. [2]
- Claude Code may bundle open-source components, but that doesn’t make Claude Code itself open source. A 2025 GitHub issue specifically raised concerns about open-source attribution obligations in the distributed Claude Code binaries (e.g., Apache-2.0 components). [3]
- Some adjacent Anthropic repos are open source, but they are separate from the proprietary Claude Code client. For example, `anthropics/claude-code-action` is MIT-licensed. [4]
Sources: [1] [2] [3] [4]
Line 32: Update Claude Code's licensing classification from "Source-available" to "Proprietary."
Claude Code's source code is not available—it is obfuscated and distributed under Anthropic's proprietary commercial license terms (Consumer/Commercial Terms), not under any open-source or source-available license. Anthropic has actively enforced this through DMCA takedowns. The current classification misleadingly suggests source code is accessible, which is incorrect. Change to "Proprietary" for accuracy.
```markdown
|---|---|---|
| **Category** | Agent orchestration terminal | AI coding agent (terminal-native) |
| **What it does** | Runs 10+ coding agents in parallel with Git worktree isolation | AI assistant that reads, writes, and executes code via OpenAI models |
| **AI approach** | Agent-agnostic — works with Codex, Claude Code, Aider, OpenCode, etc. | OpenAI models only (o3, o4-mini, GPT-4.1) |
```
🧩 Analysis chain
🌐 Web query:
OpenAI Codex CLI supported models 2025 2026 GPT-4.1
💡 Result:
Codex CLI supported models (2025–2026) + GPT‑4.1
- Codex CLI can use any model available in the OpenAI Responses API (you pick with `codex -m <model>`; example given: `codex -m o3`). Default (per Help Center) "targets GPT‑5." [1]
- Models explicitly called out for Codex CLI in 2025 launch materials:
  - o3 and o4‑mini (brought "into your local workflow" via Codex CLI). [2]
  - `codex-mini-latest` (fine-tuned from o4‑mini), released as the default model in Codex CLI and also available in the API as `codex-mini-latest`. [2][3]
- Codex-optimized GPT‑5 variants in Codex (usable via Codex CLI as selectable models as Codex evolved):
- GPT‑5‑Codex announced as available “everywhere you use Codex,” with developers able to choose it for local tasks via Codex CLI. [4]
- Newer Codex frontier releases continued into 2026 (e.g., GPT‑5.3‑Codex). [5]
Is GPT‑4.1 supported in Codex CLI?
Yes—because Codex CLI supports any model in the Responses API [1], and gpt-4.1 is available on the Responses API. [6]
Sources:
[1] OpenAI Help Center – “OpenAI Codex CLI – Getting Started” (FAQ: which models does Codex use?)
[2] OpenAI – “Introducing Codex” (Updates to Codex CLI: o3/o4‑mini; codex-mini-latest default in CLI)
[3] OpenAI Platform Docs – codex-mini-latest model page
[4] OpenAI – “Introducing upgrades to Codex” (GPT‑5‑Codex; selectable for local tasks via Codex CLI)
[5] OpenAI – “Introducing GPT‑5.3‑Codex” (Feb 5, 2026)
[6] OpenAI Platform Docs – gpt-4.1 model page (endpoints include Responses API)
The model list is incomplete and lacks important context.
The table's listing of o3, o4-mini, GPT-4.1 is technically supported by Codex CLI (GPT-4.1 is available on the Responses API), but the list is incomplete. Codex CLI supports any model in the OpenAI Responses API (selectable via `codex -m <model>`), with `codex-mini-latest` as the default — not o3 or o4-mini. Additionally, the table omits GPT-5-Codex variants, which are selectable for Codex tasks. Consider noting Codex CLI's flexibility to use any Responses API model, and explicitly calling out the actual default, to avoid implying the listed models are the only or primary options.
```markdown
| **AI approach** | Agent-agnostic — orchestrates any CLI agent | Proprietary AI with browser, editor, and terminal in cloud |
| **Parallelism** | 10+ agents across isolated local worktrees | Multiple Devin sessions run as separate cloud VMs |
| **Code privacy** | Code never leaves your machine | Code runs on Cognition's cloud infrastructure |
| **Pricing** | Free tier + Pro $20/seat/mo | Teams $500/seat/mo |
```
Devin pricing is incomplete — omits the $20/month Core plan.
Cognition introduced a major price drop with Devin 2.0: it is available starting at $20 per month (with ACU-based pay-as-you-go pricing), whereas previous versions started at $500 per month. Devin currently offers a $20/month Core plan, a $500/month Team plan, and a custom-priced Enterprise plan.
The "At a Glance" table (Line 30) shows only Teams $500/seat/mo, and the recommendation (Line 77) and FAQ (Line 96) repeat the same figure. This creates a false binary of "$20 (Superset) vs $500 (Devin)" that is contradicted by Devin's current pricing page. A reader who checks will immediately lose trust in the comparison.
Suggested fix: Update the pricing row to reflect Devin's full tier structure (e.g., Core $20/mo (ACU-based) · Teams $500/mo) and adjust the recommendation/FAQ copy accordingly to acknowledge the two entry points.
Also applies to: 77-77, 96-96
```markdown
| **AI approach** | Agent-agnostic — works with Claude Code, Codex, Aider, or any CLI agent | Built-in AI models proxied through Windsurf servers |
| **Parallelism** | Core feature — 10+ agents across isolated worktrees | Sequential by default; Windsurf Flows handles multi-step tasks |
| **Editor** | Works alongside any editor (VS Code, Cursor, JetBrains, Xcode) | You must use the Windsurf IDE |
| **Pricing** | Free tier + Pro $20/seat/mo | Free tier (limited), Pro $15/mo, Ultra $60/mo, Teams $35/seat/mo |
```
Windsurf pricing row has two inaccuracies.
Windsurf pricing is: Free (25 credits/mo), Pro $15/mo (500 credits), Teams $30/user/mo, Enterprise $60/user/mo.
The comparison table (Line 29) shows Ultra $60/mo, Teams $35/seat/mo. Two corrections needed:
- The $60 tier is named Enterprise, not "Ultra" — "Ultra" doesn't appear anywhere in Windsurf's current pricing structure.
- Teams is $30/user/mo, not $35.
🛠️ Proposed fix

```diff
-| **Pricing** | Free tier + Pro $20/seat/mo | Free tier (limited), Pro $15/mo, Ultra $60/mo, Teams $35/seat/mo |
+| **Pricing** | Free tier + Pro $20/seat/mo | Free (25 credits/mo), Pro $15/mo (500 credits), Teams $30/user/mo, Enterprise $60/user/mo |
```
```tsx
{roundups.length > 0 && (
  <section className="mb-12">
    <h2 className="text-xl font-medium text-foreground mb-6">
      Roundups
    </h2>
    <div className="flex flex-col gap-4">
      {roundups.map((page) => (
        <CompareCard key={page.slug} page={page} />
      ))}
    </div>
  </section>
)}

{oneVsOne.length > 0 && (
  <section>
    <h2 className="text-xl font-medium text-foreground mb-6">
      Head-to-Head Comparisons
    </h2>
    <div className="flex flex-col gap-4">
      {oneVsOne.map((page) => (
        <CompareCard key={page.slug} page={page} />
      ))}
    </div>
  </section>
)}

{pages.length === 0 && (
  <p className="text-muted-foreground">No comparisons yet.</p>
)}
```
Silent blank page when all loaded comparisons have unrecognized type values.
The "No comparisons yet." fallback (line 98) only fires when pages.length === 0. If getComparisonPages() returns entries whose type is neither "1v1" nor "roundup" (e.g., a front-matter typo like type: "1vs1"), pages.length > 0 suppresses the message, but neither section renders — leaving the content area completely blank with no user feedback.
🛡️ Proposed fix

```diff
-      {pages.length === 0 && (
+      {pages.length === 0 || (roundups.length === 0 && oneVsOne.length === 0) ? (
         <p className="text-muted-foreground">No comparisons yet.</p>
-      )}
+      ) : null}
```
🧹 Nitpick comments (5)
apps/marketing/src/app/llms-full.txt/route.ts (3)
20-111: Add `export const dynamic = 'force-static'` — same caching concern as `llms.txt/route.ts`.

This route performs the same synchronous filesystem reads (`getBlogPosts`, `getComparisonPages`) on every request without Next.js-level static generation. The full-content nature of this route makes each cold invocation significantly more expensive than `llms.txt`. Adding `export const dynamic = 'force-static'` pre-renders the output at build time.

```diff
 import { COMPANY } from "@superset/shared/constants";
 import { getBlogPosts } from "@/lib/blog";
 import { getComparisonPages } from "@/lib/compare";
 import { FAQ_ITEMS } from "../components/FAQSection/constants";
+
+export const dynamic = "force-static";

 function stripMdxSyntax(content: string): string {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/marketing/src/app/llms-full.txt/route.ts` around lines 20 - 111, The route GET in apps/marketing/src/app/llms-full.txt/route.ts performs synchronous filesystem reads each request and should be pre-rendered; add an exported module-level constant export const dynamic = 'force-static' to the file (next to the GET export) so Next.js will statically generate this route at build time, preventing repeated runtime filesystem reads from getBlogPosts and getComparisonPages.
65-85: Unbounded response size will degrade over time.

Every blog post and comparison page is included at full MDX content. With 9 pages today, the payload is manageable; at 50–100 posts it could exceed LLM context window limits (defeating the purpose of the endpoint) and impose a large generation cost at cache-miss time. Consider:

- Per-entry word/character cap — truncate each post's `stripMdxSyntax(post.content)` to a max length (e.g., the first 2 000 tokens' worth).
- Summary field — add a `summary` frontmatter field and use it here instead of the full content.
- Selective inclusion — only include the N most-recent posts/pages rather than all of them.

The `Cache-Control: max-age=3600` header mitigates server load for CDN hits, but a cold-cache generation with 50+ full posts is still expensive and will eventually produce a response too large for any LLM to consume.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/marketing/src/app/llms-full.txt/route.ts` around lines 65 - 85, The current block appends full MDX content for every post via stripMdxSyntax(post.content), which can grow unbounded; change it to first limit the posts list (e.g., const mostRecent = posts.slice(0, N) where N is configurable) and for each entry use post.summary if present otherwise a truncated version of stripMdxSyntax(post.content) (truncate to a maxChars or maxTokens constant, e.g., ~12k chars or ~2k tokens equivalent) before pushing into sections; update the loop that references posts and stripMdxSyntax to use mostRecent and the summary-or-truncated value so each entry is bounded in size.
12-13: `stripMdxSyntax` closing-tag regex doesn't enforce tag-name matching.

The pattern `/<[A-Z]\w*\b[^>]*>[\s\S]*?<\/[A-Z]\w*>/g` matches the first subsequent uppercase closing tag, not necessarily the one corresponding to the opening tag. For adjacent/nested different-type components — e.g., `<Video /><Table>...</Table>` — the lazy `[\s\S]*?` and unanchored closing-tag class can strip incorrect ranges. For the current set of marketing MDX this is likely cosmetic, but it will silently mangle content as the blog grows.

♻️ Suggested approach — capture and back-reference the tag name

```diff
-    .replace(/<[A-Z]\w*\b[^>]*>[\s\S]*?<\/[A-Z]\w*>/g, "")
+    .replace(/<([A-Z]\w*)\b[^>]*>[\s\S]*?<\/\1>/g, "")
```

Using a back-reference (`\1`) ensures the closing tag always matches the captured opening tag name.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/marketing/src/app/llms-full.txt/route.ts` around lines 12 - 13, The closing-tag regex in stripMdxSyntax's second .replace call can match the wrong closing tag; update that replace to capture the opening tag name (e.g., capture [A-Z]\w* as group 1) and use a back-reference for the closing tag so the closing tag must match the opening tag (keep the lazy content match and existing attributes handling), leaving the first self-closing .replace intact.

apps/marketing/src/app/llms.txt/route.ts (2)
6-52: Add `export const dynamic = 'force-static'` to pre-render this route at build time.

Route Handlers are not cached by default in Next.js 16. You can opt into caching for GET methods using a route config option such as `export const dynamic = 'force-static'`. Without it, this handler runs on every incoming request, performing synchronous filesystem reads each time (`getBlogPosts`, `getComparisonPages`). The `Cache-Control` header only governs browser/CDN caching — it does not affect Next.js's own static generation.

Since this content only changes when MDX files change (i.e., at deploy time), static pre-rendering is the correct model. When Cache Components is enabled in Next.js 16, GET Route Handlers can be prerendered when they don't access dynamic or runtime data.
♻️ Proposed fix
```diff
 import { COMPANY } from "@superset/shared/constants";
 import { getBlogPosts } from "@/lib/blog";
 import { getComparisonPages } from "@/lib/compare";
 import { FAQ_ITEMS } from "../components/FAQSection/constants";
+
+export const dynamic = "force-static";

 export async function GET() {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/marketing/src/app/llms.txt/route.ts` around lines 6 - 52, Add a top-level route config to force static prerendering by exporting export const dynamic = 'force-static' in this module so the GET handler (export async function GET) is prerendered at build time instead of running on every request; place the export alongside the existing GET export, ensure getBlogPosts and getComparisonPages remain synchronous/build-time-safe (no runtime-only APIs), and keep the existing Cache-Control header for CDN/browser caching.
14-17: Hardcoded product description is duplicated in `llms-full.txt/route.ts`.

Lines 15 and 17 are textually identical to `llms-full.txt/route.ts` lines 33–35 — the other file even carries the comment `// Header section (same as llms.txt)`. Extract to a shared constant to avoid drift.

♻️ Suggested extraction
In a shared file (e.g., `apps/marketing/src/lib/llms-content.ts`):

```diff
+import { COMPANY } from "@superset/shared/constants";
+
+export function buildLlmsHeader(docsUrl: string): string[] {
+  return [
+    `# ${COMPANY.NAME}`,
+    "",
+    "> Run 10+ parallel coding agents on your machine",
+    "",
+    `${COMPANY.NAME} is an open-source desktop application that lets developers run multiple AI coding agents in parallel, each in its own isolated Git worktree. It works with any CLI-based agent including Claude Code, OpenCode, and OpenAI Codex. Agents can work on different branches or features simultaneously without conflicts. ${COMPANY.NAME} is free, does not proxy API calls, and supports macOS with Windows and Linux coming soon.`,
+    "",
+    "## Docs",
+    "",
+    `- [Documentation](${docsUrl})`,
+    `- [Getting Started](${docsUrl}/getting-started)`,
+    `- [GitHub](${COMPANY.GITHUB_URL})`,
+  ];
+}
```

Then in both route files:

```diff
-  `# ${COMPANY.NAME}`,
-  "",
-  "> Run 10+ parallel coding agents on your machine",
-  ...
+  ...buildLlmsHeader(docsUrl),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/marketing/src/app/llms.txt/route.ts` around lines 14 - 17, Duplicate hardcoded header text in llms.txt/route.ts and llms-full.txt/route.ts should be extracted into a shared builder to avoid drift: create a shared function (e.g., buildLlmsHeader) that imports COMPANY and returns the header string[] (including the "> Run 10+ parallel coding agents..." line and the long product paragraph), then replace the hardcoded header arrays in both route files to call buildLlmsHeader(docsUrl) and add the import for the shared builder; ensure the returned array matches the original lines and update any local variables (docsUrl) passed into the builder.
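As a sanity check, the suggested extraction runs standalone. The `COMPANY` object below is a stand-in for `@superset/shared/constants`, and the GitHub URL is a placeholder:

```typescript
// Stand-in for the shared constants module; real values come from
// `@superset/shared/constants` (the URL below is a placeholder).
const COMPANY = {
  NAME: "Superset",
  GITHUB_URL: "https://github.com/example/superset",
};

// The shared builder both llms.txt routes would spread into their output,
// so the header text lives in exactly one place.
function buildLlmsHeader(docsUrl: string): string[] {
  return [
    `# ${COMPANY.NAME}`,
    "",
    "> Run 10+ parallel coding agents on your machine",
    "",
    "## Docs",
    "",
    `- [Documentation](${docsUrl})`,
    `- [Getting Started](${docsUrl}/getting-started)`,
    `- [GitHub](${COMPANY.GITHUB_URL})`,
  ];
}

// Each route then starts its response body the same way:
const body = buildLlmsHeader("https://docs.example.com").join("\n");
```

Because both routes spread the same array, any future edit to the product description automatically reaches `llms.txt` and `llms-full.txt` together.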
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@apps/marketing/src/app/llms-full.txt/route.ts`:
- Around line 28-43: Duplicate hardcoded header blocks are present where
sections.push builds the same header using COMPANY.NAME and docsUrl; extract
that repeated string construction into a shared helper function (e.g.,
buildLlmsHeader()) that returns the header string and replace the inline
sections.push(...) with sections.push(buildLlmsHeader(COMPANY, docsUrl)); ensure
the helper reproduces the exact lines (including Docs links) and is
exported/imported so both this route and the llms.txt route use it to avoid
drift.
---
Nitpick comments:
In `@apps/marketing/src/app/llms-full.txt/route.ts`:
- Around line 20-111: The route GET in
apps/marketing/src/app/llms-full.txt/route.ts performs synchronous filesystem
reads each request and should be pre-rendered; add an exported module-level
constant export const dynamic = 'force-static' to the file (next to the GET
export) so Next.js will statically generate this route at build time, preventing
repeated runtime filesystem reads from getBlogPosts and getComparisonPages.
- Around line 65-85: The current block appends full MDX content for every post
via stripMdxSyntax(post.content), which can grow unbounded; change it to first
limit the posts list (e.g., const mostRecent = posts.slice(0, N) where N is
configurable) and for each entry use post.summary if present otherwise a
truncated version of stripMdxSyntax(post.content) (truncate to a maxChars or
maxTokens constant, e.g., ~12k chars or ~2k tokens equivalent) before pushing
into sections; update the loop that references posts and stripMdxSyntax to use
mostRecent and the summary-or-truncated value so each entry is bounded in size.
- Around line 12-13: The closing-tag regex in stripMdxSyntax's second .replace
call can match the wrong closing tag; update that replace to capture the opening
tag name (e.g., capture [A-Z]\w* as group 1) and use a back-reference for the
closing tag so the closing tag must match the opening tag (keep the lazy content
match and existing attributes handling), leaving the first self-closing .replace
intact.
In `@apps/marketing/src/app/llms.txt/route.ts`:
- Around line 6-52: Add a top-level route config to force static prerendering by
exporting export const dynamic = 'force-static' in this module so the GET
handler (export async function GET) is prerendered at build time instead of
running on every request; place the export alongside the existing GET export,
ensure getBlogPosts and getComparisonPages remain synchronous/build-time-safe
(no runtime-only APIs), and keep the existing Cache-Control header for
CDN/browser caching.
- Around line 14-17: Duplicate hardcoded header text in llms.txt/route.ts and
llms-full.txt/route.ts should be extracted into a shared builder to avoid drift:
create a shared function (e.g., buildLlmsHeader) that imports COMPANY and
returns the header string[] (including the "> Run 10+ parallel coding agents..."
line and the long product paragraph), then replace the hardcoded header arrays
in both route files to call buildLlmsHeader(docsUrl) and add the import for the
shared builder; ensure the returned array matches the original lines and update
any local variables (docsUrl) passed into the builder.
Summary
- `/compare` index page listing all comparison pages with roundup and head-to-head sections

Changes
Blog Posts (3 new)
Comparison Pages (6 new)
Infrastructure
- `/compare` index page with roundup and head-to-head sections
- Sitemap: added `/compare` route, bumped comparison page priority from 0.7 to 0.9

GEO Queries Targeted
`cursor alternatives`, `claude code alternatives`, `codex alternatives`, `windsurf alternatives`, `devin alternatives`, `github copilot alternatives`, `best ai coding tools 2026`, `parallel coding agents`, `ai agent orchestration`

Test Plan
- `bun run lint:fix` passes
- `bun run typecheck` passes (all 18 packages)
- `/blog/<slug>`
- `/compare/<slug>`
- `/compare` index page lists all comparison pages

Summary by CodeRabbit
New Features
Documentation
Content Metadata
Chores