
Add blog post: roadmap to 100 parallel coding agents#1173

Merged
saddlepaddle merged 8 commits into main from tick on Feb 3, 2026
Conversation

saddlepaddle (Collaborator) commented Feb 3, 2026

Summary

  • Adds a blog post covering our plan to scale from 5-7 to 100 coding agents by the end of 2026
  • Three key strategies: quality gates before human review, agent-driven UIs, and proactive event-driven agents
  • Includes product screenshot cover image, pipeline diagram SVG, and two minimal SVG diagrams (quality gates, proactive agents)
  • Fixes blog prose link styles (white underlines instead of invisible) and adds missing h1 styling

Test plan

  • Verify blog post renders correctly at /blog/roadmap-to-100-agents
  • Check all three SVG diagrams display properly (pipeline, quality gates, proactive agents)
  • Verify cover image (product screenshot) displays
  • Confirm links are visible with subtle white underlines
  • Confirm h1 headings ("Mapping out the problem", "How we'll improve it") are properly styled
  • Check all external links work (Block paper, BrowserUse, Maestro, Ralph loops, Codex skills, Devin, Circleback)

Summary by CodeRabbit

  • Documentation

    • Published a new blog outlining a roadmap to scale parallel coding agents to 100, with workflows, challenges, diagrams, references, and calls to action.
  • Style

    • Enhanced prose typography and spacing for headings; refined link underlines, color transition, and hover behavior.
  • UI

    • Updated team page header to use an arrow icon and applied minor layout/content formatting tweaks.
  • New Features

    • Added dynamic Open Graph image generation and adjusted default OG image background.


coderabbitai Bot commented Feb 3, 2026

📝 Walkthrough

Walkthrough

Adds a new marketing blog post, introduces a dynamic per-post Open Graph image generator, tweaks the global prose styles, adjusts the site-level OG image background, and applies minor UI/import/lint comment edits to team pages and the sitemap.

Changes

Cohort / File(s) Summary
Blog Content
apps/marketing/content/blog/roadmap-to-100-agents.mdx
Adds a new long-form blog post describing plans to scale from ~5–7 to 100 coding agents, outlining bottlenecks, proposed agent workflows, references, diagrams, and CTAs.
Per-post Open Graph generator
apps/marketing/src/app/blog/[slug]/opengraph-image.tsx
New Next.js ImageResponse generator: fetches Inter font, loads avatar data URIs or initials, retrieves post metadata by slug, and renders a composed OG image (title, date, author block, logo/fallback). Exports alt, size, contentType, and default Image async function.
Site Open Graph
apps/marketing/src/app/opengraph-image.tsx
Changed OG image background from a gradient to a solid color #0a0a0a; layout/content otherwise unchanged.
Global CSS
apps/marketing/src/app/globals.css
Added .prose h1 typography rules and updated .prose a link decoration (underline, text-decoration-color, underline offset, hover transition).
Team pages (UI & lint comments)
apps/marketing/src/app/team/page.tsx, apps/marketing/src/app/team/[id]/page.tsx
Replaced COMPANY import with ArrowRight icon in team listing header, small JSX formatting changes, and added biome-ignore/lint comment near dangerouslySetInnerHTML; no behavioral changes.
Sitemap / Import ordering
apps/marketing/src/app/sitemap.ts
Reordered imports and reformatted the sitemap return to a multi-line array literal; no semantic change to sitemap content.

Sequence Diagram(s)

sequenceDiagram
  participant Browser
  participant NextRoute as Next.js Route
  participant CMS as Content Loader
  participant FontHost as Font Remote
  participant AssetLoader as Local Asset Loader
  participant ImageBuilder as ImageResponse

  Browser->>NextRoute: GET /blog/{slug}/opengraph-image
  NextRoute->>CMS: fetchPostBySlug(slug)
  NextRoute->>FontHost: fetch Inter font (binary)
  NextRoute->>AssetLoader: load local avatar asset -> dataURI
  Note right of NextRoute: resolve author avatar or initials
  NextRoute->>ImageBuilder: compose image with title, date, author, logo, embedded font
  ImageBuilder-->>NextRoute: ImageResponse (image/png)
  NextRoute-->>Browser: 200 ImageResponse
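The avatar-loading step in the diagram (local asset resolved to a data URI) reduces to a small read-and-encode helper. A minimal standalone sketch follows, with the kind of prefixed error logging the reviewers ask for later in this thread; the `baseDir` parameter is an assumption added here for testability, whereas the PR's actual helper resolves against `process.cwd()/public`.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Read a file under `baseDir` and return it as a data: URI, or null
// when the asset cannot be read. Failures are logged with context
// instead of being swallowed silently.
function readFileAsDataUri(
	baseDir: string,
	filePath: string,
	mime: string,
): string | null {
	try {
		const buffer = fs.readFileSync(path.join(baseDir, filePath));
		return `data:${mime};base64,${buffer.toString("base64")}`;
	} catch (error) {
		// Prefixed "[domain/operation] message" logging convention.
		console.error("[og-image/read-file] failed to read asset", {
			filePath,
			mime,
			error,
		});
		return null;
	}
}
```

In the actual route, the returned data URI is embedded directly in the `ImageResponse` markup, so the composed OG image needs no extra network fetch for the avatar at render time.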

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I hopped through fonts and pixels bright,
I stitched a title, date, and light,
New prose, new image, CSS in tune,
I nibble bugs beneath the moon,
Let agents dance and launch by night. ✨

🚥 Pre-merge checks: 2 passed, 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2)
  • Title check: ✅ Passed. The title directly and clearly summarizes the main change: adding a new blog post about scaling to 100 parallel coding agents.
  • Description check: ✅ Passed. The PR description covers the key changes (blog post, styling fixes) and includes a test plan, but omits required template sections like Related Issues and Type of Change.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Blog post covering our plan to scale from 5-7 to 100 coding agents by
end of 2026, with three key strategies: quality gates before human
review, agent-driven UIs for faster review, and proactive event-driven
agents. Includes product screenshot cover, pipeline diagram, and two
minimal SVG diagrams. Also fixes blog prose styles for link visibility
(white underlines) and h1 sizing.
@github-actions
Copy link
Copy Markdown
Contributor

github-actions Bot commented Feb 3, 2026

🚀 Preview Deployment

🔗 Preview Links

  • Neon Database (Neon): View Branch
  • Fly.io Electric (Fly.io): View App
  • Vercel API (Vercel): Open Preview
  • Vercel Web (Vercel): Open Preview
  • Vercel Marketing (Vercel): Open Preview
  • Vercel Admin (Vercel): Open Preview
  • Vercel Docs (Vercel): Open Preview

Preview updates automatically with new commits


coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `apps/marketing/content/blog/roadmap-to-100-agents.mdx`:
- Around line 11-19: Tighten two grammar points in the intro: replace the
sentence "Right now at Superset, we're able to reliably manage 5-7 coding agents
in parallel - whether that's Claude Code, Codex, etc. - at a time." with a
cleaner version like "Right now at Superset, we can reliably manage 5–7 coding
agents in parallel — whether Claude Code, Codex, or similar." and change "Our
goal is to be able to manage 100 coding agents in parallel each by the end of
2026." to "Our goal is to manage 100 coding agents in parallel by the end of
2026." to remove redundancy and tighten phrasing.

Comment on lines +11 to +19
Right now at Superset, we're able to reliably manage 5-7 coding agents in parallel - whether that's Claude Code, Codex,
etc. - at a time. Our goal is to be able to manage 100 coding agents in parallel each by the end of 2026.

Most people believe that the path from seven to 100 agents is better models, faster inference, and smarter agents. It's
not. Agent compute is already cheap enough, you can run hundreds of agents a month all for less than the cost of one
engineer.

What's stopping us is every agent needs a human to review its code, give feedback, and decide what to work on next.
Scale the agents all you want - it's the humans that don't scale.

⚠️ Potential issue | 🟡 Minor

Tighten two grammar points in the intro.

✏️ Suggested wording tweaks
-Right now at Superset, we're able to reliably manage 5-7 coding agents in parallel - whether that's Claude Code, Codex,
-etc. - at a time. Our goal is to be able to manage 100 coding agents in parallel each by the end of 2026.
+Right now at Superset, we're able to reliably manage 5-7 coding agents in parallel - whether that's Claude Code, Codex,
+etc. - at a time. Our goal is to be able to manage 100 coding agents in parallel by the end of 2026.
-Scale the agents all you want - it's the humans that don't scale.
+Scale the agents all you want - it's the humans who don't scale.
🧰 Tools
🪛 LanguageTool

[style] ~18-~18: Try using a synonym here to strengthen your writing.
Context: ...agent needs a human to review its code, give feedback, and decide what to work on ne...

(GIVE_PROVIDE)


[style] ~19-~19: Consider using “who” when you are referring to people instead of objects.
Context: ...e agents all you want - it's the humans that don't scale. # Mapping out the problem...

(THAT_WHO)



coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `apps/marketing/content/blog/roadmap-to-100-agents.mdx`:
- Line 150: The closing sentence "If you're running agents at scale and hitting
these walls, we'd love to compare notes, reach out to us at
founders@superset.sh" is missing terminal punctuation and should make the email
a clickable mailto link; update that sentence to end with a period and replace
the plain email with a mailto link (e.g.,
[founders@superset.sh](mailto:founders@superset.sh)) so it becomes properly
punctuated and the email is clickable in the MDX.

But the throughput framing gives us a clear test for every feature we build: does this reduce the time a human spends
per agent interaction?

If you're running agents at scale and hitting these walls, we'd love to compare notes, reach out to us at founders@superset.sh

⚠️ Potential issue | 🟡 Minor

Missing period and consider mailto link for email.

The closing sentence is missing terminal punctuation and the email could be a clickable link for better UX.

✏️ Suggested fix
-If you're running agents at scale and hitting these walls, we'd love to compare notes, reach out to us at founders@superset.sh
+If you're running agents at scale and hitting these walls, we'd love to compare notes. Reach out to us at [founders@superset.sh](mailto:founders@superset.sh).

Generate per-post OG images with title, date, author avatar (grayscale),
and Superset SVG logo. Also remove gradient from root OG image.
@saddlepaddle saddlepaddle merged commit 194af2a into main Feb 3, 2026
10 of 12 checks passed

coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `apps/marketing/src/app/blog/[slug]/opengraph-image.tsx`:
- Around line 11-24: The readFileAsDataUri function currently swallows all
errors and returns null; update it to catch the exception, log it using the
required prefixed console pattern (e.g. "[opengraph-image/readFile] ...") and
include the filePath and mime in the message plus the caught error details, then
continue to return null (or rethrow if appropriate); locate the
readFileAsDataUri function and replace the empty catch block with a
console.error call that follows the "[domain/operation] message" convention and
includes the error object and context.
- Around line 1-24: Add an explicit Node runtime export and surface file-read
errors instead of swallowing them: add export const runtime = 'nodejs' near the
top of the file, and modify readFileAsDataUri to catch the thrown error as a
variable and log a contextual message (e.g., using console.error) including
filePath and the error before returning null so failures are visible while
preserving the graceful fallback.

Comment on lines +1 to +24
import fs from "node:fs";
import path from "node:path";
import { ImageResponse } from "next/og";
import { getBlogPost } from "@/lib/blog";
import { formatBlogDate } from "@/lib/blog-utils";

export const alt = "Superset Blog";
export const size = { width: 1200, height: 630 };
export const contentType = "image/png";

function readFileAsDataUri({
filePath,
mime,
}: {
filePath: string;
mime: string;
}): string | null {
try {
const absolutePath = path.join(process.cwd(), "public", filePath);
const buffer = fs.readFileSync(absolutePath);
return `data:${mime};base64,${buffer.toString("base64")}`;
} catch {
return null;
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

Next.js 16 opengraph-image route default runtime Edge or Node.js

💡 Result:

In Next.js 16, an opengraph-image.(ts|tsx|js) metadata route defaults to the Node.js runtime (runtime: 'nodejs') unless you explicitly set otherwise. [1]

To run it on the Edge runtime, add:

export const runtime = 'edge'

If you don’t set a runtime on a segment/route, Next.js uses nodejs by default. [2]

Sources: [1] [2]


🌐 Web query:

Next.js ImageResponse node:fs file system access runtime configuration

💡 Result:

To use node:fs (or node:fs/promises) while generating an ImageResponse (OG images via next/og), the code must run in the Node.js runtime, not the Edge runtime—because the Edge runtime does not provide filesystem APIs. [1], [2]

App Router (recommended): force Node.js runtime for the OG image file

In app/opengraph-image.tsx (or app/**/opengraph-image.tsx, app/icon.tsx, app/twitter-image.tsx, etc.):

import { ImageResponse } from "next/og";
import { readFile } from "node:fs/promises";
import { join } from "node:path";

export const runtime = "nodejs";

export default async function Image() {
  const font = await readFile(join(process.cwd(), "assets/MyFont.ttf"));
  return new ImageResponse(<div>Hello</div>, {
    fonts: [{ name: "MyFont", data: font, style: "normal" }],
  });
}

If you still see “node:fs … not supported in the Edge Runtime”

There’s a known Next.js issue where an opengraph-image file set to runtime = "nodejs" can still be analyzed/loaded as if it were in Edge when the corresponding page/segment is Edge, triggering errors like node:fs/promises not supported. [3]

Workarounds:

  • Make the page/segment that owns the metadata also use Node.js (so runtimes don’t mix for that segment). [3]
  • Or avoid filesystem access by fetching the asset (e.g., from public/ or a remote URL) instead of fs, which works in Edge too. [1], [2]

What not to use

This is not next.config.js runtimeConfig (serverRuntimeConfig/publicRuntimeConfig)—that setting is unrelated to choosing Edge vs Node runtimes. [4]

Sources:
[1] Next.js runtimes (Edge vs Node.js) docs: Edge runtime caveats / limited Node API support.
[2] Vercel Edge Runtime limitations (explicitly: no filesystem access).
[3] Next.js issue #77796 describing opengraph-image + runtime="nodejs" + page in Edge causing node:fs errors.
[4] Next.js next.config.js runtime configuration (serverRuntimeConfig/publicRuntimeConfig) docs.


Declare Node.js runtime explicitly and log errors instead of silently returning null.

OG image routes default to Node.js runtime in Next.js 16, so node:fs access works by default. However, add export const runtime = 'nodejs' explicitly to prevent issues if the parent page/segment uses Edge runtime (a known incompatibility in Next.js).

Additionally, the readFileAsDataUri function silently swallows errors by returning null. Per coding guidelines, log errors with context instead of silently catching:

🔧 Suggested changes
+export const runtime = 'nodejs';
+
 function readFileAsDataUri({
	filePath,
	mime,
}: {
	filePath: string;
	mime: string;
}): string | null {
	try {
		const absolutePath = path.join(process.cwd(), "public", filePath);
		const buffer = fs.readFileSync(absolutePath);
		return `data:${mime};base64,${buffer.toString("base64")}`;
	} catch (error) {
+		console.error(`[opengraph-image] Failed to read file at ${filePath}:`, error);
-		return null;
+		throw error;
	}
}

Or, if graceful fallback is necessary, still log the error before returning null.


Comment on lines +11 to +24
function readFileAsDataUri({
filePath,
mime,
}: {
filePath: string;
mime: string;
}): string | null {
try {
const absolutePath = path.join(process.cwd(), "public", filePath);
const buffer = fs.readFileSync(absolutePath);
return `data:${mime};base64,${buffer.toString("base64")}`;
} catch {
return null;
}

⚠️ Potential issue | 🟠 Major

Log file read failures instead of silently returning null.

Right now errors are swallowed, which makes missing assets hard to diagnose. Log the exception with context using the required prefix.

🛠️ Proposed fix
 function readFileAsDataUri({
 	filePath,
 	mime,
 }: {
 	filePath: string;
 	mime: string;
 }): string | null {
 	try {
 		const absolutePath = path.join(process.cwd(), "public", filePath);
 		const buffer = fs.readFileSync(absolutePath);
 		return `data:${mime};base64,${buffer.toString("base64")}`;
-	} catch {
+	} catch (error) {
+		console.error("[marketing/og-image] readFileAsDataUri failed", {
+			filePath,
+			mime,
+			error,
+		});
 		return null;
 	}
 }
As per coding guidelines, Never swallow errors silently; at minimum log them with context; Use prefixed console logging with pattern `[domain/operation] message` for all logging.
