diff --git a/README.md b/README.md
index 104343d7..2dcc8202 100644
--- a/README.md
+++ b/README.md
@@ -8,19 +8,13 @@ npm version CI Release
- GitHub Pages
- Figma Plugin
- MCP Registry
- GitHub Action

-

The design linter that scores how easily your Figma design can be implemented by AI or developers — before a single line of code is written.

+

Predicts where your Figma design will break when AI implements it — scored against real code conversion difficulty.

-

No AI tokens consumed per analysis. Rules run deterministically — AI only validated the scores during development.

-
-

Share your Figma design to help improve scoring accuracy.

-
-

Try it in your browser — no install needed.

+

+ Try it in your browser — no install needed.
+

 CanICode Report
@@ -28,70 +22,74 @@
 ---
-## How It Works
-
-32 rules. 6 categories. Every node in the Figma tree.
+## Why CanICode
-| Category | Rules | What it checks |
-|----------|-------|----------------|
-| Layout | 9 | Auto-layout usage, responsive behavior |
-| Design Token | 7 | Color/font/shadow tokenization, spacing consistency |
-| Component | 3 | Component reuse, detached instances |
-| Naming | 5 | Semantic names, default names, naming conventions |
-| AI Readability | 5 | Structure clarity, z-index reliance, empty frames |
-| Handoff Risk | 3 | Hardcoded values, truncation handling, interaction coverage |
+AI code generators (Claude, Cursor, GPT) can turn a Figma design into working code — but they fail predictably on certain patterns: missing Auto Layout, raw color values, unnamed layers, ambiguous nesting.
-Each issue is classified: **Blocking** > **Risk** > **Missing Info** > **Suggestion**.
+CanICode finds these patterns **before** you generate code, so you can fix the design instead of debugging the output.
-### Rule Scores Validated by AI
+- **29 rules** across 5 dimensions: Structure, Token, Component, Naming, Behavior
+- **Deterministic** — no AI tokens consumed per analysis, runs in milliseconds
+- **Calibrated** — scores validated by converting real designs to code and measuring pixel-level accuracy
-Rule scores aren't guesswork. They're validated through a 4-agent debate pipeline that converts real Figma nodes to code and measures actual implementation difficulty.
+### Scores You Can Trust
-1. **Runner** analyzes the design and flags issues
-2. **Converter** converts the flagged nodes to actual code
-3. **Critic** challenges whether the scores match the real difficulty
-4. **Arbitrator** makes the final call — adjust or keep
+Rule scores aren't guesswork. A 6-agent calibration pipeline converts real Figma designs to HTML, measures pixel-level similarity (via `visual-compare`), and adjusts scores based on actual implementation difficulty.
-- A node that's hard to implement → rule score goes up
-- A node that's easy to implement despite the flag → rule score goes down
+- Design that's hard to implement accurately → rule score goes **up**
+- Design that's easy despite the flag → rule score goes **down**
-The rules themselves run deterministically on every analysis — no tokens consumed. The AI debate validates scores when new fixtures are added, not on every run. See [`docs/CALIBRATION.md`](docs/CALIBRATION.md).
+The pipeline runs on community fixtures, not on every analysis. See [`docs/CALIBRATION.md`](docs/CALIBRATION.md).
 ---
 ## Getting Started
-| If you want to... | Use |
-|---|---|
-| Just try it | **[Web App](https://let-sunny.github.io/canicode/)** — paste a URL, no install |
-| Analyze inside Figma | **[Figma Plugin](https://www.figma.com/community/plugin/1617144221046795292/canicode)** (under review) |
-| Use with Claude Code / Cursor | **MCP Server** or **Skill** — see below |
-| Generate code from design | **`canicode implement`** — analysis + design tree + assets + prompt |
-| Add to CI/CD | **[GitHub Action](https://github.com/marketplace/actions/canicode-action)** |
-| Full control | **CLI** |
+**Quickest way:** **[Open the web app](https://let-sunny.github.io/canicode/)** — paste a Figma URL, get a report.
-

-CLI vs MCP (feature comparison)
+**For your workflow:**
-Same detail as in [`CLAUDE.md`](CLAUDE.md); summarized here for quick reference.
+```bash
+# CLI — one command
+npx canicode analyze "https://www.figma.com/design/ABC123/MyDesign?node-id=1-234"
-| Feature | CLI (REST API) | MCP (Figma MCP) |
-|---------|:-:|:-:|
-| Node structure | ✅ Full tree | ✅ XML metadata |
-| Style values | ✅ Raw Figma JSON | ✅ React+Tailwind code |
-| Component metadata (name, desc) | ✅ | ❌ |
-| Component master trees | ✅ `componentDefinitions` | ❌ |
-| Annotations (dev mode) | ❌ private beta | ✅ `data-annotations` |
-| Screenshots | ✅ via API | ✅ `get_screenshot` |
-| FIGMA_TOKEN required | ✅ | ❌ |
+# MCP Server — works with Claude Code, Cursor, Claude Desktop
+claude mcp add canicode -- npx -y -p canicode canicode-mcp
+```
-**When to use which:**
-- Accurate component analysis (style overrides, missing-component rules) → **CLI with FIGMA_TOKEN**
-- Quick structure/style checks or annotation-aware flows → **MCP**
-- Offline/CI → **CLI with saved fixtures** (`save-fixture`)
+
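For reference, the Figma URL handed to `analyze` carries two identifiers: the file key and the target node. A minimal sketch of how those parts break down (plain POSIX shell, illustrative only — the variable names are ours, not part of the CanICode CLI):

```shell
# Anatomy of a Figma design URL, as passed to `canicode analyze`.
# Nothing below is CanICode API — it only illustrates the URL structure.
url="https://www.figma.com/design/ABC123/MyDesign?node-id=1-234"

path="${url#https://www.figma.com/design/}"  # strip scheme, host, and /design/
file_key="${path%%/*}"                       # first path segment: the file key
node_id="${url##*node-id=}"                  # query parameter: the target node

echo "file key: $file_key, node id: $node_id"
```

The same URL shape is accepted by both the CLI and the MCP server examples in this README.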
+All channels
+
+| Channel | Best for |
+|---------|----------|
+| **[Web App](https://let-sunny.github.io/canicode/)** | Quick check, no install |
+| **[Figma Plugin](https://www.figma.com/community/plugin/1617144221046795292/canicode)** | Analyze inside Figma (under review) |
+| **MCP Server** | Claude Code / Cursor / Claude Desktop integration |
+| **Claude Code Skill** | Lightweight, no MCP install |
+| **CLI** | Full control, CI/CD, offline analysis |
+| **`canicode implement`** | Generate code-ready package (analysis + assets + prompt) |
+| **[GitHub Action](https://github.com/marketplace/actions/canicode-action)** | PR gate with score threshold |
+---
+
+## What It Checks
+
+| Category | Rules | What it measures |
+|----------|:-----:|------------------|
+| **Structure** | 9 | Can AI read the layout? (Auto Layout, nesting, positioning, responsive) |
+| **Token** | 7 | Can AI reproduce exact values? (colors, fonts, shadows, spacing) |
+| **Component** | 4 | Is the design efficient for AI context? (reuse, variants, descriptions) |
+| **Naming** | 5 | Can AI infer meaning? (semantic names, conventions) |
+| **Behavior** | 4 | Can AI know what happens? (overflow, truncation, wrap, interactions) |
+
+Each issue is classified: **Blocking** > **Risk** > **Missing Info** > **Suggestion**.
+
+---
+
+## Installation
+
 CLI
@@ -114,31 +112,6 @@ Hitting 429 errors? Make sure the file is in a paid workspace. Or use MCP (no to
-
-Design to Code (prepare implementation package)
-
-```bash
-canicode implement ./fixtures/my-design
-canicode implement "https://www.figma.com/design/ABC/File?node-id=1-234" --prompt ./my-react-prompt.md --image-scale 3
-```
-
-Outputs a ready-to-use package for AI code generation:
-- `analysis.json` — issues + scores
-- `design-tree.txt` — DOM-like tree with CSS styles + token estimate
-- `images/` — PNG assets with human-readable names (`hero-banner@2x.png`)
-- `vectors/` — SVG assets
-- `PROMPT.md` — code generation prompt (default: HTML+CSS, or your custom prompt)
-
-| Option | Default | Description |
-|--------|---------|-------------|
-| `--prompt` | built-in HTML+CSS | Path to your custom prompt file for any stack |
-| `--image-scale` | `2` | Image export scale: `2` for PC, `3` for mobile |
-| `--output` | `./canicode-implement/` | Output directory |
-
-Feed `design-tree.txt` + `PROMPT.md` to your AI assistant (Claude, Cursor, etc.) to generate code.
-
-
-
 MCP Server (Claude Code / Cursor / Claude Desktop)
@@ -170,6 +143,51 @@ MCP and CLI use separate rate limit pools — switching to MCP won't affect your
+
+CLI vs MCP (feature comparison)
+
+| Feature | CLI (REST API) | MCP (Figma MCP) |
+|---------|:-:|:-:|
+| Node structure | Full tree | XML metadata |
+| Style values | Raw Figma JSON | React+Tailwind code |
+| Component metadata (name, desc) | Yes | No |
+| Component master trees | Yes | No |
+| Annotations (dev mode) | No (private beta) | Yes |
+| Screenshots | Yes | Yes |
+| FIGMA_TOKEN required | Yes | No |
+
+**When to use which:**
+- Accurate component analysis → **CLI with FIGMA_TOKEN**
+- Quick checks or annotation-aware flows → **MCP**
+- Offline/CI → **CLI with saved fixtures** (`save-fixture`)
+
+
+
+
+Design to Code (prepare implementation package)
+
+```bash
+canicode implement ./fixtures/my-design
+canicode implement "https://www.figma.com/design/ABC/File?node-id=1-234" --prompt ./my-react-prompt.md --image-scale 3
+```
+
+Outputs a ready-to-use package for AI code generation:
+- `analysis.json` — issues + scores
+- `design-tree.txt` — DOM-like tree with CSS styles + token estimate
+- `images/` — PNG assets with human-readable names (`hero-banner@2x.png`)
+- `vectors/` — SVG assets
+- `PROMPT.md` — code generation prompt (default: HTML+CSS, or your custom prompt)
+
+| Option | Default | Description |
+|--------|---------|-------------|
+| `--prompt` | built-in HTML+CSS | Path to your custom prompt file for any stack |
+| `--image-scale` | `2` | Image export scale: `2` for PC, `3` for mobile |
+| `--output` | `./canicode-implement/` | Output directory |
+
+Feed `design-tree.txt` + `PROMPT.md` to your AI assistant (Claude, Cursor, etc.) to generate code.
+
+
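The last step above — feeding `design-tree.txt` plus `PROMPT.md` to an assistant — often just means concatenating the two files into one paste-able context. A sketch of that hand-off (the file contents below are stand-ins created purely for illustration; in practice `canicode implement` writes the real ones into `./canicode-implement/`):

```shell
# Combine the generated prompt and design tree into a single context file.
# The two files are created here as stand-ins so the snippet is self-contained;
# normally they come from `canicode implement`.
mkdir -p canicode-implement
printf '# Prompt\nGenerate HTML+CSS for the tree below.\n' > canicode-implement/PROMPT.md
printf 'frame hero-banner (flex-col, 24px gap)\n' > canicode-implement/design-tree.txt

cat canicode-implement/PROMPT.md canicode-implement/design-tree.txt > context.md
# context.md now holds prompt + tree, ready to paste into any assistant
```

How you deliver `context.md` depends on your tool — paste it into a chat, or pipe it to a CLI assistant that accepts stdin.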
+
 Claude Code Skill (lightweight, no MCP install)
@@ -227,17 +245,9 @@ pnpm lint # type check
 For architecture details, see [`CLAUDE.md`](CLAUDE.md). For calibration pipeline, see [`docs/CALIBRATION.md`](docs/CALIBRATION.md).
-## Roadmap
-
-- [x] **Phase 1** — 32 rules, density-based scoring, HTML reports, presets, scoped analysis
-- [x] **Phase 2** — 4-agent calibration pipeline, `/calibrate-loop` debate loop
-- [x] **Phase 3** — Config overrides, MCP server, Claude Skills
-- [x] **Phase 4** — Figma comment from report (per-issue "Comment" button in HTML report, posts to Figma node via API)
-- [x] **Phase 5** — Custom rules with pattern matching (node name/type/attribute conditions)
-- [x] **Phase 6** — Screenshot comparison (`visual-compare` CLI: Figma vs AI-generated code, pixel-level diff)
-- [x] **Phase 7** — Calibration pipeline upgrade (visual-compare + Gap Analyzer for objective score validation)
-- [x] **Phase 8** — Rule discovery pipeline (6-agent debate: researcher → designer → implementer → A/B visual validation → evaluator → critic)
-- [ ] **Ongoing** — Rule refinement via calibration + gap analysis on community fixtures
+## Contributing
+
+**[Share your Figma design](https://github.com/let-sunny/canicode/discussions/new?category=share-your-figma)** to help calibrate scores against real-world designs.
 ## Support