<a href="https://www.npmjs.com/package/canicode"><img src="https://img.shields.io/npm/v/canicode.svg" alt="npm version"></a>
<a href="https://github.com/let-sunny/canicode/actions/workflows/ci.yml"><img src="https://github.com/let-sunny/canicode/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
<a href="https://github.com/let-sunny/canicode/actions/workflows/release.yml"><img src="https://github.com/let-sunny/canicode/actions/workflows/release.yml/badge.svg" alt="Release"></a>
<a href="https://let-sunny.github.io/canicode/"><img src="https://img.shields.io/badge/Try_it-GitHub_Pages-blue" alt="GitHub Pages"></a>
<a href="https://www.figma.com/community/plugin/1617144221046795292/canicode"><img src="https://img.shields.io/badge/Figma_Plugin-under_review-orange" alt="Figma Plugin"></a>
<a href="https://github.com/let-sunny/canicode#mcp-server-claude-code--cursor--claude-desktop"><img src="https://img.shields.io/badge/MCP_Registry-published-green" alt="MCP Registry"></a>
<a href="https://github.com/marketplace/actions/canicode-action"><img src="https://img.shields.io/badge/GitHub_Action-Marketplace-2088FF" alt="GitHub Action"></a>
</p>

<p align="center"><strong>Predicts where your Figma design will break when AI implements it, scored against real code conversion difficulty.</strong></p>

<p align="center">No AI tokens consumed per analysis. Rules run deterministically — AI only validated the scores during development.</p>

<p align="center"><strong><a href="https://github.com/let-sunny/canicode/discussions/new?category=share-your-figma">Share your Figma design</a></strong> to help improve scoring accuracy.</p>

<p align="center">
<strong><a href="https://let-sunny.github.io/canicode/">Try it in your browser</a></strong> — no install needed.
</p>

<p align="center">
<img src="docs/images/screenshot.gif" alt="CanICode Report" width="720">
</p>

---

## Why CanICode

AI code generators (Claude, Cursor, GPT) can turn a Figma design into working code — but they fail predictably on certain patterns: missing Auto Layout, raw color values, unnamed layers, ambiguous nesting.

CanICode finds these patterns **before** you generate code, so you can fix the design instead of debugging the output.

- **29 rules** across 5 dimensions: Structure, Token, Component, Naming, Behavior
- **Deterministic** — no AI tokens consumed per analysis, runs in milliseconds
- **Calibrated** — scores validated by converting real designs to code and measuring pixel-level accuracy

### Scores You Can Trust

Rule scores aren't guesswork. A 6-agent calibration pipeline converts real Figma designs to HTML, measures pixel-level similarity (via `visual-compare`), and adjusts scores based on actual implementation difficulty.

- Design that's hard to implement accurately → rule score goes **up**
- Design that's easy despite the flag → rule score goes **down**

The pipeline runs on community fixtures, not on every analysis. See [`docs/CALIBRATION.md`](docs/CALIBRATION.md).
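Conceptually, the calibration step can be sketched like this. This is a hypothetical illustration, not the project's actual API: the names (`calibrate`, `CalibrationSample`), the weight range, and the learning rate are all made up for the example.

```typescript
// Hypothetical sketch: a rule's weight drifts toward the measured
// implementation difficulty of the designs it flags.
interface CalibrationSample {
  ruleId: string;
  pixelSimilarity: number; // 0..1 from a visual diff of Figma vs generated HTML
}

function calibrate(
  weights: Map<string, number>,
  samples: CalibrationSample[],
  learningRate = 0.2
): Map<string, number> {
  const updated = new Map(weights);
  for (const s of samples) {
    const current = updated.get(s.ruleId) ?? 1;
    // Low similarity = hard to implement accurately = raise the rule's score.
    const difficulty = 1 - s.pixelSimilarity;
    const target = 1 + difficulty; // map difficulty 0..1 onto weight 1..2
    updated.set(s.ruleId, current + learningRate * (target - current));
  }
  return updated;
}
```

The point is the direction of the update, not the numbers: a flagged design that converts poorly pulls the rule's weight up, one that converts cleanly pulls it down.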

---

## Getting Started

**Quickest way:** **[Open the web app](https://let-sunny.github.io/canicode/)** — paste a Figma URL, get a report.

**For your workflow:**

```bash
# CLI — one command
npx canicode analyze "https://www.figma.com/design/ABC123/MyDesign?node-id=1-234"

# MCP Server — works with Claude Code, Cursor, Claude Desktop
claude mcp add canicode -- npx -y -p canicode canicode-mcp
```

<details>
<summary><strong>All channels</strong></summary>

| Channel | Best for |
|---------|----------|
| **[Web App](https://let-sunny.github.io/canicode/)** | Quick check, no install |
| **[Figma Plugin](https://www.figma.com/community/plugin/1617144221046795292/canicode)** | Analyze inside Figma (under review) |
| **MCP Server** | Claude Code / Cursor / Claude Desktop integration |
| **Claude Code Skill** | Lightweight, no MCP install |
| **CLI** | Full control, CI/CD, offline analysis |
| **`canicode implement`** | Generate code-ready package (analysis + assets + prompt) |
| **[GitHub Action](https://github.com/marketplace/actions/canicode-action)** | PR gate with score threshold |

</details>

---

## What It Checks

| Category | Rules | What it measures |
|----------|:-----:|------------------|
| **Structure** | 9 | Can AI read the layout? (Auto Layout, nesting, positioning, responsive) |
| **Token** | 7 | Can AI reproduce exact values? (colors, fonts, shadows, spacing) |
| **Component** | 4 | Is the design efficient for AI context? (reuse, variants, descriptions) |
| **Naming** | 5 | Can AI infer meaning? (semantic names, conventions) |
| **Behavior** | 4 | Can AI know what happens? (overflow, truncation, wrap, interactions) |

Each issue is classified: **Blocking** > **Risk** > **Missing Info** > **Suggestion**.
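That ordering can be sketched as a simple sort. The types here (`Severity`, `Issue`, `sortBySeverity`) are hypothetical, for illustration only; the real report format may differ.

```typescript
// Hypothetical sketch of severity-ordered report output.
type Severity = "blocking" | "risk" | "missing-info" | "suggestion";

const SEVERITY_RANK: Record<Severity, number> = {
  blocking: 0,
  risk: 1,
  "missing-info": 2,
  suggestion: 3,
};

interface Issue {
  ruleId: string;
  severity: Severity;
}

// Blocking issues first, suggestions last.
function sortBySeverity(issues: Issue[]): Issue[] {
  return [...issues].sort(
    (a, b) => SEVERITY_RANK[a.severity] - SEVERITY_RANK[b.severity]
  );
}
```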

---

## Installation

<details>
<summary><strong>CLI</strong></summary>

Hitting 429 errors? Make sure the file is in a paid workspace. Or use MCP (no token required).

</details>


<details>
<summary><strong>MCP Server</strong> (Claude Code / Cursor / Claude Desktop)</summary>

MCP and CLI use separate rate limit pools — switching to MCP won't affect your CLI rate limits.

</details>

<details>
<summary><strong>CLI vs MCP</strong> (feature comparison)</summary>

| Feature | CLI (REST API) | MCP (Figma MCP) |
|---------|:-:|:-:|
| Node structure | Full tree | XML metadata |
| Style values | Raw Figma JSON | React+Tailwind code |
| Component metadata (name, desc) | Yes | No |
| Component master trees | Yes | No |
| Annotations (dev mode) | No (private beta) | Yes |
| Screenshots | Yes | Yes |
| FIGMA_TOKEN required | Yes | No |

**When to use which:**
- Accurate component analysis → **CLI with FIGMA_TOKEN**
- Quick checks or annotation-aware flows → **MCP**
- Offline/CI → **CLI with saved fixtures** (`save-fixture`)

</details>

<details>
<summary><strong>Design to Code</strong> (prepare implementation package)</summary>

```bash
canicode implement ./fixtures/my-design
canicode implement "https://www.figma.com/design/ABC/File?node-id=1-234" --prompt ./my-react-prompt.md --image-scale 3
```

Outputs a ready-to-use package for AI code generation:
- `analysis.json` — issues + scores
- `design-tree.txt` — DOM-like tree with CSS styles + token estimate
- `images/` — PNG assets with human-readable names (`hero-banner@2x.png`)
- `vectors/` — SVG assets
- `PROMPT.md` — code generation prompt (default: HTML+CSS, or your custom prompt)

| Option | Default | Description |
|--------|---------|-------------|
| `--prompt` | built-in HTML+CSS | Path to your custom prompt file for any stack |
| `--image-scale` | `2` | Image export scale: `2` for PC, `3` for mobile |
| `--output` | `./canicode-implement/` | Output directory |

Feed `design-tree.txt` + `PROMPT.md` to your AI assistant (Claude, Cursor, etc.) to generate code.

</details>

<details>
<summary><strong>Claude Code Skill</strong> (lightweight, no MCP install)</summary>

pnpm lint # type check

For architecture details, see [`CLAUDE.md`](CLAUDE.md). For calibration pipeline, see [`docs/CALIBRATION.md`](docs/CALIBRATION.md).

## Contributing

**[Share your Figma design](https://github.com/let-sunny/canicode/discussions/new?category=share-your-figma)** to help calibrate scores against real-world designs.

## Support
