feat(mcp-server): embed MCP server in Docker image with skills and quickstart guide #1425

douenergy merged 28 commits into Canner:main from
Conversation
No actionable comments were generated in the recent review. 🎉
📝 Walkthrough

This PR integrates an MCP server into the repository, adds mcp-server to CI/build contexts, refactors the ibis-server Dockerfile to a multi-stage wheel build, enables optional MCP startup from the ibis-server entrypoint with graceful shutdown, expands MDL schema and Pydantic DTOs, and adds skills, docs, and install tooling.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Entrypoint as Entrypoint\n(Ibis container)
    participant Gunicorn as Gunicorn\n(App server)
    participant MCPProc as MCP Process\n(python -m app.wren)
    participant FastMCP as FastMCP\n(MCP runtime)
    participant IbisHTTP as Ibis HTTP\n(ibis-server)
    participant OS as OS Signal
    Entrypoint->>Entrypoint: read ENABLE_MCP_SERVER, MCP_* envs
    alt ENABLE_MCP_SERVER = true
        Entrypoint->>MCPProc: start MCP (background)
        MCPProc->>FastMCP: init (transport, host, port)
        FastMCP-->>MCPProc: register tools (setup_connection, deploy_manifest, mdl_validate_manifest)
    end
    Entrypoint->>Gunicorn: start Gunicorn (background)
    Entrypoint->>Entrypoint: record PIDs, wait for children
    par Normal operation
        IbisHTTP->>FastMCP: invoke tools (deploy_manifest / setup_connection / mdl_validate_manifest)
        FastMCP-->>IbisHTTP: responses
    and Shutdown
        OS->>Entrypoint: SIGTERM/SIGINT
        Entrypoint->>Entrypoint: cleanup()
        Entrypoint->>MCPProc: terminate MCP
        Entrypoint->>Gunicorn: terminate Gunicorn
        Entrypoint->>OS: exit
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
When ENABLE_MCP_SERVER=true, the ibis-server container automatically starts a Wren MCP server alongside the main FastAPI service, allowing AI agents to connect via streamable-http transport on port 9000. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
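The opt-in launch logic can be pictured with a small stdlib sketch. The env var names (ENABLE_MCP_SERVER, MCP_TRANSPORT, MCP_PORT), the streamable-http/9000 defaults, and the `python -m app.wren` invocation come from this PR; the helper function itself is hypothetical, not the actual entrypoint.sh.

```python
def mcp_launch_plan(env: dict):
    """If ENABLE_MCP_SERVER is truthy, return (command, extra_env) for the MCP
    child process; otherwise None. Hypothetical helper mirroring entrypoint.sh."""
    if env.get("ENABLE_MCP_SERVER", "false").lower() not in ("1", "true", "yes"):
        return None
    extra = {
        # streamable-http on port 9000 is the Docker default described in the PR
        "MCP_TRANSPORT": env.get("MCP_TRANSPORT", "streamable-http"),
        "MCP_PORT": env.get("MCP_PORT", "9000"),
    }
    return (["python", "-m", "app.wren"], extra)
```

The real entrypoint additionally records both child PIDs and forwards SIGTERM/SIGINT for graceful shutdown, as shown in the sequence diagram above.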
- New wren-agent Python library for agentic MDL generation
- PydanticAI agent with SQLAlchemy DB exploration tools
- Pydantic MDL models matching official WrenAI schema
- SkillStore: tag filtering + TF-IDF semantic search
- MemoryStore: persistent JSON key-value store
- validate_mdl: JSON Schema + ibis-server dry-plan validation
- 58 tests passing
- mcp-server: register MDL agent tools via mdl_tools.py
- mdl_chat / mdl_reset_session / mdl_list_sessions tools
- Graceful fallback if wren-agent not installed
- MDL_AGENT_* env vars for skills, memory, max_skills
- Updated README with installation guide and workflow example
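The TF-IDF semantic search the SkillStore mentions can be illustrated with a dependency-free sketch. This is not the library's actual implementation, just the standard weighting scheme it names, applied to skill descriptions:

```python
import math
from collections import Counter

def tfidf_rank(query: str, docs: dict) -> list:
    """Rank docs (name -> text) by TF-IDF cosine similarity to the query.
    Illustrative only; a SkillStore-style search might work like this."""
    tokenized = {name: text.lower().split() for name, text in docs.items()}
    n = len(tokenized)
    df = Counter()
    for toks in tokenized.values():
        df.update(set(toks))  # document frequency per term

    def idf(term):
        # smoothed inverse document frequency
        return math.log((1 + n) / (1 + df[term])) + 1

    def vec(toks):
        tf = Counter(toks)
        return {t: (c / len(toks)) * idf(t) for t, c in tf.items()}

    q = vec(query.lower().split())

    def cos(a, b):
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    return sorted(docs, key=lambda name: cos(q, vec(tokenized[name])), reverse=True)
```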
Replace the wren-agent-backed mdl_chat/reset/list tools with six flat MCP tools that the calling AI agent orchestrates directly:

- mdl_connect_database — SQLAlchemy connection per session_id
- mdl_list_tables — table listing
- mdl_get_column_info — column metadata (type, nullable, PK, FK)
- mdl_get_column_stats — distinct/null count, min/max
- mdl_get_sample_data — sample rows (max 20)
- mdl_validate_manifest — JSON Schema + optional ibis-server dry-plan

Removes wren-agent dependency from mcp-server; adds sqlalchemy and jsonschema as optional mdl extra. DB driver installed separately via just install-mdl-driver <postgres|mysql|duckdb>. Updated README, .env.example, justfile accordingly.
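As a sketch of what a tool like mdl_get_column_stats computes, here is the equivalent query against SQLite from the stdlib. The real tools run over SQLAlchemy connections, the function name only loosely mirrors the tool, and interpolating identifiers like this assumes they come from a trusted catalog, not user input:

```python
import sqlite3

def column_stats(conn, table: str, column: str) -> dict:
    """Distinct/null counts and min/max for one column, as an
    mdl_get_column_stats-style tool might report them (sketch only)."""
    sql = (
        f"SELECT COUNT(DISTINCT {column}), "
        f"SUM(CASE WHEN {column} IS NULL THEN 1 ELSE 0 END), "
        f"MIN({column}), MAX({column}) FROM {table}"
    )
    distinct, nulls, lo, hi = conn.execute(sql).fetchone()
    # MIN/MAX ignore NULLs in SQL, so lo/hi describe the non-null values
    return {"distinct": distinct, "nulls": nulls or 0, "min": lo, "max": hi}
```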
…tools
- Register @mcp.prompt('generate_mdl') with optional connection_string arg
- Claude Desktop exposes this as a slash command: /generate_mdl
- Defines the 6-step workflow: connect → explore → build → validate → deploy
- Each tool return value now includes a '→ Next:' hint:
mdl_connect_database → mdl_get_column_info + mdl_get_sample_data
mdl_get_column_info → mdl_get_sample_data
mdl_get_sample_data → mdl_validate_manifest
mdl_validate_manifest → deploy
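The '→ Next:' breadcrumbs above could be implemented with a small lookup table appended to each tool's return value. The helper below is hypothetical; the transitions are taken from the commit message:

```python
# Hypothetical mapping of tool name -> suggested next step, per the commit's flow
NEXT_HINTS = {
    "mdl_connect_database": "mdl_get_column_info + mdl_get_sample_data",
    "mdl_get_column_info": "mdl_get_sample_data",
    "mdl_get_sample_data": "mdl_validate_manifest",
    "mdl_validate_manifest": "deploy",
}

def with_next_hint(tool: str, result: str) -> str:
    """Append the '→ Next:' breadcrumb so a calling agent knows what to do next."""
    hint = NEXT_HINTS.get(tool)
    return f"{result}\n→ Next: {hint}" if hint else result
```

Embedding the workflow in tool output this way lets a generic agent follow the intended sequence without a separate orchestration layer.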
Root causes of 'Sorry, no response was returned':

1. Manifest missing dataSource field - Pydantic silently dropped it
2. relationships/views had no defaults - ValidationError on empty views
3. Relationship used join_type/join_condition instead of joinType/condition

Fixes: add dataSource field, default all list fields to [], fix Relationship field names to match official MDL schema, add populate_by_name=True, add missing fields (notNull, isHidden, refSql etc)
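The commit's actual fix is Pydantic field aliases with populate_by_name=True; as a dependency-free illustration of the renaming involved, a dict-level normalizer might look like this (the function and the generic snake-to-camel fallback are assumptions, not the PR's code):

```python
def to_camel(name: str) -> str:
    """snake_case -> camelCase (e.g. not_null -> notNull)."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

def normalize_relationship(rel: dict) -> dict:
    """Rename snake_case keys to the names the official MDL schema expects.
    join_condition maps to plain 'condition', not a camelCase variant."""
    special = {"join_type": "joinType", "join_condition": "condition"}
    return {special.get(k, to_camel(k)): v for k, v in rel.items()}
```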
Save/load MDL as a dbt-like YAML project directory: wren_project.yml + models/*.yml + relationships.yml + views/metrics.yml. Update the generate_mdl prompt to include Step 6 (save project). Add pyyaml>=6.0 to optional mdl deps.
…oy from file

- Expand dto.py with full MDL types (RLAC, CLAC, Metric, EnumDefinition, etc.)
- Add build_mdl_project tool to assemble YAML project into target/mdl.json
- Change deploy() to accept mdl_file_path instead of inline Manifest object
- Update mdl.schema.json with complete relationship, metric, view, and enum schemas
- Add sessionProperty $def and column-level access control schema

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…endencies

MDL generation workflow is now captured in two agentskills.io skills:

- skills/generate-mdl/SKILL.md — generate MDL via ibis-server metadata endpoints
- skills/mdl-project/SKILL.md — save/load/build YAML project directories

Schema discovery (list_remote_tables, list_remote_constraints) routes through ibis-server instead of direct SQLAlchemy connections, eliminating the need to install database-specific drivers on the MCP server side.

Changes:
- Delete mcp-server/app/mdl_tools.py (SQLAlchemy-based tools removed)
- Add mdl_validate_manifest tool directly in wren.py (ibis-server dry-plan)
- Remove sqlalchemy, jsonschema, pyyaml optional dependencies from pyproject.toml
- Update README to reflect new skill-based MDL generation workflow

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds a portable agentskills.io-format skill for Wren SQL generation, including BigQuery dialect, datetime, type, and correction reference docs. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Split Rust wheel build into two strategies via WHEEL_SOURCE build arg:
  - local: maturin+zig cross-compiles Linux wheel on host (reuses cargo cache)
  - docker: BuildKit cache mounts for incremental in-Docker compilation
- just docker-build auto-selects local path when platform matches host, falls back to docker path for cross-platform/multi-arch builds
- Remove redundant runtime poetry install (was being overwritten anyway)
- Move jupyter deps to builder stage to fix silent install bug
- Combine all apt-get layers in runtime stage into one
- Add platform parameter to docker-build for multi-arch support
- Document build strategies and prerequisites in README

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The /v3/connector/dry-plan endpoint expects `manifestStr` not `manifest_str`. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
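A sketch of building the request body with the correct key. The camelCase `manifestStr` comes from this fix; base64-encoding the manifest JSON is an assumption about the ibis-server API rather than something stated in the commit:

```python
import base64
import json

def dry_plan_payload(manifest: dict, sql: str) -> dict:
    """Build a /v3/connector/dry-plan request body using the camelCase key the
    endpoint expects. Base64-encoding the manifest is an assumption here."""
    encoded = base64.b64encode(json.dumps(manifest).encode()).decode()
    return {"manifestStr": encoded, "sql": sql}
```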
Step-by-step workflow for spinning up Wren Engine via Docker compose and connecting Claude Code (or Cline/Cursor) over streamable-http, including workspace mounting and localhost→host.docker.internal fix. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…options Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Covers port conflicts (lsof detection + remap workaround), container log diagnostics, MCP endpoint verification via curl, and localhost vs host.docker.internal connection issues. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Force-pushed 9a0969f to b4e2f6b
Actionable comments posted: 16
🧹 Nitpick comments (9)
mcp-server/justfile (1)

14-28: Consider adding more driver options. The install-mdl-driver target supports postgres, mysql, and duckdb, but the .env.example and SKILL.md documentation mention additional data sources like BigQuery, Snowflake, MSSQL, ClickHouse, Trino, and Oracle. If SQLAlchemy-based MDL tooling supports these, consider adding them.

📝 Example extension

 install-mdl-driver driver:
     #!/usr/bin/env bash
     case "{{driver}}" in
       postgres) uv pip install psycopg2-binary ;;
       mysql) uv pip install pymysql ;;
       duckdb) uv pip install duckdb-engine ;;
+      mssql) uv pip install pyodbc ;;
+      snowflake) uv pip install snowflake-sqlalchemy ;;
+      bigquery) uv pip install sqlalchemy-bigquery ;;
       *) echo "Unknown driver: {{driver}}. Supported: postgres, mysql, duckdb" && exit 1 ;;
     esac

That said, the current set may be intentionally limited based on tested/supported drivers. Based on learnings, these are standalone utility scripts and their dependencies belong in dev dependencies, which this approach correctly handles by installing on-demand rather than in production.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mcp-server/justfile` around lines 14 - 28, The install-mdl-driver target currently only handles postgres, mysql, and duckdb via the case "{{driver}}" in the install-mdl-driver recipe; extend this case to include the additional drivers referenced in docs (.env.example / SKILL.md) by adding branches for bigquery (google-cloud-bigquery or sqlalchemy-bigquery), snowflake (snowflake-connector-python and snowflake-sqlalchemy), mssql (pyodbc or pymssql), clickhouse (clickhouse-sqlalchemy or clickhouse-driver), trino (trino), and oracle (cx_Oracle or oracledb) and map each to the appropriate pip package names, ensuring the default unknown-driver message remains; update the list in the install-mdl-driver target so running just install-mdl-driver <driver> installs the matching dev dependency for those data sources.

skills/mdl-project/SKILL.md (1)
17-27: Consider adding language specifiers to fenced code blocks. Static analysis flags these code blocks as missing language specifiers. For directory structures and workflow diagrams, you can use text or plaintext:

📝 Example fix for line 17

-```
+```text
 my_project/
 ├── wren_project.yml   # Project metadata (catalog, schema, data_source)

The same applies to the code blocks at lines 30 and 230.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/mdl-project/SKILL.md` around lines 17 - 27, Update the fenced code blocks that show directory trees and diagrams so they include a language specifier (e.g., use ```text or ```plaintext) instead of bare triple backticks; specifically, modify the directory-structure block (the snippet starting with "my_project/" and similar blocks referenced later) and the other two code fences noted in the review to add the language token to satisfy static analysis and improve rendering.

skills/install.sh (1)
53-62: Replace installed skill directories atomically. Both helpers delete the existing directory before copying the new one. If cp -r fails midway, the user loses the previously working skill. Copy into a temporary directory and mv it into place only after the copy succeeds.

Also applies to: 64-77
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/install.sh` around lines 53 - 62, The install_from_local helper currently removes the existing dest_dir before copying, risking data loss if cp fails; change install_from_local (and the analogous helper around lines 64-77) to copy into a temporary directory (use mktemp -d), verify the cp -r completes successfully, then atomically replace the target with mv tmpdir -> dest_dir (and only remove the old dest_dir after successful mv), and ensure tmpdir cleanup on error so the original skill directory is preserved if the copy fails.

ibis-server/entrypoint.sh (1)
56-58: Add error handling to cd commands. While /mcp-server and /app are guaranteed to exist in the container, adding error handling is defensive and satisfies shellcheck.

♻️ Proposed fix

-    cd /mcp-server && python -m app.wren &
+    cd /mcp-server || { echo "Failed to cd to /mcp-server"; exit 1; }
+    python -m app.wren &
     MCP_PID=$!
-    cd /app
+    cd /app || { echo "Failed to cd to /app"; exit 1; }
Verify each finding against the current code and only fix it if needed. In `@ibis-server/entrypoint.sh` around lines 56 - 58, The two bare cd commands in entrypoint.sh (cd /mcp-server and cd /app) lack error handling; update them to check for failure and abort with an error message and non-zero exit (e.g., use cd /mcp-server || { echo "Failed to cd /mcp-server"; exit 1; } and similarly for cd /app) so the script doesn't continue if the directory change fails; keep the MCP_PID assignment (MCP_PID=$!) immediately after launching the background python process to preserve behavior.

mcp-server/README.md (2)
177-197: Add language specifier to the agent interaction code block.

♻️ Proposed fix

-```
+```text
 User → "Generate an MDL for my PostgreSQL ecommerce database"
 ...
Verify each finding against the current code and only fix it if needed. In `@mcp-server/README.md` around lines 177 - 197, The fenced code block in README.md containing the agent interaction (the block starting with "User → \"Generate an MDL for my PostgreSQL ecommerce database\"" ) lacks a language specifier; update the opening fence from ``` to ```text to explicitly mark it as plain text (keep the closing ``` unchanged) so syntax highlighters and renderers treat the snippet correctly.
78-87: Add language specifiers to fenced code blocks. The code blocks are missing language specifiers, which affects syntax highlighting and linting compliance.

♻️ Proposed fix

-```
+```text
 deploy(mdl_file_path="/workspace/my_mdl.json")
...

-```
+```text
 agent writes: mcp-server/workspace/my_project/wren_project.yml, models/*.yml, ...
 agent compiles to: mcp-server/workspace/my_project/target/mdl.json
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mcp-server/README.md` around lines 78 - 87, Update the README fenced code blocks to include language specifiers so linters and syntax highlighting work: change the block around the example call `deploy(mdl_file_path="/workspace/my_mdl.json")` and the block showing agent/file outputs to use a language tag (e.g., ```text) instead of bare ```; locate those blocks near the `deploy(mdl_file_path=...)` example and the `# agent writes: ...`/`# agent compiles to: ...` comments and add the language specifier to each opening fence.

mcp-server/mdl.schema.json (1)
144-149: Consider simplifying the case-insensitive pattern. The regex pattern for dataSource is verbose. If the consuming code normalizes case anyway (as dto.py uses Literal with uppercase values), you could simplify validation by documenting case-insensitivity and using a simpler pattern, or rely on application-layer normalization.

However, this works correctly as-is and ensures schema-level validation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mcp-server/mdl.schema.json` around lines 144 - 149, The dataSource JSON schema uses a verbose case-insensitive regex; instead, simplify validation by either (A) replacing the long case-insensitive character classes in the "dataSource" pattern with a shorter alternation of the canonical UPPERCASE values (e.g., BIGQUERY|CLICKHOUSE|...|REDSHIFT) to match how dto.py normalizes/defines values, or (B) remove the pattern entirely and rely on application-layer normalization/validation in dto.py (with a clear comment in the schema that values are normalized to uppercase). Update the "dataSource" schema entry (symbol: dataSource) accordingly and ensure dto.py's Literal set remains the authoritative list of allowed uppercase values.

ibis-server/Dockerfile (1)
8-8: Add --no-install-recommends to reduce image size. Both apt-get install commands are missing the --no-install-recommends flag, which pulls in unnecessary packages and increases image size.

♻️ Proposed fix

-RUN apt-get update && apt-get -y install libpq-dev && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get -y install --no-install-recommends libpq-dev && rm -rf /var/lib/apt/lists/*

Apply the same change to line 45.

Also applies to: 45-45
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ibis-server/Dockerfile` at line 8, The RUN commands that call apt-get install in the Dockerfile (the lines using "apt-get update && apt-get -y install libpq-dev && rm -rf /var/lib/apt/lists/*" and the similar install on the later RUN) should add the --no-install-recommends flag to apt-get install to avoid pulling unnecessary recommended packages and reduce image size; update both RUN instructions to include --no-install-recommends (keep the existing -y and the trailing rm -rf /var/lib/apt/lists/*).

skills/SKILLS.md (1)
141-147: Avoid skills/* in the "all skills" install example. That glob also copies non-skill files like SKILLS.md, README.md, AUTHORING.md, and install.sh into ~/.claude/skills/. Prefer listing the actual skill directories or pointing readers at the installer script.

Suggested doc update

 # All skills
-cp -r skills/* ~/.claude/skills/
+cp -r skills/generate-mdl skills/mdl-project skills/wren-sql skills/wren-quickstart ~/.claude/skills/
Verify each finding against the current code and only fix it if needed. In `@skills/SKILLS.md` around lines 141 - 147, The "All skills" example uses the glob `cp -r skills/* ~/.claude/skills/` which accidentally copies non-skill files (SKILLS.md, README.md, AUTHORING.md, install.sh); update the SKILLS.md example to either list the actual skill directories explicitly (e.g., `cp -r skills/generate-mdl skills/other-skill ... ~/.claude/skills/`) or replace the example with a pointer that instructs users to run the provided installer script (refer to `install.sh`) instead of using the `skills/*` glob so only real skill folders are installed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ibis-server/justfile`:
- Line 57: Summary: The image-name variable was changed to
"ghcr.io/canner/wren-engine-ibis:mcp-1" but other references still use :latest.
Fix: search for the symbol image-name and any hard-coded occurrences of the
image tag ":latest" and update them to ":mcp-1" so all compose files and docs
consistently reference ghcr.io/canner/wren-engine-ibis:mcp-1; ensure any README
examples, compose.yaml service image fields, and docs mentioning the image tag
are updated and run a quick grep to confirm no leftover ":latest" references
remain.
In `@mcp-server/.env.example`:
- Around line 6-19: The "Core (required)" header is misleading because
CONNECTION_INFO_FILE and MDL_PATH are optional; update the .env example by
renaming or rewording that section header (e.g., "Core" or "Core
(required/optional)") and add brief inline comments next to CONNECTION_INFO_FILE
and MDL_PATH that state they are optional and that the server will continue if
they are unset (referencing the environment variable names CONNECTION_INFO_FILE
and MDL_PATH and the behavior in wren.py). Ensure nothing else changes so the
example still shows default placeholders.
- Around line 21-26: Update the MCP_TRANSPORT documentation in .env.example to
list both HTTP transport names ("sse" and "streamable-http") and add a short
note about context-dependent defaults: mention that wren.py (standalone)
defaults to "stdio" while the Docker entrypoint (entrypoint.sh) sets the
effective default to "streamable-http" when ENABLE_MCP_SERVER=true; modify the
MCP_TRANSPORT comment (around the current MCP_TRANSPORT example) to enumerate
the options and include the Docker-specific note referencing ENABLE_MCP_SERVER,
entrypoint.sh and wren.py so readers understand the differing defaults.
In `@mcp-server/docker/compose.yaml`:
- Around line 6-11: Ports are hard-coded to 9000 in both the ports mapping and
the MCP_PORT environment variable, causing duplicate places to change; update
the docker-compose entry so both the published port and the MCP_PORT env value
are driven from the same variable (e.g., use MCP_PORT with a default) so
changing the port in one place updates both the container listener and the
published host port; locate the ports line ("9000:9000") and the environment key
MCP_PORT and replace them to reference the same MCP_PORT variable (with a
sensible default) so they remain consistent.
In `@skills/AUTHORING.md`:
- Around line 12-19: Update the fenced code block that documents the skill
directory structure to include a language tag (`text`) after the opening
backticks so the linter (MD040) is satisfied; locate the block showing the tree
with SKILL.md, references/, and scripts/ and change the opening "```" to
"```text".
In `@skills/generate-mdl/SKILL.md`:
- Around line 20-22: The docs list ATHENA as a supported data source but the
"Connection info format" section and the expected payload for
setup_connection(...) do not include Athena's conn_info shape; update the
SKILL.md to either remove "ATHENA" from the list or add the Athena connection
format (including required fields such as AWS region, S3 staging dir/bucket,
access_key_id/secret_access_key or role_arn/session_token and any
profile/endpoint fields) in the Connection info format and show an example
setup_connection(...) payload for ATHENA so callers know the exact conn_info
shape to send.
In `@skills/mdl-project/SKILL.md`:
- Around line 199-212: Remove the duplicate/malformed table header that precedes
the "MDL fields:" table: delete the first header row that reads "YAML field
(snake_case) | JSON field (camelCase)" (the one missing proper pipe delimiters)
so only the correctly formatted header remains above the table mapping fields
like `data_source` → `dataSource`; ensure the remaining header and separator row
for the table under the "MDL fields:" section are intact and use proper pipe
delimiters so the markdown table renders correctly.
In `@skills/README.md`:
- Around line 40-45: The fenced code block showing the slash commands
(/generate-mdl, /mdl-project, /wren-sql, /wren-quickstart) is missing a language
tag which triggers MD040; update that fence to include the text tag (e.g.,
```text) so the block is explicitly marked as plain text and the lint error is
resolved.
In `@skills/SKILLS.md`:
- Around line 55-65: The fenced code blocks in SKILLS.md are missing language
identifiers and trigger MD040; update the two shown blocks (the directory tree
and the three-path list) to include a language after the opening backticks
(e.g., change ``` to ```text) so markdownlint stops warning, and apply the same
fix to the other occurrence at lines 151-155; locate the blocks by their
contents (the my_project/ tree and the /generate-mdl /mdl-project /wren-sql
list) and add the language tag to each opening fence.
In `@skills/wren-quickstart/SKILL.md`:
- Around line 136-139: The "Claude Desktop (stdio fallback)" section references
a non-existent skill `wren-mcp-usage`; update this by either replacing the
`wren-mcp-usage` reference with a real existing doc/skill (or its correct slug)
or inline the minimal stdio proxy setup steps into this section. Locate the
"Claude Desktop (stdio fallback)" heading and the `wren-mcp-usage` identifier in
SKILL.md and: (a) if a canonical doc exists, change the link/text to that exact
skill/doc name and ensure any path/slug matches existing docs; or (b) add the
concise stdio setup instructions directly under the heading (commands, config,
and brief usage notes) so users are not sent to a dead end.
- Around line 54-60: The document references two different image tags
(ghcr.io/canner/wren-engine-ibis:mcp-2 and
ghcr.io/canner/wren-engine-ibis:latest) which will confuse users; pick one tag
and make it consistent across the quickstart by updating the image reference in
"Step 3" and the compose.yaml example (and any other occurrences around the
noted region) so both use the same tag string (e.g., replace all instances with
ghcr.io/canner/wren-engine-ibis:latest or
ghcr.io/canner/wren-engine-ibis:mcp-2).
In `@skills/wren-sql/references/bigquery.md`:
- Around line 37-50: The documentation incorrectly states that BigQuery
disallows GROUP BY aliases; update the section so it declares that GROUP BY can
reference SELECT aliases (e.g., "GROUP BY alias1" with alias1 from "SELECT col1
AS alias1") and list that as the preferred example, move the "GROUP BY col1"
variant as equivalent, and mark the ordinal form ("GROUP BY 1" used with
very_long_column AS alias) as the less-preferred/maintainability-warning
example; adjust the example order and wording accordingly to reflect that alias1
is valid and the ordinal reference is discouraged.
- Around line 30-35: Update the BigQuery doc comments to use BigQuery type
names: change the PARSE_DATETIME comment to indicate it returns DATETIME and
change the PARSE_TIMESTAMP comment to indicate it returns TIMESTAMP (note
TIMESTAMP does not preserve timezone as a separate type). Edit the examples
around PARSE_DATETIME and PARSE_TIMESTAMP to reference DATETIME and TIMESTAMP
respectively (identify by the symbols PARSE_DATETIME and PARSE_TIMESTAMP) and
remove any mention of a non-existent "TIMESTAMP WITH TIME ZONE" type (the block
referencing that type around the later section, identify by the phrase
"TIMESTAMP WITH TIME ZONE") so the documentation only lists supported BigQuery
types.
- Around line 5-19: The current guidance incorrectly suggests casting to
TIMESTAMP for month/year arithmetic; update the text to explain that BigQuery
TIMESTAMP does not support MONTH/YEAR with interval arithmetic and replace the
"Fix — cast to TIMESTAMP first" section with instructions to use DATETIME_ADD or
DATE_ADD and then convert back to TIMESTAMP (mention using
TIMESTAMP(DATETIME(...)) with DATETIME_ADD to preserve time-of-day, or
TIMESTAMP(DATE(...)) with DATE_ADD which sets time to midnight), and provide the
two example transformations referencing TIMESTAMP, DATETIME_ADD, DATE_ADD,
TIMESTAMP(DATETIME(...)) and TIMESTAMP(DATE(...)) as the correct workarounds.
- Around line 64-83: Update the documentation examples to use BigQuery's actual
argument order and correct workaround: replace the incorrect `(part, start,
end)` examples with the correct `DATE_DIFF(end, start, part)` form (referencing
DATE_DIFF) and stop recommending TIMESTAMP_DIFF for YEAR/MONTH; instead cast
timestamps to DATE (using DATE(...) or CAST(... AS DATE)) and call
DATE_DIFF(DATE(end_ts), DATE(start_ts), YEAR/MONTH) rather than TIMESTAMP_DIFF.
Ensure all example snippets that show DATE_DIFF, TIMESTAMP_DIFF, and DATE/CAST
reflect this corrected ordering and the DATE_DIFF workaround for YEAR/MONTH.
In `@skills/wren-sql/SKILL.md`:
- Around line 29-30: Move the SAFE_CAST handling out of the generic/core SQL
rules and into the BigQuery-specific dialect section; update any code that
references SAFE_CAST in the core logic (e.g., rule names or validation in the
Wren Engine skill definition that currently assumes ANSI-like SQL and checks
`dataSource`) so that SAFE_CAST is only emitted/allowed when `dataSource`
indicates BigQuery, and ensure other backends use their respective functions
(e.g., TRY_CAST for DuckDB/DataFusion, none for Postgres) by adding conditional
dialect branches where the SQL generator/validator checks `dataSource`.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0fb07354-8dcc-45e1-84c2-07db5798c47b
⛔ Files ignored due to path filters (1)
`mcp-server/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (30)
- .github/workflows/build-image.yml
- .github/workflows/stable-release.yml
- .gitignore
- ibis-server/Dockerfile
- ibis-server/README.md
- ibis-server/entrypoint.sh
- ibis-server/justfile
- mcp-server/.env.example
- mcp-server/README.md
- mcp-server/app/dto.py
- mcp-server/app/wren.py
- mcp-server/docker/compose.yaml
- mcp-server/etc/config.properties
- mcp-server/justfile
- mcp-server/mdl.schema.json
- mcp-server/pyproject.toml
- mcp-server/workspace/.gitkeep
- skills/AUTHORING.md
- skills/README.md
- skills/SKILLS.md
- skills/generate-mdl/SKILL.md
- skills/install.sh
- skills/mdl-project/SKILL.md
- skills/wren-quickstart/SKILL.md
- skills/wren-sql/SKILL.md
- skills/wren-sql/references/bigquery.md
- skills/wren-sql/references/correction.md
- skills/wren-sql/references/datetime.md
- skills/wren-sql/references/types.md
- wren-core-py/.dockerignore
💤 Files with no reviewable changes (1)
- mcp-server/etc/config.properties
🧹 Nitpick comments (1)
ibis-server/tests/routers/v2/connector/test_clickhouse.py (1)
317-330: Consider adding assertions for the response data. The test now correctly expects a 200 status code for alias join queries, but unlike other query tests in this file (e.g., `test_query_join`, `test_query_to_one_relationship`), it doesn't verify the actual response content. Adding assertions for `result["columns"]`, `result["data"]`, and `result["dtypes"]` would provide stronger coverage that the alias-join-to-CTE rewrite produces correct results, not just that it doesn't error.

💡 Suggested improvement

```diff
 assert response.status_code == 200
+result = response.json()
+assert len(result["columns"]) == 1
+assert len(result["data"]) == 1
+assert result["data"][0][0] in ["O", "F", "P"]  # Valid orderstatus values
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ibis-server/tests/routers/v2/connector/test_clickhouse.py` around lines 317 - 330, Update the test_query_alias_join test to parse the JSON response body (e.g., result = response.json()) and add assertions mirroring other tests: assert result["columns"] equals the expected column list (["orderstatus"]), assert result["data"] contains the expected row value (e.g., [["O"]] or the actual expected first row), and assert result["dtypes"] includes the expected dtype for orderstatus; keep references to response and result in the test to validate the alias-join-to-CTE rewrite produces the correct columns, data, and dtypes rather than only checking status_code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@ibis-server/tests/routers/v2/connector/test_clickhouse.py`:
- Around line 317-330: Update the test_query_alias_join test to parse the JSON
response body (e.g., result = response.json()) and add assertions mirroring
other tests: assert result["columns"] equals the expected column list
(["orderstatus"]), assert result["data"] contains the expected row value (e.g.,
[["O"]] or the actual expected first row), and assert result["dtypes"] includes
the expected dtype for orderstatus; keep references to response and result in
the test to validate the alias-join-to-CTE rewrite produces the correct columns,
data, and dtypes rather than only checking status_code.
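The assertions the prompt asks for, sketched against a stand-in response object (the real test uses the test client; `FakeResponse` and its payload are illustrative):

```python
class FakeResponse:
    """Stand-in for the HTTP response returned by the test client;
    the payload mirrors the ibis-server query response shape."""
    status_code = 200

    def json(self):
        return {
            "columns": ["orderstatus"],
            "data": [["O"]],
            "dtypes": {"orderstatus": "object"},
        }

response = FakeResponse()
assert response.status_code == 200
result = response.json()
assert result["columns"] == ["orderstatus"]
assert len(result["data"]) == 1
assert result["data"][0][0] in ["O", "F", "P"]  # valid orderstatus values
assert "orderstatus" in result["dtypes"]
print("assertions passed")
```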
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c3c7f49b-9619-490f-8b62-5390366a313f
📒 Files selected for processing (1)
ibis-server/tests/routers/v2/connector/test_clickhouse.py
Actionable comments posted: 2
🧹 Nitpick comments (3)
skills/wren-quickstart/SKILL.md (3)

160-162: Add language specifier to code block. The fenced code block should specify a language for proper rendering and linting compliance. Since this is a user prompt/instruction, use `text` or `plaintext`.

📝 Proposed fix: change the opening fence from ``` to ```text for the block containing "Use the health_check() tool to verify Wren Engine is reachable." (keep the closing fence unchanged).

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@skills/wren-quickstart/SKILL.md` around lines 160-162, the fenced code block containing "Use the health_check() tool to verify Wren Engine is reachable." lacks a language specifier; update the opening fence from ``` to ```text (or ```plaintext) so the block is rendered and linted correctly and any readers/tools will treat it as plain text.

258-267: Add language specifier to code block. The fenced code block showing connection templates should specify a language for proper rendering and linting compliance. Consider `text` or `plaintext`.

📝 Proposed fix: change the opening fence from ``` to ```text for the block listing the POSTGRES, MYSQL, MSSQL, CLICKHOUSE, TRINO, DUCKDB, BIGQUERY, and SNOWFLAKE connection templates (apply to the entire block).

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@skills/wren-quickstart/SKILL.md` around lines 258-267, the fenced code block containing the connection templates lacks a language specifier; update the opening triple-backtick to include a language like text or plaintext (e.g., change ``` to ```text) so the block renders and lints correctly; apply this change to the entire shown block.

246-248: Add language specifier to code block. The fenced code block should specify a language for proper rendering and linting compliance.

📝 Proposed fix: change the opening fence from ``` to ```text for the block containing deploy(mdl_file_path="/workspace/my_mdl.json").

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@skills/wren-quickstart/SKILL.md` around lines 246-248, the fenced code block containing the call deploy(mdl_file_path="/workspace/my_mdl.json") is missing a language specifier; update that fence to include a language (e.g., ```text or ```bash) so the block renders and lints correctly.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ibis-server/justfile`:
- Around line 66-68: The architecture detection using host_arch=$(uname -m) only
treats "arm64" as ARM64 and misses Linux's "aarch64"; update the conditional
logic that sets host_platform and rust_target to consider both "arm64" and
"aarch64" (e.g., check if host_arch equals "arm64" OR "aarch64") so that
host_platform becomes "linux/arm64" and rust_target becomes
"aarch64-unknown-linux-gnu" for either value; adjust the conditionals that
assign host_platform and rust_target accordingly (referencing host_arch,
host_platform, rust_target and the uname -m invocation).

In `@skills/README.md`:
- Around line 32-36: The fenced code block in skills/README.md is unlabeled and
triggers markdownlint MD040; add the language identifier "bash" after the
opening triple backticks for the block containing the cp commands (the code
fence starting with ``` and its matching closing fence) so it becomes a labeled bash block and ensure the closing ``` remains present.
Nitpick comments:
In `@skills/wren-quickstart/SKILL.md`:
- Around line 160-162: The fenced code block containing "Use the health_check()
tool to verify Wren Engine is reachable." lacks a language specifier; update the
opening fence from ``` to ```text (or ```plaintext) so the block is
rendered/linted correctly in SKILL.md and any readers/tools will treat it as
plain text.
- Around line 258-267: The fenced code block in SKILL.md containing the
connection templates (the block that lists POSTGRES, MYSQL, MSSQL, CLICKHOUSE,
TRINO, DUCKDB, BIGQUERY, SNOWFLAKE) lacks a language specifier; update the
opening triple-backtick to include a language like text or plaintext (e.g.,
change ``` to ```text) so the block renders and lints correctly; apply this
change to the entire shown block.
- Around line 246-248: The fenced code block containing the call
deploy(mdl_file_path="/workspace/my_mdl.json") is missing a language specifier;
update that fence to include a language (e.g., ```text or ```bash) so the block
renders and lints correctly by changing the opening backticks to include the
chosen language for the deploy(...) example.

ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1c768d1f-408e-4270-aeb1-e3a6cf3a7d0f
📥 Commits
Reviewing files that changed from the base of the PR and between bf55aef32affa616452314bba82411f36d00f234 and 58d1428b20ba6b127046860c4c4d615320edeefb.
📒 Files selected for processing (7)
- ibis-server/justfile
- mcp-server/.env.example
- mcp-server/docker/compose.yaml
- skills/AUTHORING.md
- skills/README.md
- skills/mdl-project/SKILL.md
- skills/wren-quickstart/SKILL.md
🚧 Files skipped from review as they are similar to previous changes (3)
- mcp-server/.env.example
- skills/mdl-project/SKILL.md
- mcp-server/docker/compose.yaml
uname -m returns aarch64 on Linux ARM64, not arm64 (macOS only).
…ickstart guide (Canner#1425) Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Summary
- (`ENABLE_MCP_SERVER=true`), enabling a single-container setup for AI agent workflows
- `wren-quickstart` skill with step-by-step Docker MCP setup, including troubleshooting for port conflicts and health check failures
- `wren-sql`, `generate-mdl`, and `mdl-project` skills as reusable agent skill files
- `skills/install.sh` for one-command skill installation, plus npx/manual alternatives documented in `skills/README.md`
- `build_mdl_project`, `mdl_save_project`, `mdl_load_project` MCP tools
- `wren-agent` module; replaces nested agent pattern with flat MCP tools

Test plan

- `docker compose up -d` in `mcp-server/docker/` starts container successfully
- `health_check()` via MCP returns a successful `SELECT 1` result
- `lsof -i :9000` / port remap workaround resolves port conflict scenarios
- `skills/install.sh` installs skills to `~/.claude/skills/` correctly
- `/wren-quickstart` skill guides user through end-to-end Docker MCP setup

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Documentation
Bug Fixes