feat(wren): add wren docs connection-info CLI command #1507
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
✅ Files skipped from review due to trivial changes (1)
📝 Walkthrough

Adds a new `wren docs connection-info` CLI command that generates data source connection info documentation directly from Pydantic models.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as "User"
    participant CLI as "CLI\n(docs connection-info)"
    participant Docs as "wren.docs\n(generators)"
    participant Models as "Pydantic\nBaseConnectionInfo models"
    User->>CLI: run docs connection-info [datasource] --format md/json --envelope?
    CLI->>CLI: normalize format
    alt format == "md"
        CLI->>Docs: generate_markdown(datasource)
        Docs->>Models: inspect models, fields, types, defaults, examples
        Models-->>Docs: metadata
        Docs-->>CLI: markdown string
    else format == "json"
        CLI->>Docs: generate_json_schema(datasource, envelope)
        Docs->>Models: inspect models/schemas/examples
        Models-->>Docs: schemas/examples
        Docs-->>CLI: json string (optionally envelope)
    else unsupported
        CLI-->>User: stderr error + exit 1
    end
    CLI-->>User: print output
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested labels

Suggested reviewers

Poem
🚥 Pre-merge checks | ✅ Passed checks (3 passed)
Generate documentation for all data source connection info fields directly from Pydantic models. Supports Markdown and JSON Schema output formats, with optional filtering by data source name. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Outputs connection info in {"datasource": ..., "properties": ...}
envelope format, matching the convention used by the MCP web UI.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
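As an illustration of that envelope shape, a minimal sketch (the field entries under `properties` are hypothetical here, not taken from an actual data source model):

```json
{
  "datasource": "postgres",
  "properties": {
    "host": {"type": "string"},
    "port": {"type": "integer"}
  }
}
```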
Force-pushed f6cd7d3 to 06daf45
🧹 Nitpick comments (2)
wren/src/wren/docs.py (2)
86-104: Consider adding `float` type mapping.

The helper maps common types to JSON-like labels but doesn't handle `float`. If any connection info field uses `float`, it would display as `float` instead of `number`. This is a minor consistency improvement.

♻️ Optional fix

```diff
 if annotation is int:
     return "integer"
+if annotation is float:
+    return "number"
 if annotation is str:
     return "string"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@wren/src/wren/docs.py` around lines 86 - 104, The _friendly_type helper currently maps int->"integer", bool->"boolean", and str/SecretStr->"string" but lacks a mapping for float; update the _friendly_type function to detect float and return "number" (add a branch similar to the int/bool cases, e.g., if annotation is float: return "number") so float-typed annotations render consistently as "number" (modify the _friendly_type function to include this check before the generic hasattr/__name__ handling).
259-268: Consider extracting duplicated datasource validation.

The unknown datasource check and error JSON generation is duplicated between `generate_json_schema` (lines 259-268) and `_generate_raw_json_schema` (lines 219-228). Consider extracting a helper to reduce duplication.

♻️ Optional refactor

```python
def _resolve_sources(datasource: str | None) -> dict[str, list[type[BaseConnectionInfo]]] | dict[str, Any]:
    """Resolve datasource to sources dict, or return error dict if unknown."""
    if datasource:
        key = datasource.lower()
        if key not in DATASOURCE_MODELS:
            return {
                "error": f"Unknown data source: {datasource}",
                "available": sorted(DATASOURCE_MODELS.keys()),
            }
        return {key: DATASOURCE_MODELS[key]}
    return DATASOURCE_MODELS
```

Then check for the `"error"` key in the result and handle accordingly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@wren/src/wren/docs.py` around lines 259 - 268, Extract the duplicated datasource validation and error JSON creation used in generate_json_schema and _generate_raw_json_schema into a small helper (e.g., _resolve_sources) that accepts datasource: str|None and returns either the resolved mapping of datasource models or an error dict; update generate_json_schema and _generate_raw_json_schema to call this helper, detect the error result (e.g., presence of "error" key) and return json.dumps(...) only in that branch, otherwise proceed with the returned models—this removes the duplicated if datasource / key not in DATASOURCE_MODELS logic and centralizes the available keys list and error message.
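A hedged sketch of how callers might consume such a helper; `DATASOURCE_MODELS` is stubbed with placeholder values here, and the schema-generation body is elided:

```python
import json

# Placeholder stand-in for the real DATASOURCE_MODELS mapping in docs.py.
DATASOURCE_MODELS = {
    "postgres": ["PostgresConnectionInfo"],
    "bigquery": ["BigQueryConnectionInfo"],
}

def _resolve_sources(datasource):
    """Resolve a datasource name to its models, or return an error dict."""
    if datasource:
        key = datasource.lower()
        if key not in DATASOURCE_MODELS:
            return {"error": f"Unknown data source: {datasource}",
                    "available": sorted(DATASOURCE_MODELS)}
        return {key: DATASOURCE_MODELS[key]}
    return DATASOURCE_MODELS

def generate_json_schema(datasource=None):
    sources = _resolve_sources(datasource)
    if "error" in sources:  # error dict, not a datasource->models mapping
        return json.dumps(sources, indent=2)
    # ...proceed with the resolved models (schema generation elided)...
    return json.dumps({name: str(models) for name, models in sources.items()},
                      indent=2)
```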
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@wren/src/wren/docs.py`:
- Around line 86-104: The _friendly_type helper currently maps int->"integer",
bool->"boolean", and str/SecretStr->"string" but lacks a mapping for float;
update the _friendly_type function to detect float and return "number" (add a
branch similar to the int/bool cases, e.g., if annotation is float: return
"number") so float-typed annotations render consistently as "number" (modify the
_friendly_type function to include this check before the generic
hasattr/__name__ handling).
- Around line 259-268: Extract the duplicated datasource validation and error
JSON creation used in generate_json_schema and _generate_raw_json_schema into a
small helper (e.g., _resolve_sources) that accepts datasource: str|None and
returns either the resolved mapping of datasource models or an error dict;
update generate_json_schema and _generate_raw_json_schema to call this helper,
detect the error result (e.g., presence of "error" key) and return
json.dumps(...) only in that branch, otherwise proceed with the returned
models—this removes the duplicated if datasource / key not in DATASOURCE_MODELS
logic and centralizes the available keys list and error message.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 353ee574-731f-4c1e-a15d-92b27af5672a
📒 Files selected for processing (2)
- wren/src/wren/cli.py
- wren/src/wren/docs.py
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@wren/src/wren/docs.py`:
- Around line 200-204: The function currently returns an error message string
when an unknown datasource is passed (see the branch that checks key not in
DATASOURCE_MODELS), which causes the CLI to exit 0; change these branches to
raise a non-zero CLI error (e.g., raise click.ClickException(message) or raise
SystemExit(1) with the same message) so the process exits with failure; update
the identical patterns around the other mentioned branches (the blocks at
~221-228 and ~261-268) to raise the exception instead of returning the error
string, referencing DATASOURCE_MODELS and the same error text.
- Around line 273-277: The current loop uses _build_example(model) which omits
optional fields without examples, causing envelope output to drop fields (e.g.,
DatabricksServicePrincipalConnectionInfo.azure_tenant_id). Change the
aggregation so that for each model in the models loop you collect fields from
the model's schema/properties (e.g., model.schema().get("properties", {}) or
model.__fields__) rather than the example object, and build the results entry by
iterating those schema properties and emitting each property's name and
example/default (or null) so all declared fields are present in results; update
the code around the results list and the inner loop that calls _build_example to
use the schema-driven property extraction instead.
- Around line 233-236: When only one datasource was requested, the current loop
builds schemas as {"ds_name": {...}}; change it to emit the schema object itself
when there is exactly one source with one model. In docs.py adjust the logic
around sources, schemas, ds_name, models and model_json_schema so that if
len(sources) == 1 and len(models) == 1 you set the returned value to
models[0].model_json_schema() (not a dict keyed by ds_name); otherwise preserve
the existing dict behavior for multiple sources.
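Illustrating the first inline comment's exit-code concern, a minimal sketch: raising from the generator (as the eventual fix did with ValueError) lets the CLI layer map the failure to a non-zero exit. The registry and function bodies are placeholders, not the real module:

```python
import sys

KNOWN = {"postgres", "bigquery"}  # placeholder for the real datasource registry

def generate_markdown(datasource=None):
    if datasource and datasource.lower() not in KNOWN:
        # Raising (rather than returning an error string) lets the CLI
        # translate the failure into a non-zero exit code.
        raise ValueError(f"Unknown data source: {datasource}")
    return f"# Connection info: {datasource or 'all data sources'}"

def run_docs_connection_info(datasource=None):
    """CLI-side wrapper: print output, map errors to exit code 1."""
    try:
        print(generate_markdown(datasource))
        return 0
    except ValueError as exc:
        print(exc, file=sys.stderr)
        return 1
```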
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b7302be5-3627-42cf-8712-a13d8813ce2c
📒 Files selected for processing (2)
- wren/src/wren/cli.py
- wren/src/wren/docs.py
✅ Files skipped from review due to trivial changes (1)
- wren/src/wren/cli.py
- Unknown datasource now raises ValueError → CLI exits with code 1
- Single datasource JSON output returns schema directly without wrapper
- Envelope mode includes all fields (including optional ones)
- Add float → "number" type mapping
- Extract _resolve_sources helper to deduplicate validation

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@wren/src/wren/docs.py`:
- Around line 173-175: The Markdown table row is built by interpolating raw
variables (name, type_label, required, default, sensitive, alias, example_str)
into a single f-string in docs.py which breaks if any value contains '|' or
newlines; create and use a small helper (e.g., escape_md_cell) to sanitize each
cell before rendering by escaping pipe characters and replacing or normalizing
newlines (e.g., to '<br>' or a space) and then call that helper for each of the
variables passed into the lines.append f-string so the table cells are safe for
Markdown rendering.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 855a6c4a-6580-45b7-b35d-be5f7b58aebd
📒 Files selected for processing (2)
- wren/src/wren/cli.py
- wren/src/wren/docs.py
🚧 Files skipped from review as they are similar to previous changes (1)
- wren/src/wren/cli.py
Pipe and newline characters in field values could break table rendering. Add _escape_md_cell helper for safe interpolation. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@wren/src/wren/docs.py`:
- Around line 196-209: _build_example currently omits optional fields that lack
examples, which drops discriminant/defaulted union tags; update the logic in
_build_example to include optional fields when they have a default value rather
than skipping them: after checking field_info.examples and before continuing on
not field_info.is_required(), test for a present default (e.g.,
field_info.default not being the undefined sentinel or a default factory value)
and set example[key] = that default (or its result), otherwise keep the current
behavior of skipping optional fields without examples; reference
model.model_fields, field_info.examples, field_info.is_required(), and
field_info.default/default_factory in your change.
Discriminated union tags like bigquery_type and databricks_type were dropped from examples because they are optional with non-null defaults. Now _build_example includes optional fields that have non-null defaults. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
- Adds a `wren docs connection-info` subcommand that generates documentation for all data source connection info fields directly from Pydantic models
- Supports `--format md` (Markdown table with field type, required, default, sensitive, alias, example) and `--format json` (JSON Schema)
- `--envelope` flag outputs JSON in `{"datasource": ..., "properties": ...}` envelope format, matching the MCP web UI convention
- Supports filtering by data source name, e.g. `wren docs connection-info postgres`

Usage
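A sketch of the invocations implied by the flags above (exact output elided):

```shell
# All data sources, Markdown (the default format)
wren docs connection-info

# A single data source
wren docs connection-info postgres

# JSON Schema output
wren docs connection-info postgres --format json

# JSON wrapped in the {"datasource": ..., "properties": ...} envelope
wren docs connection-info postgres --format json --envelope
```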
Test plan
- `wren docs connection-info` outputs all data sources in Markdown
- `--format json` outputs valid JSON Schema
- `--envelope` produces the `{"datasource": ..., "properties": ...}` format
- `just lint` passes

🤖 Generated with Claude Code
Summary by CodeRabbit