MGMT-21730: add proper description for each argument in the inputSchema#95
Conversation
@andrej1991: This pull request references MGMT-21730, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.21.0" version, but no target version was set.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Walkthrough: Adds Pydantic `Field` descriptions to the tool arguments so the generated inputSchema documents each parameter.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks: ✅ 3 passed.
[APPROVAL NOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: andrej1991. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files.
/hold
Force-pushed 1746f04 to 50cf6c0 (compare).
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
server.py (1)
561-575: Stop logging presigned URLs (secrets). The info log currently includes the full presigned URL via `%s` with `result`. This leaks credentials to logs. Remove the URL from the log message.
```diff
-    log.info(
-        "Successfully retrieved presigned URL for cluster %s credentials file %s - %s",
-        cluster_id,
-        file_name,
-        result,
-    )
+    log.info(
+        "Successfully retrieved presigned URL metadata for cluster %s credentials file %s",
+        cluster_id,
+        file_name,
+    )
```
🧹 Nitpick comments (11)
server.py (11)
140-163: Mark cluster_id as UUID in schema. Add a JSON Schema hint so tools/LLMs stop inventing non-UUIDs.
```diff
-async def cluster_info(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster to retrieve information for. This is typically a UUID string.")]) -> str:
+async def cluster_info(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster to retrieve information for. This is typically a UUID string.", json_schema_extra={"format": "uuid"})]) -> str:
```
201-221: Same UUID hint for cluster_events.

```diff
-async def cluster_events(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster to get events for.")]) -> str:
+async def cluster_events(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster to get events for.", json_schema_extra={"format": "uuid"})]) -> str:
```
225-227: Add UUID hints for both IDs.

```diff
-async def host_events(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster containing the host.")],
-                      host_id: Annotated[str, Field(description="The unique identifier of the specific host to get events for.")]
+async def host_events(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster containing the host.", json_schema_extra={"format": "uuid"})],
+                      host_id: Annotated[str, Field(description="The unique identifier of the specific host to get events for.", json_schema_extra={"format": "uuid"})]
```
253-307: Docstring return type doesn't match the actual return. The function returns a JSON string, not a dict. Align the docstring.
```diff
-    Returns:
-        dict: A JSON containing ISO download URLs and optional
+    Returns:
+        str: A JSON-formatted string containing ISO download URLs and optional
@@
-        [{
+        [
+            {
             url: <download-url>
             expires_at: <expiration-timestamp> (if available)
-        }]
+            }
+        ]
```
312-317: Polish field descriptions; consider enum + alias normalization.
- Minor typos: “cluster.Set” → “cluster. Set”, double space before “production”.
- Consider enumerating cpu_architecture to guide the LLM and prevent “none”.
```diff
-    single_node: Annotated[bool, Field(description="Whether to create a single-node cluster.Set to True for edge deployments or resource-constrained environments. Set to False for production high-availability clusters with multiple control plane nodes.")],
-    ssh_public_key: Annotated[str | None, Field(default=None, description="SSH public key for accessing cluster nodes.")] = None,
-    cpu_architecture: Annotated[str | None, Field(default="x86_64", description="The CPU architecture for the cluster. Defaults to 'x86_64' if not specified. Valid options are: x86_64, aarch64, arm64, ppc64le, s390x.")] = "x86_64"
+    single_node: Annotated[bool, Field(description="Whether to create a single-node cluster. Set to True for edge deployments or resource-constrained environments. Set to False for production high-availability clusters with multiple control plane nodes.")],
+    ssh_public_key: Annotated[str | None, Field(default=None, description="SSH public key for accessing cluster nodes.")] = None,
+    cpu_architecture: Annotated[Literal["x86_64", "aarch64", "arm64", "ppc64le", "s390x"] | None, Field(default="x86_64", description="The CPU architecture for the cluster. Defaults to 'x86_64' if not specified. Valid options are: x86_64, aarch64 (preferred), arm64 (alias), ppc64le, s390x.")] = "x86_64"
```

If keeping "arm64" for UX, normalize before use to the backend-preferred "aarch64":
```python
# near start of create_cluster()
if cpu_architecture and cpu_architecture.lower() == "arm64":
    cpu_architecture = "aarch64"
```
396-398: Strip trailing whitespace to placate linters; consider IP hint.
- Fix pylint C0303 on these lines. Optionally add ip format hint.
```diff
-async def set_cluster_vips(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster to configure.")], 
-                           api_vip: Annotated[str, Field(description="The IP address for the cluster API endpoint. This is where kubectl and other management tools will connect.")], 
-                           ingress_vip: Annotated[str, Field(description="The IP address for ingress traffic to applications running in the cluster.")]) -> str:
+async def set_cluster_vips(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster to configure.")],
+                           api_vip: Annotated[str, Field(description="The IP address for the cluster API endpoint. This is where kubectl and other management tools will connect.")],
+                           ingress_vip: Annotated[str, Field(description="The IP address for ingress traffic to applications running in the cluster.")]) -> str:
```
433-459: Add UUID hint for install_cluster input.

```diff
-async def install_cluster(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster to install.")]) -> str:
+async def install_cluster(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster to install.", json_schema_extra={"format": "uuid"})]) -> str:
```
504-505: Don't hardcode operator bundle names in schema text. These drift. Prefer referencing `list_operator_bundles()`, or validate at runtime and return a helpful error if the name is invalid.
```diff
-async def add_operator_bundle_to_cluster(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster to configure.")],
-                                         bundle_name: Annotated[str, Field(description="The name of the operator bundle to add. The available operator bundle names are 'virtualization' and 'openshift-ai'")]) -> str:
+async def add_operator_bundle_to_cluster(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster to configure.")],
+                                         bundle_name: Annotated[str, Field(description="The name of the operator bundle to add. Use list_operator_bundles() to discover valid names.")]) -> str:
```
533-534: Constrain file_name to known values; also fix the trailing-whitespace warnings around this hunk. This tightens the inputSchema and reduces LLM misuse.
```diff
-async def cluster_credentials_download_url(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster to get credentials for.")],
-                                           file_name: Annotated[str, Field(description="The type of credential file to download. Valid options are: kubeconfig (Standard kubeconfig file for cluster access), kubeconfig-noingress (Kubeconfig without ingress configuration), kubeadmin-password (The kubeadmin user password file).")]) -> str:
+async def cluster_credentials_download_url(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster to get credentials for.", json_schema_extra={"format": "uuid"})],
+                                           file_name: Annotated[Literal["kubeconfig", "kubeconfig-noingress", "kubeadmin-password"], Field(description="The credential file to download.")]) -> str:
```

Additionally, remove logging of the presigned URL object (it contains a secret); see the next comment.
617-619: Constrain role to allowed values; strip trailing whitespace. Ensures the schema stays aligned with supported roles and fixes C0303.
```diff
-async def set_host_role(host_id: Annotated[str, Field(description="The unique identifier of the host to configure.")],
-                        cluster_id: Annotated[str, Field(description="The unique identifier of the cluster containing the host.")],
-                        role: Annotated[str, Field(description="The role to assign to the host. Valid options are: auto-assign (Let the installer automatically determine the role), master (Control plane node - API server, etcd, scheduler), worker (Compute node for running application workloads).")]) -> str:
+async def set_host_role(host_id: Annotated[str, Field(description="The unique identifier of the host to configure.", json_schema_extra={"format": "uuid"})],
+                        cluster_id: Annotated[str, Field(description="The unique identifier of the cluster containing the host.", json_schema_extra={"format": "uuid"})],
+                        role: Annotated[Literal["auto-assign", "master", "worker"], Field(description="The role to assign to the host.")]) -> str:
```
657-658: Trim trailing whitespace; consider a basic SSH key validation hint. Fix C0303 here. Optionally add a weak pattern to reduce bad keys in tool calls.
```diff
-async def set_cluster_ssh_key(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster to update.")],
-                              ssh_public_key: Annotated[str, Field(description="The SSH public key to set for the cluster. This should be a valid SSH public key in OpenSSH format.")]) -> str:
+async def set_cluster_ssh_key(cluster_id: Annotated[str, Field(description="The unique identifier of the cluster to update.", json_schema_extra={"format": "uuid"})],
+                              ssh_public_key: Annotated[str, Field(description="The SSH public key to set for the cluster. This should be a valid SSH public key in OpenSSH format.")]) -> str:
```

Optional example:
```python
ssh_public_key: Annotated[str, Field(pattern=r"^ssh-(rsa|ed25519|ecdsa-sha2-nistp(256|384|521))\s+[A-Za-z0-9+/=]+(\s+.+)?$")]
```
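To sanity-check the suggested pattern, here is how it behaves on a couple of inputs. Note that it is intentionally weak: for example, real ECDSA keys start with `ecdsa-sha2-nistp256` without an `ssh-` prefix, so the pattern as written would reject them.

```python
import re

# The "weak pattern" suggested above, verbatim.
SSH_KEY_RE = re.compile(
    r"^ssh-(rsa|ed25519|ecdsa-sha2-nistp(256|384|521))\s+[A-Za-z0-9+/=]+(\s+.+)?$"
)

good = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA user@host"
bad = "not-a-key"
print(bool(SSH_KEY_RE.match(good)))  # True
print(bool(SSH_KEY_RE.match(bad)))   # False
```

A catch-all like this is only a first line of defense; the server-side API remains the authoritative validator.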
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
server.py (12 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
server.py (1)
service_client/assisted_service_api.py (2)
install_cluster (367-382), add_operator_bundle_to_cluster (419-446)
🪛 GitHub Actions: Ruff
server.py
[error] 11-11: Ruff check failed: F401 - 'typing.Optional' imported but unused in server.py:11. 1 issue(s) fixable with 'ruff --fix'. Command: 'uv tool run ruff check .'
🪛 GitHub Actions: Black
server.py
[error] 1-1: Black formatting check failed. 1 file would be reformatted. Failing command: 'uv tool run black --check .'. Run 'black .' to fix code style.
🪛 GitHub Actions: Python linter
server.py
[warning] 396-396: pylint: Trailing whitespace (C0303).
[warning] 397-397: pylint: Trailing whitespace (C0303).
[warning] 504-504: pylint: Trailing whitespace (C0303).
[warning] 533-533: pylint: Trailing whitespace (C0303).
[warning] 617-617: pylint: Trailing whitespace (C0303).
[warning] 618-618: pylint: Trailing whitespace (C0303).
[warning] 657-657: pylint: Trailing whitespace (C0303).
[warning] 23-23: pylint: wrong-import-order (C0411) - standard import 'typing.Annotated' should be placed before third-party imports.
[warning] 24-24: pylint: wrong-import-order (C0411) - 'pydantic.Field' should be placed before first-party imports.
[warning] 11-11: pylint: unused-import (W0611) - 'Optional' imported from typing is unused.
🔇 Additional comments (1)
server.py (1)
23-25: Fix import order, remove unused Optional, and add Literal for enum-like inputs.
- Reorder imports (stdlib → third‑party → first‑party), drop unused typing.Optional, consolidate typing imports to Any, Annotated, Literal, and move pydantic.Field into third‑party imports.
Apply:
```diff
@@
-import json
-import os
-import asyncio
-from typing import Optional, Any
+import json
+import os
+import asyncio
+from typing import Any, Annotated, Literal
@@
-import requests
-import uvicorn
-from assisted_service_client import models
-from mcp.server.fastmcp import FastMCP
-
-
-from service_client import InventoryClient
-from service_client.logger import log
-from metrics import metrics, track_tool_usage, initiate_metrics
-
-from typing import Annotated
-from pydantic import Field
+import requests
+import uvicorn
+from pydantic import Field
+from assisted_service_client import models
+from mcp.server.fastmcp import FastMCP
+from service_client import InventoryClient
+from service_client.logger import log
+from metrics import metrics, track_tool_usage, initiate_metrics
```

Sandbox couldn't run the lint/format tools (uv missing). Run locally to auto-fix and verify: `ruff --fix server.py; black server.py; pylint server.py -sn -rn`.
/hold
Force-pushed 50cf6c0 to 57b6f3e (compare).
/test eval-test
Force-pushed 57b6f3e to cecc09c (compare).
Actionable comments posted: 2
🧹 Nitpick comments (6)
server.py (6)
272-279: Docstring return-type mismatch (returns JSON string, not dict). The function returns `json.dumps(iso_info)` (a `str`), but the docstring says `dict`. Fix to prevent confusion. Apply:
```diff
-    Returns:
-        dict: A JSON containing ISO download URLs and optional
-        expiration times. Each ISO's information is formatted as:
-        [{
-            url: <download-url>
-            expires_at: <expiration-timestamp> (if available)
-        }]
+    Returns:
+        str: A JSON array where each element is:
+        {
+            "url": "<download-url>",
+            "expires_at": "<expiration-timestamp>"  # optional
+        }
```

Also applies to: 287-293
338-367: Polish: fix grammar and avoid duplicate defaults in Field and signature
- Typo/spacing in the `single_node` description ("cluster.Set …").
- Defaults are specified twice (in `Field(default=...)` and in the signature). Keep one source of truth (the signature) to avoid schema drift.

Apply:
```diff
-    single_node: Annotated[
+    single_node: Annotated[
         bool,
         Field(
-            description="Whether to create a single-node cluster.Set to True for edge deployments or resource-constrained environments. Set to False for production high-availability clusters with multiple control plane nodes."
+            description="Whether to create a single-node cluster. Set to True for edge deployments or resource-constrained environments; set to False for production high-availability clusters with multiple control plane nodes."
         ),
     ],
-    ssh_public_key: Annotated[
-        str | None,
-        Field(default=None, description="SSH public key for accessing cluster nodes."),
-    ] = None,
+    ssh_public_key: Annotated[
+        str | None,
+        Field(description="SSH public key for accessing cluster nodes."),
+    ] = None,
-    cpu_architecture: Annotated[
-        str,
-        Field(
-            default="x86_64",
-            description="The CPU architecture for the cluster. Defaults to 'x86_64' if not specified. Valid options are: x86_64, aarch64, arm64, ppc64le, s390x.",
-        ),
-    ] = "x86_64",
+    cpu_architecture: Annotated[
+        str,
+        Field(
+            description="The CPU architecture for the cluster. Defaults to 'x86_64' if not specified. Valid options are: x86_64, aarch64, arm64, ppc64le, s390x.",
+        ),
+    ] = "x86_64",
```

Optional refinement (once accepted values are confirmed): narrow the type to a Literal so the tool surfaces a constrained enum to the LLM.
```diff
-from typing import Any, Annotated
+from typing import Any, Annotated, Literal
@@
-    cpu_architecture: Annotated[
-        str,
+    cpu_architecture: Annotated[
+        Literal["x86_64", "arm64", "aarch64", "ppc64le", "s390x"],
         Field(
             description="The CPU architecture for the cluster. Defaults to 'x86_64' if not specified. Valid options are: x86_64, aarch64, arm64, ppc64le, s390x.",
         ),
     ] = "x86_64",
```

Please verify Assisted Service accepts both "arm64" and "aarch64"; if not, drop the alias that isn't supported.
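For reference, the reason a `Literal` improves the inputSchema is that its options are runtime-introspectable, so schema generators can emit a JSON Schema `enum` instead of a free string; the arm64→aarch64 normalization can then live next to it. A small stdlib sketch (the alias mapping is an assumption from this thread, not confirmed Assisted Service behavior):

```python
from typing import Literal, get_args

CpuArch = Literal["x86_64", "arm64", "aarch64", "ppc64le", "s390x"]

# Schema generators turn these introspectable options into a JSON Schema enum.
print(get_args(CpuArch))  # ('x86_64', 'arm64', 'aarch64', 'ppc64le', 's390x')


def normalize_arch(arch: str) -> str:
    # Assumed backend preference: map the 'arm64' alias to 'aarch64'.
    return "aarch64" if arch.lower() == "arm64" else arch


print(normalize_arch("arm64"))   # aarch64
print(normalize_arch("x86_64"))  # x86_64
```

With an enum in the schema, an LLM cannot invent an out-of-set value like 'none' for this argument at all.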
446-462: Prefer IP address types for VIPs; coerce to str on use. Using `IPvAnyAddress` tightens validation and produces better schema hints, while we keep runtime behavior by converting to str in the client call.

```diff
-from pydantic import Field
+from pydantic import Field, IPvAnyAddress
@@
-    api_vip: Annotated[
-        str,
+    api_vip: Annotated[
+        IPvAnyAddress,
         Field(
             description="The IP address for the cluster API endpoint. This is where kubectl and other management tools will connect."
         ),
     ],
-    ingress_vip: Annotated[
-        str,
+    ingress_vip: Annotated[
+        IPvAnyAddress,
         Field(
             description="The IP address for ingress traffic to applications running in the cluster."
         ),
     ],
@@
-    result = await client.update_cluster(
-        cluster_id, api_vip=api_vip, ingress_vip=ingress_vip
-    )
+    result = await client.update_cluster(
+        cluster_id, api_vip=str(api_vip), ingress_vip=str(ingress_vip)
+    )
```

Also applies to: 488-491
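The validation this buys is essentially what the stdlib `ipaddress` module performs; a quick sketch of the behavior being proposed:

```python
import ipaddress


def validate_vip(value: str) -> str:
    # Parse as IPv4 or IPv6; raises ValueError for anything else.
    # This mirrors the stricter input checking IPvAnyAddress would provide.
    return str(ipaddress.ip_address(value))


print(validate_vip("192.168.1.10"))  # 192.168.1.10
print(validate_vip("2001:db8::1"))   # 2001:db8::1
try:
    validate_vip("not-an-ip")
except ValueError:
    print("rejected")                # rejected
```

Rejecting malformed VIPs at the schema boundary gives the caller an immediate, specific error instead of a late failure from the backend.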
572-582: Avoid hardcoding bundle names in schema text. Available bundles can vary by environment/version. Suggest describing them as examples and pointing to `list_operator_bundles()` for the source of truth.

```diff
-    Field(
-        description="The name of the operator bundle to add. The available operator bundle names are 'virtualization' and 'openshift-ai'"
-    ),
+    Field(
+        description="The name of the operator bundle to add. Examples: 'virtualization', 'openshift-ai'. Use list_operator_bundles() to discover available bundles in your environment."
+    ),
```

Optional: if you want schema-level guidance without locking to two values, consider adding `json_schema_extra={"examples": ["virtualization", "openshift-ai"]}`.
706-720: Constrain the host role to an enum. This improves the input schema and avoids invalid role values.

```diff
-from typing import Any, Annotated
+from typing import Any, Annotated, Literal
@@
-    role: Annotated[
-        str,
+    role: Annotated[
+        Literal["auto-assign", "master", "worker"],
         Field(
             description="The role to assign to the host. Valid options are: auto-assign (Let the installer automatically determine the role), master (Control plane node - API server, etcd, scheduler), worker (Compute node for running application workloads)."
         ),
     ],
```
758-768: SSH key description: include a modern key example. Recommend adding an ed25519 example (rsa may be disabled in some orgs) to help users provide valid keys.

```diff
-    description="The SSH public key to set for the cluster. This should be a valid SSH public key in OpenSSH format."
+    description="The SSH public key to set for the cluster. Provide a valid OpenSSH public key (e.g., 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... user@host')."
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
server.py (12 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
server.py (1)
service_client/assisted_service_api.py (2)
install_cluster (367-382), add_operator_bundle_to_cluster (419-446)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Red Hat Konflux / assisted-service-mcp-saas-main-on-pull-request
🔇 Additional comments (4)
server.py (4)
138-145: LGTM: clearer input schema for cluster_id. Annotation and description look good.
206-211: LGTM: cluster_events parameter docs. Accurate and concise.
235-246: LGTM: host_events parameter docs. Clear distinction between cluster_id and host_id.
497-501: LGTM: install_cluster parameter docs. Clear and consistent with the service_client signature.
The commented hunk in `server.py`:

```python
from typing import Any, Annotated

import requests
import uvicorn
from pydantic import Field
```
🧩 Analysis chain
Validate runtime supports Annotated/| unions and Pydantic annotations
This file now relies on Python 3.10+ (str | None) and Pydantic support for Annotated[...] fields. Please confirm CI/runtime versions to avoid runtime/type-model mismatches.
Run:
🏁 Script executed:

```bash
#!/bin/bash
# Verify Python and Pydantic versions declared in repo
fd -a 'pyproject.toml|setup.cfg|setup.py|requirements*.txt|Pipfile|poetry.lock|.tool-versions|.python-version|Dockerfile*' -HI \
  | sed 's/^/--- /'
echo
rg -n -C2 -e 'python_requires|requires-python|tool.poetry.dependencies.*python|FROM .*python|pydantic\s*[<=>]' \
  pyproject.toml setup.cfg setup.py requirements*.txt Pipfile poetry.lock .tool-versions .python-version Dockerfile* 2>/dev/null || true
```

Length of output: 485
Align Python runtime and dependency versions
- Mismatch: pyproject.toml declares requires-python = ">=3.13" (pyproject.toml:6) but Dockerfile uses a Python 3.11 base image (Dockerfile:1) — update the Dockerfile or lower the declared Python requirement so they match.
- server.py uses typing.Annotated and PEP‑604 unions (str | None) — ensure CI/runtime >=3.10 and consistent with the declared version.
- No explicit Pydantic pin found in scanned manifests — add/pin a Pydantic version that supports your Annotated usage and verify CI installs that version.
- Verify .python-version (repo root) and any CI images/runner configs also match the chosen target.
The commented hunk in `server.py`:

```python
async def cluster_credentials_download_url(
    cluster_id: Annotated[
        str,
        Field(
            description="The unique identifier of the cluster to get credentials for."
        ),
    ],
    file_name: Annotated[
        str,
        Field(
            description="The type of credential file to download. Valid options are: kubeconfig (Standard kubeconfig file for cluster access), kubeconfig-noingress (Kubeconfig without ingress configuration), kubeadmin-password (The kubeadmin user password file)."
        ),
    ],
) -> str:
```
Do not log presigned URLs; restrict file_name to an enum
- Logging `result` leaks a time-limited secret URL into logs. Remove it.
- Constrain `file_name` to a `Literal[...]` so tools/LLMs choose valid options.
```diff
-from typing import Any, Annotated
+from typing import Any, Annotated, Literal
@@
-    file_name: Annotated[
-        str,
+    file_name: Annotated[
+        Literal["kubeconfig", "kubeconfig-noingress", "kubeadmin-password"],
         Field(
             description="The type of credential file to download. Valid options are: kubeconfig (Standard kubeconfig file for cluster access), kubeconfig-noingress (Kubeconfig without ingress configuration), kubeadmin-password (The kubeadmin user password file)."
         ),
     ],
@@
-    log.info(
-        "Successfully retrieved presigned URL for cluster %s credentials file %s - %s",
-        cluster_id,
-        file_name,
-        result,
-    )
+    log.info(
+        "Successfully retrieved presigned URL for cluster %s credentials file %s",
+        cluster_id,
+        file_name,
+    )
```

Also applies to: 657-662
🤖 Prompt for AI Agents
In server.py around lines 610-623 (and similarly around 657-662), the handler
currently accepts file_name as a free string and logs the presigned URL result;
replace the file_name Annotated[str, Field(...)] with an
Annotated[Literal["kubeconfig","kubeconfig-noingress","kubeadmin-password"],
Field(...)] to constrain allowed values, and remove any logging statements that
record or print the generated presigned URL (result) so the time-limited secret
is not written to logs; ensure validation/error handling still returns safe
messages without exposing the URL.
/test eval-test
@andrej1991: The following test failed:

Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Symptom: Sometimes when creating a new cluster and the cpu architecture is not set, then in the tool call it defaults to 'none'.
Possible root cause:
The inputSchema is the primary source of the arguments and their descriptions for the LLM.
With the FastMCP Python library, the `mcp.server.fastmcp.FastMCP.tool` decorator automatically creates the inputSchema from the signature of the decorated function.
Currently the inputSchema does not document any of the arguments, so the LLM has to guess based on the tool description alone.
Linked issue: https://issues.redhat.com/browse/MGMT-21730