fix: satisfy env codacy diff typing #53
Conversation
Load coverage security helpers through the shared quality import surface so the script no longer relies on an analyzer-hostile ad hoc local import path, and add a regression test proving the shared helper bindings are reused. Co-authored-by: Codex <noreply@openai.com>
Route the coverage gate through the shared security import surface and add the missing test type annotation that Codacy still reports on main. Co-authored-by: Codex <noreply@openai.com>
Normalize the touched quality helper and test files back to LF-only line endings so Codacy does not flag mixed EOL noise on the PR diff. Co-authored-by: Codex <noreply@openai.com>
Wrap the new shared-helper identity assertions so the coverage helper follow-up stays free of new line-length noise while preserving the regression added for PR #52. Co-authored-by: Codex <noreply@openai.com>
Document the public normalize_source_path helper and the shared security import test so the PR diff stays free of Codacy complaints after the shared-import refactor. Co-authored-by: Codex <noreply@openai.com>
Codacy's PR-diff analysis on PR #52 flagged the Python 3.8-incompatible builtin generic in the DeepScan branch test. Switch it to typing.List so the native Codacy check agrees with the green wrapper gate. Co-authored-by: Codex <noreply@openai.com>
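For reference, the change that commit describes looks roughly like the sketch below; the helper name is hypothetical and only the annotation style mirrors the actual fix:

```python
from typing import List  # typing.List is valid on Python 3.8; list[str] requires 3.9+


def collect_branch_findings() -> List[str]:  # hypothetical test helper, for illustration
    findings: List[str] = []  # was the 3.8-incompatible builtin generic: list[str]
    return findings
```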
📝 Walkthrough

Refactors quality scripts to load shared security helpers via dynamic import with a local fallback, exposes a public normalize_source_path(raw_path: str), adds new quality impl/support modules and required-checks tooling, extracts WSL provider code, and restructures env_inspector_core service/listing/ops with request dataclasses and aliases; tests and CI workflow references are updated.
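The "dynamic import with a local fallback" mentioned above is a pattern along these lines; the module names are assumptions based on the files listed in this PR, not the verbatim implementation:

```python
import importlib
from types import ModuleType


def load_security_helpers() -> ModuleType:
    """Sketch: try the shared quality import surface first, then a repo-local module."""
    for name in ("scripts.security_helpers", "security_helpers"):
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("security_helpers not importable from any known location")
```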
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Update env-inspector wrappers to the current quality-zero-platform main commit so the aggregate gate sees the same required contexts as the latest reusable workflows. Co-authored-by: Codex <noreply@openai.com>
Initialize Qlty in the repository so the updated quality-zero-platform workflow can run against a repo-local config on main and pull requests. Co-authored-by: Codex <noreply@openai.com>
Actionable comments posted: 9
🧹 Nitpick comments (19)
tests/test_linux_support.py (1)
15-31: Test isolation pattern is sound, but consider deduplication.

The module-level capture and autouse fixture pattern correctly ensures test isolation by resetting patched globals before each test. Since pytest imports all test modules during collection (before any tests execute), the module-level captures will correctly record the true originals.

However, this exact pattern is duplicated in tests/test_service_preview.py (lines 11-26). Consider extracting this fixture to a conftest.py file to avoid duplication and ensure consistent reset behavior across all tests.

♻️ Optional: Extract to conftest.py
```python
# tests/conftest.py
import pytest

import env_inspector_core.service as service_module
from env_inspector_core.service import EnvInspectorService

_ORIGINAL_PATH_EXISTS = service_module._path_exists
_ORIGINAL_READ_TEXT_IF_EXISTS = service_module._read_text_if_exists
_ORIGINAL_WRITE_TEXT_FILE = EnvInspectorService._write_text_file
_ORIGINAL_WHICH = service_module.which
_ORIGINAL_RUN = service_module.run
_ORIGINAL_PATH_HOME = service_module.Path.home


@pytest.fixture(autouse=True)
def _reset_service_globals(monkeypatch: pytest.MonkeyPatch) -> None:
    monkeypatch.setattr(service_module, "_path_exists", _ORIGINAL_PATH_EXISTS)
    monkeypatch.setattr(service_module, "_read_text_if_exists", _ORIGINAL_READ_TEXT_IF_EXISTS)
    monkeypatch.setattr(EnvInspectorService, "_write_text_file", staticmethod(_ORIGINAL_WRITE_TEXT_FILE))
    monkeypatch.setattr(service_module, "which", _ORIGINAL_WHICH)
    monkeypatch.setattr(service_module, "run", _ORIGINAL_RUN)
    monkeypatch.setattr(service_module.Path, "home", _ORIGINAL_PATH_HOME)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_linux_support.py` around lines 15 - 31, Duplicate test-reset fixture and module-level originals (_ORIGINAL_PATH_EXISTS, _ORIGINAL_READ_TEXT_IF_EXISTS, _ORIGINAL_WRITE_TEXT_FILE, _ORIGINAL_WHICH, _ORIGINAL_RUN, _ORIGINAL_PATH_HOME and the autouse fixture _reset_service_globals) should be extracted from tests/test_linux_support.py into a shared tests/conftest.py; move the module-level captures and the pytest.fixture implementation there so both tests/test_linux_support.py and tests/test_service_preview.py can import the originals from the same place and simply remove the duplicated block in those files, keeping the exact symbol names (service_module, EnvInspectorService, _reset_service_globals, and the _ORIGINAL_* constants) and retaining monkeypatch.setattr calls shown to restore _path_exists, _read_text_if_exists, EnvInspectorService._write_text_file (wrapped with staticmethod), which, run, and Path.home.
env_inspector_core/service_restore.py (1)
8-19: Missing required keyword arguments raise KeyError instead of TypeError.

When a required argument like target is missing, kwargs.pop("target") raises KeyError, which is less descriptive than a TypeError that names the missing parameter. Consider using kwargs.pop("target", None) followed by explicit validation.

💡 Proposed validation pattern
```diff
 def restore_dotenv_target(*args, **kwargs) -> None:
     if args:
         raise TypeError("restore_dotenv_target accepts keyword arguments only.")
-    target = kwargs.pop("target")
-    text = kwargs.pop("text")
-    scope_roots = kwargs.pop("scope_roots")
-    parse_scoped_dotenv_target_fn = kwargs.pop("parse_scoped_dotenv_target_fn")
-    write_scoped_text_file_fn = kwargs.pop("write_scoped_text_file_fn")
+    target = kwargs.pop("target", None)
+    text = kwargs.pop("text", None)
+    scope_roots = kwargs.pop("scope_roots", None)
+    parse_scoped_dotenv_target_fn = kwargs.pop("parse_scoped_dotenv_target_fn", None)
+    write_scoped_text_file_fn = kwargs.pop("write_scoped_text_file_fn", None)
     if kwargs:
         unexpected = ", ".join(sorted(kwargs))
         raise TypeError(f"Unexpected keyword argument(s): {unexpected}")
+    if target is None or text is None or scope_roots is None:
+        raise TypeError("target, text, and scope_roots are required.")
+    if parse_scoped_dotenv_target_fn is None or write_scoped_text_file_fn is None:
+        raise TypeError("parse_scoped_dotenv_target_fn and write_scoped_text_file_fn are required.")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/service_restore.py` around lines 8 - 19, The function restore_dotenv_target currently uses kwargs.pop("param") which raises KeyError for missing required args; change to kwargs.pop("param", None) for each required parameter (target, text, scope_roots, parse_scoped_dotenv_target_fn, write_scoped_text_file_fn) and then validate: collect any that are None and raise a TypeError naming the missing parameter(s) (e.g., raise TypeError("Missing required keyword argument(s): target, text")). Keep the final unexpected-kwargs check as-is.
env_inspector_core/service.py (2)
425-447: Double normalization of request data.

_file_update normalizes the input via _normalize_target_operation_request_helper and creates a TargetOperationRequest, then passes request=request to child methods like _update_dotenv_file. Each child method also calls _normalize_target_operation_request_helper, resulting in redundant normalization.

Since child methods already accept the TargetOperationRequest object via the request=... kwarg, they could skip re-normalization when a request object is already provided.

💡 Example optimization for _update_dotenv_file
```diff
 def _update_dotenv_file(
     self,
     *args: Any,
     apply_changes: bool,
     **kwargs: Any,
 ) -> Tuple[str, str, str | None, bool, str | None]:
-    request_data = _normalize_target_operation_request_helper(*args, **kwargs)
-    request = TargetOperationRequest(
-        target=request_data["target"],
-        key=request_data["key"],
-        value=request_data["value"],
-        action=request_data["action"],
-        scope_roots=request_data["scope_roots"],
-    )
+    if args and isinstance(args[0], TargetOperationRequest):
+        request = args[0]
+    elif "request" in kwargs and isinstance(kwargs["request"], TargetOperationRequest):
+        request = kwargs["request"]
+    else:
+        request_data = _normalize_target_operation_request_helper(*args, **kwargs)
+        request = TargetOperationRequest(
+            target=request_data["target"],
+            key=request_data["key"],
+            value=request_data["value"],
+            action=request_data["action"],
+            scope_roots=request_data["scope_roots"],
+        )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/service.py` around lines 425 - 447, _file_update currently normalizes inputs then constructs a TargetOperationRequest and passes it to child methods, but each child (_update_dotenv_file, _update_linux_file, _update_wsl_file, _update_powershell_file) re-runs _normalize_target_operation_request_helper causing redundant work; update those child methods to detect when a TargetOperationRequest is passed (request parameter) and skip calling _normalize_target_operation_request_helper, using the provided request directly, and ensure their signatures/types accept TargetOperationRequest so _file_update can continue to create and forward the request without triggering double normalization.
717-745: Unconventional method binding pattern at module level.

Binding methods to EnvInspectorService at module level (outside the class definition) is functional but unconventional. This makes it harder to understand the class's full interface from its definition and may confuse static analysis tools and IDEs.

Consider one of these alternatives (a sketch of the mixin option follows this list):
- Define methods directly in the class body
- Use mixins or inheritance for logically grouped methods
- Document this pattern prominently with a comment explaining the rationale
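A minimal sketch of the mixin alternative, with hypothetical method names standing in for the actual aliased helpers:

```python
class _ListingMixin:
    """Hypothetical mixin grouping listing-related methods."""

    def available_targets(self) -> list:
        return []


class _PrivilegedWriteMixin:
    """Hypothetical mixin grouping privileged write helpers."""

    def write_with_privilege(self, path: str, content: str) -> None:
        raise NotImplementedError


class EnvInspectorService(_ListingMixin, _PrivilegedWriteMixin):
    """The full interface is now visible from the class definition and its bases."""
```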
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/service.py` around lines 717 - 745, The class API is being attached via module-level assignments (e.g., EnvInspectorService.which, EnvInspectorService.run, EnvInspectorService.get_powershell_profile_paths, EnvInspectorService._linux_etc_environment_path, EnvInspectorService.available_targets, etc.), which is unconventional; fix by moving these bound callables into the EnvInspectorService class body (or create mixin classes and add them to EnvInspectorService's bases) so methods are declared with proper def/@staticmethod/@classmethod/@property as appropriate, or if you must keep this pattern add a prominent comment above these assignments explaining why and listing which functions are being aliased; update references for _service_aliases.<function_name> and helper functions (e.g., _apply_row_filters_helper, _diff_text_helper, _powershell_target_for_path_helper) accordingly.
env_inspector_core/providers.py (1)
347-354: Consider improving readability of the spec-driven loop.

The nested list comprehension with tuple unpacking is functional but dense. The spec[1].exists() filter, referencing by index, reduces clarity.

💡 Alternative with named tuple or clearer structure
```diff
-    for source_type, path, parser, precedence_rank, requires_privilege in [
-        spec
-        for spec in (
-            (SOURCE_LINUX_BASHRC, bashrc, parse_bash_exports, 20, False),
-            (SOURCE_LINUX_ETC_ENV, etc_env, parse_etc_environment, 30, True),
-        )
-        if spec[1].exists()
-    ]:
+    specs = (
+        (SOURCE_LINUX_BASHRC, bashrc, parse_bash_exports, 20, False),
+        (SOURCE_LINUX_ETC_ENV, etc_env, parse_etc_environment, 30, True),
+    )
+    for source_type, path, parser, precedence_rank, requires_privilege in specs:
+        if not path.exists():
+            continue
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/providers.py` around lines 347 - 354, The for-loop building specs is hard to read due to tuple-index filtering; refactor the spec list into a clearer structure (e.g., a namedtuple or dict) and filter by a readable attribute instead of spec[1].exists(). Replace the current comprehension with a list of objects like Spec(source_type=SOURCE_LINUX_BASHRC, path=bashrc, parser=parse_bash_exports, precedence_rank=20, requires_privilege=False) and then iterate: for spec in specs if spec.path.exists(): unpack using spec.source_type, spec.path, spec.parser, spec.precedence_rank, spec.requires_privilege so callers like parse_bash_exports and parse_etc_environment are obvious and the existence check uses spec.path.exists().
env_inspector_core/service_aliases.py (2)
90-102: Unusual method binding pattern – functions use cls/self but are defined at module level.

linux_etc_environment_path and write_linux_etc_environment_with_privilege use cls as the first parameter, expecting to be bound as classmethods later. This works (see service.py line 729) but is unconventional and could confuse maintainers.

Consider documenting this pattern or using a more conventional approach, like defining these within the class or using a decorator; a sketch follows.
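A minimal sketch of the conventional in-class alternative, simplified from the names in the prompt below:

```python
from pathlib import Path


class Service:
    _LINUX_ETC_ENV_PATH = Path("/etc/environment")

    @classmethod
    def linux_etc_environment_path(cls) -> Path:
        # Declared inside the class body, so the classmethod binding is
        # explicit and visible to static analysis tools.
        return cls._LINUX_ETC_ENV_PATH
```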
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/service_aliases.py` around lines 90 - 102, The two module-level functions linux_etc_environment_path and write_linux_etc_environment_with_privilege are written to expect a class (using the parameter name cls) but are defined at module scope; make them proper class methods on the appropriate class (the Service class that defines _LINUX_ETC_ENV_PATH, which, run, and _write_text_file) by moving their definitions into that class and adding the `@classmethod` decorator (keep the same signatures and internal calls to cls._LINUX_ETC_ENV_PATH, cls.which, cls.run, cls._write_text_file), or alternatively, if you prefer to keep them at module level, rename the first parameter to service_cls and add a clear docstring explaining they must be bound via classmethod in service.py; ensure all call sites (e.g., service.py usage) still invoke them as class methods.
19-21: Confusing import chain for normalize_target_operation_request.

This imports normalize_target_operation_request from service_ops, but service_ops.py imports it from service_ops_request without using it locally (it appears to be a re-export). Consider importing directly from service_ops_request for clarity.

💡 Direct import
```diff
-from .service_ops import normalize_target_operation_request as _normalize_target_operation_request_helper
+from .service_ops_request import normalize_target_operation_request as _normalize_target_operation_request_helper
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/service_aliases.py` around lines 19 - 21, The import in service_aliases.py brings in normalize_target_operation_request via service_ops, which only re-exports it; replace the indirect import with a direct import from the original module (service_ops_request) to remove the confusing chain: import normalize_target_operation_request from service_ops_request (keeping the local alias name _normalize_target_operation_request_helper), and remove or stop relying on the intermediate re-export from service_ops to clarify the dependency.
env_inspector_core/providers_wsl.py (3)
17-17: Duplicate constant definition.

_HELPER_DISTRO_RE is also defined in providers.py (line 83). Consider consolidating to a single location (e.g., constants.py) to avoid drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/providers_wsl.py` at line 17, Duplicate regex constant _HELPER_DISTRO_RE exists; extract it into a single constants module (e.g., create or use constants.py) and define _HELPER_DISTRO_RE = re.compile(r"^(docker-desktop|docker-desktop-data)$", re.IGNORECASE) there, then update providers_wsl and the other consumer (providers) to import and use that constant instead of redefining it; ensure imports are added (from constants import _HELPER_DISTRO_RE) and remove the local definitions so both modules reference the shared symbol.
128-135: Consider validating max_depth bounds.

While max_depth is typed as int, there's no validation that it's a reasonable positive value. A negative or extremely large value could cause unexpected behavior in the find command.

💡 Proposed validation
```diff
 def scan_dotenv_files(self, distro: str, root_path: str, max_depth: int) -> List[str]:
+    if max_depth < 0:
+        raise ValueError("max_depth must be non-negative")
     quoted_root = shlex.quote(root_path)
     command = (
         f"find {quoted_root} -maxdepth {max_depth} -type f "
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/providers_wsl.py` around lines 128 - 135, The scan_dotenv_files function should validate the max_depth parameter before embedding it into the shell command: ensure max_depth is an integer >= 0 and within a sane upper bound (e.g. define a MAX_DEPTH constant like 10 or 20), and either clamp it to that bound or raise a ValueError for out-of-range values; also coerce to int (str(int(max_depth))) when building the command to avoid injection or malformed input. Update scan_dotenv_files (and any callers if needed) to perform this check on the max_depth variable and use the validated/clamped value in the command string while keeping shlex.quote(root_path) as-is.
111-126: Error handling discards context from the first write attempt.

When both root and sudo attempts fail, only the last error (root_error) is captured in the exception chain. The error from the first attempt (if it was root) is overwritten when sudo also fails.

💡 Preserve both errors
```diff
 def write_file_with_privilege(self, distro: str, path: str, content: str) -> None:
     quoted_path = shlex.quote(path)
     attempts = (
         ["-d", distro, "-u", "root", "-e", "bash", "-lc", f"cat > {quoted_path}"],
         ["-d", distro, "-e", "bash", "-lc", f"sudo tee {quoted_path} >/dev/null"],
     )
-    root_error: RuntimeError | None = None
+    errors: List[RuntimeError] = []
     for args in attempts:
         try:
             self._run(args, input_text=content)
             return
         except RuntimeError as exc:
-            root_error = exc
+            errors.append(exc)
     raise RuntimeError(
-        "Failed to write with both root and sudo fallback. Run app as admin or configure sudo/root access."
-    ) from root_error
+        f"Failed to write with both root and sudo fallback: {'; '.join(str(e) for e in errors)}"
+    )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/providers_wsl.py` around lines 111 - 126, The write_file_with_privilege function currently overwrites the first attempt's exception (root_error) when both attempts fail; change the error handling to preserve both exceptions by collecting them into a list (e.g., errors = []), appending each caught RuntimeError inside the for args loop (instead of overwriting root_error), and after the loop raise the final RuntimeError("Failed to write...") from errors[-1] while also preserving the earlier exception (for example by setting errors[-1].__context__ = errors[0] or otherwise chaining them) so both the root and sudo errors are available for debugging; update references to root_error and attempts accordingly.
scripts/quality/_codacy_zero_impl.py (2)
128-129: Empty code block can be removed.

The elif handled and open_issues == 0: pass block does nothing and was flagged by SonarCloud. Remove it or restructure the conditionals for clarity.

♻️ Restructure conditionals
```diff
 if handled and open_issues is None:
     findings.append("Codacy response did not include a parseable total issue count.")
-elif handled and open_issues == 0:
-    pass
-elif handled:
+elif handled and open_issues and open_issues != 0:
     findings.append(f"Codacy reports {open_issues} open issues (expected 0).")
     sample_payload = request_json_fn(request=replace(request, limit=20, method="POST", data={}))
     findings.extend(sample_findings_fn(sample_payload))
```

Or alternatively:
♻️ Use explicit positive check
```diff
 if handled and open_issues is None:
     findings.append("Codacy response did not include a parseable total issue count.")
-elif handled and open_issues == 0:
-    pass
-elif handled:
+elif handled and open_issues is not None and open_issues > 0:
     findings.append(f"Codacy reports {open_issues} open issues (expected 0).")
     sample_payload = request_json_fn(request=replace(request, limit=20, method="POST", data={}))
     findings.extend(sample_findings_fn(sample_payload))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/quality/_codacy_zero_impl.py` around lines 128 - 129, Remove the no-op branch "elif handled and open_issues == 0: pass" in the function/flow that uses the handled and open_issues variables; either delete that elif entirely and let the surrounding if/else handle the cases, or collapse its condition into the preceding/next branch so there is no empty block (locate the conditional that references handled and open_issues and adjust the logic accordingly).
108-110: Indirect function resolution adds complexity.

The pattern of looking up functions via getattr(public, "_request_json", _request_json) enables test aliasing but adds indirection. This is acceptable for testability, but consider documenting why this pattern is used; a documented sketch appears after the prompt below.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/quality/_codacy_zero_impl.py` around lines 108 - 110, The indirect resolution via request_json_fn = getattr(public, "_request_json", _request_json) and sample_findings_fn = getattr(public, "_sample_issue_findings", _sample_issue_findings) should be documented: add a concise comment above the public = _public_codacy_module() block explaining that getattr is used to allow test-time aliasing/mocking of the internal helpers (_request_json and _sample_issue_findings) while preserving default implementations, and note any expectations (e.g., callable signature) so future readers understand the indirection and its purpose.
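A documented sketch of that pattern; the default implementation shown here is a stub, and only the getattr indirection mirrors the actual code:

```python
import importlib


def _request_json(request):
    # Default implementation (stubbed for illustration).
    return {}


def _resolve_request_json():
    # getattr lets tests alias or mock the helper on the public module while
    # falling back to the local default; either callable must accept the
    # same request= signature.
    public = importlib.import_module("check_codacy_zero")
    return getattr(public, "_request_json", _request_json)
```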
scripts/quality/check_deepscan_zero.py (1)

22-28: Design concern: Frozen dataclass with mutable list field.

DeepScanRequest is marked frozen=True but contains a mutable findings: List[str] that is modified in _fetch_open_issues (lines 125, 130, 133). While this technically works (frozen prevents attribute reassignment, not mutation of mutable objects), it's semantically misleading.

Consider either:

- Removing frozen=True, since the object is effectively mutable via findings
- Passing findings separately and returning it from the function

♻️ Option 1: Remove frozen
```diff
-@dataclass(frozen=True)
+@dataclass
 class DeepScanRequest:
     host: str
     path: str
     query: dict
     token: str
     findings: List[str]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/quality/check_deepscan_zero.py` around lines 22 - 28, DeepScanRequest is declared with frozen=True but contains a mutable field findings which _fetch_open_issues mutates; either remove frozen=True from the DeepScanRequest dataclass declaration so mutations to findings are semantically consistent, or refactor _fetch_open_issues to not mutate DeepScanRequest.findings (e.g., accept and return a separate findings list or build and return findings from _fetch_open_issues). Locate the dataclass DeepScanRequest and the function _fetch_open_issues to apply the chosen change and update callers accordingly.
tests/test_qlty_config.py (1)
1-2: Minor: Python 2 compatibility imports are unnecessary.

Since tomllib requires Python 3.11+, the absolute_import and division future imports serve no purpose. Consider removing them for cleaner code.

♻️ Suggested cleanup
```diff
-from __future__ import absolute_import, division
-
 from pathlib import Path
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_qlty_config.py` around lines 1 - 2, Remove the obsolete Python 2 compatibility future imports at the top of tests/test_qlty_config.py (the lines importing absolute_import and division); since the project targets Python 3.11+ (and uses tomllib), simply delete those import lines to clean up the module-level imports.
scripts/quality/check_required_checks.py (1)
9-10: Unused imports.

The imports time and urllib.error are no longer used in this wrapper since the polling/retry logic was moved to _required_checks_impl.py.

♻️ Remove unused imports
```diff
 import json
 import os
 import sys
-import time
-import urllib.error
 from datetime import datetime, timezone
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/quality/check_required_checks.py` around lines 9 - 10, Remove the unused imports at the top of scripts/quality/check_required_checks.py: delete the `import time` and `import urllib.error` lines (they are vestigial after moving polling/retry logic into _required_checks_impl.py) and run the linter/formatter to ensure no leftover references remain.
scripts/quality/_required_checks_impl.py (1)
78-87: Remove duplicate _parse_args function and have the wrapper delegate to the impl module.

The _parse_args implementation in check_required_checks.py (lines 38-47) is identical to the one in _required_checks_impl.py (lines 78-87). Since this is the impl module, consolidate by removing the wrapper's copy and calling the impl version instead; a delegation sketch appears after the prompt below.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/quality/_required_checks_impl.py` around lines 78 - 87, Remove the duplicate _parse_args implementation from check_required_checks.py and have the wrapper delegate to the implementation in _required_checks_impl.py: delete the local _parse_args function in check_required_checks.py and replace its usage with a call or import referencing _required_checks_impl._parse_args (or import _parse_args from scripts.quality._required_checks_impl) so the wrapper reuses the single implementation defined in _required_checks_impl.py (function name: _parse_args) and avoid duplicated argument parsing logic.
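A sketch of the wrapper delegating to the impl module; the import path and the _parse_args signature are assumptions based on the prompt above:

```python
# check_required_checks.py — the wrapper keeps no local copy of _parse_args.
from _required_checks_impl import _parse_args


def main(argv=None) -> int:
    # Delegate argument parsing to the single implementation in the impl module.
    args = _parse_args(argv)
    # ...wrapper logic continues to use `args` as before...
    return 0
```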
scripts/quality/check_codacy_zero.py (1)

95-99: Missing stdout output for markdown consistency.

This script writes markdown output to a file but does not print it to stdout, unlike other similar scripts (check_sonar_zero.py, check_required_checks.py, check_quality_secrets.py, check_deepscan_zero.py), which all call print(out_md.read_text(...), end="").

If this is intentional (e.g., the calling workflow doesn't need stdout), no change is necessary. Otherwise, consider adding the print statement for consistency.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/quality/check_codacy_zero.py` around lines 95 - 99, The markdown file is written to disk but not emitted to stdout for consistency with other scripts; after out_md.write_text(_render_md(payload), encoding="utf-8") add a print of the file contents (e.g. print(out_md.read_text(encoding="utf-8"), end="")) so the rendered markdown produced by _render_md(payload) is printed to stdout as well; ensure you use the same encoding ("utf-8") and preserve the existing return (return 0 if status == "pass" else 1).
scripts/quality/_codacy_zero_support.py (1)
55-69: Sort __all__ to satisfy Ruff RUF022.

__all__ ordering is currently lint-noisy and may keep quality gates red.

Proposed diff
```python
__all__ = [
    "CODACY_API_HOST",
    "CODACY_REQUEST_EXCEPTIONS",
    "TOTAL_KEYS",
    "CodacyRequest",
    "_fetch_sample_payload",
    "_format_issue_sample",
    "_first_text",
    "_provider_candidates",
    "_request_json",
    "_sample_issue_findings",
    "encode_identifier",
    "request_json_https",
    "safe_output_path_in_workspace",
]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/quality/_codacy_zero_support.py` around lines 55 - 69, Sort the __all__ list in scripts/quality/_codacy_zero_support.py to satisfy Ruff RUF022 by ordering the exported names alphabetically; update the existing entries such as "CODACY_API_HOST", "CODACY_REQUEST_EXCEPTIONS", "TOTAL_KEYS", "CodacyRequest", "_fetch_sample_payload", "_format_issue_sample", "_first_text", "_provider_candidates", "_request_json", "_sample_issue_findings", "encode_identifier", "request_json_https", and "safe_output_path_in_workspace" into a single alphabetized sequence (preserve exact string values and quotes) so the module-level __all__ is lexically sorted.
scripts/quality/_required_checks_http.py (1)
207-231: Sort __all__ to clear Ruff RUF022 noise.

The export list is unsorted and will keep lint warnings active.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/quality/_required_checks_http.py` around lines 207 - 231, The module-level __all__ export list is unsorted and triggers Ruff RUF022; sort the __all__ list entries alphabetically (by string) while preserving the existing quoted symbol names and commas so the list is still a valid tuple/list and contains the same identifiers (e.g., "GITHUB_API_HOST", "GitHubRequest", "_SHA_RE", "_TRANSIENT_HTTP_CODES", "_api_get_check_runs", etc.); update the __all__ definition in the module to the alphabetically ordered sequence to clear the lint warning.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@env_inspector_core/cli.py`:
- Around line 196-206: The exit code handling is fragile because a handler
(specifically _list_records) can return None which gets assigned to exit_code;
change _list_records to explicitly return 0 on success (so it behaves like other
handlers) and then remove the special-case branch that forces exit_code = 0 when
args.command == "list"; update the code paths that call handler(active_service,
args) to assume an int return and preserve the initial exit_code value (2) only
when handler is None or an exception occurs.
In `@env_inspector_core/service_aliases.py`:
- Around line 24-45: In registry_write, detect the invalid state where action ==
"set" but value is None and raise a clear exception (e.g., ValueError or
RuntimeError) before attempting any change; update the function (around the
action/value handling block in registry_write) to validate if action == "set"
and value is None and raise with a descriptive message like "Cannot set registry
key {key}: value is None", so callers receive an explicit error instead of
silently skipping the operation.
In `@env_inspector_core/service_ops.py`:
- Line 9: Remove the unused import normalize_target_operation_batch from the
import statement and keep only normalize_target_operation_request (which is
intentionally re-exported); update the import line that currently references
normalize_target_operation_batch and normalize_target_operation_request so it
imports just normalize_target_operation_request, leaving any existing re-export
logic for normalize_target_operation_request unchanged.
In `@env_inspector_gui/models.py`:
- Around line 179-180: The helper currently forwards an empty list from
maybe_choose_dotenv_targets() which should be treated as cancellation; modify
the logic in models.py where scoped_targets = maybe_choose_dotenv_targets(key,
list(targets)) to return None when scoped_targets is an empty list so
EnvInspectorController._resolve_operation_inputs() receives None and properly
aborts; reference the maybe_choose_dotenv_targets call and ensure the function
returns None (not []) for the "user unchecked all targets" case.
- Around line 138-140: The current fallback picks context and distro
independently, which can pair a WSL context with the wrong distro; fix by
choosing context first and then deriving wsl_distro from that chosen context. In
other words compute context (using current_context, contexts[0], or
runtime_context) and then: if context starts with "wsl:" extract the distro
after the colon and set wsl_distro to that value (only if it exists in the
derived distros list or else ""), otherwise fall back to current_wsl_distro if
it exists in distros or to "" as a final fallback. Update the assignments around
the variables context, distros, current_wsl_distro, and wsl_distro (and ensure
EnvInspectorController._update_context_values() will therefore store a matching
pair).
In `@scripts/quality/_required_checks_http.py`:
- Around line 168-176: The _check_run_failure function currently treats any
completed check run with a conclusion not equal to "success" as a failure;
update it to treat conclusions in the accepted set {"success", "neutral",
"skipped"} as non-failures. In practice, change the logic that reads conclusion
= observed.get("conclusion") and the subsequent check (in function
_check_run_failure) so it only returns a failure string if conclusion is not one
of those three values, otherwise return None.
- Around line 150-164: The function _collect_contexts currently assigns
contexts[key] = value which overwrites newer entries with older duplicates;
change it to only set the context if it doesn't already exist by using
contexts.setdefault(key, value) when handling entries from
_check_run_context(run) and _status_context(status) so the first (newest)
occurrence is preserved; keep the existing iteration order (APIs return
newest-first) and apply setdefault in both loops to prevent stale overwrite.
In `@scripts/quality/check_deepscan_zero.py`:
- Around line 101-117: In _coerce_fetch_request, after constructing the
DeepScanRequest from kwargs (host, path, query, token, findings), validate that
no unexpected keyword arguments remain by checking kwargs for leftover keys and
raising a TypeError (e.g., "Unexpected keyword arguments: ...") if any are
present; update the function around the kwargs branch to pop the known fields
(host, path, query, token, findings) then inspect kwargs and raise an error
listing any unexpected keys before returning the DeepScanRequest.
In `@scripts/quality/check_sentry_zero.py`:
- Around line 29-33: Change SentryScanRequest to use an immutable sequence type
for projects (use Tuple[str, ...] instead of List[str]) and tighten validation
inside _coerce_scan_request: explicitly require and validate presence of keys
"org", "projects", and "token" (raise a clear ValueError if any are missing),
reject any unknown kwargs (raise TypeError rather than silently ignoring),
disallow string inputs for projects (raise ValueError if projects is a str) and
normalize/convert a valid iterable of project names into a tuple of strings for
the SentryScanRequest constructor; reference SentryScanRequest (fields org,
projects, token) and the _coerce_scan_request function when making these
changes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: facb843f-edb5-40d1-a888-6dfd6fd110f7
📒 Files selected for processing (31)
- .qlty/qlty.toml
- env_inspector_core/cli.py
- env_inspector_core/providers.py
- env_inspector_core/providers_wsl.py
- env_inspector_core/service.py
- env_inspector_core/service_aliases.py
- env_inspector_core/service_listing.py
- env_inspector_core/service_ops.py
- env_inspector_core/service_ops_request.py
- env_inspector_core/service_privileged.py
- env_inspector_core/service_restore.py
- env_inspector_core/service_wsl.py
- env_inspector_gui/controller.py
- env_inspector_gui/dialogs.py
- env_inspector_gui/models.py
- env_inspector_gui/table_logic.py
- scripts/quality/_codacy_zero_impl.py
- scripts/quality/_codacy_zero_support.py
- scripts/quality/_required_checks_http.py
- scripts/quality/_required_checks_impl.py
- scripts/quality/assert_coverage_100.py
- scripts/quality/check_codacy_zero.py
- scripts/quality/check_deepscan_zero.py
- scripts/quality/check_required_checks.py
- scripts/quality/check_sentry_zero.py
- scripts/security_helpers.py
- tests/test_gui_table_logic.py
- tests/test_linux_support.py
- tests/test_pr39_owned_coverage.py
- tests/test_qlty_config.py
- tests/test_service_preview.py
✅ Files skipped from review due to trivial changes (1)
- .qlty/qlty.toml
```diff
+    exit_code = 2
+    if handler is None:
+        print(f"Unknown command: {args.command}", file=sys.stderr)
+        return 2
-    try:
-        return handler(active_service, args)
-    except ValueError as exc:
-        print(str(exc), file=sys.stderr)
-        return 2
+    else:
+        try:
+            exit_code = handler(active_service, args)
+            if args.command == "list":
+                exit_code = 0
+        except ValueError as exc:
+            print(str(exc), file=sys.stderr)
+    return exit_code
```
Fragile exit code handling when handler returns None.
The logic has a subtle issue: when handler(active_service, args) returns None (as _list_records does), exit_code is assigned None. The special case on lines 202-203 fixes this for the "list" command, but if any other handler were to return None in the future, the function would return None instead of a valid exit code.
Consider making _list_records return 0 for consistency with other handlers, which would eliminate the special case:
🛠️ Proposed fix

```diff
-def _list_records(service: EnvInspectorService, args: argparse.Namespace) -> None:
+def _list_records(service: EnvInspectorService, args: argparse.Namespace) -> int:
     _reject_raw_secret_stdout(args)
     _emit_stdout_rows(_stdout_safe_rows(service, args), output=args.output)
+    return 0
```

Then remove the special case:
```diff
 try:
     exit_code = handler(active_service, args)
-    if args.command == "list":
-        exit_code = 0
 except ValueError as exc:
     print(str(exc), file=sys.stderr)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@env_inspector_core/cli.py` around lines 196 - 206, The exit code handling is
fragile because a handler (specifically _list_records) can return None which
gets assigned to exit_code; change _list_records to explicitly return 0 on
success (so it behaves like other handlers) and then remove the special-case
branch that forces exit_code = 0 when args.command == "list"; update the code
paths that call handler(active_service, args) to assume an int return and
preserve the initial exit_code value (2) only when handler is None or an
exception occurs.
```python
def registry_write(self, *args: Any, apply_changes: bool, **kwargs: Any):
    request_data = _normalize_target_operation_request_helper(*args, **kwargs)
    target = request_data["target"]
    key = request_data["key"]
    value = request_data["value"]
    action = request_data["action"]
    if self.win_provider is None:
        raise RuntimeError("Windows registry provider unavailable.")
    scope = WindowsRegistryProvider.USER_SCOPE if target == "windows:user" else WindowsRegistryProvider.MACHINE_SCOPE
    current = self.win_provider.list_scope(scope)
    before = json.dumps(current, indent=2, sort_keys=True)
    if action == "set" and value is not None:
        if apply_changes:
            self.win_provider.set_scope_value(scope, key, value)
        current[key] = value
    elif action == "remove":
        if apply_changes:
            self.win_provider.remove_scope_value(scope, key)
        current.pop(key, None)
    after = json.dumps(current, indent=2, sort_keys=True)
    requires_priv = target == "windows:machine"
    return before, after, None, requires_priv, None
```
registry_write doesn't handle the case when action == "set" but value is None.
Line 35 checks if action == "set" and value is not None, silently skipping the set operation when value is None. This could lead to confusing behavior where a "set" action does nothing. Consider raising an error for this invalid state.
🛡️ Proposed validation

```diff
 def registry_write(self, *args: Any, apply_changes: bool, **kwargs: Any):
     request_data = _normalize_target_operation_request_helper(*args, **kwargs)
     target = request_data["target"]
     key = request_data["key"]
     value = request_data["value"]
     action = request_data["action"]
     if self.win_provider is None:
         raise RuntimeError("Windows registry provider unavailable.")
+    if action == "set" and value is None:
+        raise ValueError("Value is required for 'set' action.")
     scope = WindowsRegistryProvider.USER_SCOPE if target == "windows:user" else WindowsRegistryProvider.MACHINE_SCOPE
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def registry_write(self, *args: Any, apply_changes: bool, **kwargs: Any):
    request_data = _normalize_target_operation_request_helper(*args, **kwargs)
    target = request_data["target"]
    key = request_data["key"]
    value = request_data["value"]
    action = request_data["action"]
    if self.win_provider is None:
        raise RuntimeError("Windows registry provider unavailable.")
    if action == "set" and value is None:
        raise ValueError("Value is required for 'set' action.")
    scope = WindowsRegistryProvider.USER_SCOPE if target == "windows:user" else WindowsRegistryProvider.MACHINE_SCOPE
    current = self.win_provider.list_scope(scope)
    before = json.dumps(current, indent=2, sort_keys=True)
    if action == "set" and value is not None:
        if apply_changes:
            self.win_provider.set_scope_value(scope, key, value)
        current[key] = value
    elif action == "remove":
        if apply_changes:
            self.win_provider.remove_scope_value(scope, key)
        current.pop(key, None)
    after = json.dumps(current, indent=2, sort_keys=True)
    requires_priv = target == "windows:machine"
    return before, after, None, requires_priv, None
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@env_inspector_core/service_aliases.py` around lines 24 - 45, In
registry_write, detect the invalid state where action == "set" but value is None
and raise a clear exception (e.g., ValueError or RuntimeError) before attempting
any change; update the function (around the action/value handling block in
registry_write) to validate if action == "set" and value is None and raise with
a descriptive message like "Cannot set registry key {key}: value is None", so
callers receive an explicit error instead of silently skipping the operation.
```python
from .models import OperationResult
from .secrets import mask_value
from .service_ops_request import normalize_target_operation_batch, normalize_target_operation_request
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check if these functions are imported from service_ops elsewhere
rg -n "from .service_ops import.*normalize" --type=py
rg -n "from env_inspector_core.service_ops import.*normalize" --type=py
```

Repository: Prekzursil/env-inspector

Length of output: 214
🏁 Script executed:
# Search for any imports of normalize_target_operation_batch from service_ops
rg -n "from .service_ops import.*normalize_target_operation_batch" --type=py
rg -n "from env_inspector_core.service_ops import.*normalize_target_operation_batch" --type=py
# Check if there's an __all__ definition in service_ops.py
cat env_inspector_core/service_ops.py | head -30
# Also verify all imports from service_ops in the codebase
rg -n "from .service_ops import|from env_inspector_core.service_ops import" --type=pyRepository: Prekzursil/env-inspector
Length of output: 1190
Remove unused normalize_target_operation_batch import; normalize_target_operation_request is intentionally re-exported.
normalize_target_operation_request is re-exported for use in service_aliases.py, so it should be kept. However, normalize_target_operation_batch is imported but never used – remove it to clean up the imports.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@env_inspector_core/service_ops.py` at line 9, Remove the unused import
normalize_target_operation_batch from the import statement and keep only
normalize_target_operation_request (which is intentionally re-exported); update
the import line that currently references normalize_target_operation_batch and
normalize_target_operation_request so it imports just
normalize_target_operation_request, leaving any existing re-export logic for
normalize_target_operation_request unchanged.
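For reference, a minimal sketch of the trimmed import block (assuming the three imports shown above are the whole affected section), with a comment marking the intentional re-export:

from .models import OperationResult
from .secrets import mask_value
# Re-exported on purpose for service_aliases.py; normalize_target_operation_batch is no longer imported.
from .service_ops_request import normalize_target_operation_request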
| distros = [context.split(":", 1)[1] for context in contexts if context.startswith("wsl:")] | ||
| context = current_context if current_context in contexts else (contexts[0] if contexts else runtime_context) | ||
| wsl_distro = current_wsl_distro if current_wsl_distro in distros else (distros[0] if distros else "") |
Keep the resolved WSL context and distro in sync.
This fallback can pair a WSL context with the wrong distro. For example, if context resolves to wsl:Debian and current_wsl_distro is empty or stale, this still falls back to distros[0], which may be a different distro. EnvInspectorController._update_context_values() then stores both values together, so later WSL operations can run against the wrong distro.
🐛 Minimal fix
def resolve_context_selection(
*,
contexts: Sequence[str],
current_context: str,
current_wsl_distro: str,
runtime_context: str,
) -> ContextSelection:
distros = [context.split(":", 1)[1] for context in contexts if context.startswith("wsl:")]
context = current_context if current_context in contexts else (contexts[0] if contexts else runtime_context)
- wsl_distro = current_wsl_distro if current_wsl_distro in distros else (distros[0] if distros else "")
+ preferred_distro = context.split(":", 1)[1] if context.startswith("wsl:") else current_wsl_distro
+ wsl_distro = preferred_distro if preferred_distro in distros else (distros[0] if distros else "")
return ContextSelection(context=context, wsl_distro=wsl_distro, distros=distros)🧰 Tools
🪛 GitHub Check: SonarCloud Code Analysis
[warning] 139-139: Extract this nested conditional expression into an independent statement.
[warning] 140-140: Extract this nested conditional expression into an independent statement.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@env_inspector_gui/models.py` around lines 138 - 140, The current fallback
picks context and distro independently, which can pair a WSL context with the
wrong distro; fix by choosing context first and then deriving wsl_distro from
that chosen context. In other words compute context (using current_context,
contexts[0], or runtime_context) and then: if context starts with "wsl:" extract
the distro after the colon and set wsl_distro to that value (only if it exists
in the derived distros list or else ""), otherwise fall back to
current_wsl_distro if it exists in distros or to "" as a final fallback. Update
the assignments around the variables context, distros, current_wsl_distro, and
wsl_distro (and ensure EnvInspectorController._update_context_values() will
therefore store a matching pair).
| scoped_targets = maybe_choose_dotenv_targets(key, list(targets)) | ||
| return scoped_targets |
Treat an empty dotenv scope selection as cancellation.
_maybe_choose_dotenv_targets() can return [] when the user unchecks every .env target. This helper forwards that unchanged, and EnvInspectorController._resolve_operation_inputs() only aborts on None, so preview/apply can still run with targets=[].
🐛 Minimal fix
scoped_targets = maybe_choose_dotenv_targets(key, list(targets))
- return scoped_targets
+ if not scoped_targets:
+ return None
+ return list(scoped_targets)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@env_inspector_gui/models.py` around lines 179 - 180, The helper currently
forwards an empty list from maybe_choose_dotenv_targets() which should be
treated as cancellation; modify the logic in models.py where scoped_targets =
maybe_choose_dotenv_targets(key, list(targets)) to return None when
scoped_targets is an empty list so
EnvInspectorController._resolve_operation_inputs() receives None and properly
aborts; reference the maybe_choose_dotenv_targets call and ensure the function
returns None (not []) for the "user unchecked all targets" case.
| def _collect_contexts(check_runs_payload: Dict[str, Any], status_payload: Dict[str, Any]) -> Dict[str, Dict[str, str]]: | |
| contexts: Dict[str, Dict[str, str]] = {} | |
| | |
| for run in check_runs_payload.get("check_runs", []) or []: | |
| entry = _check_run_context(run) | |
| if entry: | |
| key, value = entry | |
| contexts[key] = value | |
| | |
| for status in status_payload.get("statuses", []) or []: | |
| entry = _status_context(status) | |
| if entry: | |
| key, value = entry | |
| contexts[key] = value | |
| | |
🧩 Analysis chain
🌐 Web query:
For GitHub REST API:
- In GET /repos/{owner}/{repo}/commits/{ref}/status, is `statuses` returned newest-first?
- For duplicate contexts, should clients use only the latest status per context?
- In GET /repos/{owner}/{repo}/commits/{ref}/check-runs, what is the ordering for repeated check run names?
💡 Result:
1) Yes, the statuses array in GET /repos/{owner}/{repo}/commits/{ref}/status is returned newest-first (reverse chronological order), as confirmed by the official documentation for the related list-statuses endpoint and consistent behavior described across sources. The combined status endpoint returns the latest status per context, ordered by creation time descending.
2) Yes, clients should use only the latest status per context for duplicate contexts. The combined status uses the latest status for each context to compute the overall state ("success if the latest status for all contexts is success"), and statuses are limited to 1000 per SHA/context, with older ones effectively superseded.
3) Check runs are returned ordered by completed_at timestamp descending when filter=latest (the default), so newest-first. For repeated check run names across check suites, the list includes all of them (up to the 1000 most recent suites), ordered by completion time; clients typically consider the most recent per name.
Citations:
- 1: https://docs.github.com/rest/commits/statuses
- 2: https://docs.github.com/en/rest/commits/statuses?apiVersion=2022-11-28
- 3: https://docs.github.com/en/rest/checks/runs
- 4: https://docs.github.com/rest/checks/runs
- 5: https://github.apidog.io/api-3489497
- 6: https://docs.github.com/en/enterprise-server@3.0/rest/reference/commits
- 7: https://developer.github.com/enterprise/2.13/v3/repos/statuses/
🏁 Script executed:
# First, let's examine the file containing the function
cat -n scripts/quality/_required_checks_http.py | head -200
Repository: Prekzursil/env-inspector
Length of output: 7802
🏁 Script executed:
# Also search for test files related to this module
find . -name "*test*" -type f | grep -i "required_checks\|quality" | head -20Repository: Prekzursil/env-inspector
Length of output: 314
🏁 Script executed:
# Look for the helper functions _check_run_context and _status_context
rg "_check_run_context|_status_context" -A 10 scripts/quality/Repository: Prekzursil/env-inspector
Length of output: 8294
🏁 Script executed:
# Search for tests of _collect_contexts
rg "_collect_contexts" tests/Repository: Prekzursil/env-inspector
Length of output: 50
🏁 Script executed:
# Check the test_quality_http_builders.py file
cat tests/test_quality_http_builders.py
Repository: Prekzursil/env-inspector
Length of output: 3152
🏁 Script executed:
# Search more broadly for tests related to required_checks
find tests -name "*.py" -type f -exec grep -l "required_checks\|_collect_contexts" {} \;
Repository: Prekzursil/env-inspector
Length of output: 50
🏁 Script executed:
# Let's also check if there are integration tests or fixtures with sample payloads
rg "check_runs.*status" tests/ -A 3 -B 3Repository: Prekzursil/env-inspector
Length of output: 50
🏁 Script executed:
# Look for any documentation or comments about duplicate handling
rg "duplicate\|stale\|overwrite" scripts/quality/Repository: Prekzursil/env-inspector
Length of output: 50
🏁 Script executed:
# Let's verify the order of processing and understand the data flow
# Check if there are any comments in the source about data ordering
rg "newest\|latest\|order" scripts/quality/_required_checks_http.py -iRepository: Prekzursil/env-inspector
Length of output: 50
🏁 Script executed:
# Let's also verify the return statement and usage of _collect_contexts
cat -n scripts/quality/_required_checks_http.py | tail -70 | head -40
Repository: Prekzursil/env-inspector
Length of output: 1545
🏁 Script executed:
# Check how _collect_contexts is called - what data flows into it
rg "_collect_contexts" scripts/quality/ -B 5 -A 5Repository: Prekzursil/env-inspector
Length of output: 5131
🏁 Script executed:
# Let's check the actual flow in _required_checks_impl.py to see how the results are used
cat -n scripts/quality/_required_checks_impl.py | grep -A 20 "_collect_contexts"
Repository: Prekzursil/env-inspector
Length of output: 2111
🏁 Script executed:
# Check if there's any documentation about the expected behavior or ordering
rg "completed_at|created_at|updated_at" scripts/quality/ -iRepository: Prekzursil/env-inspector
Length of output: 50
🏁 Script executed:
# Let's verify the source field logic - check if there's a preference between check_run and status
rg "source.*check_run\|source.*status" scripts/quality/ -B 2 -A 2Repository: Prekzursil/env-inspector
Length of output: 50
🏁 Script executed:
# Check how the evaluated result uses these sources
cat -n scripts/quality/_required_checks_http.py | sed -n '186,205p'
Repository: Prekzursil/env-inspector
Length of output: 860
🏁 Script executed:
# Let's create a simple test to confirm the overwrite behavior concern
python3 << 'EOF'
# Simulate the current behavior vs proposed fix
check_runs_payload = {
"check_runs": [
{"name": "test-suite", "status": "completed", "conclusion": "success"},
{"name": "test-suite", "status": "completed", "conclusion": "failure"}, # older duplicate
]
}
status_payload = {"statuses": []}
def current_behavior(check_runs, statuses):
contexts = {}
for run in check_runs.get("check_runs", []):
key = run.get("name")
if key:
value = {"state": run.get("status"), "conclusion": run.get("conclusion"), "source": "check_run"}
contexts[key] = value # Direct assignment
return contexts
def proposed_behavior(check_runs, statuses):
contexts = {}
for run in check_runs.get("check_runs", []):
key = run.get("name")
if key:
value = {"state": run.get("status"), "conclusion": run.get("conclusion"), "source": "check_run"}
contexts.setdefault(key, value) # Setdefault
return contexts
current = current_behavior(check_runs_payload, status_payload)
proposed = proposed_behavior(check_runs_payload, status_payload)
print("Current behavior (overwrites with stale):")
print(f" Conclusion: {current['test-suite']['conclusion']}")
print("\nProposed behavior (keeps newest):")
print(f" Conclusion: {proposed['test-suite']['conclusion']}")
print("\nBehavior differs?", current != proposed)
EOF
Repository: Prekzursil/env-inspector
Length of output: 212
🏁 Script executed:
# Let's verify that GitHub API can actually return duplicate context names
# Check GitHub API documentation behavior about duplicates
python3 << 'EOF'
# The GitHub API documentation and behavior shows:
# 1. Check runs are paginated (up to 100 per request by default in the code)
# 2. A repository can have multiple check suites for the same commit
# 3. Each check suite can have multiple runs
# 4. Runs can have the same name across different suites
# Example scenario:
# - A CI pipeline runs test-suite on commit ABC
# - It completes with conclusion: "failure"
# - Developer pushes a fix to the same commit (force push or commit amendment)
# - CI re-runs test-suite with conclusion: "success"
# - Both check runs exist for the same commit
# - API returns them ordered by completed_at DESC (newest first)
# This means:
# - First in array: test-suite (success) - newest
# - Second in array: test-suite (failure) - older
# Current code behavior: overwrites success with failure (WRONG)
# Proposed code behavior: keeps success, ignores failure (CORRECT)
print("Duplicate check run names CAN occur when:")
print("1. Same check re-runs after a fix on the same commit")
print("2. Multiple check suites for the same commit")
print("3. Same test name in multiple workflows")
print("\nGitHub returns them newest-first (by completed_at DESC)")
print("Current code overwrites newest with oldest - INCORRECT")
print("Proposed code keeps newest - CORRECT")
EOF
Repository: Prekzursil/env-inspector
Length of output: 396
🏁 Script executed:
# Check the query parameter used when fetching check-runs
rg "per_page\|per-page" scripts/quality/_required_checks_http.py -B 3 -A 3Repository: Prekzursil/env-inspector
Length of output: 50
🏁 Script executed:
# Let's confirm the per_page setting is there
cat -n scripts/quality/_required_checks_http.py | sed -n '108,118p'
Repository: Prekzursil/env-inspector
Length of output: 482
Use setdefault to preserve latest context status instead of overwriting with stale duplicates.
GitHub's check-runs and statuses APIs return results newest-first (ordered by completion/creation timestamp descending). When duplicate context names exist—which occurs when the same check re-runs after a fix, across multiple check suites, or from different workflows—the current code overwrites newer entries with older ones via direct dictionary assignment. This retains stale states and can mis-evaluate required checks.
Proposed diff
def _collect_contexts(check_runs_payload: Dict[str, Any], status_payload: Dict[str, Any]) -> Dict[str, Dict[str, str]]:
contexts: Dict[str, Dict[str, str]] = {}
for run in check_runs_payload.get("check_runs", []) or []:
entry = _check_run_context(run)
if entry:
key, value = entry
- contexts[key] = value
+ contexts.setdefault(key, value)
for status in status_payload.get("statuses", []) or []:
entry = _status_context(status)
if entry:
key, value = entry
- contexts[key] = value
+ contexts.setdefault(key, value)
return contexts📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| def _collect_contexts(check_runs_payload: Dict[str, Any], status_payload: Dict[str, Any]) -> Dict[str, Dict[str, str]]: | |
| contexts: Dict[str, Dict[str, str]] = {} | |
| for run in check_runs_payload.get("check_runs", []) or []: | |
| entry = _check_run_context(run) | |
| if entry: | |
| key, value = entry | |
| contexts[key] = value | |
| for status in status_payload.get("statuses", []) or []: | |
| entry = _status_context(status) | |
| if entry: | |
| key, value = entry | |
| contexts[key] = value | |
| def _collect_contexts(check_runs_payload: Dict[str, Any], status_payload: Dict[str, Any]) -> Dict[str, Dict[str, str]]: | |
| contexts: Dict[str, Dict[str, str]] = {} | |
| for run in check_runs_payload.get("check_runs", []) or []: | |
| entry = _check_run_context(run) | |
| if entry: | |
| key, value = entry | |
| contexts.setdefault(key, value) | |
| for status in status_payload.get("statuses", []) or []: | |
| entry = _status_context(status) | |
| if entry: | |
| key, value = entry | |
| contexts.setdefault(key, value) | |
| return contexts |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/quality/_required_checks_http.py` around lines 150 - 164, The
function _collect_contexts currently assigns contexts[key] = value which
overwrites newer entries with older duplicates; change it to only set the
context if it doesn't already exist by using contexts.setdefault(key, value)
when handling entries from _check_run_context(run) and _status_context(status)
so the first (newest) occurrence is preserved; keep the existing iteration order
(APIs return newest-first) and apply setdefault in both loops to prevent stale
overwrite.
| def _check_run_failure(context: str, observed: Dict[str, str]) -> Optional[str]: | ||
| state = observed.get("state") | ||
| if state != "completed": | ||
| return f"{context}: status={state}" | ||
| | |
| conclusion = observed.get("conclusion") | ||
| if conclusion != "success": | ||
| return f"{context}: conclusion={conclusion}" | ||
| return None |
🧩 Analysis chain
🌐 Web query:
For GitHub required status checks, which check run conclusions are considered passing (e.g., success, neutral, skipped)?
💡 Result:
For GitHub required status checks (used in branch protection rules), the check run conclusions considered passing are: success, neutral, and skipped. These allow merging into protected branches. Other conclusions like failure, cancelled, timed_out, action_required, or stale block merging. This is confirmed in official GitHub documentation:
- Troubleshooting required status checks: "Successful check statuses are: success, skipped, and neutral."
- About protected branches: "Required status checks must have a successful, skipped, or neutral status before collaborators can make changes to a protected branch."
Note that "status" here refers to the completed check run's conclusion in the context of checks.
Citations:
- 1: https://docs.github.com/articles/about-status-checks
- 2: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/collaborating-on-repositories-with-code-quality-features/troubleshooting-required-status-checks
- 3: https://docs.github.com/repositories/configuring-branches-and-merges-in-your-repository/managing-protected-branches/about-protected-branches
- 4: https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-protected-branches/about-protected-branches
🏁 Script executed:
cat -n scripts/quality/_required_checks_http.py | sed -n '160,185p'
Repository: Prekzursil/env-inspector
Length of output: 951
🏁 Script executed:
rg "_check_run_failure" --type py -B 2 -A 8Repository: Prekzursil/env-inspector
Length of output: 4777
🏁 Script executed:
# Check if there are any tests or documentation about this function
rg "_check_run_failure" --type py --type mdRepository: Prekzursil/env-inspector
Length of output: 713
🏁 Script executed:
cat -n scripts/quality/_required_checks_http.py | sed -n '195,210p'
Repository: Prekzursil/env-inspector
Length of output: 587
🏁 Script executed:
# Search for documentation or context about these functions
rg -B 5 "def _check_run_failure|def _status_failure" scripts/quality/_required_checks_http.pyRepository: Prekzursil/env-inspector
Length of output: 389
🏁 Script executed:
# Check if there's any documentation about GitHub check types
rg "check_run|status" scripts/quality/_required_checks_http.py | head -20Repository: Prekzursil/env-inspector
Length of output: 1339
Allow GitHub-accepted non-failure conclusions for completed check runs.
Line 174 currently fails any completed check run not equal to "success". This can incorrectly fail checks that GitHub treats as acceptable. According to GitHub's official documentation, check run conclusions considered passing are: success, neutral, and skipped.
Proposed diff
def _check_run_failure(context: str, observed: Dict[str, str]) -> Optional[str]:
state = observed.get("state")
if state != "completed":
return f"{context}: status={state}"
conclusion = observed.get("conclusion")
- if conclusion != "success":
+ if conclusion not in {"success", "neutral", "skipped"}:
return f"{context}: conclusion={conclusion}"
return None🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/quality/_required_checks_http.py` around lines 168 - 176, The
_check_run_failure function currently treats any completed check run with a
conclusion not equal to "success" as a failure; update it to treat conclusions
in the accepted set {"success", "neutral", "skipped"} as non-failures. In
practice, change the logic that reads conclusion = observed.get("conclusion")
and the subsequent check (in function _check_run_failure) so it only returns a
failure string if conclusion is not one of those three values, otherwise return
None.
| def _coerce_fetch_request(*args: Any, **kwargs: Any) -> DeepScanRequest: | ||
| if args: | ||
| if len(args) == 1 and isinstance(args[0], DeepScanRequest): | ||
| if kwargs: | ||
| raise TypeError("Pass either a request object or keyword arguments, not both.") | ||
| return args[0] | ||
| if len(args) == 5 and not kwargs: | ||
| host, path, query, token, findings = args | ||
| return DeepScanRequest(host=str(host), path=str(path), query=dict(query), token=str(token), findings=findings) | ||
| raise TypeError("Pass a request object or keyword arguments, not positional arguments.") | ||
| return DeepScanRequest( | ||
| host=str(kwargs.pop("host")), | ||
| path=str(kwargs.pop("path")), | ||
| query=dict(kwargs.pop("query")), | ||
| token=str(kwargs.pop("token")), | ||
| findings=kwargs.pop("findings"), | ||
| ) |
Missing validation for unexpected keyword arguments.
The kwargs path in _coerce_fetch_request (lines 111-117) does not check for unexpected keys after popping the known fields. This could silently ignore typos or invalid arguments.
🛡️ Proposed fix to reject unexpected kwargs
def _coerce_fetch_request(*args: Any, **kwargs: Any) -> DeepScanRequest:
if args:
if len(args) == 1 and isinstance(args[0], DeepScanRequest):
if kwargs:
raise TypeError("Pass either a request object or keyword arguments, not both.")
return args[0]
if len(args) == 5 and not kwargs:
host, path, query, token, findings = args
return DeepScanRequest(host=str(host), path=str(path), query=dict(query), token=str(token), findings=findings)
raise TypeError("Pass a request object or keyword arguments, not positional arguments.")
- return DeepScanRequest(
+ request = DeepScanRequest(
host=str(kwargs.pop("host")),
path=str(kwargs.pop("path")),
query=dict(kwargs.pop("query")),
token=str(kwargs.pop("token")),
findings=kwargs.pop("findings"),
)
+ if kwargs:
+ raise TypeError(f"Unexpected keyword arguments: {', '.join(sorted(kwargs))}")
+ return request📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| def _coerce_fetch_request(*args: Any, **kwargs: Any) -> DeepScanRequest: | |
| if args: | |
| if len(args) == 1 and isinstance(args[0], DeepScanRequest): | |
| if kwargs: | |
| raise TypeError("Pass either a request object or keyword arguments, not both.") | |
| return args[0] | |
| if len(args) == 5 and not kwargs: | |
| host, path, query, token, findings = args | |
| return DeepScanRequest(host=str(host), path=str(path), query=dict(query), token=str(token), findings=findings) | |
| raise TypeError("Pass a request object or keyword arguments, not positional arguments.") | |
| return DeepScanRequest( | |
| host=str(kwargs.pop("host")), | |
| path=str(kwargs.pop("path")), | |
| query=dict(kwargs.pop("query")), | |
| token=str(kwargs.pop("token")), | |
| findings=kwargs.pop("findings"), | |
| ) | |
| def _coerce_fetch_request(*args: Any, **kwargs: Any) -> DeepScanRequest: | |
| if args: | |
| if len(args) == 1 and isinstance(args[0], DeepScanRequest): | |
| if kwargs: | |
| raise TypeError("Pass either a request object or keyword arguments, not both.") | |
| return args[0] | |
| if len(args) == 5 and not kwargs: | |
| host, path, query, token, findings = args | |
| return DeepScanRequest(host=str(host), path=str(path), query=dict(query), token=str(token), findings=findings) | |
| raise TypeError("Pass a request object or keyword arguments, not positional arguments.") | |
| request = DeepScanRequest( | |
| host=str(kwargs.pop("host")), | |
| path=str(kwargs.pop("path")), | |
| query=dict(kwargs.pop("query")), | |
| token=str(kwargs.pop("token")), | |
| findings=kwargs.pop("findings"), | |
| ) | |
| if kwargs: | |
| raise TypeError(f"Unexpected keyword arguments: {', '.join(sorted(kwargs))}") | |
| return request |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/quality/check_deepscan_zero.py` around lines 101 - 117, In
_coerce_fetch_request, after constructing the DeepScanRequest from kwargs (host,
path, query, token, findings), validate that no unexpected keyword arguments
remain by checking kwargs for leftover keys and raising a TypeError (e.g.,
"Unexpected keyword arguments: ...") if any are present; update the function
around the kwargs branch to pop the known fields (host, path, query, token,
findings) then inspect kwargs and raise an error listing any unexpected keys
before returning the DeepScanRequest.
| @dataclass(frozen=True) | ||
| class SentryScanRequest: | ||
| org: str | ||
| projects: List[str] | ||
| token: str |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Where SentryScanRequest is created/consumed
rg -n -C2 '\bSentryScanRequest\s*\('
rg -n -C2 '\b_coerce_scan_request\s*\('
rg -n -C2 '\b_scan_projects\s*\('
# Any in-place mutations that would break if projects becomes tuple
rg -n -C2 '\bprojects\.(append|extend|insert|pop|remove|clear|sort|reverse)\b'
rg -n -C2 '\brequest\.projects\.(append|extend|insert|pop|remove|clear|sort|reverse)\b'
Repository: Prekzursil/env-inspector
Length of output: 3661
🏁 Script executed:
cat -n scripts/quality/check_sentry_zero.py | sed -n '1,150p'
Repository: Prekzursil/env-inspector
Length of output: 6551
Tighten request-object immutability and input validation in _coerce_scan_request.
Line 32 uses List[str] in a frozen dataclass, providing only shallow immutability. Lines 131–133 lack validation for missing required keys (which surface as a bare KeyError), for unknown kwargs (silently ignored), and for string inputs to projects (a string passed to list() becomes a list of characters).
Proposed hardening patch
@@
@dataclass(frozen=True)
class SentryScanRequest:
org: str
- projects: List[str]
+ projects: Tuple[str, ...]
token: str
@@
def _coerce_scan_request(*args: Any, **kwargs: Any) -> SentryScanRequest:
if args:
if len(args) == 1 and isinstance(args[0], SentryScanRequest):
if kwargs:
raise TypeError("Pass either a request object or keyword arguments, not both.")
return args[0]
if len(args) == 3 and not kwargs:
org, projects, token = args
+ if isinstance(projects, (str, bytes)):
+ raise TypeError("'projects' must be a sequence of project slugs, not a string.")
- return SentryScanRequest(org=str(org), projects=list(projects), token=str(token))
+ return SentryScanRequest(
+ org=str(org),
+ projects=tuple(str(p) for p in projects),
+ token=str(token),
+ )
raise TypeError("Pass a request object or keyword arguments, not positional arguments.")
+ expected = {"org", "projects", "token"}
+ unknown = set(kwargs) - expected
+ if unknown:
+ raise TypeError(f"Unexpected keyword argument(s): {', '.join(sorted(unknown))}")
+ missing = expected - set(kwargs)
+ if missing:
+ raise TypeError(f"Missing required keyword argument(s): {', '.join(sorted(missing))}")
+
+ projects = kwargs["projects"]
+ if isinstance(projects, (str, bytes)):
+ raise TypeError("'projects' must be a sequence of project slugs, not a string.")
+
- return SentryScanRequest(
- org=str(kwargs.pop("org")),
- projects=list(kwargs.pop("projects")),
- token=str(kwargs.pop("token")),
- )
+ return SentryScanRequest(
+ org=str(kwargs["org"]),
+ projects=tuple(str(p) for p in projects),
+ token=str(kwargs["token"]),
+ )🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/quality/check_sentry_zero.py` around lines 29 - 33, Change
SentryScanRequest to use an immutable sequence type for projects (use Tuple[str,
...] instead of List[str]) and tighten validation inside _coerce_scan_request:
explicitly require and validate presence of keys "org", "projects", and "token"
(raise a clear ValueError if any are missing), reject any unknown kwargs (raise
TypeError rather than silently ignoring), disallow string inputs for projects
(raise ValueError if projects is a str) and normalize/convert a valid iterable
of project names into a tuple of strings for the SentryScanRequest constructor;
reference SentryScanRequest (fields org, projects, token) and the
_coerce_scan_request function when making these changes.
Actionable comments posted: 5
🧹 Nitpick comments (4)
scripts/quality/_codacy_zero_impl.py (1)
148-155: Compute provider candidates once for deterministic behavior.
provider_candidates_fn(request.provider) is called twice (loop + error message). Caching it once improves consistency and avoids duplicate work.
♻️ Suggested refactor
-    for candidate in provider_candidates_fn(request.provider):
+    candidates = list(provider_candidates_fn(request.provider))
+    for candidate in candidates:
         handled, open_issues, findings, error = fetch_fn(request=replace(request, provider=candidate))
         if handled:
             return open_issues, findings
         last_exc = error
     findings = [
-        f"Codacy API endpoint was not found for provider(s): {', '.join(provider_candidates_fn(request.provider))}."
+        f"Codacy API endpoint was not found for provider(s): {', '.join(candidates)}."
     ]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/quality/_codacy_zero_impl.py` around lines 148 - 155, Compute provider_candidates_fn(request.provider) once and reuse it to avoid double evaluation and nondeterminism: call provider_candidates_fn(request.provider) at the top of the block and assign to a local variable (e.g., candidates), then iterate over candidates in the for loop that calls fetch_fn(request=replace(request, provider=candidate)), update last_exc as before, and finally use the same candidates variable when composing the findings message and join its items; ensure function names referenced are provider_candidates_fn, fetch_fn, replace, and the variables request, last_exc, findings.
env_inspector_gui/models.py (1)
194-215: Consider extracting a helper to reduce cyclomatic complexity.
Static analysis flags complexity of 9 (limit 8). The function handles two distinct result shapes (batch vs single). Consider extracting the batch-result handling (lines 195-205) into a separate helper for slightly cleaner separation of concerns.
♻️ Optional refactor
+def _summarize_batch_result(action: str, results: List[Any]) -> OperationResultSummary:
+    failures = [item for item in results if isinstance(item, dict) and not item.get("success")]
+    if failures:
+        return OperationResultSummary(
+            status_message=None,
+            error_message=f"{action.title()} had failures:\n" + "\n".join(str(item.get("error_message", "")) for item in failures),
+        )
+    return OperationResultSummary(
+        status_message=f"{action.title()} succeeded for {len(results)} targets",
+        error_message=None,
+    )
+
 def summarize_operation_result(action: str, result: Mapping[str, Any]) -> OperationResultSummary:
     if isinstance(result, dict) and "results" in result:
-        failures = [item for item in result["results"] if isinstance(item, dict) and not item.get("success")]
-        if failures:
-            return OperationResultSummary(
-                status_message=None,
-                error_message=f"{action.title()} had failures:\n" + "\n".join(str(item.get("error_message", "")) for item in failures),
-            )
-        return OperationResultSummary(
-            status_message=f"{action.title()} succeeded for {len(result['results'])} targets",
-            error_message=None,
-        )
+        return _summarize_batch_result(action, result["results"])
     if result.get("success"):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_gui/models.py` around lines 194 - 215, The function summarize_operation_result has two distinct code paths (batch vs single) increasing cyclomatic complexity; extract the batch handling (the branch that checks isinstance(result, dict) and "results" in result and builds failures/status_message) into a new helper function (e.g., summarize_batch_result(result: Mapping[str, Any], action: str) -> OperationResultSummary) and call it from summarize_operation_result, keeping existing behavior and return type (OperationResultSummary) and reusing the same symbols: summarize_operation_result, OperationResultSummary, and the "results" processing logic; ensure the helper performs the failures list creation and constructs the same status_message/error_message strings so behavior and types remain unchanged.
env_inspector_core/providers.py (2)
30-30: Make the WslProvider shim explicit.
If Line 30 is intentionally preserving env_inspector_core.providers.WslProvider, mark it as a re-export (__all__ or a linter suppression) so the compatibility shim does not keep tripping F401.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/providers.py` at line 30, The import of WslProvider and the collect_* symbols is being used as a compatibility shim but is tripping F401; make the shim explicit by either declaring an __all__ list that includes "WslProvider", "collect_wsl_dotenv_records", and "collect_wsl_records" (so these names are intentionally re-exported) or add a targeted linter suppression comment (e.g., # noqa: F401) next to the import; update the top-level module export accordingly so the WslProvider shim is explicit and the unused-import lint error is suppressed intentionally.
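A minimal sketch of the __all__ option, assuming the three names listed in the prompt above are re-exported from the providers_wsl module (the import path is an assumption):

# env_inspector_core/providers.py -- explicit compatibility re-export
from .providers_wsl import WslProvider, collect_wsl_dotenv_records, collect_wsl_records

__all__ = ["WslProvider", "collect_wsl_dotenv_records", "collect_wsl_records"]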
10-10: Remove the unused dataclass import.
Line 10 is not referenced anywhere in this module, so it only keeps the F401 warning open.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/providers.py` at line 10, Remove the unused top-level import "dataclass" from the providers.py module: locate the statement "from dataclasses import dataclass" and delete it so the module no longer imports the unused symbol `dataclass` (this clears the F401 unused import warning).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@env_inspector_core/providers.py`:
- Around line 189-196: The WinregModule protocol is missing the QueryInfoKey
member used by providers.py (registry.QueryInfoKey(regkey)[1]); update the
WinregModule protocol declaration to include a QueryInfoKey attribute with the
correct callable signature (e.g., QueryInfoKey: Callable[[Any], Tuple[int, int,
int, int, int]]) so type checkers recognize the method and eliminate the
attr-defined error — modify the protocol where WinregModule is defined to add
that QueryInfoKey entry.
- Around line 84-86: The regex _POWERSHELL_ASSIGNMENT_RE currently only matches
lowercase "$env:" and will miss uppercase variants like "$Env:"; update its
compilation to include the re.IGNORECASE flag (e.g., pass re.IGNORECASE into
re.compile for _POWERSHELL_ASSIGNMENT_RE) so the pattern for the "Env:"
namespace is matched case-insensitively while keeping the same named groups
(?P<key>...) and (?P<value>...).
In `@env_inspector_gui/models.py`:
- Line 3: The import list in env_inspector_gui.models includes an unused symbol
Tuple; remove Tuple from the typing import statement (i.e., update the import
line that currently mentions Tuple) so the file only imports actually used types
(Any, Callable, Dict, Iterable, List, Mapping, Sequence) and rerun linting to
confirm the unused-import warning is resolved.
In `@scripts/quality/_codacy_zero_impl.py`:
- Line 7: The import line currently brings in both datetime and timezone but
only datetime is unused; modify the import to remove the unused symbol by
changing "from datetime import datetime, timezone" to import only the used
symbol (e.g., "from datetime import timezone") or adjust to a single-token
import that reflects actual usage, ensuring the unused name "datetime" is
removed from the module.
- Around line 130-131: The second Codacy request that builds sample_payload and
calls sample_findings_fn is unguarded and can raise on transient/API errors;
wrap the sample_payload creation and findings.extend(sample_findings_fn(...)) in
a try/except that catches exceptions from request_json_fn or sample_findings_fn
(e.g., Exception), log or record the error/context, and on failure skip adding
sample findings (leave findings unchanged) so the overall check continues
instead of crashing; reference the symbols request_json_fn, sample_findings_fn,
sample_payload, replace(request, ...), and findings.extend when locating the
code to protect.
---
Nitpick comments:
In `@env_inspector_core/providers.py`:
- Line 30: The import of WslProvider and the collect_* symbols is being used as
a compatibility shim but is tripping F401; make the shim explicit by either
declaring an __all__ list that includes "WslProvider",
"collect_wsl_dotenv_records", and "collect_wsl_records" (so these names are
intentionally re-exported) or add a targeted linter suppression comment (e.g., #
noqa: F401) next to the import; update the top-level module export accordingly
so the WslProvider shim is explicit and the unused-import lint error is
suppressed intentionally.
- Line 10: Remove the unused top-level import "dataclass" from the providers.py
module: locate the statement "from dataclasses import dataclass" and delete it
so the module no longer imports the unused symbol `dataclass` (this clears the
F401 unused import warning).
In `@env_inspector_gui/models.py`:
- Around line 194-215: The function summarize_operation_result has two distinct
code paths (batch vs single) increasing cyclomatic complexity; extract the batch
handling (the branch that checks isinstance(result, dict) and "results" in
result and builds failures/status_message) into a new helper function (e.g.,
summarize_batch_result(result: Mapping[str, Any], action: str) ->
OperationResultSummary) and call it from summarize_operation_result, keeping
existing behavior and return type (OperationResultSummary) and reusing the same
symbols: summarize_operation_result, OperationResultSummary, and the "results"
processing logic; ensure the helper performs the failures list creation and
constructs the same status_message/error_message strings so behavior and types
remain unchanged.
In `@scripts/quality/_codacy_zero_impl.py`:
- Around line 148-155: Compute provider_candidates_fn(request.provider) once and
reuse it to avoid double evaluation and nondeterminism: call
provider_candidates_fn(request.provider) at the top of the block and assign to a
local variable (e.g., candidates), then iterate over candidates in the for loop
that calls fetch_fn(request=replace(request, provider=candidate)), update
last_exc as before, and finally use the same candidates variable when composing
the findings message and join its items; ensure function names referenced are
provider_candidates_fn, fetch_fn, replace, and the variables request, last_exc,
findings.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ea679c0b-edb7-4efc-b049-bd0fdb9bebe8
📒 Files selected for processing (3)
env_inspector_core/providers.py
env_inspector_gui/models.py
scripts/quality/_codacy_zero_impl.py
| _POWERSHELL_ASSIGNMENT_RE = re.compile( | ||
| r"^\s*(?!#)\$env:(?P<key>[A-Za-z_]\w*)\s*=\s*(?P<value>.*?)\s*$" | ||
| ) |
🧩 Analysis chain
🌐 Web query:
PowerShell official docs: is the Env: / $env: namespace case-insensitive, and are $env:PATH and $Env:PATH equivalent?
💡 Result:
No, the Env: / $env: namespace in PowerShell is not case-insensitive on all platforms. On Windows, $env:PATH and $Env:PATH are equivalent because Windows environment variables are case-insensitive. On macOS and Linux (Unix-based systems), they are case-sensitive and refer to different variables (e.g., $Env:Path and $Env:PATH are distinct). PowerShell variables themselves are case-insensitive on all systems, but environment variables respect the underlying OS behavior: case-insensitive on Windows, case-sensitive on Unix-like systems.
Citations:
- 1: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_case-sensitivity?view=powershell-7.5
- 2: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_environment_variables?view=powershell-7.5
- 3: https://learn.microsoft.com/lb-lu/powershell/module/microsoft.powershell.core/about/about_case-sensitivity?view=powershell-7.5
- 4: https://learn.microsoft.com/en-us/dotnet/fundamentals/runtime-libraries/system-environment-getenvironmentvariable#:~:text=Environment variable names are case,t seen by native libraries.
- 5: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_environment_variables?view=powershell-7.4
🏁 Script executed:
# First, let's examine the file context
cat -n env_inspector_core/providers.py | head -100 | tail -30
Repository: Prekzursil/env-inspector
Length of output: 918
🏁 Script executed:
# Check for any platform-specific logic or comments
rg -C 3 "_POWERSHELL_ASSIGNMENT_RE|_parse_powershell" env_inspector_core/providers.pyRepository: Prekzursil/env-inspector
Length of output: 870
🏁 Script executed:
# Check for tests related to this function
fd -t f "test.*\.py" | xargs rg -l "powershell|_POWERSHELL_ASSIGNMENT_RE|_parse_powershell" 2>/dev/null | head -5
Repository: Prekzursil/env-inspector
Length of output: 238
🏁 Script executed:
# Check project structure and platform documentation
git ls-files | grep -E "(README|PLATFORM|SUPPORT|\.md$)" | head -10
Repository: Prekzursil/env-inspector
Length of output: 396
🏁 Script executed:
# Search for PowerShell tests
rg -A 5 -B 2 "parse_powershell|_POWERSHELL_ASSIGNMENT" tests/
Repository: Prekzursil/env-inspector
Length of output: 2180
🏁 Script executed:
# Check for any test data with Env: variants
rg -i "\$Env:" tests/ --no-heading
Repository: Prekzursil/env-inspector
Length of output: 50
Match $Env: assignments case-insensitively.
PowerShell on Windows treats the Env: namespace case-insensitively, so profiles using $Env:PATH or other uppercase variants will be silently skipped and disappear from the collected records. Apply the re.IGNORECASE flag to match all forms of the namespace prefix.
Proposed fix
_POWERSHELL_ASSIGNMENT_RE = re.compile(
r"^\s*(?!#)\$env:(?P<key>[A-Za-z_]\w*)\s*=\s*(?P<value>.*?)\s*$",
+ re.IGNORECASE,
)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| _POWERSHELL_ASSIGNMENT_RE = re.compile( | |
| r"^\s*(?!#)\$env:(?P<key>[A-Za-z_]\w*)\s*=\s*(?P<value>.*?)\s*$" | |
| ) | |
| _POWERSHELL_ASSIGNMENT_RE = re.compile( | |
| r"^\s*(?!#)\$env:(?P<key>[A-Za-z_]\w*)\s*=\s*(?P<value>.*?)\s*$", | |
| re.IGNORECASE, | |
| ) |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@env_inspector_core/providers.py` around lines 84 - 86, The regex
_POWERSHELL_ASSIGNMENT_RE currently only matches lowercase "$env:" and will miss
uppercase variants like "$Env:"; update its compilation to include the
re.IGNORECASE flag (e.g., pass re.IGNORECASE into re.compile for
_POWERSHELL_ASSIGNMENT_RE) so the pattern for the "Env:" namespace is matched
case-insensitively while keeping the same named groups (?P<key>...) and
(?P<value>...).
| sample_payload = request_json_fn(request=replace(request, limit=20, method="POST", data={})) | ||
| findings.extend(sample_findings_fn(sample_payload)) |
Guard sample-issue fetch failures so the check does not crash.
Lines 130-131 perform a second Codacy request outside the protected try block. A transient/API failure here can abort execution instead of returning findings.
Suggested fix
- sample_payload = request_json_fn(request=replace(request, limit=20, method="POST", data={}))
- findings.extend(sample_findings_fn(sample_payload))
+ try:
+ sample_payload = request_json_fn(
+ request=replace(request, limit=20, method="POST", data={})
+ )
+ findings.extend(sample_findings_fn(sample_payload))
+ except urllib.error.HTTPError as exc:
+ findings.append(f"Failed to fetch sample issues: HTTP {exc.code}")
+ except CODACY_REQUEST_EXCEPTIONS as exc: # pragma: no cover - network/runtime surface
+ findings.append(f"Failed to fetch sample issues: {exc}")🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/quality/_codacy_zero_impl.py` around lines 130 - 131, The second
Codacy request that builds sample_payload and calls sample_findings_fn is
unguarded and can raise on transient/API errors; wrap the sample_payload
creation and findings.extend(sample_findings_fn(...)) in a try/except that
catches exceptions from request_json_fn or sample_findings_fn (e.g., Exception),
log or record the error/context, and on failure skip adding sample findings
(leave findings unchanged) so the overall check continues instead of crashing;
reference the symbols request_json_fn, sample_findings_fn, sample_payload,
replace(request, ...), and findings.extend when locating the code to protect.
Co-authored-by: Codex <noreply@openai.com>
Co-authored-by: Codex <noreply@openai.com>
Actionable comments posted: 7
♻️ Duplicate comments (2)
env_inspector_gui/models.py (2)
190-191: ⚠️ Potential issue | 🟠 Major: Treat an empty scoped target selection as cancellation.
The initial target-pick path already treats “nothing selected” as
None, but this scoped path forwards [] unchanged. That lets “user unchecked everything” escape as a real operation instead of aborting.
Minimal fix
     scoped_targets = maybe_choose_dotenv_targets(key, list(targets))
-    return scoped_targets
+    if not scoped_targets:
+        return None
+    return list(scoped_targets)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_gui/models.py` around lines 190 - 191, The scoped target selection currently returns an empty list as a valid result; update the handling after calling maybe_choose_dotenv_targets(key, list(targets)) so that if scoped_targets is None or an empty list (scoped_targets == [] or simply falsy) you treat it as cancellation and return None, otherwise return scoped_targets; reference the scoped_targets variable and the maybe_choose_dotenv_targets call when making this change.
146-151: ⚠️ Potential issue | 🟠 Major: Keep the selected WSL context and distro coupled.
wsl_distro is still resolved independently of the chosen context, so a fallback wsl:* context can still be returned with a different distro name.
Minimal fix
-    if current_wsl_distro in distros:
-        wsl_distro = current_wsl_distro
-    elif distros:
-        wsl_distro = distros[0]
+    if context.startswith("wsl:"):
+        selected_distro = context.split(":", 1)[1]
+        wsl_distro = selected_distro if selected_distro in distros else ""
+    elif current_wsl_distro in distros:
+        wsl_distro = current_wsl_distro
     else:
         wsl_distro = ""
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_gui/models.py` around lines 146 - 151, The WSL distro selection is decoupled from the chosen context: ensure wsl_distro is resolved with respect to the current context (e.g., only pick a distro when the selected context is a WSL context). Update the logic that sets wsl_distro (currently using current_wsl_distro and distros) to first check whether the selected context indicates WSL (e.g., startswith "wsl:" or matches the WSL context variable), and if so prefer the distro named in that context, else fall back to current_wsl_distro only when the context is WSL, otherwise set wsl_distro to "". Use the existing variables current_wsl_distro, distros and the context variable used by this module to locate and implement the conditional selection so WSL context and distro stay coupled.
🧹 Nitpick comments (11)
scripts/quality/_codacy_zero_impl.py (2)
2-2: Consider removing Python 2 compatibility imports.
absolute_import and division are default behaviors in Python 3 and serve no purpose here. This is a minor cleanup opportunity.
♻️ Suggested cleanup
-from __future__ import absolute_import, division
+from __future__ import annotations
If you need future annotations for postponed evaluation of type hints (PEP 563), use
annotations instead; otherwise, you can remove the line entirely.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/quality/_codacy_zero_impl.py` at line 2, Remove the unnecessary Python2 compatibility future imports by deleting the line "from __future__ import absolute_import, division"; if you relied on postponed evaluation of annotations instead, replace with "from __future__ import annotations", otherwise just remove the import to clean up the module _codacy_zero_impl.py.
157-161: Consider reusing _resolve_codacy_request to eliminate duplication. Lines 158-161 duplicate the exact logic from
_resolve_codacy_request (lines 109-114). Reusing the helper would consolidate the request/kwargs resolution pattern.
♻️ Suggested refactor
 def _query_open_issues(request: CodacyRequest | None = None, **kwargs: Any) -> Tuple[int | None, List[str]]:
-    if request is None:
-        request = CodacyRequest(**kwargs)
-    elif kwargs:
-        raise TypeError("Pass either a CodacyRequest or keyword arguments, not both.")
+    request = _resolve_codacy_request(request, kwargs)
     last_exc: Exception | None = None
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/quality/_codacy_zero_impl.py` around lines 157 - 161, The request/kwargs resolution in _query_open_issues duplicates logic from _resolve_codacy_request; replace the manual if/elif block in _query_open_issues with a call to _resolve_codacy_request (passing request and kwargs) to obtain a validated CodacyRequest instance (and handle any exceptions it raises), then proceed using that resolved request; update any type hints/returns if needed to reflect the helper’s behavior and remove the duplicated conditional logic.scripts/quality/_required_checks_impl.py (1)
93-97: Consider de-duplicating required contexts while preserving order. Repeated
--required-context values currently propagate into the missing/failed reporting as duplicates, which can add noise.
♻️ Proposed refinement
 def _required_contexts(args: argparse.Namespace) -> List[str]:
-    required = [item.strip() for item in args.required_context if item.strip()]
+    seen = set()
+    required: List[str] = []
+    for item in args.required_context:
+        name = item.strip()
+        if not name or name in seen:
+            continue
+        seen.add(name)
+        required.append(name)
     if not required:
         raise SystemExit("At least one --required-context is required")
     return required
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/quality/_required_checks_impl.py` around lines 93 - 97, The _required_contexts function should de-duplicate args.required_context while preserving original order to avoid duplicate entries in reporting; modify the function (referencing _required_contexts, args.required_context and the local variable required) to build a new list that filters out duplicates (e.g., iterate and use a seen set or dict.fromkeys on the trimmed items) before checking emptiness and returning, preserving the existing SystemExit behavior when the final list is empty.env_inspector_core/service_restore.py (3)
44-46: Redundant cast can be removed.
path_out is already a concrete Path object (not PurePath), so path_out.parent returns a Path that has the mkdir method. The cast(Path, ...) is unnecessary and may have triggered the static analysis warning.
♻️ Proposed fix
     if target == bashrc_target:
         path_out = Path(Path.home(), ".bashrc")
-        bashrc_parent = cast(Path, path_out.parent)
-        bashrc_parent.mkdir(parents=True, exist_ok=True)
+        path_out.parent.mkdir(parents=True, exist_ok=True)
         path_out.write_text(text, encoding="utf-8")
         return
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/service_restore.py` around lines 44 - 46, Remove the unnecessary cast around path_out.parent: path_out is a concrete Path so path_out.parent already returns a Path; replace the cast(Path, path_out.parent) usage and assign bashrc_parent = path_out.parent (or use path_out.parent.mkdir(...) directly) and then call bashrc_parent.mkdir(parents=True, exist_ok=True) to create directories. Ensure references to path_out, bashrc_parent, Path.home, .parent and mkdir are preserved.
146-156: Dispatch handlers receive all kwargs but only use a subset. The
handler(...) call passes all extracted function references (e.g., restore_dotenv_target_fn, restore_linux_target_fn, etc.) to every dispatch handler, but each handler only uses its specific function. This works but is inconsistent with the strict "unexpected keyword" validation pattern used elsewhere in this file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/service_restore.py` around lines 146 - 156, The dispatch currently calls handler(...) with every possible restore_*_target_fn even though each handler only expects its specific function; update the invocation after _restore_dispatch(target) so you pass only the kwargs that handler actually uses: always pass target, text, and scope_roots, and then pass exactly one of restore_dotenv_target_fn, restore_linux_target_fn, restore_wsl_target_fn, restore_powershell_target_fn, or restore_windows_registry_target_fn depending on the target value (use the same target string/enum used by _restore_dispatch to select which function to forward). This keeps behavior unchanged but avoids unexpected-keyword issues consistent with the file's validation pattern.
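One possible shape for the selective forwarding, sketched with a hypothetical helper (the non-WSL target prefixes below are assumptions about the dispatch keys):

def _handler_kwargs(target, text, scope_roots, fns):
    # Hypothetical helper: forward only what the selected handler uses.
    common = {"target": target, "text": text}
    if target.startswith("dotenv:"):
        return {**common, "scope_roots": scope_roots, "restore_dotenv_target_fn": fns["dotenv"]}
    if target.startswith("wsl:"):
        return {**common, "restore_wsl_target_fn": fns["wsl"]}
    return {**common, "restore_linux_target_fn": fns["linux"]}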
173-190: Dispatch helpers lack return type annotations. The
_dispatch_* functions are missing return type annotations (should be -> None). While minor, adding them would improve consistency with the rest of the codebase and help static analysis tools.
♻️ Add return type annotations
-def _dispatch_restore_dotenv(**kwargs) -> None:
+def _dispatch_restore_dotenv(**kwargs: Any) -> None:
     kwargs["restore_dotenv_target_fn"](target=kwargs["target"], text=kwargs["text"], scope_roots=kwargs["scope_roots"])
-def _dispatch_restore_linux(**kwargs) -> None:
+def _dispatch_restore_linux(**kwargs: Any) -> None:
     kwargs["restore_linux_target_fn"](target=kwargs["target"], text=kwargs["text"])
(Apply similarly to the other _dispatch_* functions.)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/service_restore.py` around lines 173 - 190, The dispatch helper functions _dispatch_restore_dotenv, _dispatch_restore_linux, _dispatch_restore_wsl, _dispatch_restore_powershell, and _dispatch_restore_windows should explicitly declare their return type as -> None to match project conventions and improve static analysis; update each function signature to include the return annotation (e.g., def _dispatch_restore_dotenv(**kwargs) -> None:) and ensure any other dispatch helpers follow the same pattern.env_inspector_core/providers_wsl.py (2)
112-127: Silent fallback may mask permission issues.
write_file_with_privilege tries root first, then sudo, and only raises an error if both fail. However, root_error is overwritten on each attempt, so if the root attempt fails and then sudo also fails, only the sudo error is preserved in the exception chain. Consider capturing both errors for better debugging.
♻️ Capture both errors
def write_file_with_privilege(self, distro: str, path: str, content: str) -> None: quoted_path = shlex.quote(path) attempts = ( ["-d", distro, "-u", "root", "-e", "bash", "-lc", f"cat > {quoted_path}"], ["-d", distro, "-e", "bash", "-lc", f"sudo tee {quoted_path} >/dev/null"], ) - root_error: RuntimeError | None = None + errors: List[RuntimeError] = [] for args in attempts: try: self._run(args, input_text=content) return except RuntimeError as exc: - root_error = exc + errors.append(exc) + detail = "; ".join(str(e) for e in errors) raise RuntimeError( - "Failed to write with both root and sudo fallback. Run app as admin or configure sudo/root access." - ) from root_error + f"Failed to write with both root and sudo fallback ({detail}). " + "Run app as admin or configure sudo/root access." + ) from errors[-1]🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/providers_wsl.py` around lines 112 - 127, The write_file_with_privilege function currently overwrites root_error so only the last exception (sudo failure) is chained; capture and preserve both failures instead. Modify write_file_with_privilege to collect exceptions from each attempt (e.g., root_exc and sudo_exc or a list of exceptions) and when raising the final RuntimeError include both exceptions either by chaining (raise ... from root_exc and attach sudo_exc) or by building a message that embeds both error messages; reference the existing symbols write_file_with_privilege, attempts, self._run, and root_error when locating and implementing the fix.
205-206: Clever but potentially confusing slicing pattern.
`batches[: 1 + int(include_etc)]` is a clever way to include 1 or 2 batches, but it relies on the tuple ordering and isn't immediately obvious. Consider using a conditional or explicit logic for clarity.
♻️ More explicit alternative
-    for batch in batches[: 1 + int(include_etc)]:
+    batches_to_process = (batches[0],) if not include_etc else batches
+    for batch in batches_to_process:
         _append_wsl_records(rows, batch)
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/providers_wsl.py` around lines 205 - 206, The slice batches[: 1 + int(include_etc)] is clever but unclear; update the loop in the function that calls _append_wsl_records so it explicitly selects batches to process: always process the first batch (batches[0] or batches[:1]) and, if include_etc is truthy and a second batch exists, also process the second batch (batches[1]); call _append_wsl_records(rows, batch) for each chosen batch to preserve behavior while making the intent clear.env_inspector_core/service_privileged.py (2)
62-65: Consider explicit KeyError handling for required kwargs.
Using `kwargs.pop("key")` on missing keys raises KeyError, which is less descriptive than a TypeError for missing arguments. Other functions in this PR raise explicit TypeError for missing required arguments.
♻️ Proposed fix with explicit error handling
-    fixed_path = kwargs.pop("fixed_path")
-    expected_path = kwargs.pop("expected_path")
-    text = kwargs.pop("text")
-    write_text_file = kwargs.pop("write_text_file")
+    try:
+        fixed_path = kwargs.pop("fixed_path")
+        expected_path = kwargs.pop("expected_path")
+        text = kwargs.pop("text")
+        write_text_file = kwargs.pop("write_text_file")
+    except KeyError as exc:
+        raise TypeError(f"Missing required keyword argument: {exc.args[0]}") from None
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/service_privileged.py` around lines 62 - 65, The code currently uses kwargs.pop("fixed_path"), kwargs.pop("expected_path"), kwargs.pop("text"), kwargs.pop("write_text_file") which will raise KeyError for missing keys; change this to explicitly check for required kwargs and raise a clear TypeError instead (e.g., test for presence of "fixed_path", "expected_path", "text", "write_text_file" in kwargs and if any are missing raise TypeError("missing required argument: <name>") or collect all missing names and raise TypeError listing them), then assign the values from kwargs (or pop after validation) so subsequent code using fixed_path, expected_path, text, and write_text_file behaves the same but with a descriptive error.
58-70: Missing type annotations on the public entrypoint.
The PR objective mentions satisfying Codacy typing requirements, but `write_linux_etc_environment_with_privilege` uses bare `*args, **kwargs` without type hints. While the function validates arguments at runtime, adding `*args: Any, **kwargs: Any` would make the signature explicit and align with the typing improvements in other files (e.g., `collect_wsl_rows` in `service_listing.py`).
♻️ Proposed fix
-from typing import Callable, Optional
+from typing import Any, Callable, Optional

-def write_linux_etc_environment_with_privilege(*args, **kwargs) -> None:
+def write_linux_etc_environment_with_privilege(*args: Any, **kwargs: Any) -> None:
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/service_privileged.py` around lines 58 - 70, Update the public function signature for write_linux_etc_environment_with_privilege to use explicit typing for the varargs (e.g., change from *args, **kwargs to *args: Any, **kwargs: Any) and ensure Any is imported from typing if not already present; keep the runtime argument validation and return type of None unchanged so behavior is identical while satisfying static type checks.env_inspector_core/service_wsl.py (1)
64-89: Inconsistent argument pattern compared to other refactored functions.
This function accepts exactly one positional argument, while other functions in this PR (e.g., `collect_wsl_rows`, `restore_*_target`) reject all positional arguments. This inconsistency could be confusing. Consider documenting the design decision or aligning with the keyword-only pattern.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@env_inspector_core/service_wsl.py` around lines 64 - 89, resolve_wsl_target currently accepts a single positional target while other refactored functions are keyword-only; make it keyword-only to be consistent by changing the signature to accept target and the helpers as keyword-only (e.g. def resolve_wsl_target(*, target, dotenv_prefix, validate_distro_name_fn, parse_wsl_dotenv_target_fn, linux_etc_env_path) ), remove the positional-args checks/consumption (the early args checks and args[0] usage), preserve the same runtime behavior (handle dotenv_prefix case and fallback to _resolve_standard_wsl_target) and keep the same error messages when kwargs remain unexpected; update any callers to pass target and the helper arguments by name.
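A sketch of the keyword-only signature described in the prompt; parameter names mirror the comment, and the body is a placeholder rather than the module's actual logic:

def resolve_wsl_target(
    *,
    target,
    dotenv_prefix,
    validate_distro_name_fn,
    parse_wsl_dotenv_target_fn,
    linux_etc_env_path,
):
    # Keyword-only: positional calls now fail fast, matching the other
    # refactored functions in this PR.
    if target.startswith(dotenv_prefix):
        return parse_wsl_dotenv_target_fn(target)
    return _resolve_standard_wsl_target(  # existing fallback per the prompt
        target,
        validate_distro_name_fn=validate_distro_name_fn,
        linux_etc_env_path=linux_etc_env_path,
    )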
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@env_inspector_core/providers_wsl.py`:
- Around line 129-136: scan_dotenv_files currently interpolates max_depth into a
shell command allowing injection or misuse; validate and sanitize max_depth in
scan_dotenv_files by ensuring it's an integer (coerce via int() or raise
ValueError on failure), reject negatives, and clamp it to a safe upper bound
(e.g., max_depth = min(int(max_depth), 20)) before converting to a string for
the command; keep the validation in the same function (referencing
scan_dotenv_files and the local variable max_depth) so only a safe, bounded
numeric value is interpolated into command.
- Around line 1-2: Add "from __future__ import annotations" at the top of the
module alongside the existing future imports so the inline union type
annotations (e.g., usages of "str | None", "bool | None", "RuntimeError | None",
"Set[str] | None") are evaluated as postponed annotations for Python 3.9
compatibility; ensure this import is the first future import in
env_inspector_core/providers_wsl.py to satisfy syntax requirements.
In `@env_inspector_core/service_ops_request.py`:
- Around line 31-48: The request-path helpers _target_operation_payload and
_target_operation_batch_payload currently return raw request attributes and can
raise on scope_roots=None and produce inconsistent types compared to the legacy
path; update both functions to coerce scalar fields (target, key, value, action)
to str (or None where appropriate) and to normalize sequence fields by mapping
each entry to str, e.g. scope_roots = None if request.scope_roots is None else
[str(s) for s in request.scope_roots] and similarly targets = None if
request.targets is None else [str(t) for t in request.targets]; ensure the same
str coercion is applied for
request.target/request.key/request.value/request.action in both
_target_operation_payload and _target_operation_batch_payload so outputs match
legacy shapes.
- Around line 69-74: The positional-argument branch is too restrictive and
silently drops extra args: remove the isinstance(args[0], str) guard so
positional values are considered regardless of type, map positional args to
field_names by taking args[i] for each index i and using kwargs.pop(name, None)
when missing, and add an explicit check that raises a clear error (e.g.,
ValueError) when len(args) > len(field_names) to reject overflow positional
arguments; update the tuple construction that currently references args, kwargs,
and field_names accordingly so the fallback branch remains unchanged.
In `@env_inspector_gui/models.py`:
- Around line 194-204: summarize_operation_result currently assumes
result["results"] is a materialized list; validate and normalize that payload
before iterating or calling len() to avoid errors for None/generators/other
iterables: check that result is a dict and that results := result.get("results")
is not None and is a Sized/Sequence (or if it's an iterable, materialize it to a
list), return an OperationResultSummary with an error_message if validation
fails, and then pass the validated list into _batch_failures(...) and len(...);
apply the same validation/normalization to the other identical block that
constructs OperationResultSummary (the one that also calls _batch_failures and
len on result["results"]).
- Around line 34-35: The `_coerce_number` branch that handles floats currently
does int(value) which raises on NaN/Inf; update the float handling (the
isinstance(value, float) branch in _coerce_number) to check for non-finite
values (e.g. using math.isfinite(value) or math.isnan()/math.isinf()) and treat
non-finite floats like other invalid inputs by returning the fallback/default
result instead of trying to cast, otherwise convert with int(value).
In `@scripts/quality/_required_checks_impl.py`:
- Around line 100-104: The _github_token function currently strips after the OR,
causing a whitespace-only GITHUB_TOKEN to short-circuit and ignore a valid
GH_TOKEN; fix by reading and stripping both environment variables separately
(e.g., get GITHUB_TOKEN and GH_TOKEN into two variables and call .strip() on
each), then set token to the first non-empty value (github_token or gh_token),
and keep the existing SystemExit if token is empty; update _github_token
accordingly.
---
Duplicate comments:
In `@env_inspector_gui/models.py`:
- Around line 190-191: The scoped target selection currently returns an empty
list as a valid result; update the handling after calling
maybe_choose_dotenv_targets(key, list(targets)) so that if scoped_targets is
None or an empty list (scoped_targets == [] or simply falsy) you treat it as
cancellation and return None, otherwise return scoped_targets; reference the
scoped_targets variable and the maybe_choose_dotenv_targets call when making
this change.
- Around line 146-151: The WSL distro selection is decoupled from the chosen
context: ensure wsl_distro is resolved with respect to the current context
(e.g., only pick a distro when the selected context is a WSL context). Update
the logic that sets wsl_distro (currently using current_wsl_distro and distros)
to first check whether the selected context indicates WSL (e.g., startswith
"wsl:" or matches the WSL context variable), and if so prefer the distro named
in that context, else fall back to current_wsl_distro only when the context is
WSL, otherwise set wsl_distro to "". Use the existing variables
current_wsl_distro, distros and the context variable used by this module to
locate and implement the conditional selection so WSL context and distro stay
coupled.
---
Nitpick comments:
In `@env_inspector_core/providers_wsl.py`:
- Around line 112-127: The write_file_with_privilege function currently
overwrites root_error so only the last exception (sudo failure) is chained;
capture and preserve both failures instead. Modify write_file_with_privilege to
collect exceptions from each attempt (e.g., root_exc and sudo_exc or a list of
exceptions) and when raising the final RuntimeError include both exceptions
either by chaining (raise ... from root_exc and attach sudo_exc) or by building
a message that embeds both error messages; reference the existing symbols
write_file_with_privilege, attempts, self._run, and root_error when locating and
implementing the fix.
- Around line 205-206: The slice batches[: 1 + int(include_etc)] is clever but
unclear; update the loop in the function that calls _append_wsl_records so it
explicitly selects batches to process: always process the first batch
(batches[0] or batches[:1]) and, if include_etc is truthy and a second batch
exists, also process the second batch (batches[1]); call
_append_wsl_records(rows, batch) for each chosen batch to preserve behavior
while making the intent clear.
In `@env_inspector_core/service_privileged.py`:
- Around line 62-65: The code currently uses kwargs.pop("fixed_path"),
kwargs.pop("expected_path"), kwargs.pop("text"), kwargs.pop("write_text_file")
which will raise KeyError for missing keys; change this to explicitly check for
required kwargs and raise a clear TypeError instead (e.g., test for presence of
"fixed_path", "expected_path", "text", "write_text_file" in kwargs and if any
are missing raise TypeError("missing required argument: <name>") or collect all
missing names and raise TypeError listing them), then assign the values from
kwargs (or pop after validation) so subsequent code using fixed_path,
expected_path, text, and write_text_file behaves the same but with a descriptive
error.
- Around line 58-70: Update the public function signature for
write_linux_etc_environment_with_privilege to use explicit typing for the
varargs (e.g., change from *args, **kwargs to *args: Any, **kwargs: Any) and
ensure Any is imported from typing if not already present; keep the runtime
argument validation and return type of None unchanged so behavior is identical
while satisfying static type checks.
In `@env_inspector_core/service_restore.py`:
- Around line 44-46: Remove the unnecessary cast around path_out.parent:
path_out is a concrete Path so path_out.parent already returns a Path; replace
the cast(Path, path_out.parent) usage and assign bashrc_parent = path_out.parent
(or use path_out.parent.mkdir(...) directly) and then call
bashrc_parent.mkdir(parents=True, exist_ok=True) to create directories. Ensure
references to path_out, bashrc_parent, Path.home, .parent and mkdir are
preserved.
- Around line 146-156: The dispatch currently calls handler(...) with every
possible restore_*_target_fn even though each handler only expects its specific
function; update the invocation after _restore_dispatch(target) so you pass only
the kwargs that handler actually uses: always pass target, text, and
scope_roots, and then pass exactly one of restore_dotenv_target_fn,
restore_linux_target_fn, restore_wsl_target_fn, restore_powershell_target_fn, or
restore_windows_registry_target_fn depending on the target value (use the same
target string/enum used by _restore_dispatch to select which function to
forward). This keeps behavior unchanged but avoids unexpected-keyword issues
consistent with the file's validation pattern.
- Around line 173-190: The dispatch helper functions _dispatch_restore_dotenv,
_dispatch_restore_linux, _dispatch_restore_wsl, _dispatch_restore_powershell,
and _dispatch_restore_windows should explicitly declare their return type as ->
None to match project conventions and improve static analysis; update each
function signature to include the return annotation (e.g., def
_dispatch_restore_dotenv(**kwargs) -> None:) and ensure any other dispatch
helpers follow the same pattern.
In `@env_inspector_core/service_wsl.py`:
- Around line 64-89: resolve_wsl_target currently accepts a single positional
target while other refactored functions are keyword-only; make it keyword-only
to be consistent by changing the signature to accept target and the helpers as
keyword-only (e.g. def resolve_wsl_target(*, target, dotenv_prefix,
validate_distro_name_fn, parse_wsl_dotenv_target_fn, linux_etc_env_path) ),
remove the positional-args checks/consumption (the early args checks and args[0]
usage), preserve the same runtime behavior (handle dotenv_prefix case and
fallback to _resolve_standard_wsl_target) and keep the same error messages when
kwargs remain unexpected; update any callers to pass target and the helper
arguments by name.
In `@scripts/quality/_codacy_zero_impl.py`:
- Line 2: Remove the unnecessary Python2 compatibility future imports by
deleting the line "from __future__ import absolute_import, division"; if you
relied on postponed evaluation of annotations instead, replace with "from
__future__ import annotations", otherwise just remove the import to clean up the
module _codacy_zero_impl.py.
- Around line 157-161: The request/kwargs resolution in _query_open_issues
duplicates logic from _resolve_codacy_request; replace the manual if/elif block
in _query_open_issues with a call to _resolve_codacy_request (passing request
and kwargs) to obtain a validated CodacyRequest instance (and handle any
exceptions it raises), then proceed using that resolved request; update any type
hints/returns if needed to reflect the helper’s behavior and remove the
duplicated conditional logic.
In `@scripts/quality/_required_checks_impl.py`:
- Around line 93-97: The _required_contexts function should de-duplicate
args.required_context while preserving original order to avoid duplicate entries
in reporting; modify the function (referencing _required_contexts,
args.required_context and the local variable required) to build a new list that
filters out duplicates (e.g., iterate and use a seen set or dict.fromkeys on the
trimmed items) before checking emptiness and returning, preserving the existing
SystemExit behavior when the final list is empty.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 215a44d9-ab72-429b-8b85-678f9cd8a7bb
📒 Files selected for processing (10)
env_inspector_core/providers_wsl.py
env_inspector_core/service_listing.py
env_inspector_core/service_ops.py
env_inspector_core/service_ops_request.py
env_inspector_core/service_privileged.py
env_inspector_core/service_restore.py
env_inspector_core/service_wsl.py
env_inspector_gui/models.py
scripts/quality/_codacy_zero_impl.py
scripts/quality/_required_checks_impl.py
from __future__ import absolute_import, division
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check minimum Python version requirement
cat pyproject.toml setup.py setup.cfg 2>/dev/null | grep -i python
Repository: Prekzursil/env-inspector
Length of output: 50
🏁 Script executed:
#!/bin/bash
# Find the providers_wsl.py file and check its location
find . -name "providers_wsl.py" -type f
Repository: Prekzursil/env-inspector
Length of output: 105
🏁 Script executed:
#!/bin/bash
# Search for project configuration files in all standard locations
ls -la | grep -E "pyproject|setup|requirements"
find . -maxdepth 2 -name "pyproject.toml" -o -name "setup.py" -o -name "setup.cfg"
Repository: Prekzursil/env-inspector
Length of output: 140
🏁 Script executed:
#!/bin/bash
# List the directory structure to understand layout
tree -L 2 -I '__pycache__|*.egg-info|.git|node_modules' 2>/dev/null || find . -maxdepth 2 -type f -name "*.py" | head -20
Repository: Prekzursil/env-inspector
Length of output: 2461
🏁 Script executed:
#!/bin/bash
# Read the beginning of providers_wsl.py to see imports and early code
head -50 ./env_inspector_core/providers_wsl.py
Repository: Prekzursil/env-inspector
Length of output: 1883
🏁 Script executed:
#!/bin/bash
# Check Python target version in ruff.toml and other configs
cat ./ruff.toml
Repository: Prekzursil/env-inspector
Length of output: 1145
🏁 Script executed:
#!/bin/bash
# Check if there's any setup.py or pyproject.toml in hidden or parent locations
find . -maxdepth 3 -name "setup.py" -o -name "pyproject.toml" 2>/dev/null
Repository: Prekzursil/env-inspector
Length of output: 50
🏁 Script executed:
#!/bin/bash
# Check lines around 28, 118, 174 for union syntax
sed -n '25,30p; 115,120p; 170,175p' ./env_inspector_core/providers_wsl.py
Repository: Prekzursil/env-inspector
Length of output: 669
🏁 Script executed:
#!/bin/bash
# Search for any Python version specification in the codebase
grep -r "python_requires\|python_version\|target.version\|target-version" . --include="*.toml" --include="*.cfg" --include="*.txt" 2>/dev/null | head -20Repository: Prekzursil/env-inspector
Length of output: 50
🏁 Script executed:
#!/bin/bash
# Check if any file currently has "annotations" in future imports
grep -r "from __future__ import.*annotations" ./env_inspector_core/ 2>/dev/nullRepository: Prekzursil/env-inspector
Length of output: 50
Add annotations future import for Python 3.9 compatibility.
This file uses union syntax (str | None, bool | None, RuntimeError | None, Set[str] | None) in type annotations at lines 23, 28, 118, and 174, which requires Python 3.10+ or from __future__ import annotations.
Proposed fix
-from __future__ import absolute_import, division
+from __future__ import absolute_import, annotations, division
🧰 Tools
🪛 Pylint (4.0.5)
[refactor] 1-1: Useless option value for '--disable', 'R0201' was moved to an optional extension, see https://pylint.readthedocs.io/en/latest/whatsnew/2/2.14/summary.html#removed-checkers.
(R0022)
[refactor] 1-1: Useless option value for '--disable', 'W1618' was removed from pylint, see pylint-dev/pylint#4942.
(R0022)
[refactor] 1-1: Useless option value for '--disable', 'W1619' was removed from pylint, see pylint-dev/pylint#4942.
(R0022)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@env_inspector_core/providers_wsl.py` around lines 1 - 2, Add "from __future__
import annotations" at the top of the module alongside the existing future
imports so the inline union type annotations (e.g., usages of "str | None",
"bool | None", "RuntimeError | None", "Set[str] | None") are evaluated as
postponed annotations for Python 3.9 compatibility; ensure this import is the
first future import in env_inspector_core/providers_wsl.py to satisfy syntax
requirements.
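For context, a hedged illustration of the failure mode: on Python 3.9, PEP 604 unions in annotations are evaluated eagerly at definition time, so a module like the sketch below fails at import unless the future import postpones evaluation.

from __future__ import annotations  # without this line, Python 3.9 raises
                                    # TypeError at import on the union below

def lookup(name: str | None) -> bool | None:
    # With postponed evaluation the annotation stays a string and is
    # never executed, so `str | None` is safe on 3.7+.
    return None if name is None else bool(name)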
def scan_dotenv_files(self, distro: str, root_path: str, max_depth: int) -> List[str]:
    quoted_root = shlex.quote(root_path)
    command = (
        f"find {quoted_root} -maxdepth {max_depth} -type f "
        "\\( -name '.env' -o -name '.env.*' \\) 2>/dev/null"
    )
    text = self._run(["-d", distro, "-e", "bash", "-lc", command])
    return [line.strip() for line in text.splitlines() if line.strip()]
Command injection risk: max_depth is not validated.
The max_depth parameter is interpolated directly into the shell command without validation. While it's typed as int, a caller could pass a negative value or an extremely large value. More critically, if the type hint is not enforced and a string is passed, it could enable command injection.
🛡️ Proposed fix to validate max_depth
def scan_dotenv_files(self, distro: str, root_path: str, max_depth: int) -> List[str]:
+ if not isinstance(max_depth, int) or max_depth < 0:
+ raise ValueError(f"max_depth must be a non-negative integer, got {max_depth!r}")
quoted_root = shlex.quote(root_path)
command = (
f"find {quoted_root} -maxdepth {max_depth} -type f "🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@env_inspector_core/providers_wsl.py` around lines 129 - 136,
scan_dotenv_files currently interpolates max_depth into a shell command allowing
injection or misuse; validate and sanitize max_depth in scan_dotenv_files by
ensuring it's an integer (coerce via int() or raise ValueError on failure),
reject negatives, and clamp it to a safe upper bound (e.g., max_depth =
min(int(max_depth), 20)) before converting to a string for the command; keep the
validation in the same function (referencing scan_dotenv_files and the local
variable max_depth) so only a safe, bounded numeric value is interpolated into
command.
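A sketch of the clamped validation the prompt describes; the upper bound of 20 is the prompt's example rather than a project constant, and `_validated_max_depth` is a hypothetical helper name:

def _validated_max_depth(max_depth) -> int:
    # int() raises TypeError/ValueError on non-numeric input; reject
    # negatives and clamp so only a small bounded integer reaches the
    # interpolated find(1) command.
    depth = int(max_depth)
    if depth < 0:
        raise ValueError(f"max_depth must be non-negative, got {depth}")
    return min(depth, 20)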
def _target_operation_payload(request: Any) -> Dict[str, Any]:
    return {
        "target": request.target,
        "key": request.key,
        "value": request.value,
        "action": request.action,
        "scope_roots": list(request.scope_roots),
    }


def _target_operation_batch_payload(request: Any) -> Dict[str, Any]:
    return {
        "action": request.action,
        "key": request.key,
        "value": request.value,
        "targets": list(request.targets),
        "scope_roots": None if request.scope_roots is None else list(request.scope_roots),
    }
Request-object path bypasses normalization rules used by legacy inputs.
Lines 33-36 and Lines 43-46 return raw values, while legacy paths (Lines 96-99 and Lines 122-124) coerce fields to str/str | None. Line 37 also fails for scope_roots=None, unlike the legacy path fallback on Line 100. This makes output type/shape depend on call style.
Suggested fix
def _target_operation_payload(request: Any) -> Dict[str, Any]:
return {
- "target": request.target,
- "key": request.key,
- "value": request.value,
- "action": request.action,
- "scope_roots": list(request.scope_roots),
+ "target": _coerce_string(request.target),
+ "key": _coerce_string(request.key),
+ "value": _coerce_optional_string(request.value),
+ "action": _coerce_string(request.action),
+ "scope_roots": list(request.scope_roots or []),
}
def _target_operation_batch_payload(request: Any) -> Dict[str, Any]:
return {
- "action": request.action,
- "key": request.key,
- "value": request.value,
+ "action": _coerce_string(request.action),
+ "key": _coerce_string(request.key),
+ "value": _coerce_optional_string(request.value),
"targets": list(request.targets),
"scope_roots": None if request.scope_roots is None else list(request.scope_roots),
    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@env_inspector_core/service_ops_request.py` around lines 31 - 48, The
request-path helpers _target_operation_payload and
_target_operation_batch_payload currently return raw request attributes and can
raise on scope_roots=None and produce inconsistent types compared to the legacy
path; update both functions to coerce scalar fields (target, key, value, action)
to str (or None where appropriate) and to normalize sequence fields by mapping
each entry to str, e.g. scope_roots = None if request.scope_roots is None else
[str(s) for s in request.scope_roots] and similarly targets = None if
request.targets is None else [str(t) for t in request.targets]; ensure the same
str coercion is applied for
request.target/request.key/request.value/request.action in both
_target_operation_payload and _target_operation_batch_payload so outputs match
legacy shapes.
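The suggested fix above leans on `_coerce_string` / `_coerce_optional_string`; if the module does not already define them, a minimal sketch could be:

from typing import Any, Optional

def _coerce_string(value: Any) -> str:
    # Mirror the legacy path's unconditional str coercion.
    return str(value)

def _coerce_optional_string(value: Any) -> Optional[str]:
    # Preserve None for optional fields, otherwise coerce to str.
    return None if value is None else str(value)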
    if args and isinstance(args[0], str):
        return tuple(
            args[index] if len(args) > index else kwargs.pop(name, None)
            for index, name in enumerate(field_names)
        )
    return tuple(kwargs.pop(name, None) for name in field_names)
Positional argument resolution is overly restrictive and drops overflow args silently.
Line 69 only accepts positional mapping when the first arg is str, so valid non-string inputs (later coercible) are ignored. Also, extra positional args beyond field_names are currently discarded instead of rejected, which can hide caller bugs.
Suggested fix
def _resolve_operation_inputs(
args: Tuple[Any, ...],
kwargs: Dict[str, Any],
field_names: Tuple[str, ...],
) -> Tuple[Any, ...]:
- if args and isinstance(args[0], str):
- return tuple(
- args[index] if len(args) > index else kwargs.pop(name, None)
- for index, name in enumerate(field_names)
- )
+ if args:
+ if len(args) > len(field_names):
+ raise TypeError("Unexpected positional arguments.")
+ return tuple(
+ args[index] if len(args) > index else kwargs.pop(name, None)
+ for index, name in enumerate(field_names)
+ )
     return tuple(kwargs.pop(name, None) for name in field_names)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@env_inspector_core/service_ops_request.py` around lines 69 - 74, The
positional-argument branch is too restrictive and silently drops extra args:
remove the isinstance(args[0], str) guard so positional values are considered
regardless of type, map positional args to field_names by taking args[i] for
each index i and using kwargs.pop(name, None) when missing, and add an explicit
check that raises a clear error (e.g., ValueError) when len(args) >
len(field_names) to reject overflow positional arguments; update the tuple
construction that currently references args, kwargs, and field_names accordingly
so the fallback branch remains unchanged.
    elif isinstance(value, float):
        result = int(value)
Catch non-finite floats in _coerce_number.
int(value) still raises for nan and inf, so this helper does not fully preserve its new “fallback to default on invalid input” behavior.
Minimal fix
elif isinstance(value, float):
- result = int(value)
+ try:
+ result = int(value)
+ except (OverflowError, ValueError):
+        result = default
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@env_inspector_gui/models.py` around lines 34 - 35, The `_coerce_number`
branch that handles floats currently does int(value) which raises on NaN/Inf;
update the float handling (the isinstance(value, float) branch in
_coerce_number) to check for non-finite values (e.g. using math.isfinite(value)
or math.isnan()/math.isinf()) and treat non-finite floats like other invalid
inputs by returning the fallback/default result instead of trying to cast,
otherwise convert with int(value).
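Equivalently, the `math.isfinite` variant named in the prompt, as a self-contained sketch; `_coerce_float` is a hypothetical extraction of the float branch:

import math

def _coerce_float(value: float, default: int) -> int:
    # nan/inf cannot be converted by int(), so treat them as invalid
    # input and fall back to the default, like the try/except fix above.
    return int(value) if math.isfinite(value) else default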
def summarize_operation_result(action: str, result: Mapping[str, Any]) -> OperationResultSummary:
    if isinstance(result, dict) and "results" in result:
        failures = _batch_failures(result["results"])
        if failures:
            return OperationResultSummary(
                status_message=None,
                error_message=f"{action.title()} had failures:\n" + "\n".join(str(item.get("error_message", "")) for item in failures),
            )
        return OperationResultSummary(
            status_message=f"{action.title()} succeeded for {len(result['results'])} targets",
            error_message=None,
Validate batch result payloads before iterating them.
env_inspector_gui/controller.py:480-486 forwards the raw result straight here. If result["results"] is None, a generator, or any other non-list payload, _batch_failures() and len(...) can raise or produce a bogus success count instead of surfacing a UI error.
Minimal fix
def summarize_operation_result(action: str, result: Mapping[str, Any]) -> OperationResultSummary:
if isinstance(result, dict) and "results" in result:
- failures = _batch_failures(result["results"])
+ results = result.get("results")
+ if not isinstance(results, list):
+ return OperationResultSummary(
+ status_message=None,
+ error_message=f"{action.title()} failed: malformed batch result payload",
+ )
+ failures = _batch_failures(results)
if failures:
return OperationResultSummary(
status_message=None,
error_message=f"{action.title()} had failures:\n" + "\n".join(str(item.get("error_message", "")) for item in failures),
)
return OperationResultSummary(
- status_message=f"{action.title()} succeeded for {len(result['results'])} targets",
+ status_message=f"{action.title()} succeeded for {len(results)} targets",
error_message=None,
    )
Also applies to: 218-219
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@env_inspector_gui/models.py` around lines 194 - 204,
summarize_operation_result currently assumes result["results"] is a materialized
list; validate and normalize that payload before iterating or calling len() to
avoid errors for None/generators/other iterables: check that result is a dict
and that results := result.get("results") is not None and is a Sized/Sequence
(or if it's an iterable, materialize it to a list), return an
OperationResultSummary with an error_message if validation fails, and then pass
the validated list into _batch_failures(...) and len(...); apply the same
validation/normalization to the other identical block that constructs
OperationResultSummary (the one that also calls _batch_failures and len on
result["results"]).
def _github_token() -> str:
    token = (os.environ.get("GITHUB_TOKEN", "") or os.environ.get("GH_TOKEN", "")).strip()
    if not token:
        raise SystemExit("GITHUB_TOKEN or GH_TOKEN is required")
    return token
Fix token fallback precedence on Line 101.
If GITHUB_TOKEN is set to whitespace, current logic ignores a valid GH_TOKEN and exits early.
🐛 Proposed fix
def _github_token() -> str:
- token = (os.environ.get("GITHUB_TOKEN", "") or os.environ.get("GH_TOKEN", "")).strip()
+ github_token = os.environ.get("GITHUB_TOKEN", "").strip()
+ gh_token = os.environ.get("GH_TOKEN", "").strip()
+ token = github_token or gh_token
if not token:
raise SystemExit("GITHUB_TOKEN or GH_TOKEN is required")
     return token
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/quality/_required_checks_impl.py` around lines 100 - 104, The
_github_token function currently strips after the OR, causing a whitespace-only
GITHUB_TOKEN to short-circuit and ignore a valid GH_TOKEN; fix by reading and
stripping both environment variables separately (e.g., get GITHUB_TOKEN and
GH_TOKEN into two variables and call .strip() on each), then set token to the
first non-empty value (github_token or gh_token), and keep the existing
SystemExit if token is empty; update _github_token accordingly.
Co-authored-by: Codex <noreply@openai.com>
Co-authored-by: Codex <noreply@openai.com>
Sync the coverage assertion helper to the current main-branch version so PR #53 stops conflicting on the one file that diverged after main advanced. Co-authored-by: Codex <noreply@openai.com>
Repo-local strict-zero follow-up for env-inspector.

- tightens coverage helper validation
- keeps the Codacy/DeepScan branch coverage regression tests aligned
- leaves controller/ruleset changes out of scope

Local verification: python -m pytest -q (192 passed)
Summary by CodeRabbit
New Features
Tests
Chores