Skip to content

Add YAML linting and Python complexity tooling#757

Closed
forstmeier wants to merge 6 commits intomasterfrom
development-tooling-expansion
Closed

Add YAML linting and Python complexity tooling#757
forstmeier wants to merge 6 commits intomasterfrom
development-tooling-expansion

Conversation

@forstmeier
Copy link
Copy Markdown
Collaborator

@forstmeier forstmeier commented Feb 15, 2026

Overview

Changes

  • Add yamllint for YAML file linting and validation
  • Add yaml lint and 'yaml all' commands for YAML validation
  • Integrate YAML checks into GitHub Actions workflow
  • Add YAML checks to pre-commit configuration
  • Format long conditionals in workflow files for readability
  • Add document start markers to all YAML files per YAML best practices
  • Reformat long lines in workflow conditionals for readability
  • Ignore build artifacts and virtual environments
  • Set line-length maximum to 120 characters
  • Configure truthy values for GitHub Actions compatibility
  • Disable line-length checks for Pulumi encrypted secrets

Context

Adding YAML linting tools to improve code quality and consistency across configuration files.

Summary by CodeRabbit

  • New Features

    • New explicit portfolio error type improving error signals.
  • Improvements

    • Modularized and optimized data processing for faster, more maintainable behavior.
    • Added YAML linting and CI checks to catch config issues earlier.
    • Updated project description text.
  • Chores

    • Added development/config files (virtualenv requirement, lint configs, CI workflows).
    • Bumped several dependencies for stability and features.

Add comprehensive code quality and security checking tools to the development workflow:
- Replace radon with xenon for enforcing Python complexity thresholds
- Add pip-audit for Python dependency vulnerability scanning
- Add cargo-audit for Rust dependency vulnerability scanning
- Add yamllint for YAML file linting and validation

Update Mask commands to include new checks in development workflows:
- Add 'python complexity' command using xenon with strict thresholds
- Add 'python audit' command for dependency security scanning
- Add 'rust audit' command for dependency security scanning
- Add 'yaml lint' and 'yaml all' commands for YAML validation
- Integrate all new checks into respective 'all' commands

Update CI/CD and pre-commit hooks:
- Add YAML checks to GitHub Actions workflow
- Add YAML checks to pre-commit configuration
- Format long conditionals in workflow files for readability

Create yamllint configuration:
- Ignore build artifacts and virtual environments
- Set line-length maximum to 120 characters
- Configure truthy values for GitHub Actions compatibility
- Disable line-length checks for Pulumi encrypted secrets

Fix YAML formatting issues:
- Add document start markers to all YAML files per YAML best practices
- Reformat long lines in workflow conditionals for readability

All new tools exit with non-zero status on violations to properly fail CI/CD builds.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
@forstmeier forstmeier self-assigned this Feb 15, 2026
@forstmeier forstmeier added the markdown Markdown code updates label Feb 15, 2026
@github-project-automation github-project-automation Bot moved this to To Do in Overview Feb 15, 2026
@forstmeier forstmeier added the yaml YAML code updates label Feb 15, 2026
@forstmeier forstmeier moved this from To Do to In Progress in Overview Feb 15, 2026
@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai Bot commented Feb 15, 2026

📝 Walkthrough

Walkthrough

This PR adds YAML linting and CI/workflow updates, Pulumi production config and runtime toolchain, tooling/config files (pip, markdownlint, pre-commit), bumps some dependencies, and refactors two services: modularizing equity price batch creation and restructuring portfolio rebalance logic with new error signaling.

Changes

Cohort / File(s) Summary
Tooling manifest & YAML linting
​.flox/env/manifest.toml, ​.yamllint, ​.flox/pip.ini, ​.pre-commit-config.yaml, maskfile.md
Re-added/added package-paths (including xenon), added pip virtualenv enforcement, added YAML lint config, added pre-commit YAML hook, and Mask workflow entries for YAML checks.
GitHub Actions
​.github/workflows/run_claude_code_coding_agent.yaml, ​.github/workflows/run_code_checks.yaml
Reformatted workflow conditionals; added run_yaml_code_checks job; changed coverage upload condition to require both Rust and Python checks.
Lint / Markdown config
​.markdownlint.yaml, ​.yamllint
Added YAML document start marker and YAML lint rules (line length, truthy values) and ignored paths.
Infrastructure / Pulumi / Compose
infrastructure/Pulumi.yaml, infrastructure/Pulumi.production.yaml, infrastructure/docker-compose.yaml
Added runtime.options.toolchain: uv, new production config with AWS region and fund-scoped keys/secrets/roles, and added YAML document start/version metadata to compose file.
Project metadata
pyproject.toml
Updated project description text.
Dependency updates
applications/datamanager/Cargo.toml
Bumped polars 0.51.0 → 0.53.0 and added features (temporal, serde, strings); updated testcontainers dev deps.
Equity Price Model refactor
applications/equitypricemodel/src/equitypricemodel/tide_data.py
Refactored batch creation into helpers (_select_batch_data, _validate_date_range, _collect_samples_from_ticker, _collect_all_samples, _create_batches), added DataType Literal, and updated get_batches signature and internal flow.
Portfolio Manager refactor
applications/portfoliomanager/src/portfoliomanager/server.py, applications/portfoliomanager/src/portfoliomanager/exceptions.py
Extracted open/close position helpers (_close_single_position, _close_all_positions, _open_single_position, _open_all_positions, _prepare_portfolio_data), introduced PortfolioDataError exception, added AlpacaAccount import, and reworked create_portfolio control flow and error paths.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Server as Portfolio Manager Server
    participant Alpaca as Alpaca API
    participant DB as Data Store

    Client->>Server: create_portfolio(request)
    Server->>Server: _prepare_portfolio_data(request)
    Server->>DB: get_prior_portfolio_tickers(...)
    DB-->>Server: prior_tickers
    Server->>DB: get_current_predictions(...)
    DB-->>Server: predictions
    alt preparation fails
        Server->>Server: raise PortfolioDataError
        Server-->>Client: error response
    else preparation succeeds
        Server->>Alpaca: get_positions()
        Alpaca-->>Server: current_positions
        Server->>Server: _close_all_positions(current_positions)
        loop close each
            Server->>Alpaca: submit close order
            Alpaca-->>Server: close confirmation
        end
        Server->>Alpaca: get_account()
        Alpaca-->>Server: updated_account_state
        Server->>Server: _open_all_positions(target_positions, account_state)
        loop open each
            Server->>Alpaca: submit open order
            Alpaca-->>Server: open confirmation
        end
        Server->>Server: assemble result
        Server-->>Client: portfolio response
    end
Loading

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

🚥 Pre-merge checks | ✅ 3 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 62.50% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The PR title accurately summarizes the main changes: adding YAML linting and Python complexity tooling as new development features.
Merge Conflict Detection ✅ Passed ✅ No merge conflicts detected when merging into master

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch development-tooling-expansion

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

Copy link
Copy Markdown
Contributor

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

This PR adds comprehensive code quality and security audit tooling to the repository, including Python complexity analysis with xenon (replacing radon), dependency vulnerability scanning with pip-audit and cargo-audit, and YAML linting with yamllint. The changes also include reformatting long conditionals in GitHub Actions workflows for improved readability and adding document start markers to all YAML files to comply with YAML best practices.

Changes:

  • Replaced radon with xenon for Python complexity enforcement and added pip-audit for Python dependency security scanning
  • Added cargo-audit for Rust dependency security scanning
  • Added yamllint for YAML validation with custom configuration (120 character line length, GitHub Actions truthy value support)
  • Updated project description from "Open source quantitative hedge fund 🍊" to "The open source capital management company"

Reviewed changes

Copilot reviewed 12 out of 13 changed files in this pull request and generated 3 comments.

Show a summary per file
File Description
pyproject.toml Updated project description
maskfile.md Added rust audit, python complexity/audit, and yaml lint commands with integration into respective all commands
infrastructure/docker-compose.yaml Added YAML document start marker
infrastructure/Pulumi.yaml Added YAML document start marker
infrastructure/Pulumi.production.yaml Added yamllint line-length disable comment and document start marker
.yamllint New configuration file for yamllint with custom rules
.pre-commit-config.yaml Added YAML checks to pre-commit hooks
.markdownlint.yaml Added YAML document start marker
.github/workflows/run_code_checks.yaml Added YAML code checks job and reformatted multi-line conditional
.github/workflows/run_claude_code_coding_agent.yaml Reformatted multi-line conditional for readability
.flox/pip.ini New pip configuration file requiring virtualenv
.flox/env/manifest.toml Added xenon and pip-audit packages, reorganized yamllint position
.flox/env/manifest.lock Auto-generated lock file updates for new packages, removed deprecated profile section

💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.

Comment thread .flox/pip.ini Outdated
Comment thread .flox/pip.ini Outdated
Comment thread pyproject.toml
Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
pyproject.toml (1)

6-16: 🛠️ Refactor suggestion | 🟠 Major

Add xenon, pip-audit, and yamllint to the [dependency-groups] dev section.

These tools are invoked directly in maskfile.md development tasks (mask development python complexity, mask development python audit, mask development yaml lint) but are missing from the dev dependencies. After uv sync --all-packages --all-groups, they won't be available for local development workflows. Add them to lines 30-31:

[dependency-groups]
dev = ["coverage>=7.8.0", "pytest>=8.3.5", "behave>=1.2.6", "xenon>=2.3.0", "pip-audit>=2.6.0", "yamllint>=1.35.0"]

Alternatively, update maskfile.md to invoke them via uvx (like vulture and ty) to avoid adding them as explicit dependencies.

🤖 Fix all issues with AI agents
In @.flox/env/manifest.toml:
- Around line 48-50: Add an entry for cargo-audit to the Flox manifest alongside
the existing tool mappings (e.g., yamllint.pkg-path, xenon.pkg-path,
pip-audit.pkg-path) by adding the line that maps the package path to
"cargo-audit" so Flox manages it the same way as
cargo-watch/cargo-nextest/cargo-llvm-cov.

In @.flox/pip.ini:
- Around line 1-2: Remove the two-space indentation at the start of both lines
so the INI uses conventional left-aligned entries; specifically update the
section header "[global]" and the key "require-virtualenv = true" to have no
leading whitespace.

In @.github/workflows/run_code_checks.yaml:
- Around line 102-108: The if condition is being treated as a non-empty literal
string because of the YAML literal block and the ${ { } } wrapper; remove the
pipe and the ${ { } } wrapper so the expression is evaluated by GitHub Actions.
Replace the current block-style line starting with "if: |" and the "${{ ... }}"
contents with a single-line condition like: if: always() &&
(needs.run_rust_code_checks.result == 'success' ||
needs.run_python_code_checks.result == 'success'); update the job that runs
upload_test_coverage to use that corrected if expression (referencing the same
needs.run_rust_code_checks.result and needs.run_python_code_checks.result
symbols).

In `@maskfile.md`:
- Around line 592-604: maskfile.md's complexity task invokes the xenon CLI but
xenon isn't listed as a dev dependency; update the root pyproject.toml by adding
"xenon" to the [project.optional-dependencies] or the
[tool.poetry.dev-dependencies]/[dependency-groups] dev group (whichever project
uses) so the tool is installed for CI and local runs, then run xenon against the
codebase to ensure thresholds (--max-absolute B --max-modules B --max-average A)
pass and adjust code or thresholds if necessary; reference maskfile.md, the
xenon package name, and the root pyproject.toml's dev dependency group when
making the change.
- Around line 606-618: Update the audit step so pip-audit targets the uv-managed
environment explicitly: instead of running plain "pip-audit", either install and
run pip-audit through uv (e.g., use uv to install pip-audit and then run it via
uv) or export the uv environment's requirements (use uv pip freeze to write
/tmp/req.txt) and run pip-audit against that requirements file; ensure the
existing shell safety flags (set -euo pipefail) remain and reference the
pip-audit and uv commands (pip-audit, uv pip freeze, uv pip install, uv run)
when making the change.

Comment thread .flox/env/manifest.toml Outdated
Comment thread .flox/pip.ini Outdated
Comment thread .github/workflows/run_code_checks.yaml Outdated
Comment thread maskfile.md
Comment thread maskfile.md Outdated
@greptile-apps
Copy link
Copy Markdown
Contributor

greptile-apps Bot commented Feb 15, 2026

Greptile Summary

This PR adds several code quality and security audit tools to the development workflow: xenon for Python complexity enforcement, pip-audit for Python dependency vulnerability scanning, cargo-audit for Rust dependency scanning, and yamllint for YAML validation. It integrates these into the mask command structure, GitHub Actions CI, and pre-commit hooks, and adds YAML document start markers (---) across existing YAML files.

  • The xenon complexity command is missing --exclude flags for .flox, .venv, and target directories, which other tools (e.g., vulture) already exclude — this will cause false violations when scanning third-party code in virtual environments or build artifacts
  • cargo-audit is installed on-the-fly via cargo install rather than being declared in the Flox manifest like the other new tools (xenon, pip-audit), creating inconsistency and slower cold starts
  • New .flox/pip.ini has unconventional leading whitespace on the INI section header
  • Workflow formatting improvements, yamllint configuration, and YAML document start markers are all well-structured

Confidence Score: 4/5

  • This PR is safe to merge with one functional issue to address in the xenon command's missing directory exclusions
  • The changes are primarily additive tooling configuration with no production code impact. The main concern is the xenon command scanning unintended directories, which will produce false positives rather than silently passing bad code. All other changes (yamllint config, CI workflow additions, pre-commit hooks, document start markers) are well-structured and follow existing patterns.
  • Pay close attention to maskfile.md — the xenon complexity command needs directory exclusions to match other tools

Important Files Changed

Filename Overview
.flox/env/manifest.toml Adds xenon, pip-audit, and moves yamllint to alphabetical position alongside other dev tools. Clean changes.
.flox/pip.ini New pip config to require virtualenv for pip installs. Functionally correct but has unconventional leading whitespace on both lines.
.github/workflows/run_code_checks.yaml Adds new YAML code checks job and reformats the upload_test_coverage conditional for readability. Well-structured.
.pre-commit-config.yaml Adds YAML pre-commit hook consistent with the existing pattern for Python, Rust, and Markdown hooks.
.yamllint New yamllint config with sensible defaults: 120-char line length, truthy values allowing 'on' for GitHub Actions compatibility, and ignores for build artifacts.
maskfile.md Adds audit (Rust and Python), complexity, and YAML lint commands. The xenon command is missing directory exclusions (.flox, .venv, target) that other tools use, and cargo-audit is not managed via Flox.

Last reviewed commit: 25c0458

Copy link
Copy Markdown
Contributor

@greptile-apps greptile-apps Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

13 files reviewed, 3 comments

Edit Code Review Agent Settings | Greptile

Comment thread maskfile.md Outdated
Comment thread .flox/pip.ini Outdated
Comment thread maskfile.md Outdated
chrisaddy
chrisaddy previously approved these changes Feb 16, 2026
Fixed all review comments from CodeRabbit, Copilot, and Greptile:

1. Added pip-audit and cargo-audit to Flox manifest for consistency
   with other dev tools and faster cold starts

2. Fixed critical bug in GitHub Actions workflow where if condition
   was always truthy due to YAML literal block with ${{ }} wrapper

3. Added --exclude '.flox,.venv,target' to xenon complexity command
   to prevent false positives from third-party code

4. Responded to all 13 review comments with explanations and fixes

All Python and Rust development checks pass locally.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
coderabbitai[bot]
coderabbitai Bot previously approved these changes Feb 16, 2026
Copilot AI review requested due to automatic review settings February 16, 2026 02:30
Copy link
Copy Markdown
Contributor

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

Copilot reviewed 16 out of 19 changed files in this pull request and generated 2 comments.


💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.

Comment thread infrastructure/pyproject.toml Outdated
Comment thread .flox/pip.ini Outdated
@greptile-apps
Copy link
Copy Markdown
Contributor

greptile-apps Bot commented Feb 16, 2026

Greptile Summary

This PR adds comprehensive code quality and security audit tooling to improve automated code review and development workflows.

Key Changes:

  • Replaced radon with xenon for Python complexity analysis with strict thresholds (C for absolute/modules, A for average)
  • Added pip-audit and cargo-audit for Python and Rust dependency vulnerability scanning
  • Added yamllint for YAML validation with 120-char line length and GitHub Actions-compatible truthy values
  • Integrated new checks into mask development python all, mask development rust all, and added mask development yaml all
  • Added YAML validation to GitHub Actions workflow and pre-commit hooks
  • Added document start markers (---) to all YAML files per best practices
  • Refactored complex Python functions (tide_data.py and server.py) by extracting helper methods to meet new complexity thresholds
  • Updated dependencies: polars 0.53.0, testcontainers 0.26, pip 26.0, protobuf 5.29.6
  • Created .flox/pip.ini to enforce virtual environment usage for pip operations
  • Updated project description from "Open source quantitative hedge fund 🍊" to "The open source capital management company"

Code Quality Improvements:

  • The refactoring in tide_data.py splits get_batches into logical helper methods: _select_batch_data, _validate_date_range, _collect_samples_from_ticker, _collect_all_samples, and _create_batches
  • The refactoring in server.py extracts position handling into _close_single_position, _close_all_positions, _open_single_position, _open_all_positions, and _prepare_portfolio_data
  • Both refactorings maintain identical logic while improving readability and meeting complexity requirements

Previous Feedback Addressed:

  • Xenon now includes --exclude '.flox,.venv,target' flag (previously missing)
  • cargo-audit is now managed via Flox manifest (previously only had runtime fallback)

Confidence Score: 5/5

  • This PR is safe to merge with minimal risk
  • All changes are additive tooling improvements with no breaking changes. The Python and Rust refactorings are complexity-driven splits with no logic changes. Dependency updates are minor version bumps. Previous feedback has been addressed (xenon exclusions, cargo-audit in Flox). The only minor issue is unconventional whitespace in pip.ini which doesn't affect functionality.
  • No files require special attention

Important Files Changed

Filename Overview
.flox/pip.ini New pip config file enforcing virtual environment usage, has unconventional leading whitespace
.yamllint New yamllint config with 120-char line length, truthy values for GitHub Actions, excludes build dirs
applications/equitypricemodel/src/equitypricemodel/tide_data.py Refactored get_batches into smaller helper methods to reduce complexity, no logic changes
applications/portfoliomanager/src/portfoliomanager/server.py Refactored create_portfolio into helper functions to reduce complexity, no logic changes
maskfile.md Added audit commands for Python/Rust, complexity check for Python, YAML lint commands, integrated into all commands

Last reviewed commit: 24e0d97

Copy link
Copy Markdown
Contributor

@greptile-apps greptile-apps Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

19 files reviewed, 1 comment

Edit Code Review Agent Settings | Greptile

Comment thread .flox/pip.ini Outdated
Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
applications/equitypricemodel/src/equitypricemodel/tide_data.py (1)

417-433: 🛠️ Refactor suggestion | 🟠 Major

Log messages should use sentence case, not snake_case identifiers.

All new log messages ("partitioning_data_by_ticker", "batch_creation_started", "batch_progress", "sample_collection_complete") use snake_case event names instead of sentence-cased messages. The existing code in this file (e.g., "Invalid values in continuous column before scaling" on line 201) follows the correct convention.

Proposed fix (applies here and in _create_batches)
-        logger.info("partitioning_data_by_ticker")
+        logger.info("Partitioning data by ticker")
         ticker_groups = batch_data.sort("time_idx").partition_by("ticker", as_dict=True)
         total_tickers = len(ticker_groups)
-        logger.info("batch_creation_started", total_tickers=total_tickers)
+        logger.info("Batch creation started", total_tickers=total_tickers)
 
         for idx, ticker_df in enumerate(ticker_groups.values()):
             if idx % 25 == 0:
                 logger.info(
-                    "batch_progress", ticker_idx=idx, total_tickers=total_tickers
+                    "Batch progress", ticker_idx=idx, total_tickers=total_tickers
                 )
...
-        logger.info("sample_collection_complete", total_samples=len(samples))
+        logger.info("Sample collection complete", total_samples=len(samples))

Similarly in _create_batches (lines 440, 468):

-        logger.info("batching_samples", batch_size=batch_size)
+        logger.info("Batching samples", batch_size=batch_size)
...
-        logger.info("batch_creation_complete", total_batches=len(batches))
+        logger.info("Batch creation complete", total_batches=len(batches))

As per coding guidelines, "Structured log messages should be short sentences with sentence case (e.g., 'Starting data sync' not 'STARTING DATA SYNC')".

🤖 Fix all issues with AI agents
In `@applications/equitypricemodel/src/equitypricemodel/tide_data.py`:
- Around line 471-484: The parameter data_type only accepts "train", "validate",
or "predict" but is annotated as str; add a Literal alias to enforce this at
type-check time by importing Literal (e.g. from typing import Literal) and
defining DataType = Literal["train", "validate", "predict"], then change the
annotation data_type: str -> data_type: DataType in get_batches and in the
private helpers _select_batch_data, _collect_samples_from_ticker,
_collect_all_samples, and _create_batches so signatures reflect the allowed
values and enable static checking.
- Around line 340-352: The current _validate_date_range uses global min/max
across batch_data which misses per-ticker sparsity; change it to compute
per-ticker row counts (group by "ticker") and only raise if every ticker has
fewer than required_days (or otherwise skip raising), and additionally add a
warning in _collect_all_samples (after sample accumulation) to log a warning
when total_samples == 0 so get_batches doesn't silently return empty; refer to
_validate_date_range, _collect_all_samples, get_batches and the num_windows
logic when making these changes.
- Around line 363-364: The variables cat_array and cont_array are abbreviated;
rename them to full-word variables (e.g., categorical_array and
continuous_array) where they are created in tide_data.py (the lines assigning
ticker_df[self.categorical_columns] and ticker_df[self.continuous_columns]) and
update all subsequent references that use cat_array and cont_array (including
the later usages around the code that manipulates or passes these arrays) to the
new names, preserving the .to_numpy().astype(...) behavior and types (np.int32
and np.float32).

In `@applications/portfoliomanager/src/portfoliomanager/server.py`:
- Around line 300-302: The helper _prepare_portfolio_data currently returns
either data tuple or an HTTP Response which couples data logic to HTTP; change
it to always return the data tuple (tuple[AlpacaAccount, pl.DataFrame,
list[str], pl.DataFrame]) and instead raise a custom exception (e.g.,
PortfolioDataError) on error conditions; update the function signature to remove
the Response union, add a new exception class (PortfolioDataError) near related
types, and modify callers of _prepare_portfolio_data to catch PortfolioDataError
and convert it to the appropriate HTTP Response there (so callers handle
Response creation while _prepare_portfolio_data remains testable and
data-focused).
- Around line 363-370: The isinstance(Response) guard in create_portfolio is
fragile; change _prepare_portfolio_data to stop returning a Response union and
instead raise an HTTPException (or return a strictly typed payload) when it
needs to short-circuit, then update create_portfolio to assume a concrete
portfolio payload and remove the isinstance check. Concretely, modify
_prepare_portfolio_data to either raise fastapi.HTTPException on errors or
return a well-typed data object, and update create_portfolio (the
create_portfolio handler) to accept the data returned by _prepare_portfolio_data
without instanceof checks (replace portfolio_data union handling with
exception-based control flow or a discriminated result type).
- Around line 391-392: The account used to compute buying power is stale because
account is fetched in _prepare_portfolio_data before calling
_close_all_positions; after calling _close_all_positions you must refresh the
account (or update account.buying_power from the latest data) before calling
_open_all_positions so opens see the freed capital; update the code around
_close_all_positions/_open_all_positions to re-fetch the account (or call a
provided get_account/get_latest_account function) and then pass the refreshed
account.buying_power into _open_all_positions.

Comment thread applications/equitypricemodel/src/equitypricemodel/tide_data.py
Comment thread applications/equitypricemodel/src/equitypricemodel/tide_data.py Outdated
Comment thread applications/equitypricemodel/src/equitypricemodel/tide_data.py
Comment thread applications/portfoliomanager/src/portfoliomanager/server.py Outdated
Comment thread applications/portfoliomanager/src/portfoliomanager/server.py Outdated
Comment thread applications/portfoliomanager/src/portfoliomanager/server.py
coderabbitai[bot]
coderabbitai Bot previously approved these changes Feb 16, 2026
@forstmeier
Copy link
Copy Markdown
Collaborator Author

Thank you @coderabbitai and @greptile-apps for the comprehensive reviews! All feedback has been addressed:

Configuration fixes:

  • Removed leading whitespace from .flox/pip.ini for standard INI format
  • Added upper bound constraint to pip dependency (>=26.0,<27.0)

Code quality improvements:

  • Renamed abbreviated variables (cat_array/cont_array → categorical_array/continuous_array) per CLAUDE.md guidelines
  • Added Literal type alias for data_type parameter to improve static type safety
  • Refactored _prepare_portfolio_data to use exception-based control flow for better separation of concerns

Critical bug fix:

  • Added account refresh after closing positions to ensure accurate buying power for opening new positions

Regarding the Rust check failures:
The failures are due to an external issue with the RustSec advisory database, not our code:

  • Error: unsupported CVSS version: 4.0 in RUSTSEC-2026-0009.md
  • Root cause: RustSec added a CVSS 4.0 advisory on Feb 14, but cargo-audit 0.22.1 (latest) doesn't support CVSS 4.0 yet
  • Platform-specific: Occurs only on Linux (GitHub Actions), not macOS
  • All actual tests pass (82/82), compilation succeeds, lint checks pass
  • No code changes needed - issue will resolve when cargo-audit is updated to support CVSS 4.0

All Python tests pass with 94% coverage. Ready for review!

@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai Bot commented Feb 16, 2026

@forstmeier Thank you for the comprehensive updates! Your detailed breakdown of the changes is excellent:

Configuration fixes - INI format correction and pip upper bound
Code quality improvements - Variable naming, type safety with Literal, and control flow refactoring
Critical bug fix - Account refresh for accurate buying power
Rust check explanation - Clear analysis of the external CVSS 4.0 issue

The explanation of the Rust check failures is particularly helpful - it's clear this is a temporary external dependency issue with cargo-audit not yet supporting CVSS 4.0, rather than a code problem. With all Python tests passing at 94% coverage and all Rust tests (82/82) successful, the changes look solid.

I'll trigger a fresh review now!

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@forstmeier forstmeier changed the title Add code quality and security audit tooling Add YAML linting and Python complexity tooling Feb 16, 2026
Copy link
Copy Markdown
Contributor

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

Copilot reviewed 16 out of 19 changed files in this pull request and generated 1 comment.


💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.

- run_rust_code_checks
- run_python_code_checks
if: ${{ always() && (needs.run_rust_code_checks.result == 'success' || needs.run_python_code_checks.result == 'success') }}
if: always() && (needs.run_rust_code_checks.result == 'success' && needs.run_python_code_checks.result == 'success')
Copy link

Copilot AI Feb 16, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The conditional logic has been changed from OR to AND. Previously, coverage would be uploaded if EITHER Rust OR Python checks succeeded. Now it requires BOTH to succeed. This is a significant behavioral change that means if either language's tests fail, no coverage will be uploaded at all. This may be intentional to ensure complete coverage data, but it's a breaking change in the workflow behavior that should be highlighted.

Suggested change
if: always() && (needs.run_rust_code_checks.result == 'success' && needs.run_python_code_checks.result == 'success')
if: always() && (needs.run_rust_code_checks.result == 'success' || needs.run_python_code_checks.result == 'success')

Copilot uses AI. Check for mistakes.
@greptile-apps
Copy link
Copy Markdown
Contributor

greptile-apps Bot commented Feb 16, 2026

Greptile Summary

This PR adds YAML linting tooling (yamllint, mask commands, pre-commit hook, CI job) and makes YAML files compliant by adding --- document start markers and a yamllint config. It also includes significant refactors of server.py (extracting position management and data preparation into helper functions with a new PortfolioDataError exception) and tide_data.py (decomposing get_batches into smaller methods with a DataType Literal alias), Rust dependency bumps (polars 0.53, testcontainers 0.26), and a pip virtualenv enforcement config.

  • The coverage upload CI condition was changed from || to &&, which will skip coverage uploads if only one of the two test suites passes — this may not be the intended behavior
  • The server.py refactor introduces a new get_account() call after closing positions; if it fails, the endpoint returns 500 with positions already closed, leaving the portfolio partially rebalanced
  • The tide_data.py refactor is clean and preserves existing logic while improving readability through method extraction
  • YAML tooling setup is straightforward and follows the established pattern for markdown linting

Confidence Score: 3/5

  • The YAML tooling additions are safe, but the CI condition change and portfolio server refactor introduce behavioral changes that warrant review.
  • Score reflects two behavioral concerns: (1) the OR-to-AND change in the coverage upload condition may inadvertently prevent partial coverage uploads, and (2) the new account refresh in the portfolio rebalance flow can abort after positions have already been closed, leaving a partially rebalanced state. The remaining changes (YAML tooling, tide_data refactor, dependency bumps) are low-risk.
  • Pay close attention to .github/workflows/run_code_checks.yaml (coverage upload logic change) and applications/portfoliomanager/src/portfoliomanager/server.py (partial rebalance failure mode).

Important Files Changed

Filename Overview
.github/workflows/run_code_checks.yaml Added YAML code checks job; coverage upload condition changed from OR to AND, potentially losing partial coverage data on single-suite failures.
applications/portfoliomanager/src/portfoliomanager/server.py Major refactor extracting position management and data preparation into helper functions; introduces a new account refresh after closing positions that can abort the rebalance mid-way.
applications/equitypricemodel/src/equitypricemodel/tide_data.py Refactored get_batches into smaller methods; added DataType Literal type alias. Logic preserved correctly.
.yamllint New yamllint config with sensible defaults: 120 char line length, truthy values for GitHub Actions, ignores build dirs.
.pre-commit-config.yaml Added YAML pre-commit hook using mask development yaml all.
maskfile.md Added yaml lint and yaml all mask commands for YAML validation.
applications/datamanager/Cargo.toml Bumped polars from 0.51.0 to 0.53.0, testcontainers from 0.23 to 0.26, and testcontainers-modules from 0.11 to 0.14.

Last reviewed commit: 167a050

Copy link
Copy Markdown
Contributor

@greptile-apps greptile-apps Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

19 files reviewed, 2 comments

Edit Code Review Agent Settings | Greptile

- run_rust_code_checks
- run_python_code_checks
if: ${{ always() && (needs.run_rust_code_checks.result == 'success' || needs.run_python_code_checks.result == 'success') }}
if: always() && (needs.run_rust_code_checks.result == 'success' && needs.run_python_code_checks.result == 'success')
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Coverage upload condition changed from OR to AND

The original condition used ||, meaning coverage was uploaded if either Rust or Python checks succeeded. The new condition uses &&, requiring both to succeed. This means if one suite has a transient failure, neither suite's coverage gets uploaded — potentially losing valid coverage data from the passing suite. Was this intentional?

Suggested change
if: always() && (needs.run_rust_code_checks.result == 'success' && needs.run_python_code_checks.result == 'success')
if: always() && (needs.run_rust_code_checks.result == 'success' || needs.run_python_code_checks.result == 'success')
Prompt To Fix With AI
This is a comment left during a code review.
Path: .github/workflows/run_code_checks.yaml
Line: 102:102

Comment:
**Coverage upload condition changed from OR to AND**

The original condition used `||`, meaning coverage was uploaded if *either* Rust or Python checks succeeded. The new condition uses `&&`, requiring *both* to succeed. This means if one suite has a transient failure, neither suite's coverage gets uploaded — potentially losing valid coverage data from the passing suite. Was this intentional?

```suggestion
    if: always() && (needs.run_rust_code_checks.result == 'success' || needs.run_python_code_checks.result == 'success')
```

How can I resolve this? If you propose a fix, please make it concise.

Comment on lines +404 to +410
try:
account = alpaca_client.get_account()
except Exception as e:
logger.exception(
"Failed to refresh account after closing positions", error=str(e)
)
return Response(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR)
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Account refresh failure aborts after positions already closed

If get_account() fails here, the endpoint returns 500 but positions have already been closed by _close_all_positions above. This leaves the portfolio in a partially rebalanced state (old positions closed, no new positions opened). In the original code, the buying power from the initial account fetch was reused, so the flow would continue and attempt to open new positions. Consider falling back to account.buying_power from the _prepare_portfolio_data result instead of aborting entirely.

Prompt To Fix With AI
This is a comment left during a code review.
Path: applications/portfoliomanager/src/portfoliomanager/server.py
Line: 404:410

Comment:
**Account refresh failure aborts after positions already closed**

If `get_account()` fails here, the endpoint returns 500 but positions have already been closed by `_close_all_positions` above. This leaves the portfolio in a partially rebalanced state (old positions closed, no new positions opened). In the original code, the buying power from the initial account fetch was reused, so the flow would continue and attempt to open new positions. Consider falling back to `account.buying_power` from the `_prepare_portfolio_data` result instead of aborting entirely.

How can I resolve this? If you propose a fix, please make it concise.

Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
applications/equitypricemodel/src/equitypricemodel/tide_data.py (1)

413-440: ⚠️ Potential issue | 🟡 Minor

Log messages should use sentence case, not snake_case identifiers.

The new log event names ("partitioning_data_by_ticker", "batch_creation_started", "batch_progress", "sample_collection_complete") use snake_case, while the coding guidelines and existing code in this file (e.g., line 203: "Invalid values in continuous column before scaling") use sentence-case short sentences.

This also applies to "batching_samples" (line 446) and "batch_creation_complete" (line 474) in _create_batches.

Proposed fix (this method)
-        logger.info("partitioning_data_by_ticker")
+        logger.info("Partitioning data by ticker")
         ticker_groups = batch_data.sort("time_idx").partition_by("ticker", as_dict=True)
         total_tickers = len(ticker_groups)
-        logger.info("batch_creation_started", total_tickers=total_tickers)
+        logger.info("Batch creation started", total_tickers=total_tickers)
 
         for idx, ticker_df in enumerate(ticker_groups.values()):
             if idx % 25 == 0:
                 logger.info(
-                    "batch_progress", ticker_idx=idx, total_tickers=total_tickers
+                    "Batch progress", ticker_idx=idx, total_tickers=total_tickers
                 )
 ...
         logger.info("sample_collection_complete", total_samples=len(samples))
+        logger.info("Sample collection complete", total_samples=len(samples))

As per coding guidelines, "Structured log messages should be short sentences with sentence case (e.g., 'Starting data sync' not 'STARTING DATA SYNC')".

🤖 Fix all issues with AI agents
In `@applications/equitypricemodel/src/equitypricemodel/tide_data.py`:
- Around line 492-494: The method get_batches currently assigns batch data to an
instance attribute self.batch_data but only uses it within the method; change
the assignment in get_batches to a local variable (batch_data =
self._select_batch_data(...)), replace any subsequent references to
self.batch_data inside get_batches with batch_data (e.g., where used after the
call to _select_batch_data and at the later access points), and remove any other
writes/reads to self.batch_data so the mutable data does not persist on the
object; ensure no other methods rely on self.batch_data before committing the
change.

In `@applications/portfoliomanager/src/portfoliomanager/server.py`:
- Around line 185-194: The current code calls alpaca_client.get_account() after
every successful open (try/except around get_account()) which can hit API rate
limits; change the logic to refresh the real account only periodically instead
of per-position by adding a refresh policy in the scope that tracks opens (e.g.,
a counter or timestamp) and only invokes alpaca_client.get_account() when the
counter reaches N or a time interval has elapsed (or when remaining_buying_power
falls below a safety threshold); keep the existing except branch that falls back
to subtracting dollar_amount from remaining_buying_power to preserve safety when
a refresh fails, and update places that reference remaining_buying_power to use
the periodically refreshed value.
- Line 415: The comprehension creating failed_trades uses a short variable name
`r`; rename it to a full-word variable `result` for clarity. Update the list
comprehension in the scope where failed_trades is defined (which iterates over
all_results) to use `result` instead of `r`, and adjust any nearby usages
expecting `r` accordingly (e.g., any later references that assumed the short
name). Ensure the file's imports or linting pass after the rename.

Comment on lines +492 to +494
self.batch_data = self._select_batch_data(
data_type, validation_split, input_length + output_length
)
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🧹 Nitpick | 🔵 Trivial

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check all references to batch_data in the codebase
rg -n 'batch_data' --type=py -C 3

Repository: oscmcompany/fund

Length of output: 7676


🏁 Script executed:

#!/bin/bash
# Check the entire get_batches method and look for any other methods that might access self.batch_data
rg -n 'def get_batches\|def ' applications/equitypricemodel/src/equitypricemodel/tide_data.py | head -20

Repository: oscmcompany/fund

Length of output: 42


🏁 Script executed:

#!/bin/bash
# Search for any direct access to self.batch_data outside the get_batches method
rg -n 'self\.batch_data' applications/equitypricemodel/src/equitypricemodel/tide_data.py

Repository: oscmcompany/fund

Length of output: 271


🏁 Script executed:

#!/bin/bash
# Read the full get_batches method to see the exact context (lines 477-502 as mentioned in the comment)
sed -n '470,510p' applications/equitypricemodel/src/equitypricemodel/tide_data.py | cat -n

Repository: oscmcompany/fund

Length of output: 1775


Make batch_data a local variable to eliminate unnecessary instance state.

The self.batch_data attribute is created at line 492, used immediately at lines 496 and 499 within get_batches, and never accessed outside this method. Storing it as instance state leaves mutable data lingering on the object after the method returns. If get_batches is called multiple times in sequence, self.batch_data will retain data from the previous call, creating a subtle source of bugs.

Since batch_data is only needed within this method, refactor it to a local variable:

Proposed fix
-        self.batch_data = self._select_batch_data(
+        batch_data = self._select_batch_data(
             data_type, validation_split, input_length + output_length
         )

-        self._validate_date_range(self.batch_data, input_length + output_length)
+        self._validate_date_range(batch_data, input_length + output_length)

         samples = self._collect_all_samples(
-            self.batch_data, input_length, output_length, data_type
+            batch_data, input_length, output_length, data_type
         )
🤖 Prompt for AI Agents
In `@applications/equitypricemodel/src/equitypricemodel/tide_data.py` around lines
492 - 494, The method get_batches currently assigns batch data to an instance
attribute self.batch_data but only uses it within the method; change the
assignment in get_batches to a local variable (batch_data =
self._select_batch_data(...)), replace any subsequent references to
self.batch_data inside get_batches with batch_data (e.g., where used after the
call to _select_batch_data and at the later access points), and remove any other
writes/reads to self.batch_data so the mutable data does not persist on the
object; ensure no other methods rely on self.batch_data before committing the
change.

Comment on lines +185 to +194
try:
account = alpaca_client.get_account()
remaining_buying_power = account.buying_power
except Exception:
logger.exception(
"Failed to refresh buying power from account, using estimate",
ticker=ticker,
deducting=dollar_amount,
)
remaining_buying_power -= dollar_amount
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🧹 Nitpick | 🔵 Trivial

Per-position account refresh is an API call per successful open.

Calling get_account() after every successful open is the safest approach for accurate buying power but may be costly under rate limits if many positions are opened. The fallback estimation on failure (Line 194) is a good safety net. Consider whether a batch refresh (e.g., every N positions) would be sufficient if rate-limiting becomes an issue.

🤖 Prompt for AI Agents
In `@applications/portfoliomanager/src/portfoliomanager/server.py` around lines
185 - 194, The current code calls alpaca_client.get_account() after every
successful open (try/except around get_account()) which can hit API rate limits;
change the logic to refresh the real account only periodically instead of
per-position by adding a refresh policy in the scope that tracks opens (e.g., a
counter or timestamp) and only invokes alpaca_client.get_account() when the
counter reaches N or a time interval has elapsed (or when remaining_buying_power
falls below a safety threshold); keep the existing except branch that falls back
to subtracting dollar_amount from remaining_buying_power to preserve safety when
a refresh fails, and update places that reference remaining_buying_power to use
the periodically refreshed value.

open_results, _, _ = _open_all_positions(open_positions, account.buying_power)

all_results = close_results + open_results
failed_trades = [r for r in all_results if r["status"] == "failed"]
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Use full word variable name instead of r.

Per coding guidelines, use full word variables. Rename r to result.

Proposed fix
-    failed_trades = [r for r in all_results if r["status"] == "failed"]
+    failed_trades = [result for result in all_results if result["status"] == "failed"]

As per coding guidelines, **/*.{py,rs}: "Use full word variables in code whenever possible".

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
failed_trades = [r for r in all_results if r["status"] == "failed"]
failed_trades = [result for result in all_results if result["status"] == "failed"]
🤖 Prompt for AI Agents
In `@applications/portfoliomanager/src/portfoliomanager/server.py` at line 415,
The comprehension creating failed_trades uses a short variable name `r`; rename
it to a full-word variable `result` for clarity. Update the list comprehension
in the scope where failed_trades is defined (which iterates over all_results) to
use `result` instead of `r`, and adjust any nearby usages expecting `r`
accordingly (e.g., any later references that assumed the short name). Ensure the
file's imports or linting pass after the rename.

Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
maskfile.md (1)

585-606: ⚠️ Potential issue | 🟠 Major

Add complexity to the Python “all” workflow.

mask development python all should be comprehensive, but it skips the new complexity task. Include it so the “all” command remains the canonical full check.

Proposed change
@@
 mask development python lint
 
 mask development python type-check
 
 mask development python dead-code
 
+mask development python complexity
+
 mask development python test

Based on learnings "Use 'mask development python all' for comprehensive Python checks".

@greptile-apps
Copy link
Copy Markdown
Contributor

greptile-apps Bot commented Feb 16, 2026

Greptile Summary

Added YAML linting infrastructure with yamllint and refactored complex Python functions to meet code complexity standards.

Major changes:

  • Added .yamllint configuration with sensible defaults (120 char line length, build/venv exclusions)
  • Integrated YAML checks into pre-commit hooks and GitHub Actions workflow
  • Added YAML document start markers (---) to all YAML files per best practices
  • Refactored tide_data.py::get_batches into helper methods (_select_batch_data, _validate_date_range, _collect_samples_from_ticker, _collect_all_samples, _create_batches) to reduce cyclomatic complexity
  • Refactored server.py portfolio rebalancing with _prepare_portfolio_data helper and added PortfolioDataError exception
  • Added xenon for Python complexity analysis and cargo-audit to Flox manifest
  • Reformatted long conditionals in workflow files for readability
  • Updated project description in pyproject.toml

The coverage upload condition was changed from OR to AND logic - verify this is intentional as it may prevent valid coverage uploads when one test suite fails.

Confidence Score: 4/5

  • This PR is safe to merge with one verification needed regarding the coverage upload logic change
  • The PR successfully adds YAML linting infrastructure and reduces Python code complexity through well-structured refactoring. The code changes are clean, follow best practices, and maintain backward compatibility. However, the coverage upload condition change from OR to AND logic (line 102 in run_code_checks.yaml) requires verification as it was previously flagged and may prevent valid coverage data from being uploaded when only one test suite fails
  • Verify the coverage upload condition change in .github/workflows/run_code_checks.yaml is intentional

Important Files Changed

Filename Overview
.yamllint Added YAML linting configuration with sensible defaults and directory exclusions
.github/workflows/run_code_checks.yaml Added YAML job and changed coverage upload from OR to AND logic - may prevent valid coverage uploads
maskfile.md Added YAML linting commands with proper structure
applications/equitypricemodel/src/equitypricemodel/tide_data.py Refactored get_batches method into smaller helper functions to reduce complexity
applications/portfoliomanager/src/portfoliomanager/server.py Refactored portfolio rebalancing logic to reduce complexity with helper functions

Last reviewed commit: 34a3426

Copy link
Copy Markdown
Contributor

@greptile-apps greptile-apps Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

19 files reviewed, 1 comment

Edit Code Review Agent Settings | Greptile

- run_rust_code_checks
- run_python_code_checks
if: ${{ always() && (needs.run_rust_code_checks.result == 'success' || needs.run_python_code_checks.result == 'success') }}
if: always() && (needs.run_rust_code_checks.result == 'success' && needs.run_python_code_checks.result == 'success')
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Changing || to && means coverage only uploads when both Rust and Python checks pass. If one suite has a transient failure, valid coverage data from the passing suite won't be uploaded. This was flagged in previous threads - was this change intentional?

Prompt To Fix With AI
This is a comment left during a code review.
Path: .github/workflows/run_code_checks.yaml
Line: 102:102

Comment:
Changing `||` to `&&` means coverage only uploads when both Rust and Python checks pass. If one suite has a transient failure, valid coverage data from the passing suite won't be uploaded. This was flagged in previous threads - was this change intentional?

How can I resolve this? If you propose a fix, please make it concise.

@forstmeier
Copy link
Copy Markdown
Collaborator Author

This got all fucked up - I'm closing it and adding a subset of these changes in a separate pull request.

@forstmeier forstmeier closed this Feb 16, 2026
@github-project-automation github-project-automation Bot moved this from In Progress to Done in Overview Feb 16, 2026
@coveralls
Copy link
Copy Markdown
Collaborator

coveralls commented Feb 16, 2026

Coverage Status

coverage: 86.222% (+0.01%) from 86.212%
when pulling 34a3426 on development-tooling-expansion
into 17d5629 on master.

@forstmeier forstmeier deleted the development-tooling-expansion branch February 24, 2026 01:55
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

markdown Markdown code updates yaml YAML code updates

Projects

Archived in project

Development

Successfully merging this pull request may close these issues.

4 participants