Expand portfolio manager rebalancing logic #786
forstmeier merged 17 commits into statistical-arbitrage-phase-one from …
Conversation
CodeRabbit review skipped: auto reviews are disabled on base/target branches other than the default branch (configuration: Organization UI, review profile ASSERTIVE).
Greptile Summary

This PR advances the portfolio manager by wiring in smarter rebalancing logic: it introduces market regime classification (regime.py), per-ticker beta estimation against SPY (beta.py), beta-neutral weight optimization with an exposure scale (risk_management.py), and z-score-driven hold/close evaluation of prior pairs (server.py). Key changes are summarized per file below.
The main concern is in beta.py, where `_MINIMUM_RETURN_COUNT = 2` admits statistically unreliable two-point betas into the optimizer (see Comments Outside Diff below).

Confidence Score: 3/5
| Filename | Overview |
|---|---|
| applications/portfoliomanager/src/portfoliomanager/beta.py | New module computing per-ticker market betas via OLS regression against SPY. Has two data-guard issues: _MINIMUM_RETURN_COUNT = 2 allows statistically unreliable 2-point betas into the optimizer, and the ticker guard is inconsistent with the SPY guard. |
| applications/portfoliomanager/src/portfoliomanager/regime.py | New module classifying market regime via realized volatility and lag-1 autocorrelation. Logic is sound and tests cover edge cases; the minimum return count of 3 is low but acceptable given the 60-day window in normal operation. |
| applications/portfoliomanager/src/portfoliomanager/risk_management.py | Adds beta-neutral weight optimization via SLSQP and an exposure_scale multiplier. Optimizer falls back to vol-parity weights on non-convergence. Logic is sound with appropriate bounds preventing division by zero in the objective function. |
| applications/portfoliomanager/src/portfoliomanager/server.py | Major refactor: AlpacaClient moved to FastAPI lifespan, get_prior_portfolio now returns full DataFrame, new evaluate_prior_pairs function drives hold/close decisions via z-score, and regime/beta wired into position sizing. Previous review issues are addressed. |
| applications/datamanager/src/storage.rs | Removes the backwards-compatible dual-query fallback (with/without action column), consolidates into a single query that always selects pair_id. Clean simplification. |
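The beta-neutral optimization in risk_management.py is only summarized in the table above. A minimal sketch of the SLSQP pattern it describes — equality constraint on total weight, strictly positive bounds, vol-parity fallback on non-convergence — with every function and variable name being my assumption, not the module's actual code:

```python
import numpy as np
from scipy.optimize import minimize


def beta_neutral_weights(
    parity_weights: np.ndarray,
    betas: np.ndarray,
    exposure_scale: float = 1.0,
) -> np.ndarray:
    """Illustrative sketch: tilt vol-parity weights toward zero portfolio beta."""
    target_total = parity_weights.sum()

    def objective(weights: np.ndarray) -> float:
        # Bounds keep every weight strictly positive, so the normalizing
        # sum inside the objective can never be zero.
        return float(abs(weights @ betas / weights.sum()))

    result = minimize(
        objective,
        parity_weights,
        method="SLSQP",
        bounds=[(1e-6, 1.0)] * len(parity_weights),
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - target_total}],
    )
    if not result.success or np.isclose(result.x.sum(), 0.0):
        # Fall back to volatility-parity weights on non-convergence.
        return parity_weights * exposure_scale
    return result.x / result.x.sum() * target_total * exposure_scale
```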
Comments Outside Diff (2)
- applications/portfoliomanager/src/portfoliomanager/beta.py, line 8 (link): `_MINIMUM_RETURN_COUNT = 2` allows statistically unreliable beta estimates.

  With `_MINIMUM_RETURN_COUNT = 2`, `scipy.stats.linregress` can be called with as few as 2 data points (enforced by `count < _MINIMUM_RETURN_COUNT` at line 39). OLS regression through exactly 2 points always yields a perfect fit (R² = 1.0, zero residual variance), so the resulting beta depends entirely on two arbitrary returns and is statistically meaningless. This unreliable beta feeds directly into `_apply_beta_neutral_weights`, where it drives real position-size adjustments via the SLSQP optimizer.

  This is the same concern that led to raising `_MINIMUM_PAIR_PRICE_ROWS` from 2 to 30 for z-score computation. For beta estimation via OLS, a practically sound minimum is also around 30 returns, so there are enough residual degrees of freedom.
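For context, a sketch of beta estimation via `scipy.stats.linregress` with the 30-return floor this comment recommends; the real module's structure is not shown in this view, so the function shape is assumed:

```python
import numpy as np
from scipy.stats import linregress

_MINIMUM_RETURN_COUNT = 30  # the floor the comment recommends (the PR uses 2)


def estimate_beta(ticker_close: np.ndarray, spy_close: np.ndarray) -> float | None:
    """OLS beta of ticker log returns against SPY log returns (illustrative)."""
    ticker_returns = np.diff(np.log(ticker_close))
    spy_returns = np.diff(np.log(spy_close))
    count = min(len(ticker_returns), len(spy_returns))
    if count < _MINIMUM_RETURN_COUNT:
        # Two points fit a line exactly (R^2 = 1.0, zero residual variance),
        # so a 2-point beta is noise; require residual degrees of freedom.
        return None
    return float(linregress(spy_returns[-count:], ticker_returns[-count:]).slope)
```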
- applications/portfoliomanager/src/portfoliomanager/beta.py, line 34 (link): Inconsistent minimum price guard for ticker vs SPY.

  The SPY guard is `len(spy_close) < _MINIMUM_RETURN_COUNT + 1`, correctly expressing "we need at least N+1 prices to get N returns". The ticker guard here is `len(ticker_close) < _MINIMUM_RETURN_COUNT`, which means "at least N prices (for N-1 returns)". This asymmetry doesn't cause a runtime bug, because the downstream `count < _MINIMUM_RETURN_COUNT` check at line 39 catches the case where only 1 ticker return is available, but it makes the intent of the guard harder to reason about.

  For consistency and clarity, the ticker guard should use the same `+ 1` idiom, as sketched below:
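A reconstruction of the suggested guard, with the surrounding code assumed:

```python
# Current ticker guard: "at least N prices" (i.e. N-1 returns).
if len(ticker_close) < _MINIMUM_RETURN_COUNT:
    ...

# Suggested: "at least N+1 prices to get N returns", matching the SPY guard.
if len(ticker_close) < _MINIMUM_RETURN_COUNT + 1:
    ...
```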
Last reviewed commit: 8a86b1c
Pull request overview
Expands the portfolio manager’s rebalancing workflow to incorporate prior pair “hold vs close” decisions based on spread z-scores, while updating the data manager’s portfolio schema/handlers to include pair_id and return empty arrays for “no data” cases.
Changes:
- Add prior-portfolio retrieval + per-pair evaluation (`evaluate_prior_pairs`) and exclude "held" tickers from the close list.
- Require and propagate `pair_id` through portfolio sizing, validation, persistence, and retrieval (Python + Rust).
- Update data manager `/portfolios` behavior to return `200 OK` with `[]` when no portfolio data exists; refactor/expand related unit tests.
Reviewed changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated 3 comments.
Summary per file:
| File | Description |
|---|---|
| applications/portfoliomanager/src/portfoliomanager/server.py | Adds get_prior_portfolio, evaluate_prior_pairs, and integrates held-ticker logic into rebalancing. |
| applications/portfoliomanager/src/portfoliomanager/risk_management.py | Adds z-score thresholds and includes pair_id in generated position rows. |
| applications/portfoliomanager/src/portfoliomanager/portfolio_schema.py | Makes pair_id required in portfolio_schema. |
| applications/portfoliomanager/tests/test_portfolio_server.py | New unit tests for prior portfolio fetching, prior pair evaluation, and position open/close selection. |
| applications/portfoliomanager/tests/test_portfolio_schema.py | Updates schema tests for required pair_id and revised expected shapes. |
| applications/datamanager/src/data.rs | Extends Portfolio struct and dataframe creation to include pair_id. |
| applications/datamanager/src/storage.rs | Updates portfolio query SQL/mapping to include pair_id and removes legacy fallback query path. |
| applications/datamanager/src/portfolios.rs | Returns 200 OK + [] for “no portfolio data/files” cases. |
| applications/datamanager/tests/test_data.rs | Updates tests for pair_id column and dataframe width expectations. |
| applications/datamanager/tests/test_handlers.rs | Updates handler tests for pair_id and new 200 OK empty-array responses. |
| applications/datamanager/tests/test_storage.rs | Updates sample portfolio to include pair_id and removes test for deprecated fallback behavior. |
Address PR #786 feedback: stop-loss boundary, test determinism, and edge-case coverage

- Fix stop-loss boundary condition in evaluate_prior_pairs: the inclusive upper bound (abs_z <= Z_SCORE_STOP_LOSS) incorrectly held pairs at exactly z=4.0 rather than triggering stop-loss. Changed to exclusive (abs_z < Z_SCORE_STOP_LOSS) so z=4.0 closes the position as intended.
- Sort pair_ids before iteration in evaluate_prior_pairs to make the evaluation order deterministic. Fixes a non-deterministic test that relied on unique().to_list() ordering.
- Import _PRIOR_PORTFOLIO_SCHEMA from server.py in test_portfolio_server.py instead of duplicating the definition, preventing schema drift between production code and tests.
- Add edge-case tests for non-positive prices and NaN z-score paths in evaluate_prior_pairs (both previously untested branches).
- Add test for missing pair_id column failing portfolio_schema validation, locking in the new required field.
- Remove module-level ValueError guard that crashed pytest collection when ALPACA_API_KEY_ID and ALPACA_API_SECRET env vars were absent (root cause of CI failure).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
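A minimal sketch of the corrected boundary; only Z_SCORE_STOP_LOSS appears in the commit, so the exit threshold name and value are hypothetical:

```python
Z_SCORE_STOP_LOSS = 4.0
Z_SCORE_EXIT = 0.5  # hypothetical exit threshold; name and value assumed


def should_hold_pair(abs_z: float) -> bool:
    # Exclusive upper bound: abs_z == 4.0 now triggers the stop-loss (close)
    # instead of being held, matching the commit's intent.
    return Z_SCORE_EXIT < abs_z < Z_SCORE_STOP_LOSS
```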
Fix CI test collection failure and warning in portfoliomanager tests

Root cause of CI failure: server.py instantiates AlpacaClient at module level. The Alpaca SDK's TradingClient.__init__ calls _validate_credentials, which raises ValueError when empty strings are supplied (as in CI, where no credentials are set). This crashed pytest collection on import.

Fix: Add conftest.py that patches TradingClient and StockHistoricalDataClient before server.py is imported during test collection, allowing AlpacaClient to initialize without real credentials. The patch targets the underlying SDK classes rather than AlpacaClient itself, so test_alpaca_client.py (which tests AlpacaClient directly with its own per-test patches) is unaffected.

Also fix test_evaluate_prior_pairs_skips_pair_with_non_positive_prices: the zero-price row was placed at i=0, but pair_price_matrix.tail(60) excludes the first 5 rows of a 65-row dataset, so the non-positive price check was never reached. The test fell through to compute_spread_zscore on nearly identical data, producing scipy/numpy warnings. Moving the zero to the last row (i=64) ensures it survives the tail cut and the guard triggers correctly.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
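A sketch of the conftest.py approach this commit describes; the SDK import paths follow the usual alpaca-py layout and are my assumption, not taken from the diff:

```python
# conftest.py -- applied before server.py is imported at collection time.
from unittest.mock import MagicMock, patch

# Patch the SDK classes themselves (not AlpacaClient), so tests that patch
# AlpacaClient directly with their own per-test patches keep working.
patch("alpaca.trading.client.TradingClient", MagicMock()).start()
patch("alpaca.data.historical.StockHistoricalDataClient", MagicMock()).start()
```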
Restore fail-fast credential check via FastAPI lifespan

The module-level ValueError guard was removed in a prior commit to fix CI test collection failures, but that silently broke the fail-fast guarantee: a deployment with missing Alpaca credentials would start up cleanly, pass health checks, and only fail when a trade was actually attempted.

Fix: move AlpacaClient instantiation and credential validation into a FastAPI lifespan context manager (_lifespan). The lifespan runs at server startup before any requests are accepted, so:

- A misconfigured deployment still fails immediately at boot with a clear error message, before any capital is at risk.
- Tests can import server.py without real credentials, because the lifespan is not invoked during pytest collection or unit test execution.

The AlpacaClient instance is stored in app.state and retrieved via a local alias at the top of create_portfolio, following FastAPI's recommended pattern for app-level state instead of module-level globals.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
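A minimal sketch of the lifespan pattern described above; AlpacaClient is stubbed here, and the route path and handler body are assumptions:

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI, Request


class AlpacaClient:  # stand-in for the project's client (assumed)
    ...


@asynccontextmanager
async def _lifespan(app: FastAPI):
    # Runs at startup, before any request is accepted: a deployment with
    # missing Alpaca credentials fails at boot, not at the first trade.
    app.state.alpaca_client = AlpacaClient()  # assumed to validate credentials
    yield


app = FastAPI(lifespan=_lifespan)


@app.post("/portfolios")  # route path assumed for illustration
async def create_portfolio(request: Request):
    alpaca_client = request.app.state.alpaca_client  # local alias, per commit
    ...
```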
Address PR #786 follow-up feedback: schema enforcement and boundary test

- Enforce _PRIOR_PORTFOLIO_SCHEMA when constructing the prior portfolio DataFrame from the JSON response, keeping both branches consistent.
- Add a boundary test confirming abs_z == Z_SCORE_STOP_LOSS (4.0) triggers stop-loss under the exclusive upper-bound condition.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Catch PolarsError in get_prior_portfolio after schema enforcement

Adding schema= to pl.DataFrame can raise pl.exceptions.SchemaError or pl.exceptions.InvalidOperationError on type coercion failures, neither of which inherits from ValueError. Broaden the except clause to include pl.exceptions.PolarsError so type mismatches fall back to an empty DataFrame rather than propagating as a 500 from create_portfolio.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
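The broadened clause presumably looks something like this sketch; the function shape and the `payload` name are assumptions, while _PRIOR_PORTFOLIO_SCHEMA is the schema named in the PR:

```python
import polars as pl


def build_prior_portfolio(payload: list[dict]) -> pl.DataFrame:
    try:
        return pl.DataFrame(payload, schema=_PRIOR_PORTFOLIO_SCHEMA)
    except (ValueError, pl.exceptions.PolarsError):
        # SchemaError and InvalidOperationError inherit from PolarsError,
        # not ValueError; fall back instead of surfacing a 500.
        return pl.DataFrame(schema=_PRIOR_PORTFOLIO_SCHEMA)
```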
Specify aggregate_function in pivot to handle duplicate timestamps

Without aggregate_function, Polars behavior on duplicate (ticker, timestamp) pairs is version-dependent: some versions silently take the first value, others raise a DuplicateError. Specifying "last" makes the deduplication strategy explicit and version-stable.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
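A sketch of the pivot call with the explicit aggregation; the column names are assumptions, and the `on` keyword is the newer Polars spelling (older releases call it `columns`):

```python
import polars as pl

prices = pl.DataFrame(
    {
        "timestamp": [1, 1, 2],
        "ticker": ["AAPL", "AAPL", "AAPL"],  # duplicate (ticker, timestamp)
        "close_price": [100.0, 101.0, 102.0],
    }
)

# Explicit aggregate_function: without it, duplicate (ticker, timestamp)
# pairs either take-first or raise DuplicateError depending on version.
price_matrix = prices.pivot(
    on="ticker",
    index="timestamp",
    values="close_price",
    aggregate_function="last",
)
```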
Raise _MINIMUM_PAIR_PRICE_ROWS to 30 for statistically stable z-scores

With only 2 data points, OLS regression yields a trivially perfect fit (zero residual variance), producing z-scores that pass the NaN guard but lack statistical reliability. 30 rows is the minimum for stable OLS residual standard deviation estimates.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Fix off-by-one guard and dead code in classify_regime; annotate compute_portfolio_beta

Root cause: The early return in classify_regime checked len(spy_close) < _MINIMUM_RETURN_COUNT (i.e., < 2), so execution continued with exactly 2 prices producing 1 return. np.std([x], ddof=1) on a single-element array returns NaN, which silently propagated through volatility comparisons. The function returned the correct fallback by coincidence (NaN comparisons always evaluate False in Python), but relied on implicit NaN semantics rather than an explicit guard.

Fix: Move returns = np.diff(np.log(spy_close)) before the early return and check len(returns) < _MINIMUM_RETURN_COUNT instead, ensuring at least 2 returns are present before computing the standard deviation.

Secondary fix: Remove the now-redundant inner length check on returns before calling np.corrcoef. After the outer guard, len(returns) >= 2 is guaranteed, making the else: autocorrelation = 0.0 branch dead code.

Also added a regression test for the exactly-2-prices edge case and an inline comment on compute_portfolio_beta noting its role in test validation and its intent for future portfolio-level beta reporting.

Co-Authored-By: Claude <noreply@anthropic.com>
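A sketch of the reordered guard this commit describes; everything beyond the commit message (thresholds, return shape) is illustrative only:

```python
import numpy as np

_MINIMUM_RETURN_COUNT = 2  # module constant named in the commit


def classify_regime(spy_close: np.ndarray) -> tuple[str, float]:
    # Compute returns *before* the guard so the length check counts returns,
    # not prices -- the off-by-one this commit fixes. (A follow-up commit
    # below tightens the comparison to "+ 1".)
    returns = np.diff(np.log(spy_close))
    if len(returns) < _MINIMUM_RETURN_COUNT:
        return "trending", 0.0  # conservative fallback noted in the PR
    realized_volatility = float(np.std(returns, ddof=1))
    autocorrelation = float(np.corrcoef(returns[:-1], returns[1:])[0, 1])
    # Illustrative thresholds only; the real cutoffs are not visible here.
    if realized_volatility > 0.02 or autocorrelation >= 0.0:
        return "trending", abs(autocorrelation)
    return "mean_reverting", abs(autocorrelation)
```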
Address PR #787 review feedback: NaN guard, zero-weight guard, confidence logging

Three bugs/gaps addressed from code review:

1. regime.py: Fix silent NaN propagation in classify_regime. The guard `len(returns) < _MINIMUM_RETURN_COUNT` (min=2) allowed len==2 through, causing np.corrcoef to receive single-element arrays and return NaN. NaN comparisons silently evaluate to False/0.0, dropping the autocorrelation signal. Fix: tighten the guard to `< _MINIMUM_RETURN_COUNT + 1` (requires 3+ returns). _MINIMUM_RETURN_COUNT stays at 2 since beta.py uses it correctly for linregress. Added a test for the len==2 boundary case and a comment documenting the intentional conservative default (trending/0.0 halves exposure).
2. risk_management.py: Add a zero-sum guard before normalizing adjusted_weights. If the optimizer returns all-zero weights, dividing by sum() would produce NaN. Falls back to volatility_parity_weights in that case.
3. server.py: Log regime_confidence alongside regime_state to surface the computed value. Added a comment noting the binary exposure_scale is intentional for this initial implementation, with confidence reserved for future graduated scaling.

Co-Authored-By: Claude <noreply@anthropic.com>
Address PR #787 review feedback: beta guard off-by-one, float equality, dead code

Three issues flagged by greptile-apps after the previous commit:

1. beta.py line 20: The guard compared len(spy_close) (a price count) against _MINIMUM_RETURN_COUNT (a returns count), the identical off-by-one already fixed in regime.py. Changed to _MINIMUM_RETURN_COUNT + 1 so the guard correctly requires at least two prices to produce one return.
2. risk_management.py line 107: Replaced the exact float equality adjusted_weights.sum() == 0.0 with np.isclose() for robustness.
3. risk_management.py lines 40-41: Removed the unreachable if total == 0.0 guard inside the SLSQP objective. SLSQP enforces sum(w) = target_total throughout optimization and bounds keep all weights strictly positive, so total can never be zero.

Co-Authored-By: Claude <noreply@anthropic.com>
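A compact before/after of the float-equality fix (item 2); the surrounding fallback logic is assumed:

```python
import numpy as np

adjusted_weights = np.array([0.0, 0.0, 0.0])

# Before: exact float equality, fragile for accumulated floating-point sums.
if adjusted_weights.sum() == 0.0:
    pass  # fall back to volatility-parity weights

# After: tolerance-based comparison survives round-off near zero.
if np.isclose(adjusted_weights.sum(), 0.0):
    pass  # fall back to volatility-parity weights
```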
Guard SPY prices against non-positive values before np.log in beta and regime

beta.py had no check for zero/negative SPY close prices before computing np.log(spy_close), while every ticker already had an explicit np.any(<= 0) guard. A corrupt row with close_price == 0 or negative would silently produce -inf or nan in spy_returns, propagating nan betas for all tickers and causing SLSQP to fall back to vol-parity weights with a misleading warning.

regime.py had the same asymmetry: np.diff(np.log(spy_close)) was called without any non-positive check, risking silent nan propagation into the realized_volatility and autocorrelation calculations.

Fix: extend the existing length guard in beta.py to also reject non-positive SPY prices; add an equivalent early-return guard in regime.py before the log call.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
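A sketch of the extended guard; the helper name and early-return shape are hypothetical:

```python
import numpy as np

_MINIMUM_RETURN_COUNT = 2  # per beta.py


def _spy_log_returns(spy_close: np.ndarray) -> np.ndarray | None:
    # Extend the length guard to also reject corrupt (<= 0) closes, mirroring
    # the per-ticker guard: np.log(0) yields -inf and would poison every
    # downstream beta, pushing SLSQP into its vol-parity fallback.
    if len(spy_close) < _MINIMUM_RETURN_COUNT + 1 or np.any(spy_close <= 0.0):
        return None
    return np.diff(np.log(spy_close))
```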
…three

Build initial beta/regime risk management features
Build initial risk/data management functionality for statistical arbitrage (#785)

* Build initial risk/data management functionality for statistical arbitrage

* Address PR #785 review feedback: zero-price guard and pairs schema validation

  - Add np.any(prices <= 0) guard in _compute_log_returns alongside the existing NaN check. np.log(0) produces -inf, which would propagate silently through np.diff and np.corrcoef before the pair was eventually discarded; the explicit guard prevents any inf values from entering the correlation matrix.
  - Add pairs_schema.validate(candidate_pairs) in server.py after select_pairs returns (skipped for empty DataFrames, which are handled by the existing InsufficientPairsError path). Aligns with the project guideline requiring DataFrame schemas to be validated in the production pipeline, not only in tests.
  - Add test_compute_log_returns_excludes_tickers_with_zero_price to verify that a ticker containing a zero close price within the correlation window is excluded from pair candidates.

* Expand portfolio manager rebalancing logic

* Address PR #786 feedback: stop-loss boundary, test determinism, and edge-case coverage

* Fix CI test collection failure and warning in portfoliomanager tests

* Restore fail-fast credential check via FastAPI lifespan

* Address PR #786 follow-up feedback: schema enforcement and boundary test

* Catch PolarsError in get_prior_portfolio after schema enforcement

* Specify aggregate_function in pivot to handle duplicate timestamps

* Raise _MINIMUM_PAIR_PRICE_ROWS to 30 for statistically stable z-scores

* Reorder test values

* Build initial beta/regime risk management features

* Fix off-by-one guard and dead code in classify_regime; annotate compute_portfolio_beta

* Address PR #785 review feedback: constants, RNG modernization, zero-confidence test

  - Extract VOLATILITY_WINDOW_DAYS = 20 constant in consolidation.py and replace the hardcoded .tail(20) call for consistency with other window constants.
  - Add test_consolidate_predictions_all_zero_confidence_does_not_raise to cover the edge case where all raw_confidence values are 0.0 (quantile_90=inf), verifying the `or 1.0` fallback prevents division by zero.
  - Replace legacy np.random.RandomState with np.random.default_rng (Generator API) across all test_statistical_arbitrage.py helpers and test functions.
  - Fix test_risk_management.py import: the MINIMUM_PAIRS_REQUIRED alias was removed in a prior commit; update to import REQUIRED_PAIRS directly.

* Address PR #785 review feedback: defensive fixes for stat arb data pipeline

  Three issues flagged by automated review:

  1. consolidation.py: Clip IQR to zero before computing raw_confidence to prevent inverted quantiles (quantile_10 > quantile_90) from producing raw_confidence > 1.0 or negative/inf values that corrupt the blended mean. Added a test covering the inverted-quantile edge case.
  2. statistical_arbitrage.py: Add np.isinf(hedge_ratio) to the degenerate-pair guard. np.polyfit returns inf when log_prices_b has zero variance; the explicit check makes the defensive intent clear and prevents inf hedge ratios from propagating. Added a test with a flat price series to exercise this path.
  3. data_client.py: Increase default lookback_days from 90 to 120 calendar days to widen the buffer above CORRELATION_WINDOW_DAYS = 60. The previous default left only ~3-4 trading days of headroom, meaning data gaps could silently drop tickers below the minimum required for pair selection.

* Address PR #785 review feedback: pair_id propagation and group_by ordering

  Two correctness fixes identified in code review:

  1. risk_management.py: Added pair_id to both long_positions and short_positions selects. Without this, the saved portfolio had no way to correlate a long position with its corresponding short position, breaking pair-level exit logic where both legs must be closed together when a z-score reverts.
  2. consolidation.py: Added sort_by("timestamp") inside the group_by aggregation expression before tail(). Polars group_by does not guarantee intra-group row order, so the prior code relied on undefined behavior to pick the most-recent VOLATILITY_WINDOW_DAYS returns. The explicit sort_by makes the ordering guaranteed.

* Address PR #787 review feedback: NaN guard, zero-weight guard, confidence logging

* Address PR #787 review feedback: beta guard off-by-one, float equality, dead code

* Guard SPY prices against non-positive values before np.log in beta and regime

* Update floating-point equality operation

* Add pull request feedback/add future reporter logic from manual testing script

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Overview
Changes
Context
Continuing on with the next phase of work but keeping it separate so the bots can review it more cleanly.