
Build initial beta/regime risk management features #787

Merged

forstmeier merged 6 commits into statistical-arbitrage-phase-two from statistical-arbitrage-phase-three on Mar 9, 2026

Conversation

@forstmeier
Collaborator

Overview

Changes

  • add in SPY data fetching
  • add beta-neutral rebalancing
  • add regime-aware sizing
  • update portfolio manager server functionality
  • expand unit tests

Context

These are the last two pieces of the initial statistical arbitrage build.

github-project-automation bot moved this to In Progress in Overview on Mar 8, 2026
github-actions bot added the `python` (Python code updates) label on Mar 8, 2026
github-actions bot requested a review from chrisaddy on March 8, 2026 at 03:48
@coderabbitai
Contributor

coderabbitai Bot commented Mar 8, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.



@greptile-apps
Contributor

greptile-apps Bot commented Mar 8, 2026

Greptile Summary

This PR completes the initial statistical arbitrage risk management layer. It adds SPY data fetching (fetch_spy_prices); a new beta.py module that computes per-ticker market beta via OLS regression of log returns against SPY; and a new regime.py module that classifies the market regime from realized volatility and lag-1 autocorrelation. Both are wired into size_pairs_with_volatility_parity through SLSQP-based beta-neutral weight optimization and a binary exposure scale.

All core logic fixes from the prior review cycle (non-positive price guards, minimum return count guards, np.isclose float comparisons, dead code removal) are correctly implemented in the final code. Two issues remain to be addressed:

  • Lint failure in server.py: The # noqa: PERF203 suppression is missing from line 313's except block inside the close-position loop, which will cause ruff to fail during CI.
  • Dead constant in beta.py: _TRADING_DAYS_PER_YEAR is defined at module level but never used in this file and should be removed.

Confidence Score: 4/5

  • Safe to merge after addressing the lint failure and unused constant removal; both are straightforward fixes.
  • The core logic is sound and all significant correctness issues from the prior review cycle have been properly addressed. The two remaining issues are the accidental removal of a # noqa: PERF203 suppression that will cause ruff to fail during CI (blocking merge), and an unused constant that is minor cleanup. Both are straightforward to fix.
  • applications/portfoliomanager/src/portfoliomanager/server.py (line 313 — missing noqa suppression, blocking CI), applications/portfoliomanager/src/portfoliomanager/beta.py (line 7 — unused constant removal)

Last reviewed commit: 3b38a1f
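The beta computation the summary describes — per-ticker OLS of log returns against SPY — can be sketched as follows. This is a hedged sketch, not the repository's beta.py: the function name, the guard constant's value, and the use of np.polyfit for the OLS slope are all assumptions.

```python
import numpy as np

_MINIMUM_RETURN_COUNT = 2  # assumed value, mirroring the review comments

def compute_ticker_beta(ticker_close, spy_close):
    """Market beta: OLS slope of ticker log returns on SPY log returns."""
    ticker_close = np.asarray(ticker_close, dtype=float)
    spy_close = np.asarray(spy_close, dtype=float)
    # Reject non-positive prices before np.log: log(0) is -inf, log(<0) is NaN.
    if np.any(ticker_close <= 0) or np.any(spy_close <= 0):
        return float("nan")
    spy_returns = np.diff(np.log(spy_close))
    ticker_returns = np.diff(np.log(ticker_close))
    # Guard on the count of *returns*, not prices (see the off-by-one fixes below).
    if len(spy_returns) < _MINIMUM_RETURN_COUNT:
        return float("nan")
    slope, _intercept = np.polyfit(spy_returns, ticker_returns, 1)
    return float(slope)
```

A ticker whose log returns are exactly twice SPY's should come back with a beta of 2.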

…te_portfolio_beta

Root cause: The early return in classify_regime checked len(spy_close) <
_MINIMUM_RETURN_COUNT (i.e., < 2), so execution continued with exactly 2
prices producing 1 return. np.std([x], ddof=1) on a single-element array
returns NaN, which silently propagated through volatility comparisons. The
function returned the correct fallback by coincidence (NaN comparisons always
evaluate False in Python), but relied on implicit NaN semantics rather than
an explicit guard.

Fix: Move returns = np.diff(np.log(spy_close)) before the early return and
check len(returns) < _MINIMUM_RETURN_COUNT instead, ensuring at least 2
returns are present before computing standard deviation.

Secondary fix: Remove the now-redundant inner length check on returns before
calling np.corrcoef. After the outer guard, len(returns) >= 2 is guaranteed,
making the else: autocorrelation = 0.0 branch dead code.

Also added a regression test for the exactly-2-prices edge case and an inline
comment on compute_portfolio_beta noting its role in test validation and
intent for future portfolio-level beta reporting.

Co-Authored-By: Claude <noreply@anthropic.com>
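The pitfall is easy to reproduce: two prices yield exactly one return, and the sample standard deviation (ddof=1) of a one-element array is NaN, which then compares False against any threshold:

```python
import warnings
import numpy as np

spy_close = np.array([100.0, 101.0])
returns = np.diff(np.log(spy_close))        # exactly one return

with warnings.catch_warnings():
    warnings.simplefilter("ignore", RuntimeWarning)
    volatility = np.std(returns, ddof=1)    # NaN: ddof=1 needs >= 2 samples

print(np.isnan(volatility))                 # True
print(volatility > 0.02)                    # False: NaN comparisons are False
print(volatility <= 0.02)                   # also False
```

This is why the fallback looked correct by coincidence: every branch condition on the NaN volatility evaluated False.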
Collaborator

@chrisaddy left a comment


Review comments on beta/regime risk management changes.

…ence logging

Three bugs/gaps addressed from code review:

1. regime.py: Fix silent NaN propagation in classify_regime. The guard
   `len(returns) < _MINIMUM_RETURN_COUNT` (min=2) allowed len==2 through,
   causing np.corrcoef to receive single-element arrays and return NaN.
   NaN comparisons silently evaluate to False/0.0, dropping the autocorrelation
   signal. Fix: tighten guard to `< _MINIMUM_RETURN_COUNT + 1` (requires 3+
   returns). _MINIMUM_RETURN_COUNT stays at 2 since beta.py uses it correctly
   for linregress. Added test for the len==2 boundary case. Also added a comment
   documenting the intentional conservative default (trending/0.0 halves exposure).

2. risk_management.py: Add zero-sum guard before normalizing adjusted_weights.
   If the optimizer returns all-zero weights, dividing by sum() would produce NaN.
   Falls back to volatility_parity_weights in that case.

3. server.py: Log regime_confidence alongside regime_state to surface the computed
   value. Added comment noting binary exposure_scale is intentional for this initial
   implementation with confidence reserved for future graduated scaling.

Co-Authored-By: Claude <noreply@anthropic.com>
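Put together, the tightened guard might look like this sketch. The function shape, regime labels, and volatility threshold are assumptions, not the repository's regime.py; only the `_MINIMUM_RETURN_COUNT + 1` guard and the conservative fallback come from the commit message.

```python
import numpy as np

_MINIMUM_RETURN_COUNT = 2       # shared with beta.py's linregress guard
_VOLATILITY_THRESHOLD = 0.02    # hypothetical daily-vol cutoff

def classify_regime(spy_close):
    returns = np.diff(np.log(np.asarray(spy_close, dtype=float)))
    # Require 3+ returns: np.corrcoef on the lag-1 pair (returns[:-1],
    # returns[1:]) needs two elements per array, and np.std with ddof=1
    # needs two samples. A NaN from either would compare False and
    # silently drop the signal.
    if len(returns) < _MINIMUM_RETURN_COUNT + 1:
        return "trending", 0.0  # conservative default: halves exposure
    realized_volatility = float(np.std(returns, ddof=1))
    autocorrelation = float(np.corrcoef(returns[:-1], returns[1:])[0, 1])
    if realized_volatility > _VOLATILITY_THRESHOLD:
        state = "volatile"
    elif autocorrelation < 0.0:
        state = "mean_reverting"
    else:
        state = "trending"
    return state, abs(autocorrelation)
```

Three prices produce only two returns, so the function takes the explicit fallback instead of relying on NaN semantics.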
…y, dead code

Three issues flagged by greptile-apps after the previous commit:

1. beta.py line 20: Guard compared len(spy_close) (price count) against
   _MINIMUM_RETURN_COUNT (a returns count), identical off-by-one to the
   one already fixed in regime.py. Changed to _MINIMUM_RETURN_COUNT + 1
   so the guard correctly requires at least two prices to produce one return.

2. risk_management.py line 107: Replaced exact float equality
   adjusted_weights.sum() == 0.0 with np.isclose() for robustness.

3. risk_management.py lines 40-41: Removed unreachable if total == 0.0
   guard inside the SLSQP objective. SLSQP enforces sum(w) = target_total
   throughout optimization and bounds keep all weights strictly positive,
   so total can never be zero.

Co-Authored-By: Claude <noreply@anthropic.com>
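Point 2's guard might look like this sketch (the helper name is hypothetical):

```python
import numpy as np

def normalize_or_fallback(adjusted_weights, volatility_parity_weights):
    total = adjusted_weights.sum()
    # np.isclose instead of `total == 0.0`: an optimizer can return weights
    # summing to ~1e-18, and dividing by that would explode or produce NaN.
    if np.isclose(total, 0.0):
        return volatility_parity_weights
    return adjusted_weights / total
```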
…d regime

beta.py had no check for zero/negative SPY close prices before computing
np.log(spy_close), while every ticker already had an explicit np.any(<=0)
guard. A corrupt row with close_price==0 or negative would silently produce
-inf or nan in spy_returns, propagating nan betas for all tickers and
causing SLSQP to fall back to vol-parity weights with a misleading warning.

regime.py had the same asymmetry: np.diff(np.log(spy_close)) was called
without any non-positive check, risking silent nan propagation into the
realized_volatility and autocorrelation calculations.

Fix: extend the existing length guard in beta.py to also reject non-positive
SPY prices; add an equivalent early-return guard in regime.py before the log
call.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
forstmeier merged commit c63ab2e into statistical-arbitrage-phase-two on Mar 9, 2026
8 checks passed
forstmeier deleted the statistical-arbitrage-phase-three branch on March 9, 2026 at 00:48
github-project-automation bot moved this from In Progress to Done in Overview on Mar 9, 2026
chrisaddy pushed a commit that referenced this pull request Mar 11, 2026
…trage (#785)

* Build initial risk/data management functionality for statistical arbitrage

* Address PR #785 review feedback: zero-price guard and pairs schema validation

- Add np.any(prices <= 0) guard in _compute_log_returns alongside the
  existing NaN check. np.log(0) produces -inf which would propagate
  silently through np.diff and np.corrcoef before the pair was
  eventually discarded; the explicit guard prevents any inf values from
  entering the correlation matrix.

- Add pairs_schema.validate(candidate_pairs) in server.py after
  select_pairs returns (skipped for empty DataFrames, which are handled
  by the existing InsufficientPairsError path). Aligns with project
  guideline requiring DataFrame schemas to be validated in the
  production pipeline, not only in tests.

- Add test_compute_log_returns_excludes_tickers_with_zero_price to
  verify that a ticker containing a zero close price within the
  correlation window is excluded from pair candidates.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Expand portfolio manager rebalancing logic

* Address PR #786 feedback: stop-loss boundary, test determinism, and edge-case coverage

- Fix stop-loss boundary condition in evaluate_prior_pairs: the inclusive
  upper bound (abs_z <= Z_SCORE_STOP_LOSS) incorrectly held pairs at exactly
  z=4.0 rather than triggering stop-loss. Changed to exclusive (abs_z <
  Z_SCORE_STOP_LOSS) so z=4.0 closes the position as intended.

- Sort pair_ids before iteration in evaluate_prior_pairs to make the
  evaluation order deterministic. Fixes a non-deterministic test that relied
  on unique().to_list() ordering.

- Import _PRIOR_PORTFOLIO_SCHEMA from server.py in test_portfolio_server.py
  instead of duplicating the definition, preventing schema drift between
  production code and tests.

- Add edge-case tests for non-positive prices and NaN z-score paths in
  evaluate_prior_pairs (both previously untested branches).

- Add test for missing pair_id column failing portfolio_schema validation,
  locking in the new required field.

- Remove module-level ValueError guard that crashed pytest collection when
  ALPACA_API_KEY_ID and ALPACA_API_SECRET env vars were absent (root cause
  of CI failure).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
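The boundary fix in the first bullet reduces to a one-character change; a minimal sketch (the helper name is hypothetical, Z_SCORE_STOP_LOSS is from the commit message):

```python
Z_SCORE_STOP_LOSS = 4.0  # value from the commit message

def pair_within_stop_loss(abs_z: float) -> bool:
    # Exclusive upper bound: a pair sitting exactly at z == 4.0 triggers
    # the stop-loss rather than being held (the old `<=` kept it open).
    return abs_z < Z_SCORE_STOP_LOSS
```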

* Fix CI test collection failure and warning in portfoliomanager tests

Root cause of CI failure: server.py instantiates AlpacaClient at module
level. The Alpaca SDK's TradingClient.__init__ calls _validate_credentials,
which raises ValueError when empty strings are supplied (as in CI where no
credentials are set). This crashed pytest collection on import.

Fix: Add conftest.py that patches TradingClient and StockHistoricalDataClient
before server.py is imported during test collection, allowing AlpacaClient
to initialize without real credentials. The patch targets the underlying SDK
classes rather than AlpacaClient itself, so test_alpaca_client.py (which
tests AlpacaClient directly with its own per-test patches) is unaffected.

Also fix test_evaluate_prior_pairs_skips_pair_with_non_positive_prices: the
zero-price row was placed at i=0 but pair_price_matrix.tail(60) excludes
the first 5 rows of a 65-row dataset, so the non-positive price check was
never reached. The test fell through to compute_spread_zscore on nearly
identical data, producing scipy/numpy warnings. Moving the zero to the last
row (i=64) ensures it survives the tail cut and the guard triggers correctly.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
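The patching pattern can be illustrated with a stand-in class, since the Alpaca SDK isn't assumed to be importable here; the real conftest.py would target alpaca's TradingClient and StockHistoricalDataClient by import path instead:

```python
from unittest import mock

class FakeTradingClient:
    """Stand-in for the SDK class; its __init__ rejects empty credentials."""
    def __init__(self, api_key: str, secret_key: str) -> None:
        if not api_key or not secret_key:
            raise ValueError("credentials required")

# conftest.py-style patch: replace __init__ before the server module would
# be imported, so module-level client construction succeeds in CI.
patcher = mock.patch.object(FakeTradingClient, "__init__", return_value=None)
patcher.start()
client = FakeTradingClient("", "")   # no longer raises while patched
patcher.stop()                       # validation behaves normally again
```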

* Restore fail-fast credential check via FastAPI lifespan

The module-level ValueError guard was removed in a prior commit to fix CI
test collection failures, but that silently broke the fail-fast guarantee:
a deployment with missing Alpaca credentials would start up cleanly, pass
health checks, and only fail when a trade was actually attempted.

Fix: move AlpacaClient instantiation and credential validation into a
FastAPI lifespan context manager (_lifespan). The lifespan runs at server
startup before any requests are accepted, so:
- A misconfigured deployment still fails immediately at boot with a clear
  error message, before any capital is at risk.
- Tests can import server.py without real credentials because the lifespan
  is not invoked during pytest collection or unit test execution.

The AlpacaClient instance is stored in app.state and retrieved via a local
alias at the top of create_portfolio, following FastAPI's recommended
pattern for app-level state instead of module-level globals.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Address PR #786 follow-up feedback: schema enforcement and boundary test

- Enforce _PRIOR_PORTFOLIO_SCHEMA when constructing prior portfolio
  DataFrame from JSON response, keeping both branches consistent
- Add boundary test confirming abs_z == Z_SCORE_STOP_LOSS (4.0)
  triggers stop-loss under the exclusive upper bound condition

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Catch PolarsError in get_prior_portfolio after schema enforcement

Adding schema= to pl.DataFrame can raise pl.exceptions.SchemaError or
pl.exceptions.InvalidOperationError on type coercion failures, neither
of which inherits from ValueError. Broaden the except clause to include
pl.exceptions.PolarsError so type mismatches fall back to an empty
DataFrame rather than propagating as a 500 from create_portfolio.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Specify aggregate_function in pivot to handle duplicate timestamps

Without aggregate_function, Polars behavior on duplicate (ticker, timestamp)
pairs is version-dependent: some versions silently take the first value,
others raise a DuplicateError. Specifying "last" makes the deduplication
strategy explicit and version-stable.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
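The same duplicate-handling concern has an analogue in pandas; this illustrative snippet (not from the repo — pandas stands in for Polars here) shows why an explicit "last" aggregation matters when (ticker, timestamp) pairs repeat:

```python
import pandas as pd

bars = pd.DataFrame(
    {
        "timestamp": ["2026-03-06", "2026-03-06", "2026-03-07"],
        "ticker": ["SPY", "SPY", "SPY"],
        "close_price": [100.0, 101.0, 102.0],  # duplicate timestamp for SPY
    }
)

# Explicit "last" keeps the most recently appended bar; without it the
# default aggregation would silently average the duplicates.
wide = bars.pivot_table(
    index="timestamp", columns="ticker", values="close_price", aggfunc="last"
)
```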

* Raise _MINIMUM_PAIR_PRICE_ROWS to 30 for statistically stable z-scores

With only 2 data points, OLS regression yields a trivially perfect fit
(zero residual variance), producing z-scores that pass the NaN guard but
lack statistical reliability. 30 rows is the minimum for stable OLS
residual standard deviation estimates.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Reorder test values

* Build initial beta/regime risk management features

* Fix off-by-one guard and dead code in classify_regime; annotate compute_portfolio_beta

Root cause: The early return in classify_regime checked len(spy_close) <
_MINIMUM_RETURN_COUNT (i.e., < 2), so execution continued with exactly 2
prices producing 1 return. np.std([x], ddof=1) on a single-element array
returns NaN, which silently propagated through volatility comparisons. The
function returned the correct fallback by coincidence (NaN comparisons always
evaluate False in Python), but relied on implicit NaN semantics rather than
an explicit guard.

Fix: Move returns = np.diff(np.log(spy_close)) before the early return and
check len(returns) < _MINIMUM_RETURN_COUNT instead, ensuring at least 2
returns are present before computing standard deviation.

Secondary fix: Remove the now-redundant inner length check on returns before
calling np.corrcoef. After the outer guard, len(returns) >= 2 is guaranteed,
making the else: autocorrelation = 0.0 branch dead code.

Also added a regression test for the exactly-2-prices edge case and an inline
comment on compute_portfolio_beta noting its role in test validation and
intent for future portfolio-level beta reporting.

Co-Authored-By: Claude <noreply@anthropic.com>

* Address PR #785 review feedback: constants, RNG modernization, zero-confidence test

- Extract VOLATILITY_WINDOW_DAYS = 20 constant in consolidation.py and
  replace hardcoded .tail(20) call for consistency with other window constants
- Add test_consolidate_predictions_all_zero_confidence_does_not_raise to cover
  the edge case where all raw_confidence values are 0.0 (quantile_90=inf),
  verifying the `or 1.0` fallback prevents division by zero
- Replace legacy np.random.RandomState with np.random.default_rng (Generator
  API) across all test_statistical_arbitrage.py helpers and test functions
- Fix test_risk_management.py import: MINIMUM_PAIRS_REQUIRED alias was removed
  in a prior commit; update to import REQUIRED_PAIRS directly

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
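The RNG migration in the third bullet is a mechanical before/after (values here are illustrative):

```python
import numpy as np

# Legacy API:
state = np.random.RandomState(42)
legacy = state.normal(loc=0.0, scale=0.01, size=5)

# Modern Generator API. Note it uses a different algorithm (PCG64),
# so the same seed does not reproduce a RandomState stream.
rng = np.random.default_rng(42)
modern = rng.normal(loc=0.0, scale=0.01, size=5)
```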

* Address PR #785 review feedback: defensive fixes for stat arb data pipeline

Three issues flagged by automated review:

1. consolidation.py: Clip IQR to zero before computing raw_confidence to
   prevent inverted quantiles (quantile_10 > quantile_90) from producing
   raw_confidence > 1.0 or negative/inf values that corrupt the blended mean.
   Added test covering the inverted-quantile edge case.

2. statistical_arbitrage.py: Add np.isinf(hedge_ratio) to the degenerate-pair
   guard. np.polyfit returns inf when log_prices_b has zero variance; the
   explicit check makes the defensive intent clear and prevents inf hedge ratios
   from propagating. Added test with a flat price series to exercise this path.

3. data_client.py: Increase default lookback_days from 90 to 120 calendar days
   to widen the buffer above CORRELATION_WINDOW_DAYS = 60. The previous default
   left only ~3-4 trading days of headroom, meaning data gaps could silently
   drop tickers below the minimum required for pair selection.

Co-Authored-By: Claude <noreply@anthropic.com>

* Address PR #785 review feedback: pair_id propagation and group_by ordering

Two correctness fixes identified in code review:

1. risk_management.py: Added pair_id to both long_positions and short_positions
   selects. Without this, the saved portfolio had no way to correlate a long
   position with its corresponding short position, breaking pair-level exit logic
   where both legs must be closed together when a z-score reverts.

2. consolidation.py: Added sort_by("timestamp") inside the group_by aggregation
   expression before tail(). Polars group_by does not guarantee intra-group row
   order, so the prior code relied on undefined behavior to pick the most-recent
   VOLATILITY_WINDOW_DAYS returns. The explicit sort_by makes ordering guaranteed.

Co-Authored-By: Claude <noreply@anthropic.com>

* Address PR #787 review feedback: NaN guard, zero-weight guard, confidence logging

Three bugs/gaps addressed from code review:

1. regime.py: Fix silent NaN propagation in classify_regime. The guard
   `len(returns) < _MINIMUM_RETURN_COUNT` (min=2) allowed len==2 through,
   causing np.corrcoef to receive single-element arrays and return NaN.
   NaN comparisons silently evaluate to False/0.0, dropping the autocorrelation
   signal. Fix: tighten guard to `< _MINIMUM_RETURN_COUNT + 1` (requires 3+
   returns). _MINIMUM_RETURN_COUNT stays at 2 since beta.py uses it correctly
   for linregress. Added test for the len==2 boundary case. Also added a comment
   documenting the intentional conservative default (trending/0.0 halves exposure).

2. risk_management.py: Add zero-sum guard before normalizing adjusted_weights.
   If the optimizer returns all-zero weights, dividing by sum() would produce NaN.
   Falls back to volatility_parity_weights in that case.

3. server.py: Log regime_confidence alongside regime_state to surface the computed
   value. Added comment noting binary exposure_scale is intentional for this initial
   implementation with confidence reserved for future graduated scaling.

Co-Authored-By: Claude <noreply@anthropic.com>

* Address PR #787 review feedback: beta guard off-by-one, float equality, dead code

Three issues flagged by greptile-apps after the previous commit:

1. beta.py line 20: Guard compared len(spy_close) (price count) against
   _MINIMUM_RETURN_COUNT (a returns count), identical off-by-one to the
   one already fixed in regime.py. Changed to _MINIMUM_RETURN_COUNT + 1
   so the guard correctly requires at least two prices to produce one return.

2. risk_management.py line 107: Replaced exact float equality
   adjusted_weights.sum() == 0.0 with np.isclose() for robustness.

3. risk_management.py lines 40-41: Removed unreachable if total == 0.0
   guard inside the SLSQP objective. SLSQP enforces sum(w) = target_total
   throughout optimization and bounds keep all weights strictly positive,
   so total can never be zero.

Co-Authored-By: Claude <noreply@anthropic.com>

* Guard SPY prices against non-positive values before np.log in beta and regime

beta.py had no check for zero/negative SPY close prices before computing
np.log(spy_close), while every ticker already had an explicit np.any(<=0)
guard. A corrupt row with close_price==0 or negative would silently produce
-inf or nan in spy_returns, propagating nan betas for all tickers and
causing SLSQP to fall back to vol-parity weights with a misleading warning.

regime.py had the same asymmetry: np.diff(np.log(spy_close)) was called
without any non-positive check, risking silent nan propagation into the
realized_volatility and autocorrelation calculations.

Fix: extend the existing length guard in beta.py to also reject non-positive
SPY prices; add an equivalent early-return guard in regime.py before the log
call.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Update floating-point equality operation

* Add pull request feedback/add future reporter logic from manual testing script

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
