Build initial risk/data management functionality for statistical arbitrage#785
Conversation
> ⚠️ Review failed: failed to post review comments.

📝 Walkthrough

Adds a pairs-first statistical-arbitrage pipeline: data fetchers, model consolidation, pair selection, volatility- and beta-based sizing, regime/beta analytics, Alpaca shortability checks, portfolio schema changes (`pair_id`), server orchestration updates, new exceptions, extensive unit tests, and Rust datamanager adjustments.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Server as Server
    participant DM as DataManager API
    participant Models as Prediction Service
    participant Consolidation as Consolidation
    participant Alpaca as Alpaca API
    participant Pairs as Pair Selector
    participant Risk as Risk Manager
    Server->>DM: fetch_historical_prices(reference_date, lookback_days)
    DM-->>Server: historical_prices
    Server->>DM: fetch_equity_details()
    DM-->>Server: equity_details
    Server->>Models: get_raw_predictions()
    Models-->>Server: model_predictions
    Server->>Consolidation: consolidate_predictions(model_predictions, historical_prices, equity_details)
    Consolidation-->>Server: consolidated_signals
    Server->>Alpaca: get_shortable_tickers(consolidated_signals.tickers)
    Alpaca-->>Server: shortable_tickers
    Server->>Pairs: select_pairs(consolidated_signals.filtered, historical_prices)
    Pairs-->>Server: candidate_pairs
    Server->>Risk: size_pairs_with_volatility_parity(candidate_pairs, maximum_capital, timestamp, market_betas, exposure_scale)
    Risk-->>Server: sized_positions
    Server->>Alpaca: open/close positions (orders)
    Alpaca-->>Server: order confirmations
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Greptile Summary

This PR builds the initial statistical arbitrage pipeline for the portfolio manager, adding pair selection, volatility-parity sizing, beta-neutral optimization, regime classification, and hold/stop-loss logic. Most previously flagged issues (zero-price filter, …) have been addressed. Two issues remain:
Confidence Score: 3/5
Pull request overview
This PR refactors the portfolio manager from a prediction-based long/short strategy to a statistical arbitrage (pairs trading) approach. It introduces data fetching utilities, a consolidation pipeline for blending model predictions, pair selection via cointegration analysis, and volatility-parity-based position sizing. The old add_predictions_zscore_ranked_columns / create_optimal_portfolio approach is replaced end-to-end.
Changes:
- Added `statistical_arbitrage.py` for cointegration-based pair selection, `consolidation.py` for blending model predictions with volatility and sector data, and `data_client.py` for fetching historical prices and equity details from the data manager.
- Rewrote `risk_management.py` to size pairs using inverse-volatility weighting instead of equal-dollar allocation across ranked predictions, and updated `server.py` with the new pipeline (data fetch → consolidation → shortability filter → pair selection → volatility-parity sizing).
- Expanded the test suite with comprehensive tests for all new modules and updated existing tests, added `pairs_schema` validation to `portfolio_schema.py`, and introduced new custom exceptions (`PriceDataUnavailableError`, `InsufficientPairsError`).
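The inverse-volatility weighting described above can be sketched as follows. This is a hypothetical, simplified function; the real `size_pairs_with_volatility_parity` takes pair DataFrames, market betas, and an exposure scale rather than a bare volatility array.

```python
import numpy as np

def size_pairs_inverse_volatility(
    spread_volatilities: np.ndarray, maximum_capital: float
) -> np.ndarray:
    """Allocate capital to each pair in inverse proportion to its spread
    volatility, so noisier spreads receive smaller notional exposure."""
    inverse = 1.0 / spread_volatilities
    weights = inverse / inverse.sum()  # weights sum to 1.0
    return weights * maximum_capital

# a pair with 1% spread vol gets 4x the capital of a pair with 4% vol
allocations = size_pairs_inverse_volatility(np.array([0.01, 0.02, 0.04]), 100_000.0)
```

The design choice here is that total capital is fully deployed regardless of the volatility mix; only its split across pairs changes.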
Reviewed changes
Copilot reviewed 15 out of 17 changed files in this pull request and generated 1 comment.
Show a summary per file
| File | Description |
|---|---|
| `statistical_arbitrage.py` | New module for cointegrated pair selection using correlation filtering, spread z-scores, and greedy pair assignment |
| `consolidation.py` | New module for blending multi-model predictions with realized volatility and sector data |
| `data_client.py` | New module for fetching historical prices (parquet) and equity details (CSV) from the data manager |
| `risk_management.py` | Replaced equal-allocation portfolio construction with inverse-volatility-parity pair sizing |
| `server.py` | Updated portfolio creation pipeline with new data fetching, consolidation, shortability filtering, pair selection, and sizing steps |
| `portfolio_schema.py` | Added `pairs_schema` with `check_pair_tickers_different` validation and optional `pair_id` column to `portfolio_schema` |
| `exceptions.py` | Added `PriceDataUnavailableError` and `InsufficientPairsError`; removed `InsufficientPredictionsError` |
| `alpaca_client.py` | Added `get_shortable_tickers` method for filtering assets by shortability and easy-to-borrow status |
| `pyproject.toml` | Added `scipy>=1.17.1` dependency |
| `test_statistical_arbitrage.py` | New tests for price matrix building, spread z-score computation, and pair selection edge cases |
| `test_consolidation.py` | New tests for prediction consolidation, blending, and error handling |
| `test_data_client.py` | New tests for data fetching with mocked HTTP responses |
| `test_risk_management.py` | Rewrote tests for the new volatility-parity sizing function |
| `test_alpaca_client.py` | New tests for `get_shortable_tickers` and full coverage of `AlpacaClient` methods |
| `test_portfolio_schema.py` | Added tests for new schema checks and `pairs_schema` validation |
| `.flox/env/manifest.lock` | Trailing newline fix |
…lidation

- Add `np.any(prices <= 0)` guard in `_compute_log_returns` alongside the existing NaN check. `np.log(0)` produces `-inf`, which would propagate silently through `np.diff` and `np.corrcoef` before the pair was eventually discarded; the explicit guard prevents any inf values from entering the correlation matrix.
- Add `pairs_schema.validate(candidate_pairs)` in server.py after `select_pairs` returns (skipped for empty DataFrames, which are handled by the existing `InsufficientPairsError` path). Aligns with the project guideline requiring DataFrame schemas to be validated in the production pipeline, not only in tests.
- Add `test_compute_log_returns_excludes_tickers_with_zero_price` to verify that a ticker containing a zero close price within the correlation window is excluded from pair candidates.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…dge-case coverage

- Fix stop-loss boundary condition in `evaluate_prior_pairs`: the inclusive upper bound (`abs_z <= Z_SCORE_STOP_LOSS`) incorrectly held pairs at exactly z=4.0 rather than triggering the stop-loss. Changed to exclusive (`abs_z < Z_SCORE_STOP_LOSS`) so z=4.0 closes the position as intended.
- Sort `pair_id`s before iteration in `evaluate_prior_pairs` to make the evaluation order deterministic. Fixes a non-deterministic test that relied on `unique().to_list()` ordering.
- Import `_PRIOR_PORTFOLIO_SCHEMA` from server.py in test_portfolio_server.py instead of duplicating the definition, preventing schema drift between production code and tests.
- Add edge-case tests for non-positive prices and NaN z-score paths in `evaluate_prior_pairs` (both previously untested branches).
- Add a test for a missing `pair_id` column failing `portfolio_schema` validation, locking in the new required field.
- Remove the module-level ValueError guard that crashed pytest collection when the `ALPACA_API_KEY_ID` and `ALPACA_API_SECRET` env vars were absent (root cause of the CI failure).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
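The boundary fix can be illustrated with a toy predicate (names taken from the commit message; the real check lives inside `evaluate_prior_pairs`):

```python
Z_SCORE_STOP_LOSS = 4.0  # threshold named in the commit message

def triggers_stop_loss(abs_z: float) -> bool:
    """Exclusive upper bound: a pair is held only while abs_z < 4.0, so a
    spread sitting exactly at the threshold closes the position."""
    return not (abs_z < Z_SCORE_STOP_LOSS)

assert triggers_stop_loss(4.0)        # boundary now closes the pair
assert not triggers_stop_loss(3.999)  # just inside the band is held
```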
Root cause of CI failure: server.py instantiates `AlpacaClient` at module level. The Alpaca SDK's `TradingClient.__init__` calls `_validate_credentials`, which raises ValueError when empty strings are supplied (as in CI, where no credentials are set). This crashed pytest collection on import.

Fix: Add conftest.py that patches `TradingClient` and `StockHistoricalDataClient` before server.py is imported during test collection, allowing `AlpacaClient` to initialize without real credentials. The patch targets the underlying SDK classes rather than `AlpacaClient` itself, so test_alpaca_client.py (which tests `AlpacaClient` directly with its own per-test patches) is unaffected.

Also fix `test_evaluate_prior_pairs_skips_pair_with_non_positive_prices`: the zero-price row was placed at i=0, but `pair_price_matrix.tail(60)` excludes the first 5 rows of a 65-row dataset, so the non-positive price check was never reached. The test fell through to `compute_spread_zscore` on nearly identical data, producing scipy/numpy warnings. Moving the zero to the last row (i=64) ensures it survives the tail cut and the guard triggers correctly.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The module-level ValueError guard was removed in a prior commit to fix CI test collection failures, but that silently broke the fail-fast guarantee: a deployment with missing Alpaca credentials would start up cleanly, pass health checks, and only fail when a trade was actually attempted.

Fix: move `AlpacaClient` instantiation and credential validation into a FastAPI lifespan context manager (`_lifespan`). The lifespan runs at server startup before any requests are accepted, so:

- A misconfigured deployment still fails immediately at boot with a clear error message, before any capital is at risk.
- Tests can import server.py without real credentials because the lifespan is not invoked during pytest collection or unit test execution.

The `AlpacaClient` instance is stored in `app.state` and retrieved via a local alias at the top of `create_portfolio`, following FastAPI's recommended pattern for app-level state instead of module-level globals.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Enforce `_PRIOR_PORTFOLIO_SCHEMA` when constructing the prior portfolio DataFrame from the JSON response, keeping both branches consistent.
- Add a boundary test confirming `abs_z == Z_SCORE_STOP_LOSS` (4.0) triggers stop-loss under the exclusive upper bound condition.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adding schema= to pl.DataFrame can raise pl.exceptions.SchemaError or pl.exceptions.InvalidOperationError on type coercion failures, neither of which inherits from ValueError. Broaden the except clause to include pl.exceptions.PolarsError so type mismatches fall back to an empty DataFrame rather than propagating as a 500 from create_portfolio. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Without aggregate_function, Polars behavior on duplicate (ticker, timestamp) pairs is version-dependent: some versions silently take the first value, others raise a DuplicateError. Specifying "last" makes the deduplication strategy explicit and version-stable. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
With only 2 data points, OLS regression yields a trivially perfect fit (zero residual variance), producing z-scores that pass the NaN guard but lack statistical reliability. 30 rows is the minimum for stable OLS residual standard deviation estimates. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…te_portfolio_beta

Root cause: The early return in `classify_regime` checked `len(spy_close) < _MINIMUM_RETURN_COUNT` (i.e., < 2), so execution continued with exactly 2 prices producing 1 return. `np.std([x], ddof=1)` on a single-element array returns NaN, which silently propagated through the volatility comparisons. The function returned the correct fallback by coincidence (NaN comparisons always evaluate False in Python), but relied on implicit NaN semantics rather than an explicit guard.

Fix: Move `returns = np.diff(np.log(spy_close))` before the early return and check `len(returns) < _MINIMUM_RETURN_COUNT` instead, ensuring at least 2 returns are present before computing the standard deviation.

Secondary fix: Remove the now-redundant inner length check on `returns` before calling `np.corrcoef`. After the outer guard, `len(returns) >= 2` is guaranteed, making the `else: autocorrelation = 0.0` branch dead code.

Also added a regression test for the exactly-2-prices edge case and an inline comment on `compute_portfolio_beta` noting its role in test validation and its intent for future portfolio-level beta reporting.

Co-Authored-By: Claude <noreply@anthropic.com>
chrisaddy
left a comment
Review comments on initial statistical arbitrage build.
…peline

Three issues flagged by automated review:

1. consolidation.py: Clip IQR to zero before computing `raw_confidence` to prevent inverted quantiles (`quantile_10 > quantile_90`) from producing `raw_confidence > 1.0` or negative/inf values that corrupt the blended mean. Added a test covering the inverted-quantile edge case.
2. statistical_arbitrage.py: Add `np.isinf(hedge_ratio)` to the degenerate-pair guard. `np.polyfit` returns inf when `log_prices_b` has zero variance; the explicit check makes the defensive intent clear and prevents inf hedge ratios from propagating. Added a test with a flat price series to exercise this path.
3. data_client.py: Increase the default `lookback_days` from 90 to 120 calendar days to widen the buffer above `CORRELATION_WINDOW_DAYS = 60`. The previous default left only ~3-4 trading days of headroom, meaning data gaps could silently drop tickers below the minimum required for pair selection.

Co-Authored-By: Claude <noreply@anthropic.com>
…ering
Two correctness fixes identified in code review:
1. risk_management.py: Added pair_id to both long_positions and short_positions
selects. Without this, the saved portfolio had no way to correlate a long
position with its corresponding short position, breaking pair-level exit logic
where both legs must be closed together when a z-score reverts.
2. consolidation.py: Added sort_by("timestamp") inside the group_by aggregation
expression before tail(). Polars group_by does not guarantee intra-group row
order, so the prior code relied on undefined behavior to pick the most-recent
VOLATILITY_WINDOW_DAYS returns. The explicit sort_by makes ordering guaranteed.
Co-Authored-By: Claude <noreply@anthropic.com>
Pull request overview
Copilot reviewed 15 out of 17 changed files in this pull request and generated no new comments.
…ence logging

Three bugs/gaps addressed from code review:

1. regime.py: Fix silent NaN propagation in `classify_regime`. The guard `len(returns) < _MINIMUM_RETURN_COUNT` (min=2) allowed len==2 through, causing `np.corrcoef` to receive single-element arrays and return NaN. NaN comparisons silently evaluate to False/0.0, dropping the autocorrelation signal. Fix: tighten the guard to `< _MINIMUM_RETURN_COUNT + 1` (requires 3+ returns). `_MINIMUM_RETURN_COUNT` stays at 2 since beta.py uses it correctly for `linregress`. Added a test for the len==2 boundary case and a comment documenting the intentional conservative default (trending/0.0 halves exposure).
2. risk_management.py: Add a zero-sum guard before normalizing `adjusted_weights`. If the optimizer returns all-zero weights, dividing by `sum()` would produce NaN. Falls back to `volatility_parity_weights` in that case.
3. server.py: Log `regime_confidence` alongside `regime_state` to surface the computed value. Added a comment noting the binary `exposure_scale` is intentional for this initial implementation, with confidence reserved for future graduated scaling.

Co-Authored-By: Claude <noreply@anthropic.com>
…y, dead code

Three issues flagged by greptile-apps after the previous commit:

1. beta.py line 20: The guard compared `len(spy_close)` (a price count) against `_MINIMUM_RETURN_COUNT` (a returns count), the same off-by-one already fixed in regime.py. Changed to `_MINIMUM_RETURN_COUNT + 1` so the guard correctly requires at least two prices to produce one return.
2. risk_management.py line 107: Replaced the exact float equality `adjusted_weights.sum() == 0.0` with `np.isclose()` for robustness.
3. risk_management.py lines 40-41: Removed the unreachable `if total == 0.0` guard inside the SLSQP objective. SLSQP enforces sum(w) = target_total throughout optimization and the bounds keep all weights strictly positive, so total can never be zero.

Co-Authored-By: Claude <noreply@anthropic.com>
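The zero-sum guard and the `np.isclose` comparison from the two commits above can be sketched together (hypothetical standalone function; in the real code this logic sits inline in `risk_management.py`):

```python
import numpy as np

def normalize_weights(
    adjusted_weights: np.ndarray, volatility_parity_weights: np.ndarray
) -> np.ndarray:
    """If the optimizer hands back (near-)zero weights, normalizing would
    divide by ~0 and produce NaN; fall back to volatility parity instead."""
    total = adjusted_weights.sum()
    if np.isclose(total, 0.0):  # robust to float round-off, unlike == 0.0
        return volatility_parity_weights
    return adjusted_weights / total

# all-zero optimizer output falls back to the volatility-parity weights
fallback = normalize_weights(np.zeros(3), np.array([0.5, 0.3, 0.2]))
```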
…d regime

beta.py had no check for zero/negative SPY close prices before computing `np.log(spy_close)`, while every ticker already had an explicit `np.any(<= 0)` guard. A corrupt row with `close_price == 0` or negative would silently produce -inf or NaN in `spy_returns`, propagating NaN betas for all tickers and causing SLSQP to fall back to vol-parity weights with a misleading warning.

regime.py had the same asymmetry: `np.diff(np.log(spy_close))` was called without any non-positive check, risking silent NaN propagation into the realized-volatility and autocorrelation calculations.

Fix: extend the existing length guard in beta.py to also reject non-positive SPY prices; add an equivalent early-return guard in regime.py before the log call.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…three

Build initial beta/regime risk management features
Expand portfolio manager rebalancing logic
Pull request overview
Copilot reviewed 29 out of 31 changed files in this pull request and generated 1 comment.
Comments suppressed due to low confidence (1)
applications/datamanager/src/data.rs:151
- The `pair_id` column is not normalized to uppercase, unlike `ticker`, `side`, and `action`, which all have `.str().to_uppercase()` applied (lines 148-150). The test `sample_portfolio_lowercase()` uses `pair_id: "aapl-googl"`, but `test_create_portfolio_dataframe_uppercase_normalization` doesn't verify `pair_id` is uppercased. Since `pair_id` is derived from tickers (e.g. `"AAPL-MSFT"`), this could cause mismatches when querying by `pair_id` if lowercase data is ever written. Consider adding `.with_columns([col("pair_id").str().to_uppercase().alias("pair_id")])` alongside the other normalization steps.
```rust
debug!("Normalizing ticker, side, and action columns to uppercase");
let portfolio_dataframe = portfolio_dataframe
    .lazy()
    .with_columns([col("ticker").str().to_uppercase().alias("ticker")])
    .with_columns([col("side").str().to_uppercase().alias("side")])
    .with_columns([col("action").str().to_uppercase().alias("action")])
    .collect()?;
```
Overview
Changes
Context
I'm separating this into chunks just because the pull request will be pretty big regardless, and I want to be able to apply as much of the bot feedback as I can.
These are the data resources and scripts I used to test it - they lived in `tools/` and `tools/data/` but I don't think they need to be checked into the repository. I liked what they generated though, and it showed everything working.

- `stat_arb_test.py`
- `build_stat_arb_test_data.py`
- `alpaca_shortable_cache.json`
- `stat_arb_test_prices.csv`
- `stat_arb_test_sectors.csv`