switch to using gpt-5-mini for free proxy #2297
Conversation
Walkthrough: Replaces the default AI model "gpt-4o-mini" with "gpt-5-mini" and raises ai:maxtokens from 2048 to 4000 across configs and docs.
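Concretely, per the walkthrough, the updated defaults in pkg/wconfig/defaultconfig/settings.json would look roughly like this (a sketch based on the key names cited in the review; the file's other keys are omitted and its exact contents may differ):

```json
{
    "ai:model": "gpt-5-mini",
    "ai:maxtokens": 4000
}
```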
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Actionable comments posted: 0
🧹 Nitpick comments (3)
docs/docs/config.mdx (1)

103-104: Avoid hard-coding the version in the "current default configuration (v0.x.y)" header.
The snippet now reflects the new defaults; consider dropping or auto-injecting the version to avoid future staleness.

frontend/app/view/waveai/waveai.tsx (1)

226-226: Make the cloud-path label reflect the effective model dynamically.
Hardcoding "gpt-5-mini" risks drift when users override ai:model. Apply this minimal change:

```diff
- title: "Using Wave's AI Proxy (gpt-5-mini)",
+ title: `Using Wave's AI Proxy (${aiOpts.model ?? "default"})`,
```

pkg/wconfig/defaultconfig/presets/ai.json (1)

8-8: Consider decoupling display:name from the model string.
If the preset's model changes later, the name can get stale. Either:
- Keep the name generic (e.g., "Wave Proxy") and show the effective model in the UI, or
- Compute "name (model)" in the UI for all presets (not just the ai:* wildcard case).

Example (JSON tweak option):

```diff
- "display:name": "Wave Proxy - gpt-5-mini",
+ "display:name": "Wave Proxy",
```

And rely on the UI header/preset list to append (${effectiveModel}).
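The "compute name (model) in the UI" option above could be sketched as follows. This is a hypothetical illustration, not Wave's actual API: `PresetLike` and `formatPresetLabel` are invented names, and the real preset type and rendering code will differ.

```typescript
// Hypothetical sketch: derive a preset label that always shows the effective
// model, so display names cannot go stale when the model changes.
interface PresetLike {
    "display:name": string;
    "ai:model"?: string;
}

function formatPresetLabel(preset: PresetLike, effectiveModel?: string): string {
    const base = preset["display:name"];
    // Prefer an explicit override (e.g. the user's ai:model) over the preset's own model.
    const model = effectiveModel ?? preset["ai:model"];
    // Append the model only when it is known and not already baked into the name.
    return model && !base.includes(model) ? `${base} (${model})` : base;
}

console.log(formatPresetLabel({ "display:name": "Wave Proxy", "ai:model": "gpt-5-mini" }));
// "Wave Proxy (gpt-5-mini)"
```

With this approach the JSON can keep a generic "Wave Proxy" name, and a name that already embeds the model (the current "Wave Proxy - gpt-5-mini") passes through unchanged rather than being doubled.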
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (5)
- aiprompts/config-system.md (6 hunks)
- docs/docs/config.mdx (1 hunks)
- frontend/app/view/waveai/waveai.tsx (1 hunks)
- pkg/wconfig/defaultconfig/presets/ai.json (1 hunks)
- pkg/wconfig/defaultconfig/settings.json (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Analyze (go)
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Build for TestDriver.ai
- GitHub Check: merge-gatekeeper
🔇 Additional comments (5)
pkg/wconfig/defaultconfig/settings.json (2)

3-4: Defaults bump to gpt-5-mini + 4k tokens looks good.
Values align with the PR intent; no logic impacts here.

3-4: Audit stale AI defaults across the repository.
A preliminary run of the search script below produced no hits for the old defaults (gpt-4o-mini or "ai:maxtokens": 2048). To ensure there are truly no lingering references, please rerun and manually verify:

```bash
#!/bin/bash
set -euo pipefail

# 1. Check for any old-model or old-token defaults
rg -nP --hidden \
  -g '!**/node_modules/**' -g '!**/dist/**' -g '!**/build/**' \
  'gpt-4o-mini|"\s*ai:maxtokens\s*"\s*:\s*2048'

# 2. Enumerate all explicit ai:model or ai:maxtokens settings for spot-check
rg -nP --hidden \
  -g '!**/node_modules/**' -g '!**/dist/**' -g '!**/build/**' \
  '"ai:model"\s*:\s*"[^"]+"|"\s*ai:maxtokens\s*"\s*:\s*\d+'
```

Verify that:
- No occurrences of gpt-4o-mini or "ai:maxtokens": 2048 remain.
- All ai:model defaults are "gpt-5-mini".
- All ai:maxtokens defaults are 4000.

docs/docs/config.mdx (1)

103-104: Docs snippet updated to the new defaults; looks good.
Matches settings.json and the presets.

aiprompts/config-system.md (1)

205-206: Reference to the new default model is correct.
The Step 3 example shows gpt-5-mini, consistent with the defaults.

pkg/wconfig/defaultconfig/presets/ai.json (1)

8-18: Preset aligns with the new defaults.
display:name, ai:model, and ai:maxtokens are updated consistently.
No description provided.