
Conversation

@sawka (Member) commented Aug 27, 2025

No description provided.

@coderabbitai bot (Contributor) commented Aug 27, 2025

Walkthrough

Updates replace the default AI model from "gpt-4o-mini" to "gpt-5-mini" and increase ai:maxtokens from 2048 to 4000 across configs and docs. Changes include:

  • pkg/wconfig/defaultconfig/settings.json: ai:model and ai:maxtokens updated.
  • pkg/wconfig/defaultconfig/presets/ai.json: display name, ai:model, and ai:maxtokens updated.
  • docs/docs/config.mdx: config snippets updated to reflect new defaults.
  • frontend/app/view/waveai/waveai.tsx: UI label text updated to show "Using Wave's AI Proxy (gpt-5-mini)".
  • aiprompts/config-system.md: notes the default model change.

No public APIs or logic paths altered.
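Concretely, the two changed defaults in pkg/wconfig/defaultconfig/settings.json look like this (fragment only — the real file's surrounding keys are omitted):

```json
{
  "ai:model": "gpt-5-mini",
  "ai:maxtokens": 4000
}
```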

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes


@coderabbitai bot left a comment
Actionable comments posted: 0

🧹 Nitpick comments (3)
docs/docs/config.mdx (1)

103-104: Avoid hard-coding version in the “current default configuration (v0.x.y)” header.
The snippet now reflects new defaults; consider dropping or auto-injecting the version to avoid future staleness.

frontend/app/view/waveai/waveai.tsx (1)

226-226: Make the cloud-path label reflect the effective model dynamically.
Hardcoding “gpt-5-mini” risks drift when users override ai:model.

Apply this minimal change:

-                            title: "Using Wave's AI Proxy (gpt-5-mini)",
+                            title: `Using Wave's AI Proxy (${aiOpts.model ?? "default"})`,
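As a self-contained sketch of that suggestion (the AiOpts shape and helper name here are hypothetical, not taken from waveai.tsx):

```typescript
// Hypothetical sketch: derive the cloud-path label from the effective model
// instead of hardcoding "gpt-5-mini". The AiOpts type is assumed for
// illustration; the real options object in waveai.tsx may differ.
type AiOpts = { model?: string };

function cloudLabel(aiOpts: AiOpts): string {
    // Fall back to "default" when the user has not overridden ai:model.
    return `Using Wave's AI Proxy (${aiOpts.model ?? "default"})`;
}
```

This keeps the label accurate even when users override ai:model in their settings.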
pkg/wconfig/defaultconfig/presets/ai.json (1)

8-8: Consider decoupling display:name from the model string.
If the preset’s model changes later, the name can get stale. Either:

  • Keep name generic (e.g., “Wave Proxy”) and show the effective model in UI, or
  • Compute “name (model)” in UI for all presets (not just the ai:* wildcard case).

Example (JSON tweak option):

-        "display:name": "Wave Proxy - gpt-5-mini",
+        "display:name": "Wave Proxy",

And rely on the UI header/preset list to append (${effectiveModel}).
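A sketch of the second option — computing "name (model)" in the UI — under assumed names and shapes (none of these identifiers come from the codebase):

```typescript
// Hypothetical helper: append the effective model to a preset's generic
// display name, so "display:name" never goes stale when the model changes.
type AiPreset = {
    "display:name": string;
    "ai:model"?: string;
};

function presetLabel(preset: AiPreset): string {
    const model = preset["ai:model"];
    // Presets without an explicit model keep their plain display name.
    return model ? `${preset["display:name"]} (${model})` : preset["display:name"];
}
```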

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 5b0daaf and 7fe8519.

📒 Files selected for processing (5)
  • aiprompts/config-system.md (6 hunks)
  • docs/docs/config.mdx (1 hunks)
  • frontend/app/view/waveai/waveai.tsx (1 hunks)
  • pkg/wconfig/defaultconfig/presets/ai.json (1 hunks)
  • pkg/wconfig/defaultconfig/settings.json (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Analyze (go)
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Build for TestDriver.ai
  • GitHub Check: merge-gatekeeper
🔇 Additional comments (5)
pkg/wconfig/defaultconfig/settings.json (2)

3-4: Defaults bump to gpt-5-mini + 4k tokens looks good.
Values align with the PR intent; no logic impacts here.


3-4: Audit stale AI defaults across the repository

Preliminary execution of the provided search script produced no hits for the old defaults (gpt-4o-mini or "ai:maxtokens": 2048). To ensure there are truly no lingering references, please rerun and manually verify the following:

#!/bin/bash
set -euo pipefail

# 1. Check for any old-model or old-token defaults.
#    rg exits non-zero when nothing matches; since "no matches" is the desired
#    outcome here, suppress that exit code so `set -e` does not abort the script.
rg -nP --hidden \
   -g '!**/node_modules/**' -g '!**/dist/**' -g '!**/build/**' \
   'gpt-4o-mini|"ai:maxtokens"\s*:\s*2048' \
   || echo "no stale defaults found"

# 2. Enumerate all explicit ai:model or ai:maxtokens settings for a spot-check.
rg -nP --hidden \
   -g '!**/node_modules/**' -g '!**/dist/**' -g '!**/build/**' \
   '"ai:model"\s*:\s*"[^"]+"|"ai:maxtokens"\s*:\s*\d+' \
   || echo "no explicit ai:model/ai:maxtokens settings found"

Verify that:

  • No occurrences of gpt-4o-mini or ai:maxtokens: 2048 remain.
  • All ai:model defaults are "gpt-5-mini".
  • All ai:maxtokens defaults are 4000.
docs/docs/config.mdx (1)

103-104: Docs snippet updated to new defaults — good.
Matches settings.json and presets.

aiprompts/config-system.md (1)

205-206: Reference to new default model is correct.
Step 3 example shows gpt-5-mini — consistent with defaults.

pkg/wconfig/defaultconfig/presets/ai.json (1)

8-18: Preset aligns with new defaults.
display:name, ai:model, and ai:maxtokens updated consistently.

@sawka merged commit 5c28874 into main Aug 28, 2025
9 of 10 checks passed
@sawka deleted the sawka/gpt-5-proxy branch August 28, 2025 21:28