
@samhvw8 samhvw8 commented Jun 24, 2025

🚀 Improve Local Provider Connectivity

Problem

Local AI providers (Ollama, LM Studio, etc.) often fail to connect due to CORS restrictions when accessed from the web app, causing frustration for users trying to use local models.

Describe Your Changes

  • Dual Fetch Strategy: Use Tauri's HTTP client for local providers to bypass CORS, regular fetch for remote providers

  • Smart Detection: Auto-detect local providers by hostname (localhost, 127.0.0.1) and known ports (11434 for Ollama, 1234 for LM Studio)

  • Flexible Auth: Make API keys optional for local providers that don't require authentication

  • Better Errors: Provide specific error messages for local connection issues

  • Add @tauri-apps/plugin-http dependency

  • Enhance fetchModelsFromProvider() with local provider detection

  • Add comprehensive test suite covering all scenarios

  • Improve error handling and user feedback
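The detection half of the strategy above can be sketched as a small helper. This is an illustrative sketch based on the PR description, not the merged code; the function and constant names are made up here:

```typescript
// Hostnames and ports treated as "local", per the PR description.
// Names here (isLocalProvider, LOCAL_HOSTNAMES, KNOWN_LOCAL_PORTS) are illustrative.
const LOCAL_HOSTNAMES = new Set(['localhost', '127.0.0.1'])
const KNOWN_LOCAL_PORTS = new Set(['11434', '1234']) // Ollama, LM Studio

function isLocalProvider(baseUrl: string): boolean {
  try {
    const url = new URL(baseUrl)
    return LOCAL_HOSTNAMES.has(url.hostname) || KNOWN_LOCAL_PORTS.has(url.port)
  } catch {
    return false // unparseable base URL: fall back to the regular fetch path
  }
}
```

A caller would then pick Tauri's HTTP client when `isLocalProvider(provider.base_url)` is true and the default `fetch` otherwise.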

Testing

  • ✅ Local providers (Ollama, LM Studio) bypass CORS
  • ✅ API key handling for both scenarios
  • ✅ Error messages guide troubleshooting

Fixes Issues

Self Checklist

  • Added relevant comments, esp in complex areas
  • Updated docs (for bug fixes / features)
  • Created issues for follow-up changes or refactoring needed

Important

Improve local provider connectivity by bypassing CORS using Tauri's HTTP client, with enhanced detection and error handling in completion.ts and providers.ts.

  • Behavior:
    • Use Tauri's HTTP client for local providers to bypass CORS, regular fetch for remote providers in completion.ts and providers.ts.
    • Auto-detect local providers by hostname and known ports in providers.ts.
    • Make API keys optional for local providers in providers.ts.
    • Provide specific error messages for local connection issues in providers.ts.
  • Dependencies:
    • Add @tauri-apps/plugin-http to package.json.
  • Functions:
    • Enhance fetchModelsFromProvider() in providers.ts with local provider detection and error handling.
    • Modify sendCompletion() in completion.ts to use Tauri's fetch for openai-compatible providers.
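The `sendCompletion()` change can be sketched roughly as follows. This is a hypothetical reconstruction from the summary above: `fetchTauri` stands in for the fetch exported by `@tauri-apps/plugin-http` and is stubbed here so the snippet stays self-contained, and `buildCompletionConfig` is an invented name:

```typescript
// Stub standing in for the Tauri plugin's fetch. In the real app this would be:
//   import { fetch as fetchTauri } from '@tauri-apps/plugin-http'
const fetchTauri: typeof fetch = () =>
  Promise.reject(new Error('stub: Tauri fetch only works inside a Tauri app'))

// Hypothetical config builder mirroring the described sendCompletion() change.
function buildCompletionConfig(providerName: string, baseURL: string, apiKey?: string) {
  return {
    baseURL,
    apiKey: apiKey ?? '', // API key is optional for local providers
    // Spread in Tauri's CORS-free fetch only for openai-compatible
    // providers; other providers keep the default browser fetch.
    ...(providerName === 'openai-compatible' ? { fetch: fetchTauri } : {}),
  }
}
```

The conditional spread keeps the `fetch` key entirely absent from the config for remote providers rather than setting it to `undefined`.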

This description was created by Ellipsis for bf0b114. You can customize this summary. It will automatically update as commits are pushed.

@samhvw8 samhvw8 moved this to In Progress in Jan Jun 24, 2025
@samhvw8 samhvw8 moved this from In Progress to QA in Jan Jun 24, 2025
@samhvw8 samhvw8 added this to the v0.6.2 milestone Jun 24, 2025
@samhvw8 samhvw8 moved this from QA to Needs Review in Jan Jun 24, 2025

@ellipsis-dev ellipsis-dev bot left a comment


Caution

Changes requested ❌

Reviewed everything up to 69208b0 in 1 minute and 48 seconds.
  • Reviewed 282 lines of code in 3 files
  • Skipped 0 files when reviewing.
  • Skipped posting 3 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. web-app/src/services/providers.ts:269
  • Draft comment:
    The catch block checks if error.message includes 'fetch' to decide whether to throw a custom error message. This may be brittle—consider using a more robust error identification (e.g. custom error types) to ensure proper handling of connection errors for local providers.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50%. While the suggestion about using custom error types is technically valid, this is a relatively simple error handling case where the string check is actually quite reasonable. The error message is specific to local provider connection issues, and the current implementation is clear and functional. Creating custom error types would add complexity without significant benefit in this case. The comment makes a valid point about error handling best practices. String-based error checking can indeed be fragile if the underlying error messages change. However, in this specific case, the error handling is simple, focused, and the string check for 'fetch' is a reasonable way to detect network connectivity issues. The added complexity of custom error types isn't justified by the minimal benefit. The comment should be deleted. While technically correct, the suggested change would add unnecessary complexity for this simple error handling case.
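For reference, the custom-error-type alternative the draft comment alludes to could look like the sketch below. The class name and message are hypothetical, not part of this PR:

```typescript
// Hypothetical error type that would replace matching on error.message.
class LocalProviderConnectionError extends Error {
  baseUrl: string
  constructor(baseUrl: string) {
    super(`Could not reach local provider at ${baseUrl}. Is the server running?`)
    this.name = 'LocalProviderConnectionError'
    this.baseUrl = baseUrl
  }
}

// A catch block can then test the error's type instead of its message text.
function describeFetchError(err: unknown): string {
  return err instanceof LocalProviderConnectionError
    ? err.message
    : 'Unexpected error while fetching models'
}
```
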
2. web-app/src/services/providers.ts:227
  • Draft comment:
    Appending '/models' to provider.base_url may produce duplicate slashes if base_url ends with a slash. Consider normalizing the URL or documenting the expected format for base_url.
  • Reason this comment was not posted:
    Comment was on unchanged code.
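One way to address the duplicate-slash concern from the draft comment above is to normalize the base URL before appending the path; the helper name here is illustrative:

```typescript
// Trim any trailing slashes from base_url before appending '/models',
// so both 'http://host/v1' and 'http://host/v1/' yield the same URL.
function joinModelsUrl(baseUrl: string): string {
  return `${baseUrl.replace(/\/+$/, '')}/models`
}
```
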
3. web-app/src/services/providers.ts:177
  • Draft comment:
    The check in shouldBypassPreflightCheck includes hostname '0.0.0.0' as local. Confirm that this behavior is intentional for your local provider setups.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold 50%. None.

Workflow ID: wflow_OtYJilC9l0id61qm

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

@samhvw8 samhvw8 changed the base branch from dev to release/v0.6.2 June 24, 2025 05:04

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed af59f5c in 1 minute and 31 seconds.
  • Reviewed 37 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
1. web-app/src/lib/completion.ts:136
  • Draft comment:
    The fetch property is now always set to Tauri’s fetch. The PR description mentioned a dual strategy (using Tauri’s fetch only for local providers). Consider conditionally applying fetchTauri only when the provider’s base URL is local (e.g., localhost or 127.0.0.1) so that remote providers continue to use the default fetch.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50%. The comment appears to be based on a discrepancy between the PR description and implementation. However, using fetchTauri universally to avoid CORS issues seems like a valid approach - there's no obvious downside to using it for all requests. The comment is speculative about whether this is actually a problem. Without seeing the PR description or understanding the full architecture, we can't be certain this is actually an issue. I may be overlooking potential issues with using fetchTauri for all requests. There could be valid reasons why the PR description specified a dual strategy. While there may be reasons for a dual strategy, the comment is speculative and asks for verification against the PR description rather than pointing out a clear technical problem. The current implementation appears functional. The comment should be deleted as it's speculative, asks for verification, and doesn't identify a clear technical issue with the current implementation.
2. web-app/src/lib/completion.ts:23
  • Draft comment:
    The ExtendedConfigOptions type is a good addition for extending TokenJS config. Consider adding a comment to explain its purpose and ensure that 'typeof fetch' correctly models the Tauri fetch signature, if it differs from the standard one.
  • Reason this comment was not posted:
    Comment looked like it was already resolved.

Workflow ID: wflow_GvCV6GPgIYEIK0hy



@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed bf0b114 in 1 minute and 25 seconds.
  • Reviewed 15 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 1 draft comments. View those below.
1. web-app/src/lib/completion.ts:138
  • Draft comment:
    The use of the spread operator to conditionally include Tauri’s fetch is clever, but the condition (providerName === 'openai-compatible') may not explicitly reflect local provider detection (e.g. for LM Studio or Ollama). Ensure that local providers are reliably identified—perhaps by directly checking the baseURL (e.g. '127.0.0.1' or 'localhost') or via a dedicated flag rather than relying on providerName. This would better align with the PR’s intent of bypassing CORS for local providers.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50%. The comment makes a reasonable point about local providers potentially needing CORS bypass. However, the code change seems intentional to specifically limit Tauri fetch to openai-compatible providers. Without understanding the full architecture and provider system, we can't be certain that all local providers need CORS bypass or that checking for localhost is the right approach. The current change may have good reasons for specifically targeting openai-compatible providers. I may be overlooking potential CORS issues that could affect local providers. The suggestion to check base_url could be more robust than checking provider type. While the suggestion has merit, we don't have enough context about the provider system architecture to know if this change would be correct. The current change appears intentional and specific. The comment should be removed as it makes assumptions about the architecture without strong evidence that the suggested change would be correct.

Workflow ID: wflow_O2EyiTcEgnoS4LIi


@samhvw8
Copy link
Contributor Author

samhvw8 commented Jun 24, 2025

@louis-menlo it looks good now 💪

samhvw8 added 3 commits June 25, 2025 13:36
- Add @tauri-apps/plugin-http dependency
- Implement dual fetch strategy for local vs remote providers
- Auto-detect local providers (localhost, Ollama:11434, LM Studio:1234)
- Make API key optional for local providers
- Add comprehensive test coverage for provider fetching

refactor: simplify fetchModelsFromProvider by removing preflight check logic
@samhvw8 samhvw8 force-pushed the fix/preflight-ollama-lmstudio branch from bf0b114 to 644e95b Compare June 25, 2025 06:47
@samhvw8 samhvw8 requested a review from urmauur June 25, 2025 07:35

@louis-menlo louis-menlo left a comment


LGTM

@samhvw8 samhvw8 merged commit 0890de1 into release/v0.6.2 Jun 25, 2025
18 checks passed
@github-project-automation github-project-automation bot moved this from Needs Review to QA in Jan Jun 25, 2025
@samhvw8 samhvw8 deleted the fix/preflight-ollama-lmstudio branch June 25, 2025 08:42