
Conversation


@urmauur urmauur commented Jun 18, 2025

Describe Your Changes

This pull request makes a small adjustment to the web-app/src/services/providers.ts file to conditionally include the capabilities property based on the environment. Additionally, it introduces the isProd utility for environment checks.

Changes related to environment-based logic: import the isProd utility from @/lib/version and include the capabilities property in getProviders only when !isProd.

Fixes Issues

  • Closes #
  • Closes #

Self Checklist

  • Added relevant comments, esp in complex areas
  • Updated docs (for bug fixes / features)
  • Created issues for follow-up changes or refactoring needed

Important

Add environment-based logic in providers.ts to conditionally include capabilities in getProviders function using isProd utility.

  • Environment-based Logic:
    • Import isProd utility from @/lib/version in providers.ts.
    • Update getProviders function in providers.ts to include capabilities only when !isProd.
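A minimal sketch of this pattern follows. Only isProd, @/lib/version, getProviders, and providers.ts come from this PR; the model shape and data below are illustrative placeholders, not the web app's actual types.

```ts
import { isProd } from '@/lib/version'

// Illustrative shape only — the real built-in model type lives elsewhere in the web app.
interface BuiltInModel {
  id: string
  capabilities: string[]
}

// Placeholder data for the sketch; the actual built-in model list is defined in the codebase.
const builtInModels: BuiltInModel[] = [
  { id: 'example-built-in-model', capabilities: ['completion'] },
]

export const getProviders = () =>
  builtInModels.map((model) => ({
    id: model.id,
    // Spread capabilities only in non-production builds, so production users
    // never see (or edit) built-in model capabilities. Spreading `false` is a no-op.
    ...(!isProd && { capabilities: model.capabilities }),
  }))
```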

This description was created by Ellipsis for 88e823b. You can customize this summary. It will automatically update as commits are pushed.

@urmauur urmauur added this to the v0.6.0 milestone Jun 18, 2025
@urmauur urmauur requested a review from louis-menlo June 18, 2025 12:56
@urmauur urmauur self-assigned this Jun 18, 2025

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to 88e823b in 46 seconds.
  • Reviewed 21 lines of code in 1 file
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. web-app/src/services/providers.ts:16
  • Draft comment:
    New import of isProd has been introduced for environment checks. Verify that '@/lib/version' reliably distinguishes production vs. non-production environments.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None
2. web-app/src/services/providers.ts:69
  • Draft comment:
    Conditional spreading of 'capabilities' looks correct. Ensure that omitting 'capabilities' in production aligns with downstream expectations.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None

Workflow ID: wflow_w6YfI9gGgUL9rkca

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
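Regarding the first draft comment above: the PR gates capabilities on isProd from @/lib/version, whose implementation is not shown in this diff. A hypothetical sketch of such a flag, assuming a Vite-style build environment, could be:

```ts
// Hypothetical sketch only — the real '@/lib/version' may derive this from a
// build-type constant or app version string rather than the bundler flag used here.
export const isProd: boolean = import.meta.env.PROD
```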

@urmauur urmauur merged commit c6cd37d into release/v0.6.0 Jun 18, 2025
20 checks passed
@urmauur urmauur deleted the fix/builtin-model-capabilities branch June 18, 2025 13:28
@github-project-automation github-project-automation bot moved this to QA in Jan Jun 18, 2025
samhvw8 pushed a commit that referenced this pull request Jun 19, 2025
louis-menlo added a commit that referenced this pull request Jun 20, 2025
* chore: enable shortcut zoom (#5261)

* chore: enable shortcut zoom

* chore: update shortcut setting

* fix: thinking block (#5263)

* Merge pull request #5262 from menloresearch/chore/sync-new-hub-data

chore: sync new hub data

* ✨enhancement: model run improvement (#5268)

* fix: mcp tool error handling

* fix: error message

* fix: trigger download from recommend model

* fix: can't scroll hub

* fix: show progress

* ✨enhancement: prompt users to increase context size

* ✨enhancement: rearrange action buttons for a better UX

* 🔧chore: clean up logic

---------

Co-authored-by: Faisal Amir <[email protected]>

* fix: glitch download from onboarding (#5269)

* ✨enhancement: Model sources should not be hard coded from frontend (#5270)

* 🐛fix: default onboarding model should use recommended quantizations (#5273)

* 🐛fix: default onboarding model should use recommended quantizations

* ✨enhancement: show context shift option in provider settings

* 🔧chore: wording

* 🔧 config: add to gitignore

* 🐛fix: Jan-nano repo name changed (#5274)

* 🚧 wip: disable showSpeedToken in ChatInput

* 🐛 fix: commented out the wrong import

* fix: masking value MCP env field (#5276)

* ✨ feat: add token speed to each message that persists

* ♻️ refactor: to follow prettier convention

* 🐛 fix: exclude deleted field

* 🧹 clean: all the missed console.log

* ✨enhancement: out of context troubleshooting (#5275)

* ✨enhancement: out of context troubleshooting

* 🔧refactor: clean up

* ✨enhancement: add setting chat width container (#5289)

* ✨enhancement: add setting conversation width

* ✨enhancement: clean up log and improve accessibility

* ✨enhancement: move const beta version

* 🐛fix: optional additional_information gpu (#5291)

* 🐛fix: showing release notes for beta and prod (#5292)

* 🐛fix: showing release notes for beta and prod

* ♻️refactor: make an utils env

* ♻️refactor: hide MCP for production

* ♻️refactor: simplify the boolean expression fetch release note

* 🐛fix: typo in build type check (#5297)

* 🐛fix: remove onboarding local model and hide the edit capabilities model (#5301)

* 🐛fix: remove onboarding local model and hide the edit capabilities model

* ♻️refactor: conditional search params setup screen

* 🐛fix: hide token speed when assistant params stream false (#5302)

* 🐛fix: glitch padding speed token (#5307)

* 🐛fix: immediately show download progress (#5308)

* 🐛fix: safely convert values to numbers and handle NaN cases (#5309)

* chore: correct binary name for stable version (#5303) (#5311)

Co-authored-by: hiento09 <[email protected]>

* 🐛fix: llama.cpp default NGL setting does not offload all layers to GPU (#5310)

* 🐛fix: llama.cpp default NGL setting does not offload all layers to GPU

* chore: cover more cases

* chore: clean up

* fix: should not show GPU section on Mac

* 🐛fix: update default extension settings (#5315)

* fix: update default extension settings

* chore: hide language setting on Prod

* 🐛fix: allow script posthog (#5316)

* Sync 0.5.18 to 0.6.0 (#5320)

* chore: correct binary name for stable version (#5303)

* ci: enable devtool on prod build (#5317)

* ci: enable devtool on prod build

---------

Co-authored-by: hiento09 <[email protected]>
Co-authored-by: Nguyen Ngoc Minh <[email protected]>

* fix: glitch model download issue (#5322)

* 🐛 fix(updater): terminate sidecar processes before update to avoid file access errors (#5325)

* 🐛 fix: disable sorting for threads in SortableItem and clean up thread order handling (#5326)

* improved wording in UI elements (#5323)

* fix: sorted-thread-not-stable (#5336)

* 🐛fix: update wording desc vulkan (#5338)

* 🐛fix: update wording desc vulkan

* ✨enhancement: update copy

* 🐛fix: handle NaN value tokenspeed (#5339)

* 🐛 fix: window path problem

* feat(server): filter /models endpoint to show only downloaded models (#5343)

- Add filtering logic to proxy server for GET /models requests
- Keep only models with status "downloaded" in response
- Remove Content-Length header to prevent mismatch after filtering
- Support both ListModelsResponseDto and direct array formats
- Add comprehensive tests for filtering functionality
- Fix Content-Length header conflict causing empty responses

Fixes issue where all models were returned regardless of download status.

* 🐛fix: render streaming token speed based on thread ID & assistant metadata (#5346)

* fix(server): add gzip decompression support for /models endpoint filtering (#5349)

- Add gzip detection using magic number check (0x1f 0x8b)
- Implement gzip decompression before JSON parsing
- Add gzip re-compression for filtered responses
- Fix "invalid utf-8 sequence" error when upstream returns gzipped content
- Maintain Content-Encoding consistency for compressed responses
- Add comprehensive gzip handling with flate2 library

Resolves issue where filtering failed on gzip-compressed model responses.

* fix(proxy): implement true HTTP streaming for chat completions API (#5350)

* fix: glitch toggle gpus (#5353)

* fix: glitch toggle gpu

* fix: Using the GPU's array index as a key for gpuLoading

* enhancement: added try-finally

* fix: built in models capabilities (#5354)

* 🐛fix: setting provider hide model capabilities (#5355)

* 🐛fix: setting provider hide model capabilities

* 🐛fix: hide tools icon on dropdown model providers

* fix: stop server on app close or reload

* ✨enhancement: reset heading class

---------

Co-authored-by: Louis <[email protected]>

* fix: stop api server on page unload (#5356)

* fix: stop api server on page unload

* fix: check api server status on reload

* refactor: api server state

* fix: should not pop the guard

* 🐛fix: avoid render html title thread (#5375)

* 🐛fix: avoid render html title thread

* chore: minor bump - tokenjs for manual adding models

---------

Co-authored-by: Louis <[email protected]>

---------

Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: LazyYuuki <[email protected]>
Co-authored-by: Bui Quang Huy <[email protected]>
Co-authored-by: hiento09 <[email protected]>
Co-authored-by: Nguyen Ngoc Minh <[email protected]>
Co-authored-by: Sam Hoang Van <[email protected]>
Co-authored-by: Ramon Perez <[email protected]>
@david-menloai david-menloai moved this from QA to Done in Jan Jun 20, 2025