🚀 feat: o1 #4019

Merged: 7 commits merged into main from feat/o1 on Sep 12, 2024

Conversation

danny-avila (Owner) commented Sep 12, 2024

Summary

  • Added o1 models to the default OpenAI models list, ensuring they are available for selection.
  • Implemented special handling for o1 models with custom instructions, adjusting how prompts are constructed (see the sketch after this list).
  • Updated token cost calculations to account for the complexity of o1 models, including separate tracking for reasoning tokens.
  • Refactored the BaseClient to use new input/output token keys, improving flexibility across different model types.
  • Updated the MessageEndpointIcon component to differentiate the OpenAI icon color for o1 models, providing visual distinction.
  • Updated localization to include new error messages related to o1 models and system message support.
  • Adjusted the getResponseSender function to return 'o1' as the default sender string for o1 models.
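
For readers unfamiliar with the constraint behind the special handling: at launch, the o1 API did not accept `system` role messages, so custom instructions cannot be sent the usual way. Below is a minimal, hedged sketch of one way such handling could look; `buildO1Messages` and `getO1SenderLabel` are hypothetical names, not LibreChat's actual functions.

```js
// Hypothetical sketch (not LibreChat's actual code): fold custom instructions
// into the conversation, since o1 models rejected `system` messages at launch.
function buildO1Messages(instructions, messages) {
  if (!instructions) {
    return messages;
  }
  const [first, ...rest] = messages;
  if (first && first.role === 'user') {
    // Prepend the instructions to the first user message.
    return [{ role: 'user', content: `${instructions}\n\n${first.content}` }, ...rest];
  }
  // Otherwise carry the instructions as their own user turn.
  return [{ role: 'user', content: instructions }, ...messages];
}

// Hypothetical counterpart to the sender change: o1 responses are labeled 'o1'.
const getO1SenderLabel = (model, fallback = 'GPT') =>
  /\bo1\b/.test(model) ? 'o1' : fallback;
```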

Other Changes

  • Added a new error type for models that don't support system messages, improving error handling and user feedback.
  • Enhanced the AnthropicClient to use the new input/output token keys and added prompt caching support for claude-3-opus (token accounting is sketched below).
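
To make the token-key refactor and o1 cost tracking concrete, here is a hedged sketch. The OpenAI usage payload for o1 models reports reasoning tokens under `completion_tokens_details`; the normalized key names and the rates shown are assumptions for illustration, not LibreChat's exact values.

```js
// Hypothetical sketch of normalizing usage onto input/output token keys and
// tracking o1 reasoning tokens separately. LibreChat's field names may differ.
function normalizeUsage(usage = {}) {
  const details = usage.completion_tokens_details || {};
  return {
    input_tokens: usage.prompt_tokens ?? 0,
    // Reasoning tokens are billed as output tokens even though they never
    // appear in the visible completion text.
    output_tokens: usage.completion_tokens ?? 0,
    reasoning_tokens: details.reasoning_tokens ?? 0,
  };
}

// Illustrative per-million-token rates (o1-preview launch pricing), not
// LibreChat's configured values.
const RATES = { input: 15, output: 60 };

function estimateCostUSD({ input_tokens, output_tokens }) {
  return (input_tokens * RATES.input + output_tokens * RATES.output) / 1e6;
}
```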

Testing

  1. Verify that o1 models appear in the model selection list.
  2. Test conversations with o1 models, ensuring that custom instructions are handled correctly.
  3. Check that token usage is accurately calculated and displayed for o1 models.
  4. Confirm that the MessageEndpointIcon displays the correct color for o1 models.
  5. Test error scenarios, particularly when attempting to use system messages with unsupported models (see the error-handling sketch after this list).
  6. Verify that the correct sender string is displayed for o1 model responses.
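
For the error scenarios above, the commit history below names the new error type `no_system_messages` and notes that the error type doubles as the localization key. A minimal, hypothetical sketch of that pattern (the lookup table, message text, and helper name are illustrative only):

```js
// Hypothetical sketch: the error type string doubles as the localization key.
// `no_system_messages` comes from this PR's commit messages; everything else
// here is illustrative.
const localizedErrors = {
  no_system_messages:
    'This model does not support system messages; custom instructions are handled differently instead.',
};

function localizeError(errorType) {
  // Fall back to the raw type so unknown errors are still surfaced.
  return localizedErrors[errorType] ?? errorType;
}

console.log(localizeError('no_system_messages'));
```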

danny-avila marked this pull request as draft Sep 12, 2024 20:53
danny-avila linked an issue (1 task) Sep 12, 2024 that may be closed by this pull request
danny-avila marked this pull request as ready for review Sep 12, 2024 22:15
danny-avila merged commit 45b4283 into main Sep 12, 2024 (4 checks passed)
danny-avila deleted the feat/o1 branch Sep 12, 2024 22:15

kukoboris (Contributor) commented:

@danny-avila I have updated LibreChat (deploy-compose), but there are no o1 models in the model selection list.

TyraVex commented Sep 13, 2024

> @danny-avila I have updated LibreChat (deploy-compose), but there are no o1 models in the model selection list.

You need to be API tier 5:
https://platform.openai.com/settings/organization/limits

And even if you are, see:
https://community.openai.com/t/were-tier-5-usage-but-dont-have-access-to-o1-models/937904/4

> OpenAI confirmed that we should have access this afternoon, Pacific time.

kneelesh48 (Contributor) commented:

Do these changes work with o1-preview on OpenRouter?

fuegovic (Collaborator) commented:

> Do these changes work with o1-preview on OpenRouter?

OpenRouter o1 also works!
[screenshot: o1 response via OpenRouter]

BertKiv pushed a commit to BertKiv/LibreChat that referenced this pull request Dec 10, 2024
* feat: o1 default response sender string

* feat: add o1 models to default openai models list, add `no_system_messages` error type; refactor: use error type as localization key

* refactor(MessageEndpointIcon): differentiate openAI icon model color for o1 models

* refactor(AnthropicClient): use new input/output tokens keys; add prompt caching for claude-3-opus

* refactor(BaseClient): to use new input/output tokens keys; update typedefs

* feat: initial o1 model handling, including token cost complexity

* EXPERIMENTAL: special handling for o1 model with custom instructions

Successfully merging this pull request may close: Enhancement: support o1-preview, o1-mini