Refactor Fireworks and add ChatFireworks #3
Conversation
You may need to update docs/extras/integrations/llms/fireworks.ipynb as well.
return default_class(content=content)

class ChatFireworks(BaseChatModel):
Should we add validate_environment?
I am thinking of validating parameters on our API side. Since I wrap all parameters in model_kwargs (except model), the error should pop up from our API.
Actually we need it! I added validate_environment to check whether there is an api_key environment variable.
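For reference, a minimal sketch of what such a check could look like, following the common LangChain pattern of a pydantic root validator plus get_from_dict_or_env. The class name, the fireworks_api_key field, and the FIREWORKS_API_KEY variable name are assumptions for illustration, not the code from this PR:

```python
from typing import Dict

from pydantic import BaseModel, root_validator

from langchain.utils import get_from_dict_or_env


class FireworksEnvSketch(BaseModel):
    """Illustrative stand-in for ChatFireworks; not the PR's actual class."""

    fireworks_api_key: str = ""

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        # Fail fast if neither the field nor the FIREWORKS_API_KEY env var is
        # set, instead of surfacing the error on the first API call.
        values["fireworks_api_key"] = get_from_dict_or_env(
            values, "fireworks_api_key", "FIREWORKS_API_KEY"
        )
        return values
```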
return default_class(content=content)

class ChatFireworks(BaseChatModel):
We should probably add completion with retrying.
The openai create API already does retrying itself: https://github.com/openai/openai-python/blob/5d50e9e3b39540af782ca24e65c290343d86e1a9/openai/api_resources/chat_completion.py#L23
sg. Do we throw the correct exception to trigger the retries? It's a bit weird that their langchain code contains an additional wrapper: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/openai.py#L103
I'm fine with your code.
Oops, openai might only retry when rcode=409: https://github.com/openai/openai-python/blob/e389823ba013a24b4c32ce38fa0bd87e6bccae94/openai/api_requestor.py#L448
I will make sure to add retry.
I changed to use our fireworks python client instead of openai's, because our error handling is a little different from openai's and it is more accurate to use ours. We also need it to decide whether a given error type should trigger further retries.
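A rough sketch of completion-with-retrying using tenacity, in the spirit of LangChain's retry decorators. FireworksRateLimitError and llm.client.create are hypothetical placeholders standing in for whatever the fireworks client actually exposes:

```python
from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)


class FireworksRateLimitError(Exception):
    """Placeholder for a retryable error type raised by the fireworks client."""


@retry(
    reraise=True,
    stop=stop_after_attempt(6),
    wait=wait_exponential(multiplier=1, min=1, max=10),
    retry=retry_if_exception_type(FireworksRateLimitError),
)
def completion_with_retry(llm, **kwargs):
    # Only exceptions matched by retry_if_exception_type trigger another
    # attempt with exponential backoff; anything else propagates immediately.
    return llm.client.create(**kwargs)
```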
)

class Fireworks(LLM):
Shouldn't we add completion with retrying?
The openai create API already does retrying itself: https://github.com/openai/openai-python/blob/5d50e9e3b39540af782ca24e65c290343d86e1a9/openai/api_resources/completion.py#L23
from langchain.schema.output import ChatGeneration, ChatGenerationChunk, ChatResult

def _convert_delta_to_message_chunk(
Not sure what the convention is in the langchain codebase, but in general it would be good to add some comments.
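As an illustration, here is a commented sketch of how this helper might read, modeled on the analogous OpenAI chat-model code in LangChain; the exact fields of the streamed delta dict are assumptions:

```python
from typing import Any, Dict, Type

from langchain.schema.messages import (
    AIMessageChunk,
    BaseMessageChunk,
    ChatMessageChunk,
    HumanMessageChunk,
    SystemMessageChunk,
)


def _convert_delta_to_message_chunk(
    _dict: Dict[str, Any], default_class: Type[BaseMessageChunk]
) -> BaseMessageChunk:
    """Map one streamed delta dict onto the matching message-chunk class."""
    # After the first chunk the delta may omit "role", so fall back to the
    # class inferred from earlier chunks (default_class).
    role = _dict.get("role")
    content = _dict.get("content") or ""

    if role == "user" or default_class == HumanMessageChunk:
        return HumanMessageChunk(content=content)
    elif role == "assistant" or default_class == AIMessageChunk:
        return AIMessageChunk(content=content)
    elif role == "system" or default_class == SystemMessageChunk:
        return SystemMessageChunk(content=content)
    elif role or default_class == ChatMessageChunk:
        # Unknown roles are kept as generic chat chunks so no data is dropped.
        return ChatMessageChunk(content=content, role=role)
    else:
        return default_class(content=content)
```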
class ChatFireworks(BaseChatModel):
    """Fireworks Chat models."""

    model = "accounts/fireworks/models/llama-v2-7b-chat"
model: str = ...
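That is, something along these lines; the wrapper class is shown only to illustrate the suggested explicit annotation:

```python
from pydantic import BaseModel


class ChatFireworksFieldsSketch(BaseModel):
    """Illustrative only: annotate the field explicitly instead of relying on inference."""

    model: str = "accounts/fireworks/models/llama-v2-7b-chat"
```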
* Refactor Fireworks api and remove FireworksChat
* Add dependency for fireworks-ai
Description
* Refactor Fireworks within Langchain LLMs.
* Remove FireworksChat within Langchain LLMs.
* Add ChatFireworks (which uses the chat completion api) to Langchain chat models.
* Users have to install `fireworks-ai` and register an api key to use the api.

Issue - Not applicable
Dependencies - None
Tag maintainer - @rlancemartin @baskaryan
Changes in the baseline of Fireworks: __call__, generate, stream
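A hedged usage sketch of these entry points, assuming the module paths langchain.llms.fireworks and langchain.chat_models.fireworks, a FIREWORKS_API_KEY environment variable, and illustrative model names and prompts:

```python
from langchain.chat_models.fireworks import ChatFireworks
from langchain.llms.fireworks import Fireworks
from langchain.schema.messages import HumanMessage

llm = Fireworks(model="accounts/fireworks/models/llama-v2-7b")

# __call__: single prompt in, completion text out.
print(llm("Name three Roman emperors."))

# generate: batch of prompts in, an LLMResult with per-prompt generations out.
result = llm.generate(["Tell me a joke.", "Write a haiku about autumn."])
print(result.generations[0][0].text)

# stream: yields pieces of the completion as they arrive (the chunk type may
# differ between langchain versions).
for chunk in llm.stream("Count from one to five."):
    print(chunk, end="", flush=True)

# ChatFireworks targets the chat completion API and works on message objects.
chat = ChatFireworks(model="accounts/fireworks/models/llama-v2-7b-chat")
print(chat([HumanMessage(content="Hello!")]).content)
```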