
Refactor Fireworks and add ChatFireworks #3

Merged: 14 commits, Sep 14, 2023

Conversation

@ZixinYang commented Sep 8, 2023

  • Add baseline of Fireworks
  • Add baseline of ChatFireworks
  • Support streaming
  • Support async
  • Add notebook

Changes in baseline of Fireworks

  • Refactor the API (following Anyscale's pattern)
  • Keep basic functions such as `__call__`, `generate`, and `stream`

@ZixinYang ZixinYang changed the title Refactor Fireworks api and remove FireworksChat Refactor Fireworks and add ChatFireworks Sep 9, 2023

@pgarbacki left a comment


You may need to update docs/extras/integrations/llms/fireworks.ipynb as well.

libs/langchain/langchain/chat_models/fireworks.py (outdated, resolved)
```python
return default_class(content=content)


class ChatFireworks(BaseChatModel):
```


Should we add `validate_environment`?

@ZixinYang (Author) replied:

I'm thinking of validating parameters on our API side. Since I wrap all parameters in `model_kwargs` (except `model`), the error should surface from our API.

@ZixinYang (Author) replied:

Actually, we do need it! I added `validate_environment` to check whether the API key environment variable is set.
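For reference, a minimal sketch of what such a check could look like. The field and variable names here are assumptions, not the merged implementation; LangChain integrations typically run this kind of check inside a pydantic root validator:

```python
# Hypothetical sketch of environment validation: resolve the API key
# from explicitly passed values, falling back to an environment
# variable, and fail fast with a clear message if neither is set.
import os

def validate_environment(values: dict) -> dict:
    """Resolve the Fireworks API key from `values` or the environment."""
    api_key = values.get("fireworks_api_key") or os.environ.get("FIREWORKS_API_KEY")
    if not api_key:
        raise ValueError(
            "Fireworks API key not found: set the FIREWORKS_API_KEY "
            "environment variable or pass fireworks_api_key explicitly."
        )
    values["fireworks_api_key"] = api_key
    return values
```

Failing at construction time like this gives users a clear error instead of an opaque 401 from the API on the first call.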

```python
return default_class(content=content)


class ChatFireworks(BaseChatModel):
```


We should probably add completion with retrying.


sg. Do we throw the correct exception to trigger the retries? It's a bit weird that their langchain code contains an additional wrapper: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/openai.py#L103

I'm fine with your code.

@ZixinYang (Author) replied:

I changed the code to use our Fireworks Python client instead of OpenAI's, because our error handling differs slightly from OpenAI's, so using our own client is more accurate. We also need it to decide whether a given error type should trigger further retries.
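The retry discussion above can be sketched as follows. LangChain's OpenAI wrapper uses the tenacity library for this; the exception class and function below are illustrative stand-ins, not the Fireworks client's actual API:

```python
# Hypothetical sketch of selective retrying: only a transient error
# type (here, a made-up RateLimitError) triggers another attempt,
# while any other exception propagates immediately.
import time

class RateLimitError(Exception):
    """Stand-in for a transient API error that is worth retrying."""

def completion_with_retry(call, max_retries: int = 3, base_delay: float = 0.01):
    """Invoke `call`, retrying with exponential backoff on RateLimitError only."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Back off before the next attempt; permanent errors
            # (auth failures, bad requests) never reach this branch.
            time.sleep(base_delay * 2 ** attempt)
```

This is why catching the client's own exception types matters: retrying on a bare `Exception` would also retry unrecoverable errors such as an invalid API key.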

```python
)


class Fireworks(LLM):
```


Shouldn't we add completion with retrying?


docs/extras/integrations/llms/fireworks.ipynb (outdated, resolved)

```python
from langchain.schema.output import ChatGeneration, ChatGenerationChunk, ChatResult


def _convert_delta_to_message_chunk(
```


Not sure what the convention is in the langchain codebase, but in general it would be good to add some comments.

```python
class ChatFireworks(BaseChatModel):
    """Fireworks Chat models."""

    model = "accounts/fireworks/models/llama-v2-7b-chat"
```


`model: str = ...`

@ZixinYang ZixinYang merged commit 1af2fef into master Sep 14, 2023
ZixinYang added a commit that referenced this pull request Sep 25, 2023
* Refactor Fireworks api and remove FireworksChat
* Add dependency for fireworks-ai
ZixinYang added a commit that referenced this pull request Sep 29, 2023
Description 
* Refactor Fireworks within Langchain LLMs.
* Remove FireworksChat within Langchain LLMs.
* Add ChatFireworks (which uses the chat completion API) to Langchain chat models.
* Users have to install `fireworks-ai` and register an API key to use the API.
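The setup the description calls for boils down to two steps. The package name comes from the description; the environment-variable name is an assumption based on the usual convention:

```shell
# Install the client the integration depends on, then expose the key.
pip install fireworks-ai
export FIREWORKS_API_KEY="<your-api-key>"
```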

Issue - Not applicable
Dependencies - None
Tag maintainer - @rlancemartin @baskaryan
ZixinYang pushed a commit that referenced this pull request Oct 19, 2023