
feat: add LLM output for the AI ML component to use ChatOpenAI base model #3230

Merged
merged 4 commits into main on Aug 7, 2024

Conversation

vasconceloscezar
Collaborator

The current AI/ML component only outputs Text, while all other AI model components output both Text and Language Model.

[screenshot]

With this change, we can use models from AI/ML in Agents and other components that take a Language Model as input.

[screenshot]

@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. enhancement New feature or request labels Aug 7, 2024
@vasconceloscezar vasconceloscezar self-assigned this Aug 7, 2024

This pull request is automatically being deployed by Amplify Hosting (learn more).

Access this pull request here: https://pr-3230.dmtpw4p5recq1.amplifyapp.com

Contributor

@ogabrielluiz ogabrielluiz left a comment


LGTM

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Aug 7, 2024
@ogabrielluiz ogabrielluiz enabled auto-merge (squash) August 7, 2024 20:28
@ogabrielluiz ogabrielluiz merged commit fcd8d37 into main Aug 7, 2024
51 checks passed
@ogabrielluiz ogabrielluiz deleted the fix/ai-ml-language-model-component branch August 7, 2024 20:32
name="stop_tokens",
display_name="Stop Tokens",
info="Comma-separated list of tokens to signal the model to stop generating text.",
DictInput(name="model_kwargs", display_name="Model Kwargs", advanced=True),
Collaborator


Hi @vasconceloscezar, thanks for the improvements. Sorry, I was out of town last week or I would have taken a look then.

You may be more familiar with the AI/ML API than I am, so just wanted to ask some clarifying questions here.

IIUC, the AI/ML API only supports certain parameters that are common to all supported models (https://docs.aimlapi.com/api-reference/parameters). Does allowing model_kwargs here give the user the false(?) impression that model-specific args are allowed?

info="Comma-separated list of tokens to signal the model to stop generating text.",
DictInput(name="model_kwargs", display_name="Model Kwargs", advanced=True),
BoolInput(
name="json_mode",
Collaborator


I see this error using this change out of the box:

/Users/jordan.frazier/Documents/Langflow/langflow/.venv/lib/python3.11/site-packages/langchain_core/utils/utils.py:234: UserWarning: WARNING! json_mode is not default parameter.
                json_mode was transferred to model_kwargs.
                Please confirm that json_mode is what you intended.
  warnings.warn(
[08/12/24 09:44:42] ERROR    2024-08-12 09:44:42 - ERROR    - base - Completions.create() got an unexpected keyword argument 'json_mode'                                                               base.py:687

  • I believe json_mode is set differently using ChatOpenAI.
  • Did you test this and it worked for you? Just want to make sure I didn't accidentally change anything prior to testing.
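The warning above comes from langchain_core's handling of unrecognized constructor kwargs: anything that isn't a declared field gets transferred into model_kwargs and then forwarded verbatim to the underlying Completions.create() call, which rejects it. A minimal sketch of that routing behavior (field names here are illustrative, not langchain's actual field list):

```python
# Illustrative sketch of how an unknown kwarg like json_mode ends up in
# model_kwargs and is then forwarded to the API, triggering the error above.
# DECLARED_FIELDS is a hypothetical stand-in for the model's real fields.

DECLARED_FIELDS = {"model", "temperature", "max_tokens", "seed", "model_kwargs"}

def build_extra(kwargs: dict) -> dict:
    """Route kwargs: declared fields stay top-level, the rest move to model_kwargs."""
    model_kwargs = dict(kwargs.get("model_kwargs") or {})
    cleaned = {}
    for name, value in kwargs.items():
        if name == "model_kwargs":
            continue
        if name in DECLARED_FIELDS:
            cleaned[name] = value
        else:
            # Mirrors the "json_mode was transferred to model_kwargs" warning.
            model_kwargs[name] = value
    cleaned["model_kwargs"] = model_kwargs
    return cleaned

result = build_extra({"model": "gpt-4o", "json_mode": True})
# json_mode is no longer a top-level arg; it will be passed through to the API.
```

Because the AI/ML completions endpoint has no `json_mode` parameter, the forwarded kwarg produces the `unexpected keyword argument 'json_mode'` error shown above.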

advanced=True,
info="The base URL of the OpenAI API. Defaults to https://api.aimlapi.com. You can change this to use other APIs like JinaChat, LocalAI, and Prem.",
Collaborator


I believe the intent of this component was to only hit the AI/ML API, so would lean towards removing this parameter. If users want to use a different base, we can add another component for that or use the generic OpenAI component. (I'm glad using the base_url here works using the aiml base - I tried with the OpenAIEmbeddings class and had issues).

SecretStrInput(
name="api_key",
display_name="AIML API Key",
info="The AIML API Key to use for the OpenAI model.",
Collaborator


Shouldn't be specific to the OpenAI model

max_tokens = self.max_tokens
model_kwargs = self.model_kwargs or {}
aiml_api_base = self.aiml_api_base or "https://api.aimlapi.com"
json_mode = bool(output_schema_dict) or self.json_mode
Collaborator


Seems like output_schema exists only to determine whether to set json_mode. I do have the concern that even if fixed, the AI/ML API wouldn't support this (or, only would for specific models, which may be confusing for users).

else:
raise TypeError(f"Expected Message type, saw: {type(self.input_value)}")
if isinstance(aiml_api_key, SecretStr):
openai_api_key = aiml_api_key.get_secret_value()
Collaborator


nit: var name should stay aiml_api_key
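The nit is about keeping the variable named for what it holds: the unwrapped value is still the AI/ML key, so renaming it to `openai_api_key` mid-function is misleading. A minimal sketch (the `SecretStr` class here is a stand-in for pydantic's, for illustration only):

```python
# Illustrative: unwrap a SecretStr without renaming the variable.
# This SecretStr is a minimal stand-in for pydantic.SecretStr.

class SecretStr:
    def __init__(self, value: str):
        self._value = value

    def get_secret_value(self) -> str:
        return self._value

def resolve_api_key(aiml_api_key):
    """Return the plain-text key, keeping the aiml_api_key name throughout."""
    if isinstance(aiml_api_key, SecretStr):
        aiml_api_key = aiml_api_key.get_secret_value()
    return aiml_api_key
```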

base_url=aiml_api_base,
max_tokens=max_tokens or None,
seed=seed,
json_mode=json_mode,
Collaborator


As noted, json_mode isn't an explicit parameter to ChatOpenAI.
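In the OpenAI-style chat completions request, JSON output is requested via `response_format`, not a `json_mode` flag. A hedged sketch of building the request body with the correct spelling (helper name and structure are illustrative):

```python
# Illustrative: the OpenAI Chat Completions API expects
# response_format={"type": "json_object"} rather than json_mode=True.

def chat_payload(model: str, messages: list, json_mode: bool = False) -> dict:
    """Build a chat completions request body, mapping json_mode to response_format."""
    payload = {"model": model, "messages": messages}
    if json_mode:
        payload["response_format"] = {"type": "json_object"}
    return payload
```

With this shape, the flag never reaches the API as an unknown keyword; it is translated into the parameter the endpoint actually accepts.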

"repeat_penalty": self.repeat_penalty or 1.1,
"stop_tokens": self.stop_tokens or None,
}
model = ChatOpenAI(
Collaborator


Was curious about this usage: does using OpenAI's implementation have any compatibility issues with other models?
