
fix: update assistants components and add integrations tests #3887

Merged
merged 2 commits into from
Sep 24, 2024

Conversation

phact
Collaborator

@phact phact commented Sep 23, 2024

A couple of the assistants components were broken due to the way the OpenAI SDK client was being set up, so I fixed them and added integration tests.

@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. bug Something isn't working labels Sep 23, 2024

@@ -10,6 +10,7 @@ class AssistantsCreateAssistant(Component):
icon = "bot"
display_name = "Create Assistant"
description = "Creates an Assistant and returns its id"
client = patch(OpenAI())
Collaborator

Are there potential issues that may arise from using a class variable for the client? I see we're creating a new assistant for each invocation, so this seems low risk, but just wondering.
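For background on the class-variable question: a class attribute is evaluated once, when the class body runs, and is then shared by every instance. A minimal sketch, with a plain `object()` standing in for the patched OpenAI client:

```python
# Why a class variable is shared: the attribute is created once when the
# class body executes, and every instance reads that same object.
class AssistantComponent:
    client = object()  # stand-in for patch(OpenAI())

a = AssistantComponent()
b = AssistantComponent()
assert a.client is b.client  # both instances see the one shared client
```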

Collaborator Author

Previously we were creating one per invocation, so I actually think a client per class is less problematic. If these were async clients we could have races, but they are synchronous.

One potential future improvement would be to do a shared client for all the assistants. Not sure what a good pattern for this would be, just a singleton?
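One hedged sketch of that shared-client idea is a lazy singleton via `functools.lru_cache`; the `PatchedOpenAI` class here is a placeholder for `patch(OpenAI())`, not the real SDK client:

```python
from functools import lru_cache

class PatchedOpenAI:
    """Placeholder for patch(OpenAI()); any expensive-to-build client works."""

@lru_cache(maxsize=1)
def get_shared_client() -> PatchedOpenAI:
    # Built once on the first call; every later call returns the same instance.
    return PatchedOpenAI()
```

Each assistants component would then call `get_shared_client()` instead of building its own; since these clients are synchronous, the caching itself is the only coordination needed.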

"AssistantsRun",
"GetEnvVar",
"Dotenv",
Collaborator

I looked over the Dotenv component, and I'm curious how it works.

Q) Why does it need an output method?
Q) If I have a .env file I've loaded up during my langflow run --env .env, and I use this component with a separate env file, will it replace that entire set, or union the two?

I see why this is needed though - if I can't do langflow run myself (in Astra), we need an easy way to pass env vars.



import io

from dotenv import load_dotenv

# langflow component API imports (module paths assumed from the project's layout)
from langflow.custom import Component
from langflow.io import MultilineSecretInput, Output
from langflow.schema.message import Message


class Dotenv(Component):
    display_name = "Dotenv"
    description = "Load .env file into env vars"

    inputs = [
        MultilineSecretInput(
            name="dotenv_file_content",
            display_name="Dotenv file content",
            info="Paste the content of your .env file directly. Since the contents are sensitive, using a Global variable set as 'password' is recommended.",
        )
    ]

    outputs = [
        Output(display_name="env_set", name="env_set", method="process_inputs"),
    ]

    def process_inputs(self) -> Message:
        fake_file = io.StringIO(self.dotenv_file_content)
        result = load_dotenv(stream=fake_file, override=True)

        message = Message(text="No variables found in .env")
        if result:
            message = Message(text="Loaded .env")
        return message

Collaborator Author

Yeah, assistants supports lots of LLM providers, so it's very convenient to let the user add them all in one go. Existing env vars that are already set do not get unset, but they could get overwritten.
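The merge-and-override behavior described here can be sketched with the stdlib alone. The hypothetical `apply_dotenv` helper below approximates what `load_dotenv(stream=..., override=True)` does (ignoring quoting and other edge cases the real python-dotenv handles): it unions the file's variables into `os.environ`, overwriting on conflicts but never unsetting anything.

```python
import io
import os

def apply_dotenv(content: str) -> bool:
    """Roughly mimic load_dotenv(stream=..., override=True): merge into
    os.environ, file values win on conflicts, nothing is ever unset."""
    found = False
    for line in io.StringIO(content):
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        os.environ[key.strip()] = value.strip()
        found = True
    return found

os.environ["OPENAI_API_KEY"] = "old"
os.environ["UNRELATED"] = "kept"
apply_dotenv("OPENAI_API_KEY=new\nGROQ_API_KEY=abc\n")
# OPENAI_API_KEY is overwritten, GROQ_API_KEY is added, UNRELATED is untouched.
```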

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Sep 24, 2024
@ogabrielluiz ogabrielluiz changed the title fixes and integrations tests fix: update assistants components and add integrations tests Sep 24, 2024
@ogabrielluiz ogabrielluiz merged commit 0f97d35 into main Sep 24, 2024
38 of 39 checks passed
@ogabrielluiz ogabrielluiz deleted the assistants-integration-tests branch September 24, 2024 12:27
@github-actions github-actions bot added bug Something isn't working and removed bug Something isn't working labels Sep 24, 2024