
Separate openai assistant related config items from llm_config #1964

Closed
wants to merge 12 commits

Conversation

IANTHEREAL
Collaborator

@IANTHEREAL IANTHEREAL commented Mar 12, 2024

Why are these changes needed?

OpenAI-assistant-related config items in llm_config cause some functions to break, for example:

import logging
import os

from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent

logger = logging.getLogger(__name__)
logger.setLevel(logging.WARNING)

assistant_id = os.environ.get("ASSISTANT_ID", None)

config_list = config_list_from_json("OAI_CONFIG_LIST", filter_dict={"tags": ["assistant"]})
llm_config = {"config_list": config_list, "assistant_id": assistant_id}

gpt_assistant = GPTAssistantAgent(
    name="assistant", instructions=AssistantAgent.DEFAULT_SYSTEM_MESSAGE, llm_config=llm_config
)

user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,
    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    is_termination_msg=lambda msg: "TERMINATE" in msg["content"],
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
)
result = user_proxy.initiate_chat(gpt_assistant, message="Print hello world", summary_method="reflection_with_llm")
print(result.summary)
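The fix this PR proposes is to keep assistant-specific keys (such as assistant_id) out of llm_config, so that only valid chat-completion parameters ever reach the OpenAI client. A minimal sketch of that separation in plain Python — the key names below are illustrative assumptions, not the PR's actual API:

```python
# Hypothetical sketch: split a mixed config dict into a clean
# llm_config (chat-completion parameters only) and a separate
# assistant_config (OpenAI Assistants API parameters).
# The ASSISTANT_KEYS set is an assumption for illustration.

ASSISTANT_KEYS = {"assistant_id", "tools", "file_ids", "check_every_ms", "verbose"}

def split_config(mixed_config: dict) -> tuple[dict, dict]:
    """Return (llm_config, assistant_config) from one mixed dict."""
    assistant_config = {k: v for k, v in mixed_config.items() if k in ASSISTANT_KEYS}
    llm_config = {k: v for k, v in mixed_config.items() if k not in ASSISTANT_KEYS}
    return llm_config, assistant_config

llm_config, assistant_config = split_config(
    {"config_list": [], "assistant_id": "asst_123"}
)
# llm_config == {"config_list": []}
# assistant_config == {"assistant_id": "asst_123"}
```

With the configs separated, summary_method="reflection_with_llm" can build its reflection call from llm_config alone without tripping over unknown assistant parameters.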

Related issue number

Closes #1805

Checks

@codecov-commenter

codecov-commenter commented Mar 12, 2024

Codecov Report

Attention: Patch coverage is 12.50% with 21 lines in your changes missing coverage. Please review.

Project coverage is 48.41%. Comparing base (8844f86) to head (6d79368).

Files Patch % Lines
autogen/agentchat/contrib/gpt_assistant_agent.py 12.50% 21 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##             main    #1964       +/-   ##
===========================================
+ Coverage   36.99%   48.41%   +11.42%     
===========================================
  Files          66       66               
  Lines        7015     7031       +16     
  Branches     1534     1666      +132     
===========================================
+ Hits         2595     3404      +809     
+ Misses       4194     3341      -853     
- Partials      226      286       +60     
Flag Coverage Δ
unittests 48.28% <12.50%> (+11.29%) ⬆️

Flags with carried forward coverage won't be shown.


@IANTHEREAL
Collaborator Author

IANTHEREAL commented Mar 13, 2024

To fix the CI error, should we replace np.Inf with np.inf in the code?

ERROR samples/tools/finetuning/tests/test_conversable_agent_update_model.py - AttributeError: `np.Inf` was removed in the NumPy 2.0 release. Use `np.inf` instead.

@ekzhu
Collaborator

ekzhu commented Mar 13, 2024

To fix the CI error, should we replace np.Inf with np.inf in the code?

ERROR samples/tools/finetuning/tests/test_conversable_agent_update_model.py - AttributeError: `np.Inf` was removed in the NumPy 2.0 release. Use `np.inf` instead.

This is fixed in the latest release
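For context on the CI failure discussed above: NumPy 2.0 removed the capitalized alias np.Inf, and the surviving spelling np.inf is the same IEEE-754 infinity available in the standard library as math.inf. A small stdlib-only illustration (math.inf is used here so the example runs without NumPy installed):

```python
import math

# NumPy 2.0 dropped the `np.Inf` alias; `np.inf` remains and is
# numerically identical to the stdlib's math.inf / float("inf").
assert math.inf == float("inf")
assert math.inf > 1e308      # greater than any finite float
assert -math.inf < -1e308    # negative infinity bounds from below
```

The fix is purely mechanical: replace every np.Inf with np.inf; the value is unchanged.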

Contributor

@jtrugman jtrugman left a comment


LGTM - looks like we can extend this to add the feature of utilizing OpenAI Assistants API Threads as well 😄

@gagb gagb self-requested a review March 15, 2024 21:42
Collaborator

@gagb gagb left a comment


Please double-check my understanding, otherwise looks good! Great PR @IANTHEREAL

@ekzhu
Collaborator

ekzhu commented Mar 15, 2024

@sonichi
Contributor

sonichi commented Mar 15, 2024

Could you make the PR from the upstream repo because we use pull_request now as the trigger for openai workflows?

@IANTHEREAL
Collaborator Author

Could you make the PR from the upstream repo because we use pull_request now as the trigger for openai workflows?

@sonichi I don't quite follow — do you mean I should make a new PR using microsoft/autogen as the upstream?

@sonichi
Contributor

sonichi commented Mar 16, 2024

Could you make the PR from the upstream repo because we use pull_request now as the trigger for openai workflows?

@sonichi I don't quite follow — do you mean I should make a new PR using microsoft/autogen as the upstream?

Yes. Create a branch in microsoft/autogen and make the PR from there. Otherwise the openai tests will fail.

@IANTHEREAL IANTHEREAL closed this Mar 16, 2024
@IANTHEREAL IANTHEREAL deleted the assistant-config branch March 16, 2024 22:18
Development

Successfully merging this pull request may close these issues.

[Bug]: summary_method reflection_with_llm not working in GPTAssistantAgent
7 participants