
Separate openai assistant related config items from llm_config #2037

Merged
13 commits merged into main from add-assistant-config on Mar 16, 2024

Conversation

IANTHEREAL (Collaborator)

Why are these changes needed?

Placing OpenAI assistant-related config items inside llm_config causes some functions to break, for example:

import logging
import os

from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent

logger = logging.getLogger(__name__)
logger.setLevel(logging.WARNING)

assistant_id = os.environ.get("ASSISTANT_ID", None)

config_list = config_list_from_json("OAI_CONFIG_LIST", filter_dict={"tags": ["assistant"]})
llm_config = {"config_list": config_list, "assistant_id": assistant_id}

gpt_assistant = GPTAssistantAgent(
    name="assistant", instructions=AssistantAgent.DEFAULT_SYSTEM_MESSAGE, llm_config=llm_config
)

user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,
    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    is_termination_msg=lambda msg: "TERMINATE" in msg["content"],
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
)
result = user_proxy.initiate_chat(gpt_assistant, message="Print hello world", summary_method="reflection_with_llm")
print(result.summary)
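The fix separates assistant-specific options out of llm_config into their own assistant config, so llm_config holds only LLM client options. A minimal sketch of that idea, using plain dicts with no autogen dependency (the key names in ASSISTANT_KEYS are illustrative assumptions, not necessarily the exact set the PR moves):

```python
# Sketch: split assistant-specific keys out of a combined llm_config so that
# only LLM client options (e.g. config_list) remain in llm_config.
# The key set below is an assumption for illustration.
ASSISTANT_KEYS = {"assistant_id", "tools", "file_ids", "check_every_ms"}


def split_config(combined: dict) -> tuple[dict, dict]:
    """Return (llm_config, assistant_config) from one combined dict."""
    llm_config = {k: v for k, v in combined.items() if k not in ASSISTANT_KEYS}
    assistant_config = {k: v for k, v in combined.items() if k in ASSISTANT_KEYS}
    return llm_config, assistant_config


combined = {"config_list": [{"model": "gpt-4"}], "assistant_id": "asst_123"}
llm_config, assistant_config = split_config(combined)
print(llm_config)        # {'config_list': [{'model': 'gpt-4'}]}
print(assistant_config)  # {'assistant_id': 'asst_123'}
```

With the split applied, other agents that consume llm_config no longer see keys they don't understand, which is what broke summary_method="reflection_with_llm" in the example above.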

Related issue number

Closes #1805


@IANTHEREAL (Collaborator, Author)

IANTHEREAL commented Mar 16, 2024

duplicate with #1964

@codecov-commenter
codecov-commenter commented Mar 16, 2024

Codecov Report

Attention: Patch coverage is 83.33% with 4 lines in your changes missing coverage. Please review.

Project coverage is 66.91%. Comparing base (4429d4d) to head (5ceaa9e).

Files                                            | Patch % | Lines
autogen/agentchat/contrib/gpt_assistant_agent.py | 83.33%  | 2 Missing and 2 partials ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##             main    #2037       +/-   ##
===========================================
+ Coverage   36.99%   66.91%   +29.92%     
===========================================
  Files          66       66               
  Lines        7015     7031       +16     
  Branches     1534     1666      +132     
===========================================
+ Hits         2595     4705     +2110     
+ Misses       4194     1892     -2302     
- Partials      226      434      +208     
Flag      | Coverage Δ
unittests | 66.83% <83.33%> (+29.84%) ⬆️


@ekzhu ekzhu self-requested a review March 16, 2024 06:28
@sonichi sonichi added this pull request to the merge queue Mar 16, 2024
Merged via the queue into main with commit 36c4d6a Mar 16, 2024
68 checks passed
@sonichi sonichi deleted the add-assistant-config branch March 16, 2024 20:34
whiskyboy pushed a commit to whiskyboy/autogen that referenced this pull request Apr 17, 2024
…soft#2037)

* add assistant config

* add test

* change notebook to use assistant config

* use assistant config in testing

* code refinement

---------

Co-authored-by: Eric Zhu <[email protected]>
Development

Successfully merging this pull request may close these issues.

[Bug]: summary_method reflection_with_llm not working in GPTAssistantAgent
4 participants