
[Bug]: Cannot extract summary using reflection_with_llm: Error code: 400 #3319

Open

Alaya-Con opened this issue Aug 7, 2024 · 1 comment

Labels: 0.2 (Issues which are related to the pre 0.4 codebase), needs-triage

Alaya-Con commented Aug 7, 2024

Describe the bug

I am using agent chat: both the user_proxy and the assistant have nested chats registered with summary_method "reflection_with_llm".

UserWarning: Cannot extract summary using reflection_with_llm: Error code: 400
{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'user', 'ctx': {'expected': "'function'"}}, {'type': 'extra_forbidden', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_calls'), 'msg': 'Extra inputs are not permitted', 'input': []}, {'type': 'extra_forbidden', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_calls'), 'msg': 'Extra inputs are not permitted', 'input': []}]', 'type': 'BadRequestError', 'param': None, 'code': 400}. Using an empty str as summary.
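Reading the 400 body, the server appears to reject a replayed history message that has role 'user' but also carries an empty tool_calls list (the extra_forbidden errors), which vLLM's strict OpenAI-schema validation does not allow. A minimal sketch that may reproduce the same error outside autogen, assuming the vLLM server below at localhost:50051 and the openai Python client:

from openai import OpenAI

# Point the client at the local vLLM OpenAI-compatible endpoint (see the
# serving command under "Model Used"); vLLM ignores the api_key value.
client = OpenAI(base_url="http://localhost:50051/v1", api_key="EMPTY")

# The empty tool_calls list on a 'user' message is exactly what the 400
# body complains about: vLLM validates against the strict OpenAI schema
# and raises extra_forbidden for the extra key.
client.chat.completions.create(
    model="llama3-70b",
    messages=[{"role": "user", "content": "hello", "tool_calls": []}],
)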

Code block:

import autogen

# Note: coder, executor, tester, config_list and task are defined elsewhere
# in the script.

def writing_message(recipient, messages, sender, config):
    # Forward the last message of the chat being summarized.
    return f"{recipient.chat_messages_for_summary(sender)[-1]['content']}"

nested_chats1 = [
    {
        "recipient": coder,
        "message": writing_message,
        "clear_history": True,
        "summary_method": "reflection_with_llm",
        "summary_args": {"summary_prompt": '''
        Return the original task and the final improved code block to solve the task.
        '''},
        "max_turns": 1,
    },
]

nested_chats2 = [
    {
        "recipient": executor,
        "summary_method": "last_msg",
        "max_turns": 1,
    },
    {
        "recipient": tester,
        "message": writing_message,
        "summary_method": "reflection_with_llm",
        "summary_args": {"summary_prompt": '''
        Return the exitcode, the printed code output, and the result analysis.
        If the result indicates there is an error, then reply "Please improve the code." in the end. If the result is correct, then reply "TERMINATE" in the end.
        '''},
        "max_turns": 1,
    },
]

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,
    is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
    # is_termination_msg=lambda x: len(x.get("content", "").rstrip()) < 2,
    code_execution_config=False,
    # code_execution_config={
    #     # the executor to run the generated code
    #     "executor": LocalCommandLineCodeExecutor(work_dir="local_coding"),
    # },
)

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={
        "cache_seed": 41,  # seed for caching and reproducibility
        "config_list": config_list,  # a list of OpenAI API configurations
        "temperature": 0,  # temperature for sampling
    },
    is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
)

user_proxy.register_nested_chats(
    nested_chats2,
    trigger=assistant,
)

assistant.register_nested_chats(
    nested_chats1,
    trigger=user_proxy,
)

chat_res = user_proxy.initiate_chat(
    assistant,
    message=task,
    max_turns=2,
    summary_method="reflection_with_llm",
    summary_args={"summary_prompt": "Return the final code block to solve the task, whose test result is correct."},
)
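Until this is fixed, a hypothetical client-side workaround (just a sketch, not autogen API) would be to strip empty tool_calls keys from the history before it is replayed for the summary:

def sanitize_messages(messages):
    # Hypothetical helper: drop empty tool_calls keys that strict
    # OpenAI-schema validators such as vLLM reject with extra_forbidden.
    cleaned = []
    for msg in messages:
        msg = dict(msg)  # shallow copy; leave the original history intact
        if not msg.get("tool_calls"):  # [] and None both trip the validator
            msg.pop("tool_calls", None)
        cleaned.append(msg)
    return cleaned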

Steps to reproduce

No response

Model Used

llama3-70b-instruct
Using the latest vLLM:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m vllm.entrypoints.openai.api_server --model ./Meta-Llama-3___1-70B-Instruct --host 0.0.0.0 --port 50051 --served-model-name llama3-70b --trust-remote-code --tensor-parallel-size 4 --dtype bfloat16 --max-model-len 4096 --enforce-eager

Expected Behavior

Fix the bug and release a new version.

Screenshots and logs

No response

Additional Information

No response

Alaya-Con added the bug label on Aug 7, 2024
rysweet added the 0.2 (Issues which are related to the pre 0.4 codebase) and needs-triage labels on Oct 2, 2024
mindeleven added a commit to mindeleven/trading-sytem-exploration that referenced this issue on Oct 10, 2024:
…Agent Refinement; so far it returns an error that is described at microsoft/autogen#3319
fniedtner removed the bug label on Oct 24, 2024
jaredlang commented

Has this bug been fixed? It happened to me too.

I found this post:
https://stackoverflow.com/questions/78649446/autogen-groupchat-error-code-openai-badrequesterror-error-code-400

I removed the spaces in the agent names, and that solved the problem.
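In case it helps others hitting the naming variant of this 400: strict validators enforce a pattern like ^[a-zA-Z0-9_-]+$ for the message name field, so a space in an agent name triggers a similar error. A minimal sketch of the rename, assuming an llm_config defined elsewhere:

import autogen

# Rejected by strict OpenAI-schema validators: the name contains a space.
# tester = autogen.AssistantAgent(name="Code Tester", llm_config=llm_config)

# Accepted: only letters, digits, "_" and "-" in the agent name.
tester = autogen.AssistantAgent(name="code_tester", llm_config=llm_config)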
