Describe the bug
I am using an agents chat. Both the user_proxy and the assistant have nested chats with summary_method "reflection_with_llm".
UserWarning: Cannot extract summary using reflection_with_llm: Error code: 400
{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'typed-dict', 'role'), 'msg': "Input should be 'function'", 'input': 'user', 'ctx': {'expected': "'function'"}}, {'type': 'extra_forbidden', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_calls'), 'msg': 'Extra inputs are not permitted', 'input': []}, {'type': 'extra_forbidden', 'loc': ('body', 'messages', 2, 'typed-dict', 'tool_calls'), 'msg': 'Extra inputs are not permitted', 'input': []}]', 'type': 'BadRequestError', 'param': None, 'code': 400}. Using an empty str as summary.
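Reading the validation error, the server appears to reject a message that has role "user" but also carries an empty tool_calls list. As a stopgap on the client side, empty tool_calls fields could be stripped before the summary request is sent. This is only a sketch; sanitize_messages is a hypothetical helper, not part of AutoGen:

```python
def sanitize_messages(messages):
    """Drop empty tool_calls fields that strict OpenAI-compatible
    servers (e.g. vLLM's pydantic validation) reject with a 400."""
    cleaned = []
    for msg in messages:
        msg = dict(msg)  # shallow copy; leave the original history untouched
        if not msg.get("tool_calls"):  # [] or None is what triggers the error
            msg.pop("tool_calls", None)
        cleaned.append(msg)
    return cleaned

# Example: a history entry shaped like the one the error complains about
history = [
    {"role": "assistant", "content": "Here is the code."},
    {"role": "user", "content": "Summarize the chat.", "tool_calls": []},
]
print(sanitize_messages(history))
```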
Code block:
def writing_message(recipient, messages, sender, config):
    return f"{recipient.chat_messages_for_summary(sender)[-1]['content']}"

nested_chats1 = [
    {
        "recipient": coder,
        "message": writing_message,
        "clear_history": True,
        "summary_method": "reflection_with_llm",
        "summary_args": {"summary_prompt": '''Return the original task and the final improved code block to solve the task.'''},
        "max_turns": 1,
    },
]
nested_chats2 = [
    {
        "recipient": executor,
        "summary_method": "last_msg",
        "max_turns": 1,
    },
    {
        "recipient": tester,
        "message": writing_message,
        "summary_method": "reflection_with_llm",
        "summary_args": {"summary_prompt": '''Return the exitcode, the code output, and the result analysis. If the result indicates there is an error, then reply "Please improve the code." in the end. If the result is correct, then reply "TERMINATE" in the end.'''},
        "max_turns": 1,
    },
]
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,
    is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
    # is_termination_msg=lambda x: len(x.get("content", "").rstrip()) < 2,
    code_execution_config=False,
    # code_execution_config={
    #     # the executor to run the generated code
    #     "executor": LocalCommandLineCodeExecutor(work_dir="local_coding"),
    # },
)
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={
        "cache_seed": 41,  # seed for caching and reproducibility
        "config_list": config_list,  # a list of OpenAI API configurations
        "temperature": 0,  # temperature for sampling
    },
    is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
)
user_proxy.register_nested_chats(
    nested_chats2,
    trigger=assistant,
)
assistant.register_nested_chats(
    nested_chats1,
    trigger=user_proxy,
)
chat_res = user_proxy.initiate_chat(
    assistant,
    message=task,
    max_turns=2,
    summary_method="reflection_with_llm",
    summary_args={"summary_prompt": "Return the final code block to solve the task, whose test result is correct."},
)
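Until the 400 is fixed, one possible workaround is to pass a callable as summary_method instead of "reflection_with_llm", so no extra summary request is sent to the server at all. The (sender, recipient, summary_args) signature is the one the AutoGen docs describe for custom summary callables; this is only a fallback sketch, not a fix:

```python
def last_msg_summary(sender, recipient, summary_args):
    # Fallback summary: return the last message content instead of asking
    # the model for a reflection, which is the request that fails with 400.
    messages = recipient.chat_messages_for_summary(sender)
    return messages[-1].get("content", "") if messages else ""
```

It can then be used as summary_method=last_msg_summary in the nested-chat entries or in initiate_chat.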
Steps to reproduce
No response
Model Used
llama3-70b-instruct
Using the latest vLLM:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m vllm.entrypoints.openai.api_server --model ./Meta-Llama-3___1-70B-Instruct --host 0.0.0.0 --port 50051 --served-model-name llama3-70b --trust-remote-code --tensor-parallel-size 4 --dtype bfloat16 --max-model-len 4096 --enforce-eager
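For reference, the request body that trips the server's validation presumably looks roughly like this (reconstructed from the error message, not captured from the wire; the "..." contents are placeholders):

```python
import json

# Reconstructed from the 400 error: messages[2] has role "user" but also
# an (empty) tool_calls list. vLLM's typed-dict validation tries to match
# the function-message variant (hence "Input should be 'function'") and
# forbids the extra tool_calls field on it.
payload = {
    "model": "llama3-70b",
    "messages": [
        {"role": "system", "content": "..."},
        {"role": "assistant", "content": "..."},
        {"role": "user", "content": "...", "tool_calls": []},  # offending entry
    ],
}
print(json.dumps(payload, indent=2))
```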
Expected Behavior
Fix the bug and release a new version.
Screenshots and logs
No response
Additional Information
No response