AssistantAgent first "think" before calling a tool. #4895
You can constrain the model through prompting, which is what I do, so that the large model gives an execution plan before taking action.

I added it like this, and it will basically give an execution plan first.
@SuMiaoALi thanks for the suggestion. Could you create a PR to add a tip in the tutorial on how to trigger the "first think, then execute" behavior through prompting? See autogen/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/tutorial/agents.ipynb, lines 196 to 197 in d99fb23.
What I mean is that you can constrain the LLM through prompts; this has nothing to do with the code.
I understand. I meant: can you provide an example prompt for a common model, like gpt-4o?
Not needed for now, as it can either be prompted or extracted from tool call responses that contain an additional text field.
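For reference, one way to approach the prompting route discussed above is a system message that instructs the model to state a plan before any tool call. The wording below is a hypothetical example (not from the thread or the autogen docs); you would pass it as the `system_message` argument when constructing an `AssistantAgent`:

```python
# Hypothetical "think first" system message (an assumption for illustration,
# not an official autogen prompt). It nudges the model to emit a short plan
# message before issuing a tool call.
THINK_FIRST_SYSTEM_MESSAGE = (
    "You are a helpful assistant with access to tools. "
    "Before calling any tool, first reply with a brief message that states "
    "which tool you will call and why. Only after stating this plan should "
    "you issue the actual tool call."
)

# Sketch of how it would be wired into an AssistantAgent (client setup omitted):
# agent = AssistantAgent(
#     name="assistant",
#     model_client=model_client,
#     tools=[my_tool],
#     system_message=THINK_FIRST_SYSTEM_MESSAGE,
# )
```

How reliably this works depends on the model; gpt-4o generally follows such instructions, but there is no hard guarantee without a mechanism like the two-pass approach proposed below.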
When using gpt-4o or similar models with autogen_agentchat.agents.AssistantAgent, the model often just calls the tool directly without first responding with a message saying which tool it will call. It would be good to show how to either properly prompt the model so that it responds with a "thought" message before calling tools, or build some mechanism into AssistantAgent that triggers this behavior.

Originally posted by @ekzhu in #4886 (comment)
Example usage when adding a new parameter think_before_tool_use=True. When this parameter is set to True, the agent will first call the model with tool_choice="none" and then call the model again with tool_choice="auto".

One caveat: when the task is unrelated to tool use, the agent will produce two messages, e.g.:

The question is how to reconcile this case. Should we include both messages in the final response's chat_message?
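The two-pass mechanism proposed above can be sketched as follows. This is a minimal simulation, not the actual AssistantAgent implementation; `call_model`, its signature, and the `think_before_tool_use` parameter name are all assumptions for illustration:

```python
from typing import Callable

def run_with_thought(
    call_model: Callable[..., dict],
    messages: list[dict],
    think_before_tool_use: bool = True,
) -> list[dict]:
    """Sketch of the proposed two-pass flow.

    First pass disables tools (tool_choice="none"), forcing a plain text
    "thought" message; second pass re-enables them (tool_choice="auto").
    """
    responses: list[dict] = []
    if think_before_tool_use:
        # Pass 1: tools disabled, so the model must answer in text.
        thought = call_model(messages, tool_choice="none")
        responses.append(thought)
        messages = messages + [thought]
    # Pass 2: tools enabled; the model may now call a tool.
    responses.append(call_model(messages, tool_choice="auto"))
    return responses
```

This sketch makes the caveat concrete: both passes always run, so a task that needs no tool still yields two assistant messages, which is exactly the reconciliation question raised above.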