[Bug]: openai.BadRequestError when using GPTAssistantAgent in GroupChat #3284
Comments
It looks like you're running into an issue with the OpenAI API where the role parameter is set to 'tool', but the API only accepts 'user' or 'assistant' as valid values for this parameter. To resolve this, you'll need to locate the part of your code where the API request is being made and ensure the role is set to either 'user' or 'assistant', depending on what you're trying to achieve.
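The remapping described above can be sketched as a small helper. This is a hypothetical workaround (not part of autogen's API): it copies a message and coerces any role the Assistants messages endpoint rejects (e.g. 'tool') to 'assistant' before the request is made.

```python
# Hypothetical helper, not from autogen: the OpenAI Assistants
# "create message" endpoint accepts only role "user" or "assistant",
# so any other role (such as "tool") must be remapped first.
VALID_THREAD_ROLES = {"user", "assistant"}


def normalize_role(message: dict) -> dict:
    """Return a copy of `message` whose role the Assistants
    messages endpoint accepts; other roles become 'assistant'."""
    msg = dict(message)
    if msg.get("role") not in VALID_THREAD_ROLES:
        msg["role"] = "assistant"
    return msg


print(normalize_role({"role": "tool", "content": "result: 42"})["role"])  # -> assistant
```

Applying such a filter to outgoing messages is only a stopgap; the underlying question is why a 'tool'-role message reaches the Assistants thread at all.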
@LarsAC @PersonaDev I've encountered the same issue and can reproduce it. After enabling debug output, I found that the GPTAssistantAgent is in fact generating a message with role value 'tool' while using GroupChat. I'm not setting any messages explicitly; I simply call initiate_chat for the GroupChat and then converse. The tool my agent is utilizing is the OpenAI Assistants file search, configured in my code as below. I have two GPTAssistant agents along with four conversable agents. I've noticed this error doesn't always occur for the same GPTAssistant agent in a given conversation, but it always occurs for one of them. Please let me know any thoughts on debugging; I'll dig into conversable_agent.py and gpt_assistant_agent.py in the meantime to try to find the role assignment issue. Based on the other related open issues, it might be unrelated to GPTAssistantAgent and instead a general issue with tool-calling agents in GroupChat. Thanks.

Initiating the GroupChat:

```python
groupchat_result = user_proxy.initiate_chat(
```

I followed this guide for configuration. My config:

```python
assistant_config = {

content_manager = GPTAssistantAgent(
```

ERROR logging:

```
DEBUG:openai._base_client:HTTP Response: POST https://api.openai.com/v1/threads/thread_EOMfr5UY2sOG7jrOAYn9CP27/messages "200 OK" Headers({'date': 'Sun, 18 Aug 2024 22:32:57 GMT', 'content-type': 'application/json', 'transfer-encoding': 'chunked', 'connection': 'keep-alive', 'openai-version': '2020-10-01', 'openai-organization': 'user-odczqxrlslggkjmvuya9yqaq', 'x-request-id': 'req_20f00ee86f9343181d7f642062e24f9d', 'openai-processing-ms': '134', 'strict-transport-security': 'max-age=15552000; includeSubDomains; preload', 'cf-cache-status': 'DYNAMIC', 'x-content-type-options': 'nosniff', 'server': 'cloudflare', 'cf-ray': '8b555cdfead72a9a-LAX', 'content-encoding': 'gzip', 'alt-svc': 'h3=":443"; ma=86400'})
```
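For readers reconstructing a similar setup: the config snippet above is truncated in the thread, but enabling the Assistants file-search tool generally amounts to listing it in the assistant's tools. The sketch below is an assumption, not the commenter's actual config; the vector store ID is a placeholder.

```python
# Minimal sketch (assumed, not from this thread) of an assistant config
# that enables the OpenAI Assistants file_search tool. The vector store
# ID "vs_example123" is a placeholder, not a real resource.
assistant_config = {
    "tools": [{"type": "file_search"}],
    "tool_resources": {
        "file_search": {"vector_store_ids": ["vs_example123"]},
    },
}

print(assistant_config["tools"][0]["type"])  # -> file_search
```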
Describe the bug
I have put together a small team of agents (user_proxy, two researchers, and a data analyst). The researchers are `AssistantAgent`s; the data analyst is a `GPTAssistantAgent` with the `code_interpreter` tool. Using a sequential chat mode (`user_proxy.initiate_chats()`) the conversation terminates fine. When I switch to using a GroupChat, though, the chat aborts upon trying to talk to the data_analyst:

Steps to reproduce
No response
Model Used
Currently using gpt-4o, but does not seem model related.
Expected Behavior
Conversation should run smoothly without error.
Screenshots and logs
No response
Additional Information
pyautogen==0.2.33
openai==1.37.1
Python 3.11.9
I went through the issues #3164 and #960. While they seem somewhat related I think this error has a different origin.
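A comment in this thread mentions enabling debug output to find the offending 'tool'-role message. One way to do that (a sketch, assuming the standard `openai` Python client, which logs HTTP traffic through the `openai` and `httpx` loggers) is to turn on DEBUG-level logging before running the chat:

```python
# Enable verbose logging so each HTTP request/response to the OpenAI
# API, including the message role being posted, appears in the output.
import logging

logging.basicConfig(level=logging.DEBUG)
logging.getLogger("openai").setLevel(logging.DEBUG)
logging.getLogger("httpx").setLevel(logging.DEBUG)
```

With this in place, the failing `POST .../threads/<id>/messages` request and the role it carries show up in the log, as in the debug output quoted earlier in the thread.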