[Bug]: Async_human_input openai.BadRequestError #1174

Closed
haowaiwai opened this issue Jan 8, 2024 · 20 comments · Fixed by #1182, #1187 or #1204
Comments

@haowaiwai:

Describe the bug

openai.BadRequestError: Error code: 400 - {'error': {'message': "Additional properties are not allowed ('tool_responses' was unexpected) - 'messages.3'", 'type': 'invalid_request_error', 'param': None, 'code': None}}

Steps to reproduce

No response

Expected Behavior

No response

Screenshots and logs

No response

Additional Information

v0.2.4

@haowaiwai added the bug label Jan 8, 2024
@sonichi (Contributor) commented Jan 8, 2024:

Could you provide steps to reproduce?
cc @yenif

@haowaiwai (Author):

Run the Python example "Agent Chat with Async Human Inputs":
create_llm_config("gpt-3.5-turbo", "0.4", "23")
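(For reference, create_llm_config is a helper defined in that notebook. A rough sketch of what it plausibly does, assuming the string arguments are a temperature and a cache seed; the notebook's exact body may differ:)

import os

def create_llm_config(model, temperature, seed):
    # Hypothetical reconstruction: build an autogen llm_config dict from string inputs.
    config_list = [{"model": model, "api_key": os.environ["OPENAI_API_KEY"]}]
    return {
        "config_list": config_list,
        "temperature": float(temperature),  # "0.4" -> 0.4
        "seed": int(seed),  # "23" -> 23; used for response caching in autogen v0.2.x
    }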

@yenif (Collaborator) commented Jan 9, 2024:

I wasn't able to replicate, but #1182 should shore it up

@hardchor (Contributor) commented Jan 9, 2024:

Getting the same issue here with a very basic setup (group chat, no tools). I don't believe this issue should be closed until it's confirmed to be fixed.

@yenif (Collaborator) commented Jan 9, 2024:

Yeah, we shouldn't close this until #1182 is at least merged. @hardchor, can you provide more info on how to replicate? If you're willing, you can try checking out #1182 to see if it fixes your use case.

@hardchor (Contributor) commented Jan 9, 2024:

@yenif unfortunately not. Here is my setup and stacktrace: https://gist.github.com/hardchor/260e37de715d7335eb9f61c9b075139c

@yoadsn commented Jan 9, 2024:

Some more info on this:
autogen v0.2.5
openai v1.7.0

The openai client chokes because the tool_responses property is not allowed on the outgoing message structure. The property is set even when no tool usage is involved.

My flow to repro this is super simple:

from autogen import UserProxyAgent, ConversableAgent, config_list_from_json


def main():
    config_list = config_list_from_json(
        env_or_file="OAI_CONFIG_LIST",
        filter_dict={
            "model": {
                "gpt-4-1106-preview",
            }
        },
    )

    # Create the agent that uses the LLM.
    assistant = ConversableAgent("agent", llm_config={"config_list": config_list})

    # Create the agent that represents the user in the conversation.
    user_proxy = UserProxyAgent("user", code_execution_config=False)

    # Let the assistant start the conversation.  It will end when the user types exit.
    assistant.initiate_chat(user_proxy, message="How can I help you today?")


if __name__ == "__main__":
    main()

As soon as the "human" replies, the message generated from that reply gets the extra property, which is then sent to the LLM, and the openai client refuses to accept it.
In essence, check_termination_and_human_reply processes the human input within the UserProxyAgent to generate an outgoing message, which is sent to the assistant, which in turn passes it to the LLM.
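(Until a fix lands, a possible stopgap is to strip the offending key from the message history before it reaches the OpenAI client. A minimal sketch; sanitize_messages is a hypothetical helper, not part of autogen:)

def sanitize_messages(messages):
    # Drop autogen's internal 'tool_responses' key, which the OpenAI
    # chat completions API rejects as an unexpected property.
    cleaned = []
    for msg in messages:
        msg = dict(msg)  # shallow copy; leave the agent's stored history intact
        msg.pop("tool_responses", None)
        cleaned.append(msg)
    return cleaned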

@rhamnett commented Jan 9, 2024:

I am seeing the same behaviour with the example here:

https://github.com/microsoft/autogen/blob/main/notebook/Async_human_input.ipynb

@rhamnett commented Jan 9, 2024:

I've also tried the PR, but I'm still getting the same error as in the original comment.

Commands I used:
!pip uninstall -y pyautogen
!pip install git+https://github.com/microsoft/autogen.git@refs/pull/1182/head
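(A quick sanity check, not from the original comment: confirm which build actually got installed before re-running the notebook.)

!python -c "import autogen; print(autogen.__version__)"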

@azngeek commented Jan 10, 2024:

I can confirm that I have the same errors, using GPT-3.5 and the very basic example from the Async_human_input notebook.

@hardchor (Contributor):

@sonichi @yenif Can confirm that #1182 fixes it

@rhamnett:

@sonichi @yenif @hardchor can you please try it on https://github.com/microsoft/autogen/blob/main/notebook/Async_human_input.ipynb? I could not get it to work after installing PR #1182 and restarting the notebook.

@Risingabhi:

It's not working; I'm getting the same error as mentioned originally.

---------------------------------------------------------------------------
BadRequestError                           Traceback (most recent call last)
<ipython-input-9-8ce0ae4cd50a> in <cell line: 25>()
     23 
     24 
---> 25 await main()  # noqa: F704

24 frames
<ipython-input-9-8ce0ae4cd50a> in main()
     16     )
     17 
---> 18     await boss.a_initiate_chat(
     19         assistant,
     20         message="Resume Review, Technical Skills Assessment, Project Discussion, Job Role Expectations, Closing Remarks.",

/usr/local/lib/python3.10/dist-packages/autogen/agentchat/conversable_agent.py in a_initiate_chat(self, recipient, clear_history, silent, **context)
    642         """
    643         self._prepare_chat(recipient, clear_history)
--> 644         await self.a_send(self.generate_init_message(**context), recipient, silent=silent)
    645 
    646     def reset(self):

/usr/local/lib/python3.10/dist-packages/autogen/agentchat/conversable_agent.py in a_send(self, message, recipient, request_reply, silent)
    445         valid = self._append_oai_message(message, "assistant", recipient)
    446         if valid:
--> 447             await recipient.a_receive(message, self, request_reply, silent)
    448         else:
    449             raise ValueError(

<ipython-input-3-62207a670e9b> in a_receive(self, message, sender, request_reply, silent)
     71     ):
     72         # Call the superclass method to handle message reception asynchronously
---> 73         await super().a_receive(message, sender, request_reply, silent)
     74 

/usr/local/lib/python3.10/dist-packages/autogen/agentchat/conversable_agent.py in a_receive(self, message, sender, request_reply, silent)
    588         reply = await self.a_generate_reply(sender=sender)
    589         if reply is not None:
--> 590             await self.a_send(reply, sender, silent=silent)
    591 
    592     def _prepare_chat(self, recipient, clear_history):

/usr/local/lib/python3.10/dist-packages/autogen/agentchat/conversable_agent.py in a_send(self, message, recipient, request_reply, silent)
    445         valid = self._append_oai_message(message, "assistant", recipient)
    446         if valid:
--> 447             await recipient.a_receive(message, self, request_reply, silent)
    448         else:
    449             raise ValueError(

<ipython-input-3-62207a670e9b> in a_receive(self, message, sender, request_reply, silent)
     51     ):
     52         # Call the superclass method to handle message reception asynchronously
---> 53         await super().a_receive(message, sender, request_reply, silent)
     54 
     55 

/usr/local/lib/python3.10/dist-packages/autogen/agentchat/conversable_agent.py in a_receive(self, message, sender, request_reply, silent)
    588         reply = await self.a_generate_reply(sender=sender)
    589         if reply is not None:
--> 590             await self.a_send(reply, sender, silent=silent)
    591 
    592     def _prepare_chat(self, recipient, clear_history):

/usr/local/lib/python3.10/dist-packages/autogen/agentchat/conversable_agent.py in a_send(self, message, recipient, request_reply, silent)
    445         valid = self._append_oai_message(message, "assistant", recipient)
    446         if valid:
--> 447             await recipient.a_receive(message, self, request_reply, silent)
    448         else:
    449             raise ValueError(

<ipython-input-3-62207a670e9b> in a_receive(self, message, sender, request_reply, silent)
     71     ):
     72         # Call the superclass method to handle message reception asynchronously
---> 73         await super().a_receive(message, sender, request_reply, silent)
     74 

/usr/local/lib/python3.10/dist-packages/autogen/agentchat/conversable_agent.py in a_receive(self, message, sender, request_reply, silent)
    586         if request_reply is False or request_reply is None and self.reply_at_receive[sender] is False:
    587             return
--> 588         reply = await self.a_generate_reply(sender=sender)
    589         if reply is not None:
    590             await self.a_send(reply, sender, silent=silent)

/usr/local/lib/python3.10/dist-packages/autogen/agentchat/conversable_agent.py in a_generate_reply(self, messages, sender, exclude)
   1244             if self._match_trigger(reply_func_tuple["trigger"], sender):
   1245                 if asyncio.coroutines.iscoroutinefunction(reply_func):
-> 1246                     final, reply = await reply_func(
   1247                         self, messages=messages, sender=sender, config=reply_func_tuple["config"]
   1248                     )

/usr/local/lib/python3.10/dist-packages/autogen/agentchat/conversable_agent.py in a_generate_oai_reply(self, messages, sender, config)
    731     ) -> Tuple[bool, Union[str, Dict, None]]:
    732         """Generate a reply using autogen.oai asynchronously."""
--> 733         return await asyncio.get_event_loop().run_in_executor(
    734             None, functools.partial(self.generate_oai_reply, messages=messages, sender=sender, config=config)
    735         )

/usr/lib/python3.10/asyncio/futures.py in __await__(self)
    283         if not self.done():
    284             self._asyncio_future_blocking = True
--> 285             yield self  # This tells Task to wait for completion.
    286         if not self.done():
    287             raise RuntimeError("await wasn't used with future")

/usr/lib/python3.10/asyncio/tasks.py in __wakeup(self, future)
    302     def __wakeup(self, future):
    303         try:
--> 304             future.result()
    305         except BaseException as exc:
    306             # This may also be a cancellation.

/usr/lib/python3.10/asyncio/futures.py in result(self)
    199         self.__log_traceback = False
    200         if self._exception is not None:
--> 201             raise self._exception.with_traceback(self._exception_tb)
    202         return self._result
    203 

/usr/lib/python3.10/concurrent/futures/thread.py in run(self)
     56 
     57         try:
---> 58             result = self.fn(*self.args, **self.kwargs)
     59         except BaseException as exc:
     60             self.future.set_exception(exc)

/usr/local/lib/python3.10/dist-packages/autogen/agentchat/conversable_agent.py in generate_oai_reply(self, messages, sender, config)
    706 
    707         # TODO: #1143 handle token limit exceeded error
--> 708         response = client.create(
    709             context=messages[-1].pop("context", None), messages=self._oai_system_message + all_messages
    710         )

/usr/local/lib/python3.10/dist-packages/autogen/oai/client.py in create(self, **config)
    259                         continue  # filter is not passed; try the next config
    260             try:
--> 261                 response = self._completions_create(client, params)
    262             except APIError as err:
    263                 error_code = getattr(err, "code", None)

/usr/local/lib/python3.10/dist-packages/autogen/oai/client.py in _completions_create(self, client, params)
    376             params = params.copy()
    377             params["stream"] = False
--> 378             response = completions.create(**params)
    379 
    380         return response

/usr/local/lib/python3.10/dist-packages/openai/_utils/_utils.py in wrapper(*args, **kwargs)
    269                         msg = f"Missing required argument: {quote(missing[0])}"
    270                 raise TypeError(msg)
--> 271             return func(*args, **kwargs)
    272 
    273         return wrapper  # type: ignore

/usr/local/lib/python3.10/dist-packages/openai/resources/chat/completions.py in create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
    641         timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    642     ) -> ChatCompletion | Stream[ChatCompletionChunk]:
--> 643         return self._post(
    644             "/chat/completions",
    645             body=maybe_transform(

/usr/local/lib/python3.10/dist-packages/openai/_base_client.py in post(self, path, cast_to, body, options, files, stream, stream_cls)
   1089             method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1090         )
-> 1091         return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
   1092 
   1093     def patch(

/usr/local/lib/python3.10/dist-packages/openai/_base_client.py in request(self, cast_to, options, remaining_retries, stream, stream_cls)
    850         stream_cls: type[_StreamT] | None = None,
    851     ) -> ResponseT | _StreamT:
--> 852         return self._request(
    853             cast_to=cast_to,
    854             options=options,

/usr/local/lib/python3.10/dist-packages/openai/_base_client.py in _request(self, cast_to, options, remaining_retries, stream, stream_cls)
    931                 err.response.read()
    932 
--> 933             raise self._make_status_error_from_response(err.response) from None
    934 
    935         return self._process_response(

BadRequestError: Error code: 400 - {'error': {'message': "Additional properties are not allowed ('tool_responses' was unexpected) - 'messages.3'", 'type': 'invalid_request_error', 'param': None, 'code': None}}

@sonichi (Contributor) commented Jan 10, 2024:

@yenif I guess there are more bugs in the async functions. Could you check?

@rhamnett:

@sonichi yes, I guess that will be it.

@sonichi reopened this Jan 10, 2024
@yenif (Collaborator) commented Jan 10, 2024:

I'll be able to look at it tonight.

@radman-x (Collaborator):

I can confirm that the latest commit (e7cdae6) fixes this bug for me. I had exactly the same problem as @yoadsn's super-simple example before doing the latest pull, and now it is gone.

@rhamnett commented Jan 10, 2024:

Async is still an issue, though.

@davorrunje (Collaborator):

> Async still an issue though

#1201 solves a problem with async functions; it could be related.

@yenif mentioned this issue Jan 11, 2024
@yenif (Collaborator) commented Jan 11, 2024:

#1204 should cover everything

whiskyboy pushed a commit to whiskyboy/autogen that referenced this issue Apr 17, 2024