Getting thread error when trying to use autogen through api #340


Closed
vashat opened this issue Oct 21, 2023 · 10 comments

vashat commented Oct 21, 2023

Hi!

I'm trying to wrap an API around an AutoGen setup, but it fails with "ValueError: signal only works in main thread of the main interpreter". This happens both when I put the app in Flask and when I use FastAPI. Here's my code:

import autogen
from flask import Flask, request, jsonify
import re
from flask_socketio import SocketIO
from flask_cors import CORS

config_list = [
    {
        'model': 'gpt-3.5-turbo',
        'api_key': '',
    }
]


# create an AssistantAgent named "assistant"
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={
        "seed": 72,  # seed for caching and reproducibility
        "config_list": config_list,  # a list of OpenAI API configurations
        "temperature": 0,  # temperature for sampling
    },  # configuration for autogen's enhanced inference API which is compatible with OpenAI API
    system_message="""
    Some prompt stuff...
    Give a final, very short answer to the user when the code has run without errors, and then reply TERMINATE at the end.
    """
)
# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE") or x.get("content", "").rstrip().endswith("TERMINATE."),
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,  # set to True or image name like "python:3" to use docker
    },
)

app = Flask(__name__)
CORS(app)

socketio = SocketIO(app, cors_allowed_origins="*")



@app.route('/api/chat', methods=['POST'])
def chat():
    print("In function")
    if request.json.get('question'):
        # Extract text
        question = request.json.get('question')
        user_proxy.initiate_chat(
            assistant,
            message = question
        )
        result = user_proxy.chat_messages[assistant]
        for i, message in enumerate(result):
            if "exitcode: 0 (execution succeeded)" in message['content']:
                if ".png" in message['content']:
                    match = re.search(r'\b\w+[-\w]*\.png\b', message['content'])
                    response = {'answer': 'Here is the requested graph', 'image': match.group()}
                else:
                    answer = result[i + 1]['content'].removesuffix("TERMINATE").removesuffix("TERMINATE.")
                    response = {'answer': answer, 'image': ''}
                print("Going to return:", response)
                return jsonify(response)
    return jsonify({'error': 'No valid data provided'})


if __name__ == '__main__':
    # Run the Flask app
    #app.run(debug=False)
    socketio.run(app)

The error I get is:

[2023-10-21 21:26:52,276] ERROR in app: Exception on /api/chat [POST]
Traceback (most recent call last):
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/flask_cors/extension.py", line 176, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/Users/admin/scripts/autogen/climate-api.py", line 66, in chat
    user_proxy.initiate_chat(
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 531, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 334, in send
    recipient.receive(message, self, request_reply, silent)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 464, in receive
    self.send(reply, sender, silent=silent)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 334, in send
    recipient.receive(message, self, request_reply, silent)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 464, in receive
    self.send(reply, sender, silent=silent)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 334, in send
    recipient.receive(message, self, request_reply, silent)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 464, in receive
    self.send(reply, sender, silent=silent)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 334, in send
    recipient.receive(message, self, request_reply, silent)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 464, in receive
    self.send(reply, sender, silent=silent)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 334, in send
    recipient.receive(message, self, request_reply, silent)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 464, in receive
    self.send(reply, sender, silent=silent)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 334, in send
    recipient.receive(message, self, request_reply, silent)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 462, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 781, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 637, in generate_code_execution_reply
    exitcode, logs = self.execute_code_blocks(code_blocks)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 908, in execute_code_blocks
    exitcode, logs, image = self.run_code(
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/agentchat/conversable_agent.py", line 885, in run_code
    return execute_code(code, **kwargs)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/site-packages/autogen/code_utils.py", line 310, in execute_code
    signal.signal(signal.SIGALRM, timeout_handler)
  File "/Users/admin/scripts/miniconda3/envs/autogen/lib/python3.9/signal.py", line 56, in signal
    handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread of the main interpreter

So the error is triggered somewhere in the autogen package.
Are there any examples out there of how to wrap an autogen app in an API?
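
The root cause can be reproduced without Flask or AutoGen at all. A minimal standard-library sketch (an illustration, not AutoGen's actual code; Unix-only, since SIGALRM does not exist on Windows) shows that installing a SIGALRM handler only works on the main thread:

```python
import signal
import threading

def try_set_alarm_handler():
    """Try to install a SIGALRM handler; Python only allows this on the main thread."""
    try:
        signal.signal(signal.SIGALRM, lambda signum, frame: None)
        return "ok"
    except ValueError as exc:
        return str(exc)

# On the main thread this succeeds.
print(try_set_alarm_handler())

# In a worker thread (which is where Flask runs request handlers),
# it raises the same ValueError seen in the traceback above.
results = []
worker = threading.Thread(target=lambda: results.append(try_set_alarm_handler()))
worker.start()
worker.join()
print(results[0])
```

This is why the traceback ends in autogen's `execute_code`: the handler runs on a Flask worker thread, and the `signal.signal(signal.SIGALRM, ...)` call there is what fails.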

vashat (Author) commented Oct 22, 2023

I can add that the problem is triggered when I POST a question like this:

curl -X POST http://localhost:5000/api/chat -H "Content-Type: application/json" -d '{"question":"My question here?"}'

sonichi added the ui label Oct 22, 2023
sonichi (Contributor) commented Oct 22, 2023

@victordibia

@victordibia (Collaborator)

Taking a look. Will post an update here once I have one.


bruinon commented Oct 23, 2023

I have the same problem running AutoGen inside Gradio:

--------------------------------------------------------------------------------
Dwight (to Michael):

Multiplication is a mathematical operation that combines two numbers to find their product. It can be thought of as repeated addition. For example, if you want to multiply 3 by 4, you can think of it as adding 3 four times: 3 + 3 + 3 + 3 = 12. The result, 12, is the product of the multiplication.

In general, to multiply two numbers, you can follow these steps:

  1. Write down the two numbers you want to multiply.
  2. Multiply the first number by each digit of the second number, starting from the rightmost digit.
  3. Write down the partial products, shifting each one place to the left as you move to the next digit of the second number.
  4. Add up all the partial products to get the final product.

Here's an example of multiplying 23 by 5:

  23
x  5
-----
 115 (23 * 5)

The product of 23 and 5 is 115.

In Python, you can perform multiplication using the * operator. For example:

result = 3 * 4
print(result)  # Output: 12

This code multiplies 3 by 4 and prints the result, which is 12.


EXECUTING CODE BLOCK 0 (inferred language is python)...
signal only works in main thread of the main interpreter


victordibia (Collaborator) commented Oct 23, 2023

@vashat .. so I am able to replicate the issue you mention.
My main observations so far:

  • AutoGen uses the Python signal module to manage code-execution timeouts. However, signal only works in the main thread of the main interpreter.

  • For some reason, the Flask app above (despite being launched from if __name__ == '__main__':) runs the initiate_chat method in a thread that is not the main thread of the main interpreter.

To get you unblocked, I might suggest trying out FastAPI (without sockets). I have an example project where a FastAPI endpoint returns simple queries to a frontend here

Can you confirm whether you are able to run the FastAPI sample above?


In general, we might see more use cases/workflows where the client application requires autogen to run in a separate thread. For this, we will probably need to discuss alternatives to signal. @sonichi
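
One signal-free alternative (a hypothetical sketch of the idea, not AutoGen's actual implementation) is to run the generated code in a child process and enforce the limit with `subprocess.run(timeout=...)`, which works from any thread:

```python
import subprocess
import sys

def execute_code_with_timeout(code: str, timeout: int = 10):
    """Run a Python snippet in a child process and kill it if it exceeds timeout.

    Unlike signal.alarm-based timeouts, this works from worker threads too,
    because no signal handler is ever installed in the parent process.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return proc.returncode, proc.stdout + proc.stderr
    except subprocess.TimeoutExpired:
        return 1, f"Timeout after {timeout} seconds"

exitcode, logs = execute_code_with_timeout("print(6 * 7)")
```

`subprocess.run` kills the child and raises `TimeoutExpired` when the limit is hit, so the timeout is enforced without touching the signal module at all.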

sonichi (Contributor) commented Oct 23, 2023

#224
This PR has been open for a while but no review has been made. Could you remind the reviewers, please?


fdchiu commented Feb 12, 2024

Any resolution to this problem? It seems to happen when autogen runs in the cloud and executes code (Gradio, Streamlit, etc.).

Any workaround?
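
One generic workaround sketch (an assumption on my part, using only the standard library, not an official fix): keep everything that needs signal on the main thread, and have the web framework's worker threads hand work to it through a queue:

```python
import queue
import threading

jobs: queue.Queue = queue.Queue()

def handle_request(question: str) -> str:
    """Called from any worker thread (e.g. a Flask or Gradio handler)."""
    reply_q: queue.Queue = queue.Queue()
    jobs.put((question, reply_q))
    return reply_q.get(timeout=5)  # wait for the main thread's answer

def process_one_job() -> None:
    """Runs on the main thread, where signal-based timeouts are allowed.

    In a real app this would call user_proxy.initiate_chat(...) in a loop.
    """
    question, reply_q = jobs.get()
    reply_q.put(f"answered: {question}")

# Demo: a worker thread submits a job; the main thread services it.
results = []
worker = threading.Thread(target=lambda: results.append(handle_request("hi")))
worker.start()
process_one_job()
worker.join()
```

Because `initiate_chat` then always executes on the main thread, autogen's `signal.signal(signal.SIGALRM, ...)` call succeeds, at the cost of serializing chat requests.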

@victordibia (Collaborator)

Hi @fdchiu ,

Can you share the error you are getting?
Is it related to signal, e.g. "ValueError: signal only works in main thread of the main interpreter"?


fdchiu commented Feb 12, 2024 via email

ekzhu (Collaborator) commented Feb 13, 2024

@vashat is this issue resolved for you? If so, please close. Thanks!

gagb closed this as completed Aug 27, 2024