[Issue]: IndexError: list index out of range #2038
Comments
The local model you are using may not support empty messages in the list of messages. The UserProxyAgent sends a default empty message when no code is detected, and in this case it didn't detect the single-line code block. Try setting the default_auto_reply of UserProxyAgent to a different message, for example, "no code is found".
Thanks @sonichi for your reply! Here is my configuration:

# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    default_auto_reply="no code is found",
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    },
)

With this setting it no longer raises the IndexError. Is that caused by my default_auto_reply setting? Or is it caused by my local model?
Why would the IndexError be caused by my local model not supporting empty messages? I don't think that is the root cause, because my local model never receives any requests. Is there something wrong with my understanding?
I just tried to run the code under 'no code execution' in the quick start found here, and I got the same error.
I also encountered the error @MaveriQ mentioned when running the sample. Tried versions 0.2.36 and 0.2.37. (Came back to say that it did work when switching to 0.2.35.)
Guys, when you say "I got the same error", can you also post your model (local, remote, which API, version) as well as your code snippet to reproduce the error? Otherwise we don't know what to do with it. We don't have access to every possible model API and local model. Thanks.
Same error here. autogen version:

Tried to run the following code snippet from the getting started documentation:

import os
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}
assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent("user_proxy", code_execution_config=False)

# Start the chat
user_proxy.initiate_chat(
    assistant,
    message="Tell me a joke about NVDA and TESLA stock prices.",
)

Getting error:
I hit this as well.

After navigating the code I found the problem ultimately comes from this line (where the rate limiters are only initialised if the

Line 442 in 6103889

FIX

So the workaround for this bug is to simply supply the llm_config with a
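Since the workaround sentence above is cut off, here is a minimal sketch of what passing the settings through AutoGen's `config_list` key looks like. Whether this is the exact field the commenter meant is an assumption, and the model name and environment variable are placeholders reused from the earlier snippet in this thread:

```python
import os
from autogen import AssistantAgent, UserProxyAgent

# Sketch only: providing llm_config via "config_list" rather than a flat dict.
# This is an assumption about the truncated workaround above, not a confirmed fix.
llm_config = {
    "config_list": [
        {
            "model": "gpt-4",                         # placeholder model name
            "api_key": os.environ["OPENAI_API_KEY"],  # placeholder credential source
        }
    ],
}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent("user_proxy", code_execution_config=False)

user_proxy.initiate_chat(
    assistant,
    message="Tell me a joke about NVDA and TESLA stock prices.",
)
```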
I have the same error. Did you find the answer? Thanks.
Yes @huapsy, I posted the fix above! Just do this:
* fix bug in getting started guide for 0.2 #2038
* remove unneeded submodule
* remove unneeded submodule
* remove unnecessary file
Describe the issue
My Python version: 3.11
When I run the code from the notebook agentchat_auto_feedback_from_code_execution.ipynb, I get the error message:
I confirm that the request is not being sent to the LLM, because I use a local model and no request logs were found.
How to solve it?
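For context, since the report mentions a local model that never receives the request, below is a minimal sketch of how an OpenAI-compatible local endpoint is typically declared in llm_config. The endpoint URL, model name, and dummy key are placeholders for illustration, not values taken from this report:

```python
import autogen

# Sketch only: pointing AutoGen at an OpenAI-compatible local server.
# All values below are placeholders, not taken from the original report.
config_list = [
    {
        "model": "local-model",                   # whatever name the local server exposes
        "base_url": "http://localhost:8000/v1",   # OpenAI-compatible endpoint
        "api_key": "NULL",                        # many local servers ignore the key
    }
]

assistant = autogen.AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = autogen.UserProxyAgent("user_proxy", code_execution_config=False)
```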
Steps to reproduce
agentchat_auto_feedback_from_code_execution.ipynb
Screenshots and logs
Additional Information
No response