
[Issue]: IndexError: list index out of range #2038

Open
RyanChen1997 opened this issue Mar 16, 2024 · 10 comments
Labels
0.2 (Issues which are related to the pre 0.4 codebase) · code-execution (execute generated code) · documentation (Improvements or additions to documentation) · models (Pertains to using alternate, non-GPT, models, e.g., local models, llama, etc.) · needs-triage

Comments

@RyanChen1997

RyanChen1997 commented Mar 16, 2024

Describe the issue

My Python version: 3.11
I ran the following code from the notebook agentchat_auto_feedback_from_code_execution.ipynb:

# create an AssistantAgent named "assistant"
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={
        "cache_seed": 41,  # seed for caching and reproducibility
        "config_list": config_list,  # a list of OpenAI API configurations
        "temperature": 0,  # temperature for sampling
    },  # configuration for autogen's enhanced inference API which is compatible with OpenAI API
)
# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    },
)
# the assistant receives a message from the user_proxy, which contains the task description
chat_res = user_proxy.initiate_chat(
    assistant,
    message="""What date is today? Compare the year-to-date gain for META and TESLA.""",
    summary_method="reflection_with_llm",
)

Then I got this error message:

Traceback (most recent call last):
  File "/home/yongxiangchen69/develop/myproject/my_autogen/first_agent.py", line 37, in <module>
    chat_res = user_proxy.initiate_chat(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 973, in initiate_chat
    self.send(msg2send, recipient, silent=silent)
  File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 620, in send
    recipient.receive(message, self, request_reply, silent)
  File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 781, in receive
    self.send(reply, sender, silent=silent)
  File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 620, in send
    recipient.receive(message, self, request_reply, silent)
  File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 781, in receive
    self.send(reply, sender, silent=silent)
  File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 620, in send
    recipient.receive(message, self, request_reply, silent)
  File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 781, in receive
    self.send(reply, sender, silent=silent)
  File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 620, in send
    recipient.receive(message, self, request_reply, silent)
  File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 781, in receive
    self.send(reply, sender, silent=silent)
  File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 620, in send
    recipient.receive(message, self, request_reply, silent)
  File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 779, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 1862, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 1261, in generate_oai_reply
    extracted_response = self._generate_oai_reply_from_client(
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 1285, in _generate_oai_reply_from_client
    extracted_response = llm_client.extract_text_or_completion_object(response)[0]
                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range

I can confirm that the request is not being sent to the LLM: I am running a local model, and no request appears in its logs. How can I solve this?

Steps to reproduce

  1. pip install pyautogen
  2. run the code from the notebook agentchat_auto_feedback_from_code_execution.ipynb

Screenshots and logs

(Three screenshots attached, taken 2024-03-16.)

Additional Information

No response

@RyanChen1997 changed the title from "[Issue]: unable to run notebook code from" to "[Issue]: IndexError: list index out of range" on Mar 16, 2024
@sonichi
Contributor

sonichi commented Mar 16, 2024

The local model you are using may not support empty messages in the list of messages. The UserProxyAgent sends an empty message by default when no code is detected; in this case it didn't detect the single-line code block. Try setting the default_auto_reply of UserProxyAgent to a different message, for example "no code is found".
If that solves your problem, I'd appreciate the answer being added to the FAQ or tutorials.
cc @ekzhu @jackgerrits
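
A minimal sketch of the failure mode described above (the function below is a simplified stand-in, not the exact autogen internals): if the backend returns a completion with no choices for the empty auto-reply, the extracted list is empty and the [0] index fails with exactly this IndexError.

def extract_text(response_choices):
    # stands in for llm_client.extract_text_or_completion_object, which
    # returns one entry per choice in the completion
    return [choice["text"] for choice in response_choices]

response_choices = []                # backend rejected the empty message and returned no choices
extracted = extract_text(response_choices)
first = extracted[0]                 # IndexError: list index out of range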

@sonichi added the code-execution, models, and documentation labels on Mar 16, 2024
@RyanChen1997
Author

Thanks @sonichi for your reply!
Following your answer, I set default_auto_reply on the UserProxyAgent:

# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    default_auto_reply="no code is found",
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    },
)

It no longer raises the IndexError. But the process seems to enter a loop: the same answers and questions repeat over and over.
(Screenshots of the repeating conversation attached.)

Is this caused by my default_auto_reply setting, or by my local model?
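
One hedged way to bound such a loop is to cap the number of turns explicitly. The max_turns parameter appears in the initiate_chat signature visible in the later tracebacks in this thread; whether it resolves this particular loop is an assumption, not a confirmed fix.

# Sketch only: hard-cap the conversation so the default_auto_reply/answer
# cycle cannot repeat indefinitely; max_turns is taken from the
# initiate_chat signature shown in the tracebacks below.
chat_res = user_proxy.initiate_chat(
    assistant,
    message="What date is today? Compare the year-to-date gain for META and TESLA.",
    summary_method="reflection_with_llm",
    max_turns=4,
)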

@RyanChen1997
Author

> [quoting @sonichi's reply above]

Why would the IndexError be caused by my local model not supporting empty messages? I don't think that is the root cause, because my local model never receives any request at all. Is there something wrong with my understanding?

@rysweet added the 0.2 and needs-triage labels on Oct 2, 2024
@MaveriQ

MaveriQ commented Oct 26, 2024

I just tried to run the code under 'no code execution' in the quick start found here, and I got the same error.

@dokwasny

dokwasny commented Oct 28, 2024

I also encountered the error @MaveriQ mentioned when running the sample. I tried versions 0.2.36 and 0.2.37.

(Came back to say that it did work when switching to 0.2.35)

@ekzhu
Collaborator

ekzhu commented Oct 29, 2024

@MaveriQ @dokwasny

When you say "I got the same error", could you also post your model (local or remote, which API, which version) as well as a code snippet to reproduce the error?

Otherwise we don't know what to do with it.

We don't have access to every possible model API and local model.

Thanks,

@moryachok

Same error here.

autogen version: 0.2.37
python version: 3.12.7

I tried to run the following code snippet from the getting-started documentation:

import os
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}
assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent("user_proxy", code_execution_config=False)

# Start the chat
user_proxy.initiate_chat(
    assistant,
    message="Tell me a joke about NVDA and TESLA stock prices.",
)

Getting error:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
Cell In[36], line 15
     12 user_proxy = UserProxyAgent("user_proxy", code_execution_config=False)
     14 # Start the chat
---> 15 user_proxy.initiate_chat(
     16     assistant,
     17     message="Tell me a joke about NVDA and TESLA stock prices.",
     18 )

File ~/dev/.venv/lib/python3.12/site-packages/autogen/agentchat/conversable_agent.py:1114, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, cache, max_turns, summary_method, summary_args, message, **kwargs)
   1112     else:
   1113         msg2send = self.generate_init_message(message, **kwargs)
-> 1114     self.send(msg2send, recipient, silent=silent)
   1115 summary = self._summarize_chat(
   1116     summary_method,
   1117     summary_args,
   1118     recipient,
   1119     cache=cache,
   1120 )
   1121 for agent in [self, recipient]:

File ~/dev/.venv/lib/python3.12/site-packages/autogen/agentchat/conversable_agent.py:748, in ConversableAgent.send(self, message, recipient, request_reply, silent)
    746 valid = self._append_oai_message(message, "assistant", recipient, is_sending=True)
    747 if valid:
--> 748     recipient.receive(message, self, request_reply, silent)
    749 else:
    750     raise ValueError(
    751         "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided."
    752     )

File ~/dev/.venv/lib/python3.12/site-packages/autogen/agentchat/conversable_agent.py:914, in ConversableAgent.receive(self, message, sender, request_reply, silent)
    912 if request_reply is False or request_reply is None and self.reply_at_receive[sender] is False:
    913     return
--> 914 reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
    915 if reply is not None:
    916     self.send(reply, sender, silent=silent)

File ~/dev/.venv/lib/python3.12/site-packages/autogen/agentchat/conversable_agent.py:2068, in ConversableAgent.generate_reply(self, messages, sender, **kwargs)
   2066     continue
   2067 if self._match_trigger(reply_func_tuple["trigger"], sender):
-> 2068     final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
   2069     if logging_enabled():
   2070         log_event(
   2071             self,
   2072             "reply_func_executed",
   (...)
   2076             reply=reply,
   2077         )

File ~/dev/.venv/lib/python3.12/site-packages/autogen/agentchat/conversable_agent.py:1436, in ConversableAgent.generate_oai_reply(self, messages, sender, config)
   1434 if messages is None:
   1435     messages = self._oai_messages[sender]
-> 1436 extracted_response = self._generate_oai_reply_from_client(
   1437     client, self._oai_system_message + messages, self.client_cache
   1438 )
   1439 return (False, None) if extracted_response is None else (True, extracted_response)

File ~/dev/.venv/lib/python3.12/site-packages/autogen/agentchat/conversable_agent.py:1455, in ConversableAgent._generate_oai_reply_from_client(self, llm_client, messages, cache)
   1452         all_messages.append(message)
   1454 # TODO: #1143 handle token limit exceeded error
-> 1455 response = llm_client.create(
   1456     context=messages[-1].pop("context", None), messages=all_messages, cache=cache, agent=self
   1457 )
   1458 extracted_response = llm_client.extract_text_or_completion_object(response)[0]
   1460 if extracted_response is None:

File ~/dev/.venv/lib/python3.12/site-packages/autogen/oai/client.py:775, in OpenAIWrapper.create(self, **config)
    773             continue  # filter is not passed; try the next config
    774 try:
--> 775     self._throttle_api_calls(i)
    776     request_ts = get_current_ts()
    777     response = client.create(params)

File ~/dev/.venv/lib/python3.12/site-packages/autogen/oai/client.py:1072, in OpenAIWrapper._throttle_api_calls(self, idx)
   1070 def _throttle_api_calls(self, idx: int) -> None:
   1071     """Rate limit api calls."""
-> 1072     if self._rate_limiters[idx]:
   1073         limiter = self._rate_limiters[idx]
   1075         assert limiter is not None

IndexError: list index out of range

@exaspace

exaspace commented Nov 4, 2024

@moryachok

I hit this as well.

  1. pip install autogen-agentchat
  2. run the very first getting started example shown here: https://microsoft.github.io/autogen/0.2/docs/Getting-Started
  3. error IndexError: list index out of range

After digging through the code, I found that the problem ultimately comes from this line in autogen/oai/client.py, where the rate limiters are only initialised if the config_list parameter is present:

if config_list:
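
A hedged sketch of that mismatch (simplified names, not the actual autogen/oai/client.py source): the rate-limiter list is only populated when config_list is truthy, yet _throttle_api_calls later indexes it for every config index.

class WrapperSketch:
    def __init__(self, config_list=None):
        self._rate_limiters = []
        if config_list:  # skipped when llm_config carries only model/api_key
            self._rate_limiters = [None] * len(config_list)

    def _throttle_api_calls(self, idx):
        # indexes the list unconditionally, so it fails when the guard above was skipped
        if self._rate_limiters[idx]:
            pass

WrapperSketch()._throttle_api_calls(0)  # IndexError: list index out of range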

FIX

So the workaround for this bug is simply to supply llm_config with a config_list key:

llm_config = {
    "config_list": [
        {
            "model": "gpt-4",
            "api_key": os.environ.get("OPENAI_API_KEY"),
        },
    ],
}
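
Applied to the getting-started snippet quoted earlier in this thread, the full working form would look like this (a sketch; the agents and message are the ones from the snippet above):

import os
from autogen import AssistantAgent, UserProxyAgent

# the workaround: wrap model/api_key in a config_list entry so the
# rate limiters are initialised
llm_config = {
    "config_list": [
        {
            "model": "gpt-4",
            "api_key": os.environ.get("OPENAI_API_KEY"),
        },
    ],
}
assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent("user_proxy", code_execution_config=False)

user_proxy.initiate_chat(
    assistant,
    message="Tell me a joke about NVDA and TESLA stock prices.",
)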

@huapsy

huapsy commented Nov 5, 2024

> [quoting @moryachok's comment and traceback above]

I have the same error. Did you find the answer? Thanks.

@exaspace

exaspace commented Nov 5, 2024

Yes @huapsy, I posted the fix above!

Just do this:

llm_config = {
    "config_list": [
        {
            "model": "gpt-4",
            "api_key": os.environ.get("OPENAI_API_KEY"),
        },
    ],
}

ekzhu pushed a commit that referenced this issue Nov 5, 2024
* fix bug in getting started guide for 0.2 #2038

* remove uneeded submodule

* remove uneeded submodule

* remove unecessary file