[Bug]: ValidationError occurs when running branch gemini #1139

Closed · OTR opened this issue Jan 4, 2024 · 13 comments

OTR commented Jan 4, 2024

Describe the bug

First Case Scenario

Given

Environment: My local Windows 10 machine
Python version: Python 3.11.7 (tags/v3.11.7:fa7a6f2, Dec 4 2023, 19:24:49) [MSC v.1937 64 bit (AMD64)] on win32
autogen: version 0.2.2
branch: gemini

When

I tried to run the first function from the samples listed below on my local Windows machine.

Then

I got a StopCandidateException (I only rarely get that exception; it is an intermittent bug):

raise generation_types.StopCandidateException(response.candidates[0])
google.generativeai.types.generation_types.StopCandidateException: index: 0
finish_reason: RECITATION

see full traceback listing below

Then, when I tried to reproduce the exception above, I got another exception (see step 5 in Steps to Reproduce).

Second Case Scenario

I tried to reproduce the bug above on the GitHub Codespaces platform.

Given:

Environment: GitHub Codespaces
Python version:

  • Python 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] on linux
    
  • Python 3.10.13 (main, Nov 29 2023, 05:20:19) [GCC 12.2.0] on linux
    

autogen: version 0.2.2
branch: gemini
commit #: c6792a8

When:

I tried to run the second function from the samples, using the gemini branch and my Google Generative AI API key.

Then:

I got ValidationError in pydantic_core package.

pydantic_core._pydantic_core.ValidationError: 1 validation error for Choice
logprobs
  Field required [type=missing, input_value={'finish_reason': 'stop',...=None, tool_calls=None)}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.5/v/missing

See full traceback below

Steps to reproduce

Step 1

At first I tried to install the autogen package from the gemini branch with the following commands (python3.10 by default):

pip install https://github.com/microsoft/autogen/archive/gemini.zip
pip install "google-generativeai" "pydash" "pillow"
pip install "pyautogen[gemini]~=0.2.0b4"

Step 2

And I got the exception from the second case scenario.

Step 3

Then I tried to create an isolated environment with poetry and python3.11, and installed autogen with the following commands:

pip install https://github.com/microsoft/autogen/archive/gemini.zip
pip install "google-generativeai" "pydash" "pillow"
pip install "pyautogen[gemini]~=0.2.0b4"

Step 4

And I got the same exception from the second case scenario in the pydantic_core package.

Step 5

Then I thought I should get rid of the hard-coded version pin (0.2.0b4) on the last installed package, and tried the following commands on my local machine within the isolated poetry environment:

pip install https://github.com/microsoft/autogen/archive/gemini.zip
pip install "google-generativeai" "pydash" "pillow"
pip install "pyautogen[gemini]"

Step 6

Run just the first function within the main block, with the second function call commented out.

Step 7

And I got the StopCandidateException (first case scenario). But it is an intermittent bug, and I managed to trigger it only once.

Expected Behavior

Agents should start communicating.

Screenshots and logs

Full Traceback for First case scenario :

> $ python sample.py

user_proxy (to assistant):

Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]

--------------------------------------------------------------------------------
Traceback (most recent call last):
  File "\home\PycharmProjects\autogen_gemini_test\sample.py", line 45, in <module>
    first()
  File "\home\PycharmProjects\autogen_gemini_test\sample.py", line 25, in first
    user_proxy.initiate_chat(assistant, message="Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]")
  File "\home\AppData\Local\pypoetry\Cache\virtualenvs\autogen-gemini-test-yrRBJdLh-py3.11\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 562, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "\home\AppData\Local\pypoetry\Cache\virtualenvs\autogen-gemini-test-yrRBJdLh-py3.11\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 360, in send
    recipient.receive(message, self, request_reply, silent)
  File "\home\AppData\Local\pypoetry\Cache\virtualenvs\autogen-gemini-test-yrRBJdLh-py3.11\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 493, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "\home\AppData\Local\pypoetry\Cache\virtualenvs\autogen-gemini-test-yrRBJdLh-py3.11\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 968, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "\home\AppData\Local\pypoetry\Cache\virtualenvs\autogen-gemini-test-yrRBJdLh-py3.11\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 637, in generate_oai_reply
    response = client.create(
               ^^^^^^^^^^^^^^
  File "\home\AppData\Local\pypoetry\Cache\virtualenvs\autogen-gemini-test-yrRBJdLh-py3.11\Lib\site-packages\autogen\oai\client.py", line 274, in create
    response = client.call(params)
               ^^^^^^^^^^^^^^^^^^^
  File "\home\AppData\Local\pypoetry\Cache\virtualenvs\autogen-gemini-test-yrRBJdLh-py3.11\Lib\site-packages\autogen\oai\gemini.py", line 93, in call
    response = chat.send_message(gemini_messages[-1].parts[0].text, stream=stream)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "\home\AppData\Local\pypoetry\Cache\virtualenvs\autogen-gemini-test-yrRBJdLh-py3.11\Lib\site-packages\google\generativeai\generative_models.py", line 384, in send_message
    raise generation_types.StopCandidateException(response.candidates[0])
google.generativeai.types.generation_types.StopCandidateException: index: 0
finish_reason: RECITATION

Full Traceback for Second case scenario :

> $ python simple_chat.py 

user_proxy (to assistant):

Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]

--------------------------------------------------------------------------------
Traceback (most recent call last):
  File "/workspaces/autogen/samples/simple_chat.py", line 46, in <module>
    another()
  File "/workspaces/autogen/samples/simple_chat.py", line 25, in another
    user_proxy.initiate_chat(assistant, message="Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]")
  File "/home/vscode/.cache/pypoetry/virtualenvs/samples-czj8q62m-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 562, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "/home/vscode/.cache/pypoetry/virtualenvs/samples-czj8q62m-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 360, in send
    recipient.receive(message, self, request_reply, silent)
  File "/home/vscode/.cache/pypoetry/virtualenvs/samples-czj8q62m-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 493, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vscode/.cache/pypoetry/virtualenvs/samples-czj8q62m-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 968, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vscode/.cache/pypoetry/virtualenvs/samples-czj8q62m-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 637, in generate_oai_reply
    response = client.create(
               ^^^^^^^^^^^^^^
  File "/home/vscode/.cache/pypoetry/virtualenvs/samples-czj8q62m-py3.11/lib/python3.11/site-packages/autogen/oai/client.py", line 274, in create
    response = client.call(params)
               ^^^^^^^^^^^^^^^^^^^
  File "/home/vscode/.cache/pypoetry/virtualenvs/samples-czj8q62m-py3.11/lib/python3.11/site-packages/autogen/oai/gemini.py", line 123, in call
    choices = [Choice(finish_reason="stop", index=0, message=message)]
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vscode/.cache/pypoetry/virtualenvs/samples-czj8q62m-py3.11/lib/python3.11/site-packages/pydantic/main.py", line 164, in __init__
    __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
pydantic_core._pydantic_core.ValidationError: 1 validation error for Choice
logprobs
  Field required [type=missing, input_value={'finish_reason': 'stop',...=None, tool_calls=None)}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.5/v/missing
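
For context, here is a minimal sketch that reproduces the validation failure outside AutoGen. It assumes openai>=1.5, where the Choice model declares logprobs as a required (but nullable) field; the gemini.py wrapper omits it, which is what trips pydantic:

```python
# Hedged reproduction sketch, not AutoGen code.
from openai.types.chat import ChatCompletionMessage
from openai.types.chat.chat_completion import Choice

message = ChatCompletionMessage(role="assistant", content="hello")

# This mirrors autogen/oai/gemini.py line 123 above and fails on openai>=1.5:
# pydantic reports `logprobs: Field required`.
# Choice(finish_reason="stop", index=0, message=message)

# Passing logprobs=None explicitly satisfies the validator.
choice = Choice(finish_reason="stop", index=0, message=message, logprobs=None)
print(choice.finish_reason)  # "stop"
```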

Additional Information

Code samples I tried to run

from autogen import UserProxyAgent, ConversableAgent, config_list_from_json, AssistantAgent
from autogen.code_utils import content_str  # needed for is_termination_msg below

config_list_gemini = [{
        "model": "gemini-pro",
        "api_key": "AIza-my-api-key",
        "api_type": "google"
}]

config_list_gemini_vision = [{
        "model": "gemini-pro-vision",
        "api_key": "AIza-my-api-key",
        "api_type": "google"
}]

def first():
    assistant = AssistantAgent("assistant",
                           llm_config={"config_list": config_list_gemini, "seed": 42},
                           max_consecutive_auto_reply=13)

    user_proxy = UserProxyAgent("user_proxy",
                                code_execution_config={"work_dir": "coding", "use_docker": False},
                                human_input_mode="NEVER",
                                is_termination_msg=lambda x: content_str(x.get("content")).find("TERMINATE") >= 0)

    user_proxy.initiate_chat(assistant, message="Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]")


def second():

    assistant = ConversableAgent("agent", llm_config={"config_list": config_list_gemini})

    user_proxy = UserProxyAgent("user", code_execution_config=True)

    assistant.initiate_chat(user_proxy, message="How can I help you today?")


if __name__ == "__main__":
    first()
    second()

Packages installed for the first case:

$ pip freeze

  • annotated-types==0.6.0
  • anyio==4.2.0
  • cachetools==5.3.2
  • certifi==2023.11.17
  • charset-normalizer==3.3.2
  • colorama==0.4.6
  • diskcache==5.6.3
  • distro==1.9.0
  • FLAML==2.1.1
  • google-ai-generativelanguage==0.4.0
  • google-api-core==2.15.0
  • google-auth==2.26.1
  • google-generativeai==0.3.2
  • googleapis-common-protos==1.62.0
  • grpcio==1.60.0
  • grpcio-status==1.60.0
  • h11==0.14.0
  • httpcore==1.0.2
  • httpx==0.26.0
  • idna==3.6
  • numpy==1.26.3
  • openai==1.6.1
  • pillow==10.2.0
  • proto-plus==1.23.0
  • protobuf==4.25.1
  • pyasn1==0.5.1
  • pyasn1-modules==0.3.0
  • pyautogen @ https://github.com/microsoft/autogen/archive/gemini.zip#sha256=a7ecbc81aa9279dde95be5ef7e33a8cd1733d90db3622c0f29ca8a53b4de511c
  • pydantic==2.5.3
  • pydantic_core==2.14.6
  • pydash==7.0.6
  • python-dotenv==1.0.0
  • regex==2023.12.25
  • requests==2.31.0
  • rsa==4.9
  • sniffio==1.3.0
  • termcolor==2.4.0
  • tiktoken==0.5.2
  • tqdm==4.66.1
  • typing_extensions==4.9.0
  • urllib3==2.1.0

Packages installed for the second case:

$ pip freeze

  • annotated-types==0.6.0
  • anyio==4.2.0
  • cachetools==5.3.2
  • certifi==2023.11.17
  • charset-normalizer==3.3.2
  • diskcache==5.6.3
  • distro==1.9.0
  • FLAML==2.1.1
  • google-ai-generativelanguage==0.4.0
  • google-api-core==2.15.0
  • google-auth==2.26.1
  • google-generativeai==0.3.2
  • googleapis-common-protos==1.62.0
  • grpcio==1.60.0
  • grpcio-status==1.60.0
  • h11==0.14.0
  • httpcore==1.0.2
  • httpx==0.26.0
  • idna==3.6
  • numpy==1.26.3
  • openai==1.6.1
  • pillow==10.2.0
  • proto-plus==1.23.0
  • protobuf==4.25.1
  • pyasn1==0.5.1
  • pyasn1-modules==0.3.0
  • pyautogen @ https://github.com/microsoft/autogen/archive/gemini.zip#sha256=a7ecbc81aa9279dde95be5ef7e33a8cd1733d90db3622c0f29ca8a53b4de511c
  • pydantic==2.5.3
  • pydantic_core==2.14.6
  • pydash==7.0.6
  • python-dotenv==1.0.0
  • regex==2023.12.25
  • requests==2.31.0
  • rsa==4.9
  • sniffio==1.3.0
  • termcolor==2.4.0
  • tiktoken==0.5.2
  • tqdm==4.66.1
  • typing_extensions==4.9.0
  • urllib3==2.1.0
OTR added the bug label Jan 4, 2024
rickyloynd-microsoft (Contributor) commented:

@BeibinLi

BeibinLi self-assigned this Jan 4, 2024
BeibinLi (Collaborator) commented Jan 4, 2024

Thanks for the details! It seems you are using a different pydantic version. Can you try:

pip install "google-generativeai" "pydash" "pillow" "pydantic==1.10.13"

OTR (Author) commented Jan 5, 2024

> Thanks for the details! It seems you are using a different pydantic version. Can you try:
> pip install "google-generativeai" "pydash" "pillow" "pydantic==1.10.13"

That helps with the exception in the pydantic_core package, but here are two more bugs in the gemini branch:

First case: StopCandidateException raised with finish_reason RECITATION:

Installation phase

pip install https://github.com/microsoft/autogen/archive/gemini.zip
pip install "google-generativeai" "pydash" "pillow" "pydantic==1.10.13"
pip install "pyautogen[gemini]"

The code I try to run:

from autogen import AssistantAgent, UserProxyAgent
from autogen.code_utils import content_str

config_list_gemini = [{
        "model": "gemini-pro",
        "api_key": "AIza-my-api-key",
        "api_type": "google"
}]

assistant = AssistantAgent("assistant",
                           llm_config={"config_list": config_list_gemini, "seed": 42},
                           max_consecutive_auto_reply=13)

user_proxy = UserProxyAgent("user_proxy",
                            code_execution_config={"work_dir": "coding", "use_docker": False},
                            human_input_mode="NEVER",
                            is_termination_msg=lambda x: content_str(x.get("content")).find("TERMINATE") >= 0)

user_proxy.initiate_chat(assistant, message="Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]")

Full Traceback listing:

agent (to user):

How can I help you today?

--------------------------------------------------------------------------------
Provide feedback to agent. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]
user (to agent):

Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]

--------------------------------------------------------------------------------

>>>>>>>> USING AUTO REPLY...
---------------------------------------------------------------------------
StopCandidateException                    Traceback (most recent call last)
<ipython-input-14-e58c8e72340a> in <cell line: 11>()
      9                            is_termination_msg = lambda x: content_str(x.get("content")).find("TERMINATE") >= 0)
     10
---> 11 user_proxy.initiate_chat(assistant, message="Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]")
     12

7 frames
/usr/local/lib/python3.10/dist-packages/autogen/agentchat/conversable_agent.py in initiate_chat(self, recipient, clear_history, silent, **context)
    560         """
    561         self._prepare_chat(recipient, clear_history)
--> 562         self.send(self.generate_init_message(**context), recipient, silent=silent)
    563 
    564     async def a_initiate_chat(

/usr/local/lib/python3.10/dist-packages/autogen/agentchat/conversable_agent.py in send(self, message, recipient, request_reply, silent)
    358         valid = self._append_oai_message(message, "assistant", recipient)
    359         if valid:
--> 360             recipient.receive(message, self, request_reply, silent)
    361         else:
    362             raise ValueError(

/usr/local/lib/python3.10/dist-packages/autogen/agentchat/conversable_agent.py in receive(self, message, sender, request_reply, silent)
    491         if request_reply is False or request_reply is None and self.reply_at_receive[sender] is False:
    492             return
--> 493         reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
    494         if reply is not None:
    495             self.send(reply, sender, silent=silent)

/usr/local/lib/python3.10/dist-packages/autogen/agentchat/conversable_agent.py in generate_reply(self, messages, sender, exclude)
    966                 continue
    967             if self._match_trigger(reply_func_tuple["trigger"], sender):
--> 968                 final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
    969                 if final:
    970                     return reply

/usr/local/lib/python3.10/dist-packages/autogen/agentchat/conversable_agent.py in generate_oai_reply(self, messages, sender, config)
    635 
    636         # TODO: #1143 handle token limit exceeded error
--> 637         response = client.create(
    638             context=messages[-1].pop("context", None), messages=self._oai_system_message + messages
    639         )

/usr/local/lib/python3.10/dist-packages/autogen/oai/client.py in create(self, **config)
    272             try:
    273                 if isinstance(client, GeminiClient):
--> 274                     response = client.call(params)
    275                 else:
    276                     response = self._completions_create(client, params)

/usr/local/lib/python3.10/dist-packages/autogen/oai/gemini.py in call(self, params)
     91             chat = model.start_chat(history=gemini_messages[:-1])
     92             try:
---> 93                 response = chat.send_message(gemini_messages[-1].parts[0].text, stream=stream)
     94             except InternalServerError as e:
     95                 print(e)

/usr/local/lib/python3.10/dist-packages/google/generativeai/generative_models.py in send_message(self, content, generation_config, safety_settings, stream, **kwargs)
    382                 glm.Candidate.FinishReason.MAX_TOKENS,
    383             ):
--> 384                 raise generation_types.StopCandidateException(response.candidates[0])
    385 
    386         self._last_sent = content

StopCandidateException: finish_reason: RECITATION
index: 0
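
As an aside, here is a hedged sketch of how one could observe the RECITATION finish reason without triggering the exception, by calling generate_content directly instead of ChatSession.send_message (names taken from google-generativeai 0.3.x and the traceback above; the API key handling is illustrative):

```python
import google.generativeai as genai

genai.configure(api_key="AIza-my-api-key")  # placeholder key
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content(
    "Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]"
)
candidate = response.candidates[0]
if candidate.finish_reason.name == "RECITATION":
    # send_message() would raise StopCandidateException at this point.
    print("Gemini declined with RECITATION; the prompt may need rephrasing.")
else:
    print(response.text)
```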

Second case: ValueError raised when I am asked for feedback and send an empty string:

The installation phase is the same.

Expected behaviour:

Common sense tells me that "Press enter to skip" means passing an empty string to Python's input() function. That should be handled behaviour: the program shouldn't crash when I send an empty string back, but it does.
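
A minimal sketch of the kind of guard that would avoid the crash (illustrative only; safe_send is a hypothetical helper, not an actual AutoGen function, and the placeholder text is just one possible choice):

```python
def safe_send(chat, text: str, stream: bool = False):
    """Send a message to a Gemini ChatSession, never with empty content.

    google-generativeai's to_content() raises ValueError("content must not
    be empty"), so substitute a placeholder for blank input.
    """
    if not text or not text.strip():
        text = "(empty message)"
    return chat.send_message(text, stream=stream)
```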

The sample I try to run

from autogen import UserProxyAgent, ConversableAgent

config_list_gemini = [{
        "model": "gemini-pro",
        "api_key": "AIza-my-api-key",
        "api_type": "google"
}]

def main():
    assistant = ConversableAgent("agent", llm_config={"config_list": config_list_gemini})

    user_proxy = UserProxyAgent("user", code_execution_config=False)

    assistant.initiate_chat(user_proxy, message="How can I help you today?")


if __name__ == "__main__":
    main()

Full Traceback listing:

agent (to user):

How can I help you today?

--------------------------------------------------------------------------------
Provide feedback to agent. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]
user (to agent):

Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]

--------------------------------------------------------------------------------

>>>>>>>> USING AUTO REPLY...
INFO:autogen.token_count_utils:Gemini is not supported in tiktoken. Returning num tokens assuming gpt-4-0613.
WARNING:autogen.token_count_utils:Model gemini-pro not found. Using cl100k_base encoding.
Warning: model not found. Using cl100k_base encoding.
agent (to user):

```python
def bubble_sort(array):
    """
    Sorts the given array using the Bubble Sort algorithm.

    Parameters:
        array: The array to be sorted.

    Returns:
        The sorted array.
    """

    # Iterate over the array multiple times
    for i in range(len(array) - 1):
        # In each iteration, compare adjacent elements and swap them if they are in the wrong order
        for j in range(len(array) - 1 - i):
            if array[j] > array[j + 1]:
                array[j], array[j + 1] = array[j + 1], array[j]

    return array


if __name__ == "__main__":
    array = [4, 1, 3, 2]
    print(f"Unsorted array: {array}")

    sorted_array = bubble_sort(array)
    print(f"Sorted array: {sorted_array}")


Output:


Unsorted array: [4, 1, 3, 2]
Sorted array: [1, 2, 3, 4]

--------------------------------------------------------------------------------
Provide feedback to agent. Press enter to skip and use auto-reply, or type 'exit' to end the conversation:

>>>>>>>> NO HUMAN INPUT RECEIVED.

>>>>>>>> USING AUTO REPLY...
user (to agent):



--------------------------------------------------------------------------------

>>>>>>>> USING AUTO REPLY...
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-7-9c7ec639fed5> in <cell line: 20>()
     19
     20 if __name__ == "__main__":
---> 21     main()

15 frames
/usr/local/lib/python3.10/dist-packages/google/generativeai/types/content_types.py in to_content(content)
    192 def to_content(content: ContentType):
    193     if not content:
--> 194         raise ValueError("content must not be empty")
    195
    196     if isinstance(content, Mapping):

ValueError: content must not be empty

BeibinLi (Collaborator) commented Jan 8, 2024

Looking at the block below, it seems like you have the "human_input_mode" parameter wrong.

> Provide feedback to agent. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]
> user (to agent):


Also, the code and the log you provided did not match. For instance, the agents' names in the code do not match those in the output log.

One possibility is that the code changed after the same question was cached; you can try running rm -rf .cache in the same folder before rerunning.
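
Alternatively, a sketch of disabling the cache from the config rather than deleting the folder (assuming autogen 0.2.x, where llm_config accepts cache_seed; in some earlier builds the knob was called seed):

```python
from autogen import AssistantAgent

config_list_gemini = [{
    "model": "gemini-pro",
    "api_key": "AIza-my-api-key",  # placeholder
    "api_type": "google",
}]

# cache_seed=None turns off the disk cache, so edited code can never be
# answered from a stale cached completion.
assistant = AssistantAgent(
    "assistant",
    llm_config={"config_list": config_list_gemini, "cache_seed": None},
)
```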

ng-tarun commented:

Has anyone resolved this error?

OTR (Author) commented Jan 23, 2024

Please fix the code. I am still getting the RECITATION exception:

Installation (This time running in Google Colab)

!pip install https://github.com/microsoft/autogen/archive/gemini.zip
!pip install "google-generativeai" "pydash" "pillow"
!pip install git+https://github.com/microsoft/autogen.git@gemini

from google.colab import userdata

Source code

from autogen import UserProxyAgent, ConversableAgent, config_list_from_json, AssistantAgent
from autogen.code_utils import content_str  # needed for is_termination_msg below

config_list_gemini = [{
        "model": "gemini-pro",
        "api_key": userdata.get('GOOGLE_AI_API_KEY'),
        "api_type": "google"
}]


def first():
    assistant = AssistantAgent("assistant",
                           llm_config={"config_list": config_list_gemini, "seed": 42},
                           max_consecutive_auto_reply=13)

    user_proxy = UserProxyAgent("user_proxy",
                                code_execution_config={"work_dir": "coding", "use_docker": False},
                                human_input_mode="NEVER",
                                is_termination_msg=lambda x: content_str(x.get("content")).find("TERMINATE") >= 0)

    user_proxy.initiate_chat(assistant, message="Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]")


if __name__ == "__main__":
    first()

Traceback

user_proxy (to assistant):

Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]

--------------------------------------------------------------------------------
---------------------------------------------------------------------------
StopCandidateException                    Traceback (most recent call last)
[<ipython-input-2-fd2ce89ef12a>](https://localhost:8080/#) in <cell line: 37>()
     36 
     37 if __name__ == "__main__":
---> 38     first()
     39     second()

8 frames
/usr/local/lib/python3.10/dist-packages/google/generativeai/generative_models.py in send_message(self, content, generation_config, safety_settings, stream, **kwargs)
    382                 glm.Candidate.FinishReason.MAX_TOKENS,
    383             ):
--> 384                 raise generation_types.StopCandidateException(response.candidates[0])
    385 
    386         self._last_sent = content

StopCandidateException: finish_reason: RECITATION
index: 0

With the same setup, I caught a different exception, ValueError:

To troubleshoot this issue, let's try a few things:

1. Make sure that you are providing input to the program correctly. You can do this by running the program from the command line and providing the input directly. For example, you could run the program like this:


python complete_graph.py <<EOF
4
1 2
1 3
3 4
5
1 2
1 3
2 3
4 1
4 2
EOF


This will provide the following input to the program:


4
1 2
1 3
3 4
5
1 2
1 3
2 3
4 1
4 2


2. If you are providing input to the program correctly, then the issue may be with how the input is being read. You can try to debug the program by printing the value of the `line` variable before trying to convert it to an integer. This will help you see what value is being read from the input.

3. If the `line` variable is not empty, then the issue may be with the input itself. Make sure that the input is in the correct format and that it does not contain any invalid characters.

Once you have identified the source of the issue, you can take steps to fix it. For example, if you are not providing input to the program correctly, you can simply provide the input directly when you run the program. If there is an issue with how the input is being read, you can try to debug the program to find the source of the issue. And if there is an issue with the input itself, you can correct the input and then run the program again.

--------------------------------------------------------------------------------

>>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is sh)...

>>>>>>>> EXECUTING CODE BLOCK 1 (inferred language is unknown)...
user_proxy (to assistant):

exitcode: 1 (execution failed)
Code output: 
4


unknown language unknown

--------------------------------------------------------------------------------
INFO:autogen.token_count_utils:Gemini is not supported in tiktoken. Returning num tokens assuming gpt-4-0613.
WARNING:autogen.token_count_utils:Model gemini-pro not found. Using cl100k_base encoding.
Warning: model not found. Using cl100k_base encoding.
assistant (to user_proxy):

It seems that the program is now reading the input correctly, but it is encountering an error when trying to convert the input to an integer. The error message "unknown language unknown" suggests that the program is trying to convert the input to an integer using a language that is not supported.

To fix this issue, we need to make sure that the program is using the correct language to convert the input to an integer. In Python, we can use the `int()` function to convert a string to an integer. The `int()` function takes two arguments: the string to be converted and the base of the integer. The base of the integer is the number of digits that are used to represent the integer. For example, the base of the decimal system is 10, which means that we use 10 digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) to represent numbers.

In our case, we want to convert the input string to an integer using the decimal system. Therefore, we need to call the `int()` function with the input string as the first argument and the base 10 as the second argument.

Here is the corrected code:


# filename: complete_graph.py
import sys

# Read the number of vertices from the first line of the input.
line = sys.stdin.readline()
if not line:
    print("Error: Empty input.")
    exit(1)
n = int(line, 10)  # Convert the input string to an integer using base 10

# ... (the rest of the code)


Please try running the program again and let me know if you encounter any further issues.

--------------------------------------------------------------------------------

>>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...
user_proxy (to assistant):

exitcode: 1 (execution failed)
Code output: 


--------------------------------------------------------------------------------
INFO:autogen.token_count_utils:Gemini is not supported in tiktoken. Returning num tokens assuming gpt-4-0613.
WARNING:autogen.token_count_utils:Model gemini-pro not found. Using cl100k_base encoding.
Warning: model not found. Using cl100k_base encoding.
assistant (to user_proxy):

If the program is still exiting with an error code of 1 and not producing any output, it means that there is still an issue with the program.

One possible reason for this is that the input is still not being read correctly. To troubleshoot this issue, you can try printing the value of the `line` variable before trying to convert it to an integer. This will help you see what value is being read from the input.

Another possible reason for the issue is that there is a bug in the program itself. You can try to debug the program to find the source of the bug. You can do this by adding print statements to the program to see what values are being stored in the variables at different points in the program.

Once you have identified the source of the issue, you can take steps to fix it. For example, if the input is not being read correctly, you can try to modify the program to read the input in a different way. If there is a bug in the program, you can try to fix the bug by modifying the code.

Here are some additional things you can try to troubleshoot the issue:

* Make sure that the program is being run with the correct Python interpreter.
* Make sure that the program is being run from the correct directory.
* Make sure that the input file is in the correct format and that it is located in the correct directory.

If you are still having trouble getting the program to run correctly, please provide me with the input that you are using and the full error message that is being displayed. This will help me to better understand the issue and provide you with a more accurate solution.

--------------------------------------------------------------------------------
user_proxy (to assistant):



--------------------------------------------------------------------------------
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
[<ipython-input-7-d9609b5d4d8b>](https://localhost:8080/#) in <cell line: 24>()
     23 
     24 if __name__ == "__main__":
---> 25     first()

33 frames
/usr/local/lib/python3.10/dist-packages/google/generativeai/types/content_types.py in to_content(content)
    192 def to_content(content: ContentType):
    193     if not content:
--> 194         raise ValueError("content must not be empty")
    195 
    196     if isinstance(content, Mapping):

ValueError: content must not be empty

BeibinLi (Collaborator) commented:
@OTR Thanks for contributing these examples and logs! Code updated. Please try again.

rakotomandimby commented:

You hit the RECITATION problem reported here and here

The problem is triggered by the fact that "Write a program in python that Sort the array with Bubble Sort: [4, 1, 3, 2]" is a really common problem. Whatever answer Gemini generates, there is similar data in its training set, so it refuses to give the answer to you: Gemini does not want to plagiarize.

naourass commented:

Is there any workaround for the "RECITATION" issue?

BeibinLi (Collaborator) commented:

@OTR @rakotomandimby @naourass Thanks for all your interest in using AutoGen with Gemini. Do you have any suggestions regarding the "RECITATION" problem for Gemini? As pointed out by @rakotomandimby, there are already lots of complaints about the Gemini API (not from AutoGen, but from other usages).

If a try...catch around the exception cannot resolve this issue elegantly, how should we proceed to coax Gemini into giving us a response (from a prompt-engineering perspective)?
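
For discussion, a rough sketch of the try/except-plus-rephrase idea (illustrative only: send_with_recitation_retry is a hypothetical helper, and the appended instruction is just one prompt-engineering guess, not a verified fix):

```python
from google.generativeai.types import generation_types

def send_with_recitation_retry(chat, text: str, max_retries: int = 2):
    """Retry a Gemini chat message, nudging the model to paraphrase
    whenever it stops with RECITATION (or another non-STOP reason)."""
    for _ in range(max_retries + 1):
        try:
            return chat.send_message(text)
        except generation_types.StopCandidateException:
            # Ask the model to avoid verbatim training data and retry.
            text += "\nPlease answer in your own words; do not quote sources verbatim."
    raise RuntimeError("Gemini kept refusing with a non-STOP finish reason")
```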

OTR (Author) commented Mar 12, 2024

@BeibinLi Could you please provide a working example of using autogen at gemini branch?

Your last commit points to a non-existent page: 9bc4d4e

This notebook doesn't exist:
https://github.com/microsoft/autogen/blob/main/notebook/agentchat_gemini.ipynb

When I attempt to run my sample code, which previously ran without errors, I encounter an installation error:

!pip install https://github.com/microsoft/autogen/archive/gemini.zip
!pip install "google-generativeai" "pydash" "pillow"
!pip install git+https://github.com/microsoft/autogen.git@gemini

Traceback:

https://gist.github.com/OTR/2c175ef404955dfca68bce57d5727e0e


BeibinLi (Collaborator) commented:
@OTR Thanks for pointing that out. The notebook is at: https://github.com/microsoft/autogen/blob/gemini/notebook/agentchat_gemini.ipynb

For the installation bug, it seems a conversable-agent.jpg file is left over in the tmp folder, which causes the issue.
Can you try to run the following code before installing?

rm -rf /tmp/pip-req-build-spmrq54h

Thanks!!!

whiskyboy pushed a commit to whiskyboy/autogen that referenced this issue Apr 17, 2024 (commit log includes "close microsoft#1139")

whiskyboy pushed a commit (…icrosoft#1142) to whiskyboy/autogen that referenced this issue Apr 17, 2024 (commit log includes "close microsoft#1139"; co-authored by kevin666aa)
BeibinLi (Collaborator) commented:
Closing due to inactivity.
Gemini is now officially supported by AutoGen; you can check our roadmap for Gemini at #2387.
