In reverie/backend_server/persona/prompt_template/run_gpt_prompt.py, multiple requests to OpenAI are made with a hardcoded model name, `gpt-35-turbo-0125`, which is currently not a valid/supported model on the OpenAI API (unlike on Azure).
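As a side note, here is a minimal sketch of how the model name could be read from the environment instead of being hardcoded, so the same code can target both Azure and the OpenAI API. The `GA_OPENAI_MODEL` variable name is just an illustration (it does not exist in the repo), and this assumes the `openai>=1.0` client:

```python
import os
from openai import OpenAI

# Hypothetical: pick the model name up from the environment so it can be
# "gpt-35-turbo-0125" on Azure and "gpt-3.5-turbo-0125" on the OpenAI API.
MODEL = os.environ.get("GA_OPENAI_MODEL", "gpt-3.5-turbo-0125")

client = OpenAI()
response = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```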
If I run the simulation (e.g., with the command `run 1`) using the name `gpt-3.5-turbo-0125`, it seems to work.
However, if I change the model to a newer (and cheaper) one, e.g., `gpt-4o-mini-2024-07-18`, I eventually get the following error:
File "...generative_agents\reverie\backend_server\persona\memory_structures\spatial_memory.py", line 107, in get_str_accessible_arena_game_objects
x = ", ".join(list(self.tree[curr_world][curr_sector][curr_arena.lower()]))
KeyError: '{main room'
This is caused by the output of the GPT call being `{main room` without a closing bracket, i.e., malformed syntax in the model's output.
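A crude workaround on the parsing side, just to illustrate the kind of cleanup the output seems to need (the `sanitize_arena` helper below is hypothetical, not something in the repo):

```python
def sanitize_arena(raw: str) -> str:
    """Strip stray braces and whitespace from the model's output before it is
    used as a key into spatial memory (hypothetical helper, not in the repo)."""
    return raw.strip().strip("{}").strip().lower()

# e.g. the failing output "{main room" becomes a usable key:
assert sanitize_arena("{main room") == "main room"
```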
Do you know which models the prompts have been successfully tested on (e.g., the original joonspk-research repo used yet another model, `text-davinci-002`)? What was your reasoning behind choosing `gpt-3.5-turbo-0125`, and do you have any suggestions for resolving the problem with other models (e.g., structured output comes to mind)?
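To make the last point concrete, this is roughly what I had in mind with structured output: using the `response_format`/`json_schema` option of the OpenAI Chat Completions API, which `gpt-4o-mini-2024-07-18` supports, to force syntactically valid JSON. The schema and prompt below are illustrative only, not taken from run_gpt_prompt.py:

```python
import json
from openai import OpenAI

client = OpenAI()

# Sketch: constrain the model to return a JSON object with a single "arena"
# field, so malformed strings like "{main room" cannot occur.
response = client.chat.completions.create(
    model="gpt-4o-mini-2024-07-18",
    messages=[{"role": "user",
               "content": "Which arena should the persona go to? Answer as JSON."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "arena_choice",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"arena": {"type": "string"}},
                "required": ["arena"],
                "additionalProperties": False,
            },
        },
    },
)
arena = json.loads(response.choices[0].message.content)["arena"]
```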
Unfortunately I have not tested models other than gpt-3.5-turbo-0125.
The reason I chose that model is that `text-davinci-002` was already deprecated, and `gpt-3.5-turbo-0125` was the cheapest model available at the time.
I suggest you look at the changes made by @chowington in his pull request #8. You may find the solution there.
jackboyla pushed a commit to jackboyla/generative_agents that referenced this issue on Dec 14, 2024.