Autogen always uses GPT-4, ignoring the config list #460
Did you use it following https://microsoft.github.io/autogen/docs/FAQ#use-the-constructed-configuration-list-in-agents ?
I tried the same, but it seems like Autogen only works with GPT-4.
@sonichi
gpt-3.5-turbo definitely works and is supported; I test it all the time. Do you see any warnings printed to the console when running your script? If it fails to load the config list, it will print a warning and fall back to GPT-4 by default.
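For reference, the pattern that FAQ describes looks roughly like this (a minimal sketch; the empty-list check is an extra guard added here, not library behaviour):

```python
from autogen import AssistantAgent, config_list_from_json

# Load the config list from the OAI_CONFIG_LIST environment variable or file.
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")

# Guard against the silent fallback described above: if loading fails, the
# list comes back empty and the library falls back to its default model.
if not config_list:
    raise RuntimeError("OAI_CONFIG_LIST was not found or is empty")

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
```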
@afourney I am trying to execute the following code:

```python
assistant = AssistantAgent("assistant")
user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "coding"})
user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.")
```

It returns the following error.
I have not added my payment details yet.
@afourney
@afourney Error:

```
No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>. If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details
```

I changed the loading of the config now to this: But I get the same error.
Your original
@sonichi
Even I am getting this kind of error: openai.error.InvalidRequestError: The model. Full error msg:

```
user_proxy (to assistant): Plot a chart of NVDA and TESLA stock price change YTD.
[autogen.oai.completion: 10-29 11:52:31] {788} WARNING - Completion was provided with a config_list, but the list was empty. Adopting default OpenAI behavior, which reads from the 'model' parameter instead.
Process finished with exit code 1
```

Kindly help me with this.
It's not necessary. Do you know which AssistantAgent caused this error? And do you use GroupChatManager? That agent needs an llm_config.
This warning in the trace above suggests to me that the OAI_CONFIG_LIST isn't being found: "WARNING:root:The specified config_list file 'OAI_CONFIG_LIST' does not exist."
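If the file isn't in the working directory, the loader can be pointed at it explicitly (a sketch; file_location is a parameter of config_list_from_json, and the path here is hypothetical):

```python
from autogen import config_list_from_json

# Point the loader at the directory containing OAI_CONFIG_LIST explicitly,
# instead of relying on the current working directory.
config_list = config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
    file_location="/path/to/config/dir",  # hypothetical location
)
```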
Closing this issue per the stack trace.
I can confirm this is still happening. However, I believe I know the cause, and while it's technically working as intended, it's potentially very problematic behaviour by the library that will likely cost users money. Here's how I constructed my llm_config (I copied and pasted this from a guide):
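A hypothetical reconstruction of that snippet (the field names are assumptions; read it carefully):

```python
import autogen

config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")

llm_config = {
    "config_list": config_list,
    "seed": 42,  # hypothetical field, copied along with the rest from a guide
},
```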
That llm_config was then passed to every single agent. Additionally, after creating the agent and checking
Did you notice what I did wrong? It took me a bit to notice: there's a trailing comma on the llm_config that I copied from somewhere, which turns the dict into a one-element tuple. It seems that if llm_config is invalid for whatever reason, autogen silently reverts to gpt-4 as a "safe" default instead of refusing to continue. I racked up ten dollars of unnecessary charges from this mistake, but it could be a lot worse if someone isn't paying attention. I suspect that people running into this issue are setting OPENAI_API_KEY through environment variables like me, and then accidentally passing something that is silently rejected by the constructor, which allows it to continue with the fallback model. Looking a bit deeper into why multiple people might be making this mistake: a lot of the notebooks used for guidance (e.g. https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) contain examples such as:
I believe I copied my original llm_config snippet from a notebook just like this. If a user does the same and doesn't notice the trailing comma, autogen will always revert to gpt-4, and if the environment variable is set it will successfully generate text without any indication that something is wrong. What clued me in that I was using gpt-4 instead of gpt-3.5 was actually the very slow inference time, which I recognized from extensive previous experience with the API; a novice cannot make that determination and is likely to rack up large bills without knowing it.

Trying some variants of corrupt configs, it also appears that if you set a non-existent model, autogen falls back to GPT-4 silently without notifying you. After correcting the above issue and verifying that it used gpt-3.5, I set it to a non-existent model ('gpt-2.5-turbo', confirmed through agent.llm_config), and unfortunately in this case it also continued to work without warning me. Since all the configs I have access to via code are still set to the 'wrong' model, there is no easy way to detect that the fallback happened without actually tracing traffic and checking usage in the OpenAI billing panel.

Proposal: I do not believe advising users not to use the OPENAI_API_KEY environment variable is the path forward here, simply because autogen may have to interoperate with other libraries that use it as the primary mechanism for setting the key, and may additionally be deployed in containers or environments where it is set by default since it is their primary dependency. Just to get the ball rolling on a discussion, perhaps one of these options would be the best way forward:
Since Microsoft has a financial interest in OpenAI, I also believe there's some risk of reputational harm in silently opting people into a more expensive model. It would be easy for a tech writer without scruples to twist the narrative into Microsoft lining its own pockets.
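Whatever option lands, a user-side guard is possible in the meantime (a sketch, not existing library behaviour; validate_llm_config is a hypothetical helper):

```python
def validate_llm_config(llm_config, allowed_models):
    """Fail fast instead of letting a malformed config fall back silently."""
    if not isinstance(llm_config, dict):
        raise TypeError(
            f"llm_config must be a dict, got {type(llm_config).__name__} "
            "(check for a trailing comma)"
        )
    config_list = llm_config.get("config_list") or []
    if not config_list:
        raise ValueError("llm_config has an empty config_list")
    for entry in config_list:
        if entry.get("model") not in allowed_models:
            raise ValueError(f"unexpected model: {entry.get('model')!r}")

# Run the guard before constructing any agents.
validate_llm_config(llm_config, allowed_models={"gpt-3.5-turbo-16k"})
```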
I vote for proposal 1. Can someone make a PR?
Is there any update on this? I'm running into an issue where it defaults to gpt-4 even though I'm specifying gpt-3.5-turbo-16k, and I don't see any issues with trailing commas as mentioned above. After a few queries in a RAG application, I receive:
which is confusing, since 4096 seems to be the limit for other 3.5 models, and both gpt-4 and gpt-3.5-turbo-16k should have much higher limits.
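One way to check which limit is actually in play is to count the prompt tokens locally (a sketch using tiktoken, which is an extra dependency here, not something autogen does for you):

```python
import tiktoken

# The gpt-3.5-turbo-* and gpt-4 families use the cl100k_base encoding.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo-16k")

prompt = "summarize the report ..."  # hypothetical prompt text
n_tokens = len(enc.encode(prompt))
print(f"{n_tokens} prompt tokens")

# If this count is already near 4097, the request is being served by a
# 4k-context model rather than the 16k variant the config asked for.
```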
Update: I updated to the latest Autogen version (0.2.9) and now see llm config output such as:
However, I'm still getting the same 4097 error message, even though
@austinhumes-valtech, I'm not a contributor, but can you check the OpenAI billing page to see which model the usage is being charged to? It should tell you whether it's GPT-4 or GPT-3.5. As mentioned, I'm not a maintainer, but it would be worrying if it were hot-swapping to GPT-4 in even more circumstances.
@the-xentropy As of today it looks like it's hitting both, which makes sense since I've attempted to use both. After some further investigation, I'm wondering if the PDF file I'm using is simply too big. I have a RAG setup with PDF files that I have vectorized in a local Chroma DB. If I ask it really specific questions like "what was the revenue in 2021?" it works, but if I ask it to "summarize the report", that's when I get the error about the token limit. I'm revisiting https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them to see if I need to go about this differently, since it states:
It should not be defaulting to gpt-4 anymore; we removed that default weeks ago, I think.
Closing this issue. @austinhumes-valtech, could you open a new issue and provide the full trace and steps to reproduce? The error message is unexpected given your description.
I was a bit confused why this was closed initially, since Austin's message was a bit of an aside, but I'm guessing it was closed in favor of the pending PR at #847, so anyone else interested in this can track it there.
Hey there,
Is this a bug, or my fault? I cannot use GPT-3.5 with Autogen; it always uses GPT-4, even though GPT-4 is not configured in the OAI_CONFIG_LIST file.
My OAI_CONFIG_LIST looks like this:
[ { "model": "gpt-3.5-turbo-16k", "api_key": "1234567" } ]
I load the config file like this:
```python
from autogen import config_list_from_json

config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
```
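A variant of that load which pins the model explicitly, following the FAQ pattern (a sketch; filter_dict narrows the list to entries matching the given criteria):

```python
from autogen import config_list_from_json

# Keep only the gpt-3.5-turbo-16k entry so no other model can be selected.
config_list = config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-3.5-turbo-16k"]},
)
```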
When I type "set" into my terminal, there is no OAI_CONFIG_LIST environment variable that might override the config file.
I tried this again with a clean install of Autogen in a new project, and I still can't get it to work with GPT-3.5.
Am I doing something wrong?
Thank you