Error occurred while processing message: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable #805

Closed
ruifengma opened this issue Nov 29, 2023 · 7 comments
Labels: models (pertains to using alternate, non-GPT models, e.g., local models, llama, etc.), proj-studio

Comments

@ruifengma

Hi, I started the autogenra UI successfully and set up the proxy agent with no LLM model, keeping the system message the same as the default. For the assistant agent I use a self-hosted Mistral-7B: I gave the model name and base URL, and for the api_key I gave "null". When I send a message, this error comes up. How can I deal with this? Thanks
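
For reference, a config_list entry for a self-hosted, OpenAI-compatible endpoint generally looks like the minimal sketch below (model name, URL, and key are placeholders; the key must be a non-empty string even if the local server ignores it):

# Sketch of a config_list entry for a self-hosted model (placeholder values).
config_list = [
    {
        "model": "mistral-7b-instruct",           # name the local endpoint serves
        "base_url": "http://localhost:8000/v1",   # assumed local server URL
        "api_key": "sk-placeholder",              # non-empty placeholder; local servers usually ignore it
    }
]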

@hiteshsom

I am getting a similar error too.

My code:

config_list = [
    {
        "model": "<path>\mistral-7b-instruct-v0.1.Q6_K.gguf",
        "api-base": "http://127.0.0.1:8000/v1",
        "api-type": "open_ai",
        "api-key": "sk-abc"
    }
]

llm_config = {"config_list": config_list, "seed": 10}

<My agents>

manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config)
user_proxy.initiate_chat(manager, message="Hello, How are you ?")
Traceback (most recent call last):
  File "<project_path>\gen_ai_chatbot_server\autogen_file.py", line 115, in <module>
    user_proxy.initiate_chat(manager, message="Hello, How are you ?")
  File "<project_path>\gen_ai_chatbot_server\gen_ai_chatbot_server_venv\lib\site-packages\autogen\agentchat\conversable_agent.py", line 531, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "<project_path>\gen_ai_chatbot_server\gen_ai_chatbot_server_venv\lib\site-packages\autogen\agentchat\conversable_agent.py", line 334, in send      
    recipient.receive(message, self, request_reply, silent)
  File "<project_path>\gen_ai_chatbot_server\gen_ai_chatbot_server_venv\lib\site-packages\autogen\agentchat\conversable_agent.py", line 462, in receive   
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
  File "<project_path>\gen_ai_chatbot_server\gen_ai_chatbot_server_venv\lib\site-packages\autogen\agentchat\conversable_agent.py", line 779, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
  File "<project_path>\gen_ai_chatbot_server\gen_ai_chatbot_server_venv\lib\site-packages\autogen\agentchat\groupchat.py", line 127, in run_chat
    speaker = groupchat.select_speaker(speaker, self)
  File "<project_path>\gen_ai_chatbot_server\gen_ai_chatbot_server_venv\lib\site-packages\autogen\agentchat\groupchat.py", line 56, in select_speaker     
    final, name = selector.generate_oai_reply(
  File "<project_path>\gen_ai_chatbot_server\gen_ai_chatbot_server_venv\lib\site-packages\autogen\agentchat\conversable_agent.py", line 606, in generate_oai_reply
    response = oai.ChatCompletion.create(
  File "<project_path>\gen_ai_chatbot_server\gen_ai_chatbot_server_venv\lib\site-packages\autogen\oai\completion.py", line 789, in create
    response = cls.create(
  File "<project_path>\gen_ai_chatbot_server\gen_ai_chatbot_server_venv\lib\site-packages\autogen\oai\completion.py", line 820, in create
    return cls._get_response(params, raise_on_ratelimit_or_timeout=raise_on_ratelimit_or_timeout)
  File "<project_path>\gen_ai_chatbot_server\gen_ai_chatbot_server_venv\lib\site-packages\autogen\oai\completion.py", line 210, in _get_response
    response = openai_completion.create(request_timeout=request_timeout, **config)
  File "<project_path>\gen_ai_chatbot_server\gen_ai_chatbot_server_venv\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create    
    return super().create(*args, **kwargs)
  File "<project_path>\gen_ai_chatbot_server\gen_ai_chatbot_server_venv\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 151, in create
    ) = cls.__prepare_create_request(
  File "<project_path>\gen_ai_chatbot_server\gen_ai_chatbot_server_venv\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 108, in __prepare_create_request
    requestor = api_requestor.APIRequestor(
  File "<project_path>\gen_ai_chatbot_server\gen_ai_chatbot_server_venv\lib\site-packages\openai\api_requestor.py", line 139, in __init__
    self.api_key = key or util.default_api_key()
  File "<project_path>\gen_ai_chatbot_server\gen_ai_chatbot_server_venv\lib\site-packages\openai\util.py", line 186, in default_api_key
    raise openai.error.AuthenticationError(
openai.error.AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <project_path>'. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.
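
For reference, the two workarounds named in the error message look like the sketch below ("sk-placeholder" is an assumed dummy value; local OpenAI-compatible servers typically ignore the key but require it to be non-empty):

import openai  # openai < 1.0, matching the traceback above

# Option 1: set the key directly in code.
openai.api_key = "sk-placeholder"

# Option 2: set the environment variable before starting the app:
#   export OPENAI_API_KEY=sk-placeholder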

@victordibia
Collaborator

@ruifengma ,

The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

For the error above, it suggests that a model in your config_list may not have all of its details set correctly. Can you confirm that you have set base_url and not api_base?

Also, please try the latest version of autogenra, as it extends the UI to allow modifying the models.

pip install -U autogenra

(screenshot: autogenra model configuration UI)

config_list = [
    {
        "model": "<path>\mistral-7b-instruct-v0.1.Q6_K.gguf",
        "base_url": "http://127.0.0.1:8000/v1",
        "api_type": "open_ai",
        "api_key": "sk-abc"
    }
]

llm_config = {"config_list": config_list, "seed": 10}

<My agents>

manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config)
user_proxy.initiate_chat(manager, message="Hello, How are you ?")
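
Alternatively, since the check only requires that some key be present, a dummy key can be supplied via the environment; a minimal sketch ("sk-placeholder" is an assumed dummy value):

import os

# Local OpenAI-compatible servers typically ignore the key's value,
# but the client refuses to start without one.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")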

@ruifengma
Author

(Quoting @victordibia's reply above.)

Thanks @victordibia for the reply. I solved the issue by assigning the proxy agent an LLM as well. If I delete all the models for the proxy agent and leave it blank, this error comes up; if I assign a model to the proxy agent, it works. The question is: I saw examples where autogen supports a proxy agent without an LLM, with only the assistant having an LLM, and I tried that setting before. Besides, I saw your UI, where the model settings are totally blank (different from mine).
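
For the examples where the proxy agent runs without an LLM, the usual pattern is to disable the LLM explicitly rather than leave the model list blank. A minimal sketch, assuming pyautogen 0.2 (the work_dir value is a placeholder):

import autogen

# Passing llm_config=False disables LLM-based replies for this agent entirely,
# so it never needs an API key; it only relays messages and executes code.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    llm_config=False,
    code_execution_config={"work_dir": "coding"},  # placeholder working directory
)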

@sonichi added the ra-oss and models labels on Dec 3, 2023
@ghost

ghost commented Dec 26, 2023

This is a helpful article

@srishino

srishino commented Jan 2, 2024

Hi all, I am having the same error with a similar setup. I have set up pyautogen (version 0.2.2) and am using the oobabooga web UI to host my local LLM (mistral-7b-instruct). It runs fine when I query using the user_proxy agent. Now I am developing a function tool following https://github.com/microsoft/autogen/blob/main/notebook/agentchat_function_call.ipynb, and that throws this error. My code is as follows:
import autogen

config_list = [
    {
        "model": "model path/model name",
        "base_url": "http://127.0.0.1:5000/v1",
        # "api_type": "open_ai",
        "api_key": "sk-111111111111111111111111111111111111111111111111",  # just a placeholder
    }
]

# same code from the above link
chatbot = autogen.AssistantAgent(
    name="chatbot",
    system_message="For coding tasks, only use the functions you have been provided with. Reply TERMINATE when the task is done.",
    llm_config=config_list,
)

# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    code_execution_config={"work_dir": "coding"},
    llm_config=config_list,
)

# define functions according to the function description
from IPython import get_ipython
from typing_extensions import Annotated

@user_proxy.register_for_execution()
@chatbot.register_for_llm(name="python", description="run cell in ipython and return the execution result.")
def exec_python(cell: Annotated[str, "Valid Python cell to execute."]) -> str:
    ipython = get_ipython()
    result = ipython.run_cell(cell)
    log = str(result.result)
    if result.error_before_exec is not None:
        log += f"\n{result.error_before_exec}"
    if result.error_in_exec is not None:
        log += f"\n{result.error_in_exec}"
    return log

@user_proxy.register_for_execution()
@chatbot.register_for_llm(name="sh", description="run a shell script and return the execution result.")
def exec_sh(script: Annotated[str, "Valid shell script to execute."]) -> str:
    return user_proxy.execute_code_blocks([("sh", script)])

# start the conversation
user_proxy.initiate_chat(
    chatbot,
    message="Draw two agents chatting with each other with an example dialog. Don't add plt.show().",
)
Please help me solve this, thanks!
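
One thing worth checking in the snippet above: llm_config is given the bare config_list, while the referenced notebook wraps it in a dict. A minimal sketch of that shape, reusing the config_list defined above (the timeout value is an assumption in the notebook's style):

# llm_config is a dict that wraps config_list (plus optional settings),
# not the list itself.
llm_config = {"config_list": config_list, "timeout": 120}

chatbot = autogen.AssistantAgent(
    name="chatbot",
    system_message="For coding tasks, only use the functions you have been provided with. Reply TERMINATE when the task is done.",
    llm_config=llm_config,
)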

@victordibia
Collaborator

Closing this due to inactivity.

@mynewstart

(Quoting @victordibia's reply and @ruifengma's response above.)

I ran into the same issue. Do you know why?
