
[Bug]: OpenAI API key format #3345

Closed
SiriusYou opened this issue Aug 13, 2024 · 43 comments · Fixed by #3569

Comments

@SiriusYou

Describe the bug

[autogen.oai.client: 08-13 18:12:58] {164} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.
When I define an agent using ConversableAgent, the above message appears. And when I call agent.generate_reply and print(reply), it doesn't work and shows:
UnicodeEncodeError: 'ascii' codec can't encode character '\u201c' in position 7: ordinal not in range(128)

What's wrong with it?
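For context, the UnicodeEncodeError is reproducible outside autogen: U+201C is a left curly quote, which the ASCII codec cannot encode. A minimal sketch (independent of autogen; the sample string is made up for illustration):

```python
text = "\u201cquoted\u201d reply"  # curly quotes, e.g. as produced by the model

try:
    text.encode("ascii")
except UnicodeEncodeError as e:
    print(e)  # 'ascii' codec can't encode character '\u201c' ...

# Workarounds: encode as UTF-8, or replace unencodable characters
print(text.encode("utf-8"))
print(text.encode("ascii", errors="replace").decode())  # ?quoted? reply
```

This suggests the console or stream encoding is ASCII while the reply contains smart quotes; it is a separate problem from the API-key format warning.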

Steps to reproduce

No response

Model Used

gpt-4

Expected Behavior

No response

Screenshots and logs

No response

Additional Information

No response

@SiriusYou SiriusYou added the bug label Aug 13, 2024
@ekzhu
Collaborator

ekzhu commented Aug 14, 2024

Can you show some example code?

@SiriusYou
Author

Like this:
import os

from autogen import ConversableAgent

agent = ConversableAgent(
    "chatbot",
    llm_config={"config_list": [{"model": "gpt-4", "api_key": os.environ.get("OPENAI-API-KEY")}]},
    code_execution_config=False,  # Turn off code execution, by default it is off.
    function_map=None,  # No registered functions, by default it is None.
    human_input_mode="NEVER",  # Never ask for human input.
)

[autogen.oai.client: 08-13 18:12:58] {164} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.
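A possible contributor here (an assumption on my part, not confirmed in the thread): most shells reject hyphens in environment variable names, so `os.environ.get("OPENAI-API-KEY")` would typically return None (the conventional name is OPENAI_API_KEY), and a missing key would then fail the format check. A minimal sketch:

```python
import os

# Hypothetical lookup: if the hyphenated name was never set (most shells
# cannot export names containing hyphens), this returns None rather than a key.
key = os.environ.get("OPENAI-API-KEY")
print(key is None)  # True when the variable is unset
```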

@SiriusYou
Author

SiriusYou commented Aug 15, 2024 via email

@Zakk-Yang

I also ran into a similar problem.
Error: [autogen.oai.client: 08-15 15:04:50] {164} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.

Code:

from utils import OPENAI_API_KEY
llm_config = {'model': 'gpt-3.5-turbo', 'api_key': OPENAI_API_KEY}

from autogen import ConversableAgent
agent = ConversableAgent(
    name = 'chatbot',
    llm_config = llm_config,
    human_input_mode= 'NEVER'
)

@arunprasath2007

I'm facing the same issue as well.

@ebeckner-looptech

I am importing using

from autogen import config_list_from_json

config_list = config_list_from_json("OAI_CONFIG_LIST.json")

While my code otherwise executes successfully, this error persists. My API key is provided in a JSON config formatted as follows:

[{"model": "model_here", "api_key": "api_key_pasted_here"}]

This bug is a minor annoyance. I have not dug into the source code to understand the issue. It seems to display multiple times...

@yutz9836

It seems the OpenAI API key format has slightly changed.
The key I generated today looks like:

sk-proj-[35]-[4]-[48]-[16]-[11]_[25]_[2]-[8]
where [n] = n characters from [A-Za-z0-9]

It doesn't match the regex in this function:

autogen/autogen/oai/openai_utils.py

def is_valid_api_key(api_key: str) -> bool:
    """Determine if input is valid OpenAI API key.

    Args:
        api_key (str): An input string to be validated.

    Returns:
        bool: A boolean that indicates if input is valid OpenAI API key.
    """
    api_key_re = re.compile(r"^sk-([A-Za-z0-9]+(-+[A-Za-z0-9]+)*-)?[A-Za-z0-9]{32,}$")
    return bool(re.fullmatch(api_key_re, api_key))
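To illustrate why the new project keys trip this check, here is the quoted regex run against synthetic keys (the key strings below are made up for this sketch, not real keys). Note the character class allows only letters, digits, and hyphen separators, so the underscores in the key shape described above can never match:

```python
import re

# Regex as quoted from autogen/oai/openai_utils.py above
api_key_re = re.compile(r"^sk-([A-Za-z0-9]+(-+[A-Za-z0-9]+)*-)?[A-Za-z0-9]{32,}$")

def is_valid_api_key(api_key: str) -> bool:
    return bool(re.fullmatch(api_key_re, api_key))

# Synthetic keys for illustration only:
legacy_key = "sk-" + "a" * 48                                     # old format
project_key = "sk-proj-" + "a" * 48                               # project format, alphanumeric only
project_key_underscore = "sk-proj-" + "a" * 20 + "_" + "a" * 27   # contains '_'

print(is_valid_api_key(legacy_key))              # True
print(is_valid_api_key(project_key))             # True
print(is_valid_api_key(project_key_underscore))  # False: '_' is not in [A-Za-z0-9]
```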

@ebeckner-looptech


Howdy friends. I have determined that upgrading autogen to 0.2.34 (`pip install pyautogen --upgrade`) resolves this issue. It seems to have been a version issue with the key format I detailed above.

@SoyGema

SoyGema commented Aug 21, 2024

Hello there.
I'm reproducing one of the examples, and this warning message seems to appear again in cell 3, even after upgrading to pyautogen 0.2.35. It happens when instantiating AssistantAgent and RetrieveUserProxyAgent, not ConversableAgent as in the other examples above.

[autogen.oai.client: 08-21 10:53:35] {164} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.

Keep up the great work, and thanks for the time dedicated to solving this issue!

@ruhanaazam

ruhanaazam commented Aug 22, 2024

Getting the same issue. Does anyone know if this error means your model defaults to using gpt-3.5?

@renedoz

renedoz commented Aug 22, 2024

Getting the same problem. Underscores in the key seem to be the issue. Not sure.

@laurent512

Same problem

@joarobles

Same here with version 0.2.35.

@nileshvj2

You will have to use the config below, pointing to Azure as the API type:

config_list = [
    {
        "model": "YOUR_DEPLOYMENT_NAME",
        "base_url": "https://xxx.openai.azure.com",
        "api_type": "azure",
        "api_version": "2023-07-01-preview",
        "api_key": "xxx"
    }
]

Refer #601

@NightHao

NightHao commented Sep 2, 2024

Same

@Alessio1599

Alessio1599 commented Sep 2, 2024

I have the same issue

Version libraries:
autogen == 0.2.32
python-dotenv == 0.21.0
openai == 1.37.0

Code:

import os
import autogen  
from autogen import AssistantAgent, UserProxyAgent

from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()  # Load the .env file

# Access the API key
api_key = os.environ.get('OPENAI_API_KEY')

llm_config = {"model": "gpt-3.5-turbo-16k", "api_key": api_key} #"api_key": os.environ["OPENAI_API_KEY"]
assistant = AssistantAgent("assistant", llm_config=llm_config)

user_proxy = UserProxyAgent(
    "user_proxy", code_execution_config={"executor": autogen.coding.LocalCommandLineCodeExecutor(work_dir="coding")}
)

# Start the chat
user_proxy.initiate_chat(
    assistant,
    message="Plot a chart of NVDA and TESLA stock price change YTD.",
)

Error message: [autogen.oai.client: 09-02 16:54:54] {164} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.

@haunic

haunic commented Sep 2, 2024

Same here (but using Autogen Studio).
[autogen.oai.client: 09-02 17:12:08] {164} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.

I tried with two different keys: the new ones starting with "sk-proj-" and the old ones starting with "sk-".

@markmadlangbayan

Same here, frustrating!

@cjpark-sapcsa

I did a quick check:

import autogen
import logging
from autogen import AssistantAgent
from autogen.agentchat.contrib.multimodal_conversable_agent import MultimodalConversableAgent

import os
import sys
from dotenv import load_dotenv
import requests

# Enable logging
logging.basicConfig(level=logging.INFO)

# Load environment variables from .env file
load_dotenv()

# Add the root directory of the project to the PYTHONPATH
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))

# Configuration for GPT-4V (Azure OpenAI-hosted GPT-4 model)
config_list_4v = [
    {
        "engine": "gpt-4",
        "api_key": os.getenv("AZURE_OPENAI_KEY"),    # Ensure this matches your deployment
        "api_base": os.getenv("AZURE_OPENAI_BASE"),  # Ensure this matches your deployment
        "deployment_name": "gpt-4"
    }
]

llm_config = {"config_list": config_list_4v, "cache_seed": 42, "temperature": 0.5}

# Initialize the E-shop Manager agent
agent_eshop_manager = MultimodalConversableAgent(
    name="E-shop Manager",
    max_consecutive_auto_reply=5,
    llm_config=llm_config,
)

# Define the test function for chat initiation
def test_chat_initiation():
    agent = agent_eshop_manager  # agent_eshop_manager is defined above
    commander = AssistantAgent(
        name="Commander",
        system_message="Testing Azure OpenAI connection",
        llm_config=llm_config,
    )

    # Messages structure follows OpenAI's format: role and content
    messages = [{"role": "user", "content": "Test message to initiate chat"}]

    print("Before initiating chat...")

    try:
        response = commander.initiate_chat(
            agent,
            messages=messages,
            model="gpt-4",
            stream=False,
            timeout=5  # Temporarily reduce timeout for quicker feedback
        )
        logging.info(f"Chat initiation successful: {response}")
        print(f"Response: {response}")  # Explicitly print the response
    except Exception as e:
        logging.error(f"Error during chat initiation: {str(e)}")
        print(f"Error: {str(e)}")

    print("After initiating chat...")

# Call the test function
test_chat_initiation()

PS C:\My Work\12. AI Demo\aiagents4customersupport\tests> & "c:/My Work/12. AI Demo/aiagents4customersupport/.venv/Scripts/python.exe" "c:/My Work/12. AI Demo/aiagents4customersupport/tests/test_chat_initiation.py"
[autogen.oai.client: 09-09 18:07:32] {184} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.
WARNING:autogen.oai.client:The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.
[autogen.oai.client: 09-09 18:07:33] {184} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.
WARNING:autogen.oai.client:The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.
Before initiating chat..

The .env file I used:

AZURE_OPENAI_API_KEY=
AZURE_OPENAI_API_BASE=
AZURE_OPENAI_API_VERSION=

What is the right format?

@tingwei161803

Same here

@cfiestas6

Check if your API key has underscores ('_') in it. I solved the warning by switching the API key to a project one.

@sehwan505

I have the same issue. When I try an older API key, it works.

@JHFVR

JHFVR commented Sep 16, 2024

+1 ... the old key format "sk-..." works, the new project structure "sk-proj-..." does not

@craigadam

Getting the same. I signed up today, so my key uses the project format sk-proj-.

WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model. Is there any resolution?

@akigler

akigler commented Sep 21, 2024

Yeah, it doesn't work with either the new or the old key system.

@imanoop7

Same error:
The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.

@pwisocodes

Yeah, it doesn't work with either the new or the old key system.

+1 please fix this.

@bveiseh

bveiseh commented Sep 25, 2024

Same here, getting this issue on the latest version with both a project and a legacy key.

@Cohee9791

This problem still exists today!

@cjy8s

cjy8s commented Sep 26, 2024

Same problem as of today. Neither the "proj_" nor the "sk-" key works for me.

[autogen.oai.client: 09-26 13:22:38] {184} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.

import autogen
import getpass
from dotenv import load_dotenv
import os

load_dotenv(dotenv_path='../../API_KEYs.env')

OPENAI_PROJECT_KEY = os.getenv('OPENAI_PROJECT_KEY')

# Optional, add tracing in LangSmith
config_list = [{"model" : "gpt-4o-mini", "api_key" : OPENAI_PROJECT_KEY}]

gpt4_config = {"seed": 42, 
               "config_list": config_list, 
               "temperature": 0}

user_proxy = autogen.UserProxyAgent(
    name="Admin",
    system_message="A human admin. some instruction...",
    code_execution_config=False,)

engineer = autogen.AssistantAgent(
    name="Engineer",
    llm_config=gpt4_config,
    system_message="""Engineer. some instruction...""",
)
scientist = autogen.AssistantAgent(
    name="Scientist",
    llm_config=gpt4_config,
    system_message="""Scientist. some instruction...""",
)

executor = autogen.UserProxyAgent(
    name="Executor",
    system_message="Executor. some instruction...",
    human_input_mode="NEVER",
    code_execution_config={
        "last_n_messages": 3,
        "work_dir": "paper",
        "use_docker": False,
    },  
)
critic = autogen.AssistantAgent(
    name="Critic",
    system_message="Critic. some instruction...",
    llm_config=gpt4_config,
)
# NOTE: `planner` was not defined in the original snippet; defined here
# (an editorial fix, mirroring the other agents) so the group chat resolves.
planner = autogen.AssistantAgent(
    name="Planner",
    system_message="""Planner. some instruction...""",
    llm_config=gpt4_config,
)
groupchat = autogen.GroupChat(
    agents=[user_proxy, engineer, scientist, planner, executor, critic], messages=[], max_round=50
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=gpt4_config)

@imanoop7

[autogen.oai.client: 09-27 11:11:44] {184} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.
[autogen.oai.client: 09-27 11:11:44] {184} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.
[autogen.oai.client: 09-27 11:11:45] {184} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.

still facing same error

@arjun1to10x

I am also facing the same error, and I validated it using the latest autogen version.

@jeff-luo

Same error. Neither project API key nor user API key works.

@danivpv

danivpv commented Oct 1, 2024

Same error. Although the code works fine.

@ziyingsk

ziyingsk commented Oct 1, 2024

+1

@rwang1987

I switched to an OpenAI API key without any underscores and the warning disappeared. I use pyautogen 0.3.0.

@zhaxylykbayev

+1

@yinfangchen

+1

@ekzhu
Copy link
Collaborator

ekzhu commented Oct 5, 2024

We are using autogen-agentchat for the latest release.

The latest version is 0.2.36

@ShawRong

ShawRong commented Oct 6, 2024

Same issue.

@jackgerrits
Member

If you are still facing this issue when using autogen-agentchat==0.2.36 then please provide more detailed steps to reproduce.

@fadybaly

OpenAI keys have a different format if they are generated under a project/organization; this format is not covered by the regex of is_valid_api_key in openai_utils.py.

@yinfangchen

OpenAI keys have a different format if they are generated under a project/organization; this format is not covered by the regex of is_valid_api_key in openai_utils.py.

Agree! I used a personal user key, and the warning was gone.
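Until the regex is updated, one way to quiet the repeated message (a sketch, assuming the logger name `autogen.oai.client` shown in the log lines earlier in this thread) is to raise that logger's level:

```python
import logging

# Silence only the client logger that emits the API-key format warning;
# errors from the same logger will still be shown.
logging.getLogger("autogen.oai.client").setLevel(logging.ERROR)
```

Note this hides the warning rather than fixing the key check, so it is only a stopgap.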
