Failure to connect to server with auth disabled, still results in error "No API token provided" #64
**Summary + Workaround for End-Users**

> [!CAUTION]
> There is currently a known issue which will trigger the following error even when connecting to an unsecured server: "No API token provided, you have to set the OpenAI token into the settings to make things work."

Enabling authentication is highly recommended in most cases, but it can be inconvenient, especially when self-hosting models on your local system.

> [!TIP]
> Use the following workaround to avoid this error until a permanent solution can be released: simply ensure your assistant configuration defines a token of at least 10 characters.

Sample config:

```jsonc
{
    "url": "http://localhost:1234", // URL of your unsecured server
    "token": "xxxxxxxxxx", // Token can be anything as long as it is at least 10 characters long
    "assistants": [
        {
            // Inherits token from top level, no error
            "name": "Code assistant",
            "prompt_mode": "panel",
            "chat_model": "codestral-22b-v0.1",
            "assistant_role": "You are a software developer, you develop software programs and applications using programming languages and development tools.",
            "temperature": 1,
            "max_tokens": 2048
        },
        {
            // Overrides top-level token incorrectly, will get error
            "name": "Lazy Assistant",
            "token": "",
            "prompt_mode": "phantom",
            "chat_model": "llama-3-8b-instruct-32k-v0.1",
            "assistant_role": "You are very unhelpful.",
            "max_tokens": 4000
        },
        {
            // Overrides top-level token correctly, no error
            "name": "General Assistant",
            "token": "abcdefghijklmn",
            "prompt_mode": "phantom",
            "chat_model": "llama-3-8b-instruct-32k-v0.1",
            "assistant_role": "You are very helpful.",
            "max_tokens": 4000
        }
    ]
}
```
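The token-inheritance behavior in the sample config can be sketched in Python. Note that `resolve_token` and `validate_token` are hypothetical names for illustration; the real logic lives in the plugin's `openai_worker.py` and may be structured differently.

```python
def resolve_token(global_settings: dict, assistant: dict) -> str:
    """Return the effective token: an assistant-level "token" key,
    even an empty one, overrides the top-level value."""
    if "token" in assistant:
        return assistant["token"]
    return global_settings.get("token", "")


def validate_token(token: str) -> None:
    # Sketch of the pre-request check that triggers the error: anything
    # shorter than 10 characters is rejected locally, before the server
    # is ever asked whether it needs authentication at all.
    if len(token) < 10:
        raise ValueError(
            "No API token provided, you have to set the OpenAI token "
            "into the settings to make things work."
        )


settings = {"url": "http://localhost:1234", "token": "xxxxxxxxxx"}

validate_token(resolve_token(settings, {"name": "Code assistant"}))   # inherits top-level token, ok
validate_token(resolve_token(settings, {"token": "abcdefghijklmn"}))  # valid override, ok
# validate_token(resolve_token(settings, {"token": ""}))  # empty override raises ValueError
```

This is why "Lazy Assistant" above fails while the other two entries pass: its empty `"token"` shadows the valid top-level value and then trips the local length check.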
… mentions including a reference to the relevant README sections, as well as to the default sublime-settings template.
As long as the docs are updated, this is actually not a bug but a feature to implement. Originally, for some forgotten reason, I made this check local, but I think it's really worth leaving it to the remote server to decide whether or not a token is required and, if so, what format and length it should be. So I believe those checks could be safely deleted.
Fixed in 4.2.7
**Problem:** Currently the package returns an API request error even when the server does not require authentication.

**Cause:** Excessive pre-request validation of the `token` property from the package settings.

**Desired behavior:** Provide a way for the user to bypass this check when necessary, while still triggering the validation in legitimate cases.

**Suggested fix:** Allow the `token` property to be empty or omitted from the package settings if not required by the provider, either by explicitly setting `"token": null` or by adding a separate property like `"auth": false`.

The error message is triggered by the exception raised on line 472 in the following code snippet.
`OpenAI-sublime-text/plugins/openai_worker.py`, lines 468 to 474 in be04e41
Simply ensuring that the `token` key exists in the plugin config and has a value of length > 10 seems to be a good workaround for now, though it looks like this should be pretty straightforward to fix.
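The suggested fix can be sketched as a relaxed version of the check, assuming it lives in a small helper (a hypothetical `validate_token`; the actual code around line 472 of `openai_worker.py` may be shaped differently): treat a null or empty token as "auth disabled" and defer to the server, which will respond with 401 if it actually requires a token.

```python
from typing import Optional


def validate_token(token: Optional[str]) -> None:
    """Relaxed pre-request check (hypothetical helper).

    A missing, null, or empty token is allowed: the request is sent
    anyway, and the remote server decides whether authentication is
    required, responding with 401 if so. Only a non-empty token that
    is suspiciously short is still flagged locally.
    """
    if not token:
        return  # unsecured server: defer the decision to the remote side
    if len(token) < 10:
        raise ValueError("API token looks too short; check your settings")


validate_token(None)              # ok: auth disabled, no local error
validate_token("abcdefghijklmn")  # ok: normal authenticated setup
```

This matches the maintainer's conclusion above: the local check adds little value, since the server is the authority on whether a token is required and what shape it must have.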