
Wildcard not expanding for ollama_chat to find models dynamically. #8095

@sakoht

It appears that the Ollama model list is hard-coded to just one model, which may or may not actually be available locally.

I'll follow up with a patch that detects an Ollama install and uses the actual list of locally available models instead, falling back to an empty list if none are available.
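
For reference, here is a rough sketch of the kind of detection I have in mind. It assumes Ollama's standard `GET /api/tags` endpoint, which lists the locally installed models; the function name and return shape are illustrative only, not the actual patch:

```python
import os

import httpx  # any HTTP client works here; httpx is what litellm already uses


def get_local_ollama_models(api_base: str | None = None) -> list[str]:
    """Return the models actually installed in the local Ollama instance,
    or an empty list if Ollama is not installed / not reachable."""
    api_base = api_base or os.environ.get("OLLAMA_API_BASE", "http://localhost:11434")
    try:
        resp = httpx.get(f"{api_base}/api/tags", timeout=2.0)
        resp.raise_for_status()
    except httpx.HTTPError:
        return []
    # /api/tags returns {"models": [{"name": "deepseek-r1:14b", ...}, ...]}
    return [m["name"] for m in resp.json().get("models", [])]


print(get_local_ollama_models())
```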

My example config demonstrates the problem. The first three wildcard entries expand as expected, and the fourth, explicitly named Ollama model works, but the final `ollama_chat/*` entry does not expand (see the check after the config):

```yaml
model_list:
  - model_name: perplexity/*
    litellm_params:
      model: perplexity/*
      api_key: os.environ/PERPLEXITY_API_KEY

  - model_name: anthropic/*
    litellm_params:
      model: anthropic/*
      api_key: os.environ/ANTHROPIC_API_KEY

  - model_name: openai/*
    litellm_params:
      model: openai/*
      api_key: os.environ/OPENAI_API_KEY

  - model_name: ollama_chat/deepseek-r1:14b
    litellm_params:
      model: ollama_chat/deepseek-r1:14b
      api_base: "http://localhost:11434"

  - model_name: ollama_chat/*
    litellm_params:
      model: ollama_chat/*
      api_base: "http://localhost:11434"
```
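
One way to see which entries actually expanded is to list the proxy's models via the OpenAI-compatible `/v1/models` route (the key and port below are placeholders; use whatever master key your proxy expects):

```python
import httpx

# Placeholder credentials/port -- adjust to your proxy setup.
resp = httpx.get(
    "http://localhost:4000/v1/models",
    headers={"Authorization": "Bearer sk-placeholder"},
)
resp.raise_for_status()

for m in resp.json()["data"]:
    print(m["id"])
# With this config, the perplexity/anthropic/openai wildcards show their provider
# models and ollama_chat/deepseek-r1:14b is present, but no other ollama_chat
# models appear.
```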

Full launch output with --detailed_debug:

```
(venv-openwebui) ssmith@Scott-Laptop-2024 ~/open-webui (0.5.7-ss1)
> litellm --config litellm-config/litellm-config.yaml --detailed_debug
/Users/ssmith/open-webui/backend/venv-openwebui/lib/python3.11/site-packages/pydantic/_internal/_config.py:341: UserWarning: Valid config keys have changed in V2:
* 'fields' has been removed
  warnings.warn(message, UserWarning)
INFO:     Started server process [24689]
INFO:     Waiting for application startup.
10:35:37 - LiteLLM Proxy:DEBUG: proxy_server.py:442 - litellm.proxy.proxy_server.py::startup() - CHECKING PREMIUM USER - False
10:35:37 - LiteLLM Proxy:DEBUG: litellm_license.py:98 - litellm.proxy.auth.litellm_license.py::is_premium() - ENTERING 'IS_PREMIUM' - LiteLLM License=None
10:35:37 - LiteLLM Proxy:DEBUG: litellm_license.py:107 - litellm.proxy.auth.litellm_license.py::is_premium() - Updated 'self.license_str' - None
10:35:37 - LiteLLM Proxy:DEBUG: proxy_server.py:453 - worker_config: {"model": null, "alias": null, "api_base": null, "api_version": "2024-07-01-preview", "debug": false, "detailed_debug": true, "temperature": null, "max_tokens": null, "request_timeout": null, "max_budget": null, "telemetry": true, "drop_params": false, "add_function_to_prompt": false, "headers": null, "save": false, "config": "litellm-config/litellm-config.yaml", "use_queue": false}

#------------------------------------------------------------#
#                                                            #
#               'A feature I really want is...'               #
#        https://github.com/BerriAI/litellm/issues/new        #
#                                                            #
#------------------------------------------------------------#

 Thank you for using LiteLLM! - Krrish & Ishaan



Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new


10:35:37 - LiteLLM Proxy:DEBUG: proxy_server.py:1500 - loaded config={
    "model_list": [
        {
            "model_name": "perplexity/*",
            "litellm_params": {
                "model": "perplexity/*",
                "api_key": "os.environ/PERPLEXITY_API_KEY"
            }
        },
        {
            "model_name": "anthropic/*",
            "litellm_params": {
                "model": "anthropic/*",
                "api_key": "os.environ/ANTHROPIC_API_KEY"
            }
        },
        {
            "model_name": "openai/*",
            "litellm_params": {
                "model": "openai/*",
                "api_key": "os.environ/OPENAI_API_KEY"
            }
        },
        {
            "model_name": "ollama_chat/deepseek-r1:14b",
            "litellm_params": {
                "model": "ollama_chat/deepseek-r1:14b",
                "api_base": "http://localhost:11434"
            }
        },
        {
            "model_name": "ollama_chat/*",
            "litellm_params": {
                "model": "ollama_chat/*",
                "api_base": "http://localhost:11434"
            }
        }
    ]
}
LiteLLM: Proxy initialized with Config, Set models:
    perplexity/*
    anthropic/*
    openai/*
    ollama_chat/deepseek-r1:14b
    ollama_chat/*
10:35:37 - LiteLLM:DEBUG: utils.py:4323 - Error getting model info: This model isn't mapped yet. Add it here - https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json
10:35:37 - LiteLLM Router:DEBUG: client_initalization_utils.py:468 - Initializing OpenAI Client for perplexity/*, Api Base:https://api.perplexity.ai, Api Key:pplx-5LC***************
10:35:37 - LiteLLM:DEBUG: utils.py:4323 - Error getting model info: This model isn't mapped yet. Add it here - https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json
10:35:37 - LiteLLM Router:DEBUG: client_initalization_utils.py:468 - Initializing OpenAI Client for openai/*, Api Base:None, Api Key:sk-proj-***************
10:35:37 - LiteLLM:DEBUG: utils.py:4323 - Error getting model info: OllamaError: Error getting model info for *. Set Ollama API Base via `OLLAMA_API_BASE` environment variable. Error: Server error '500 Internal Server Error' for url 'http://localhost:11434/api/show'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
10:35:37 - LiteLLM Router:DEBUG: router.py:4038 - 
Initialized Model List ['perplexity/*', 'anthropic/*', 'openai/*', 'ollama_chat/deepseek-r1:14b', 'ollama_chat/*']
10:35:37 - LiteLLM Router:INFO: router.py:613 - Routing strategy: simple-shuffle
10:35:37 - LiteLLM Router:DEBUG: router.py:505 - Intialized router with Routing strategy: simple-shuffle

Routing enable_pre_call_checks: False

Routing fallbacks: None

Routing content fallbacks: None

Routing context window fallbacks: None

Router Redis Caching=None

10:35:37 - LiteLLM Proxy:DEBUG: proxy_server.py:518 - prisma_client: None
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)
```

The key error seems to be:

```
10:35:37 - LiteLLM:DEBUG: utils.py:4323 - Error getting model info: OllamaError: Error getting model info for *. Set Ollama API Base via `OLLAMA_API_BASE` environment variable. Error: Server error '500 Internal Server Error' for url 'http://localhost:11434/api/show'
```

Note that the error about the env var was different at first, so I set that value:

```
export OLLAMA_API_BASE=http://localhost:11434
```

The error message is misleading because that URL does return correct model info, just not for `*`.
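
To illustrate: hitting that same endpoint directly succeeds for a concrete model name and only fails for the literal `*`. Rough check below; note that Ollama's `/api/show` request body uses `name` in older versions and `model` in newer ones, so both are sent:

```python
import httpx

api_base = "http://localhost:11434"

for model in ["deepseek-r1:14b", "*"]:
    # Send both field spellings to cover older and newer Ollama versions.
    resp = httpx.post(f"{api_base}/api/show", json={"name": model, "model": model})
    print(model, "->", resp.status_code)

# deepseek-r1:14b -> 200  (full model metadata comes back)
# *               -> 500  (the error LiteLLM's wildcard expansion trips over)
```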
