
llama-3.1-70b-versatile has been decommissioned #107

@Strandpalme

Description

When I tried to use Groq, I ran into the following error:
Provider List: https://docs.litellm.ai/docs/providers
20:33:02 - LiteLLM:ERROR: main.py:370 - litellm.acompletion(): Exception occured - Error code: 400 - {'error': {'message': 'The model `llama-3.1-70b-versatile` has been decommissioned and is no longer supported. Please refer to https://console.groq.com/docs/deprecations for a recommendation on which model to use instead.', 'type': 'invalid_request_error', 'code': 'model_decommissioned'}}
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.
Traceback (most recent call last):
File "/workspace/.venv/lib/python3.11/site-packages/litellm/llms/openai.py", line 942, in async_streaming
response = await openai_aclient.chat.completions.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^`

Changing ChatModel.LLAMA_3_70B: "groq/llama-3.1-70b-versatile" to ChatModel.LLAMA_3_70B: "groq/llama-3.3-70b-versatile" in src/backend/constants.py fixed it (see the sketch below).
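
A minimal sketch of what that change in src/backend/constants.py looks like; the ChatModel enum value and the mapping name (model_mappings here) are assumptions for illustration, and only the two groq/... model strings come from this issue:

```python
# src/backend/constants.py (sketch; surrounding structure is assumed)
from enum import Enum


class ChatModel(str, Enum):
    LLAMA_3_70B = "llama-3-70b"  # enum value assumed for illustration
    # ... other models omitted


model_mappings: dict[ChatModel, str] = {
    # Before (decommissioned by Groq):
    # ChatModel.LLAMA_3_70B: "groq/llama-3.1-70b-versatile",
    # After (current replacement):
    ChatModel.LLAMA_3_70B: "groq/llama-3.3-70b-versatile",
}
```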
