Description
When I tried to use groq, I ran into the following error:
Provider List: https://docs.litellm.ai/docs/providers

20:33:02 - LiteLLM:ERROR: main.py:370 - litellm.acompletion(): Exception occured - Error code: 400 - {'error': {'message': 'The model `llama-3.1-70b-versatile` has been decommissioned and is no longer supported. Please refer to https://console.groq.com/docs/deprecations for a recommendation on which model to use instead.', 'type': 'invalid_request_error', 'code': 'model_decommissioned'}}

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.

Traceback (most recent call last):
  File "/workspace/.venv/lib/python3.11/site-packages/litellm/llms/openai.py", line 942, in async_streaming
    response = await openai_aclient.chat.completions.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
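
For anyone who wants to reproduce or debug this outside the app, here is a minimal sketch calling litellm directly with the verbose flag the error message suggests (assumes `GROQ_API_KEY` is set in the environment; the model name is the decommissioned one from the log above):

```python
import litellm

# Enable verbose logging, as the error message recommends, to see the full
# request/response while reproducing the failure.
litellm.set_verbose = True

# Calling the decommissioned Groq model reproduces the 400
# "model_decommissioned" error shown in the traceback above.
response = litellm.completion(
    model="groq/llama-3.1-70b-versatile",
    messages=[{"role": "user", "content": "hello"}],
)
```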
Changing `ChatModel.LLAMA_3_70B: "groq/llama-3.1-70b-versatile"` to `ChatModel.LLAMA_3_70B: "groq/llama-3.3-70b-versatile"` in src/backend/constants.py fixed it.
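
For context, the mapping in src/backend/constants.py ends up looking roughly like this after the change. This is only a sketch: the enum members other than `LLAMA_3_70B`, and the `MODEL_MAPPINGS` dict name, are illustrative assumptions; only the Groq entry itself comes from this issue.

```python
from enum import Enum


class ChatModel(str, Enum):
    # Only LLAMA_3_70B is taken from this issue; other members are placeholders.
    GPT_4O = "gpt-4o"
    LLAMA_3_70B = "llama-3-70b"


# Hypothetical provider mapping; the key fix is pointing LLAMA_3_70B at the
# replacement model, since Groq decommissioned llama-3.1-70b-versatile.
MODEL_MAPPINGS: dict[ChatModel, str] = {
    ChatModel.GPT_4O: "gpt-4o",
    ChatModel.LLAMA_3_70B: "groq/llama-3.3-70b-versatile",
}
```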