Description
When using `langchain-google-genai` with a Gemini model (e.g., `gemini/gemini-pro`), CrewAI consistently fails with the error `litellm.BadRequestError: LLM Provider NOT provided`.

The most critical diagnostic finding is that the exact same setup works perfectly when switching to the `langchain-openai` integration. This proves that the core CrewAI framework, the system environment (Python 3.11, clean venv), and the code structure are all correct.

The bug appears to be isolated specifically to the interaction between CrewAI and `langchain-google-genai`, where the model provider information is being lost or mangled before the request is handed off to `litellm`.
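For context on what `litellm` expects: it derives the provider from a `provider/` prefix on the model string. The snippet below is my own illustration (not CrewAI code) of the call it ultimately needs to receive; it assumes `GEMINI_API_KEY` (or `GOOGLE_API_KEY`, depending on the litellm version) is set and that `gemini-pro` is still served.

```python
import litellm

# Works: litellm maps the "gemini/" prefix to its Google AI Studio provider.
# Assumes GEMINI_API_KEY (or GOOGLE_API_KEY) is set and the model is available.
response = litellm.completion(
    model="gemini/gemini-pro",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)

# The string CrewAI actually handed over (per the traceback below) carries an
# extra "models/" prefix, which litellm cannot map to any provider, so this
# raises the same BadRequestError:
# litellm.completion(
#     model="models/gemini/gemini-pro",
#     messages=[{"role": "user", "content": "Say hello"}],
# )
```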
Steps to Reproduce
- On macOS, create a clean Python 3.11 virtual environment.
- Install the required libraries: `pip install crewai crewai-tools langchain-google-genai`
- Use the "Failing Gemini Script" provided in the code snippets section below (a condensed sketch also follows this list).
- Set the `GOOGLE_API_KEY` environment variable.
- Run the script. The execution fails with the `LLM Provider NOT provided` error.
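For convenience, here is a condensed, self-contained version of the failing setup. The agent and task definitions are placeholders of my own (the originals are elided in the snippets below); only the LLM wiring matters for reproducing the error.

```python
import os
from crewai import Agent, Task, Crew, Process
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(
    model="gemini/gemini-pro",
    google_api_key=os.environ.get("GOOGLE_API_KEY"),
)

# Minimal placeholder agent/task; any task definition reproduces the error.
dev = Agent(
    role="Developer",
    goal="Write a short greeting",
    backstory="A minimal agent used only to reproduce the bug.",
    llm=llm,
)
task = Task(
    description="Write the word 'hello' and nothing else.",
    expected_output="The word 'hello'.",
    agent=dev,
)
crew = Crew(agents=[dev], tasks=[task], process=Process.sequential)
crew.kickoff()  # raises litellm.BadRequestError: LLM Provider NOT provided
```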
Expected behavior
The crew should execute the task successfully using the Gemini model, just as it does when configured with an OpenAI model.
Screenshots/Code snippets
main_gemini.py (FAILING)
import os
from crewai import Agent, Task, Crew, Process
from crewai_tools import FileReadTool, FileWriterTool
from langchain_google_genai import ChatGoogleGenerativeAI
# Set up the LLM
llm = ChatGoogleGenerativeAI(
model="gemini/gemini-pro",
google_api_key=os.environ.get("GOOGLE_API_KEY")
)
# ... (rest of standard agent and task definitions)
main_openai.py (WORKING DIAGNOSTIC)
import os
from crewai import Agent, Task, Crew, Process
from crewai_tools import FileReadTool, FileWriterTool
from langchain_openai import ChatOpenAI
# Set up the LLM
llm = ChatOpenAI(model_name="gpt-4o-mini", temperature=0.7)
# ... (rest of standard agent and task definitions)
Operating System
Other (macOS; see Steps to Reproduce)
Python Version
3.11
crewAI Version
crewai==0.203.1
crewAI Tools Version
crewai-tools==0.76.0
Virtual Environment
Venv
Evidence
File "/Users/james/ai-dev-agent/main.py", line 78, in
result = dev_crew.kickoff(inputs=task_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/crew.py", line 698, in kickoff
result = self._run_sequential_process()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/crew.py", line 812, in _run_sequential_process
return self._execute_tasks(self.tasks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/crew.py", line 918, in _execute_tasks
task_output = task.execute_sync(
^^^^^^^^^^^^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/task.py", line 377, in execute_sync
return self._execute_core(agent, context, tools)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/task.py", line 528, in _execute_core
raise e # Re-raise the exception after emitting the event
^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/task.py", line 441, in _execute_core
result = agent.execute_task(
^^^^^^^^^^^^^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/agent.py", line 471, in execute_task
raise e
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/agent.py", line 447, in execute_task
result = self._execute_without_timeout(task_prompt, task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/agent.py", line 543, in _execute_without_timeout
return self.agent_executor.invoke(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/agents/crew_agent_executor.py", line 149, in invoke
formatted_answer = self._invoke_loop()
^^^^^^^^^^^^^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/agents/crew_agent_executor.py", line 243, in _invoke_loop
raise e
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/agents/crew_agent_executor.py", line 189, in _invoke_loop
answer = get_llm_response(
^^^^^^^^^^^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/utilities/agent_utils.py", line 253, in get_llm_response
raise e
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/utilities/agent_utils.py", line 246, in get_llm_response
answer = llm.call(
^^^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/llm.py", line 1024, in call
return self._handle_non_streaming_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/crewai/llm.py", line 799, in _handle_non_streaming_response
response = litellm.completion(**params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/litellm/utils.py", line 1330, in wrapper
raise e
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/litellm/utils.py", line 1205, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/litellm/main.py", line 3427, in completion
raise exception_type(
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/litellm/main.py", line 1097, in completion
model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
^^^^^^^^^^^^^^^^^
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 391, in get_llm_provider
raise e
File "/Users/james/ai-dev-agent/agent_env/lib/python3.11/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 368, in get_llm_provider
raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=models/gemini/gemini-pro
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
Possible Solution
The error message `You passed model=models/gemini/gemini-pro` suggests that somewhere in the chain an unwanted `models/` prefix is being added to the model name before it reaches `litellm`. The issue seems to be in how `langchain-google-genai`'s `ChatGoogleGenerativeAI` class interacts with CrewAI's backend, as this does not happen with `langchain-openai`.
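One way to narrow this down (untested; purely a diagnostic sketch of my own) is to check whether `ChatGoogleGenerativeAI` itself rewrites the `model` field with a `models/` prefix, or whether the prefix is added later inside CrewAI when it extracts the model name for `litellm`:

```python
from langchain_google_genai import ChatGoogleGenerativeAI

# "dummy-key" is a placeholder; construction may or may not require a real key.
llm = ChatGoogleGenerativeAI(model="gemini/gemini-pro", google_api_key="dummy-key")

# If this prints "models/gemini/gemini-pro", the prefix originates in
# langchain-google-genai; if it prints "gemini/gemini-pro", CrewAI's LLM
# wrapper is adding it before calling litellm.completion().
print(llm.model)
```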
Additional context
This has been an extremely persistent bug that I debugged extensively. I tried to rule out many potential causes:
- Python Version: The error occurred on both Python 3.13 and a clean install of Python 3.11.
- Environment: The error persisted across multiple fresh virtual environments.
- Dependencies: Tried pinning specific library versions (`crewai==0.35.8`, etc.) and using the latest versions; the error was consistent.
- Model Name: Tried multiple formats (`gemini-pro`, `gemini/gemini-pro`); the `gemini/` prefix was necessary to get past earlier errors but did not solve the final one.
- API Keys: I tried setting both `GOOGLE_API_KEY` and `GEMINI_API_KEY`.
The only variable that resolved the issue was replacing `langchain-google-genai` with `langchain-openai`, which strongly isolates the bug to the Gemini integration path.