
SQLDatabaseToolkit adds temperature=0 by default and breaks execution when using o3-mini #29541

Open · 5 tasks done
aifa opened this issue Feb 2, 2025 · 5 comments

Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature · investigate Flagged for investigation.

Comments

aifa commented Feb 2, 2025

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

In langchain_community/agent_toolkits/sql/toolkit.py, the docstring of class SQLDatabaseToolkit(BaseToolkit) (line 48) reads:

Instantiate:
.. code-block:: python

        from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
        from langchain_community.utilities.sql_database import SQLDatabase
        from langchain_openai import ChatOpenAI

        db = SQLDatabase.from_uri("sqlite:///Chinook.db")
        llm = ChatOpenAI(temperature=0)

        toolkit = SQLDatabaseToolkit(db=db, llm=llm)

This example instantiates the LLM with temperature=0. When o3-mini is used, this causes:
BadRequestError('Error code: 400 - {'error': {'message': "Unsupported parameter: 'temperature' is not supported with this model.",

I removed temperature=0 from my local copy of the code when using o3-mini, and the error stopped.
It also works when I instruct the model directly in the user/system prompt not to set temperature.
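
A minimal sketch of the caller-side workaround, assuming o3-mini and the same Chinook example (omitting temperature rather than patching the toolkit):

    from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
    from langchain_community.utilities.sql_database import SQLDatabase
    from langchain_openai import ChatOpenAI

    db = SQLDatabase.from_uri("sqlite:///Chinook.db")

    # Assumption: leaving `temperature` unset keeps the unsupported
    # parameter out of the request that o3-mini rejects.
    llm = ChatOpenAI(model="o3-mini")

    toolkit = SQLDatabaseToolkit(db=db, llm=llm)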

Error Message and Stack Trace (if applicable)

BadRequestError('Error code: 400 - {'error': {'message': "Unsupported parameter: 'temperature' is not supported with this model.", 'type': 'invalid_request_error', 'param': 'temperature', 'code': 'unsupported_parameter'}}')

Traceback (most recent call last):

File "/root/.cache/pypoetry/virtualenvs/gix-service-9TtSrW0h-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 637, in generate
self._generate_with_cache(

File "/root/.cache/pypoetry/virtualenvs/gix-service-9TtSrW0h-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 855, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^

File "/root/.cache/pypoetry/virtualenvs/gix-service-9TtSrW0h-py3.12/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 717, in _generate
response = self.client.create(**payload)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/root/.cache/pypoetry/virtualenvs/gix-service-9TtSrW0h-py3.12/lib/python3.12/site-packages/openai/_utils/_utils.py", line 279, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^

File "/root/.cache/pypoetry/virtualenvs/gix-service-9TtSrW0h-py3.12/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 850, in create
return self._post(
^^^^^^^^^^^

File "/root/.cache/pypoetry/virtualenvs/gix-service-9TtSrW0h-py3.12/lib/python3.12/site-packages/openai/_base_client.py", line 1283, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/root/.cache/pypoetry/virtualenvs/gix-service-9TtSrW0h-py3.12/lib/python3.12/site-packages/openai/_base_client.py", line 960, in request
return self._request(
^^^^^^^^^^^^^^

File "/root/.cache/pypoetry/virtualenvs/gix-service-9TtSrW0h-py3.12/lib/python3.12/site-packages/openai/_base_client.py", line 1064, in _request
raise self._make_status_error_from_response(err.response) from None

openai.BadRequestError: Error code: 400 - {'error': {'message': "Unsupported parameter: 'temperature' is not supported with this model.", 'type': 'invalid_request_error', 'param': 'temperature', 'code': 'unsupported_parameter'}}

Description

The docstring example above instantiates ChatOpenAI with temperature=0. OpenAI reasoning models such as o3-mini reject the temperature parameter, so following the documented example fails with the BadRequestError shown above. Removing temperature=0 from my local copy of the code stopped the error, as did instructing the model directly in the user/system prompt not to set temperature.

System Info

System Information

OS: Linux
OS Version: #1 SMP Tue Nov 5 00:21:55 UTC 2024
Python Version: 3.12.3 (main, Jan 17 2025, 18:03:48) [GCC 13.3.0]

Package Information

langchain_core: 0.3.33
langchain: 0.3.17
langchain_community: 0.3.16
langsmith: 0.3.4
langchain_anthropic: 0.3.5
langchain_cli: 0.0.35
langchain_google_genai: 2.0.9
langchain_groq: 0.2.4
langchain_mistralai: 0.2.6
langchain_openai: 0.2.14
langchain_text_splitters: 0.3.5
langgraph_sdk: 0.1.51
langserve: 0.3.1

Other Dependencies

aiohttp: 3.11.11
anthropic: 0.45.2
async-timeout: Installed. No version info available.
dataclasses-json: 0.6.7
defusedxml: 0.7.1
fastapi: 0.115.8
filetype: 1.2.0
gitpython: 3.1.44
google-generativeai: 0.8.4
gritql: 0.1.5
groq: 0.16.0
httpx: 0.28.1
httpx-sse: 0.4.0
jsonpatch: 1.33
langserve[all]: Installed. No version info available.
langsmith-pyo3: Installed. No version info available.
numpy: 2.2.2
openai: 1.61.0
orjson: 3.10.15
packaging: 24.2
pydantic: 2.10.6
pydantic-settings: 2.7.1
pytest: Installed. No version info available.
PyYAML: 6.0.2
requests: 2.32.3
requests-toolbelt: 1.0.0
rich: 13.9.4
SQLAlchemy: 2.0.37
sse-starlette: 1.8.2
tenacity: 9.0.0
tiktoken: 0.8.0
tokenizers: 0.21.0
tomlkit: 0.13.2
typer[all]: Installed. No version info available.
typing-extensions: 4.12.2
uvicorn: 0.32.1
zstandard: 0.23.0

langcarl bot added the investigate label on Feb 2, 2025
dosubot bot added the 🤖:bug label on Feb 2, 2025
rawathemant246 (Contributor) commented Feb 2, 2025

Since OpenAI reasoning models do not support the temperature parameter, I have updated the ChatOpenAI API so that temperature is not included in the default parameters, which avoids the BadRequestError.
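
For illustration only, the same idea can be sketched on the caller side; the helper name and the model-prefix check here are hypothetical, not the actual change in langchain_openai:

    from langchain_openai import ChatOpenAI

    # Hypothetical prefixes for OpenAI reasoning models that reject `temperature`.
    REASONING_MODEL_PREFIXES = ("o1", "o3")

    def build_llm(model: str, temperature: float = 0.0) -> ChatOpenAI:
        """Build a ChatOpenAI instance, omitting temperature for reasoning models."""
        if model.startswith(REASONING_MODEL_PREFIXES):
            # Leave `temperature` unset so it is never sent with the request.
            return ChatOpenAI(model=model)
        return ChatOpenAI(model=model, temperature=temperature)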

aifa (Author) commented Feb 2, 2025

Thank you for looking into this.
I noticed the same behavior when using create_react_agent, but that is a bit trickier to investigate; I have not found the source of the error yet. The fix in the ChatOpenAI API may resolve the issue with the react agent as well.

rawathemant246 (Contributor) commented Feb 3, 2025

@aifa I have gone through the create_react_agent API; I'll let you know the reason soon. Let me know if you are still facing the issue.

Can you please provide the code snippet showing how you call create_react_agent, and its response?

aifa (Author) commented Feb 3, 2025

Hi @rawathemant246, it seems create_react_agent did not work because I was not using the latest version of langchain_openai. I updated the package and the error stopped occurring.

rawathemant246 (Contributor) commented
@aifa I also checked how to use create_react_agent; you can use the latest method now:

from langgraph.prebuilt import create_react_agent

In the background it uses the ChatOpenAI API, so you won't get the BadRequestError.
Nevertheless, I have also created a PR for the community-llms-openai.py to prevent this.
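
A minimal sketch of that usage, reusing the Chinook example from the report (the question string is illustrative):

    from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
    from langchain_community.utilities.sql_database import SQLDatabase
    from langchain_openai import ChatOpenAI
    from langgraph.prebuilt import create_react_agent

    db = SQLDatabase.from_uri("sqlite:///Chinook.db")
    llm = ChatOpenAI(model="o3-mini")  # no temperature for a reasoning model

    toolkit = SQLDatabaseToolkit(db=db, llm=llm)
    agent = create_react_agent(llm, toolkit.get_tools())

    result = agent.invoke(
        {"messages": [("user", "How many artists are in the database?")]}
    )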
