StreamEvents from langchain js core 0.3.40 not working with python back-end langgraph 0.2.74 #7749
Hello @mfchanou 👋, I'm here to help you with your software development needs. If you have any questions or need assistance with bug fixes, feel free to ask.

I found a similar closed issue titled "streamEvents from langchain js not working with python back-end", which was resolved by a pull request that added streamEvents support to RemoteRunnable [1]. However, the specific changes made in that pull request are not detailed in the available information.

The error you're encountering, "TypeError: obj.messages.map is not a function", suggests that the `messages` field in the payload is not an array. To resolve this issue, ensure that the input structure matches what the client expects:

```js
{
  "input": {
    "messages": [
      [
        new SystemMessage("You are Godzilla"),
        new HumanMessage("hello")
      ]
    ]
  }
}
```

This structure is part of the `data` object in the event with the event type "on_llm_start" [3]. Make sure that the Python back-end returns the `messages` key as an array.
@dosu the output is coming directly from LangGraph itself; I didn't touch it, which implies there is an incompatibility between the Python LangGraph output and the input expected by LangChain JS. Here is the LangGraph agent Python code:

```python
from dataclasses import dataclass
from typing import Annotated, List
from typing_extensions import TypedDict

from dotenv import load_dotenv
from fastapi import FastAPI
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langserve import add_routes

load_dotenv()

@dataclass
class State:
    messages: Annotated[list, add_messages]

llm = ChatOpenAI(model_name="gpt-4o", temperature=0, streaming=True)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You're a kubernetes expert. Be clear and concise"),
    ("placeholder", "{messages}"),
])

tool = TavilySearchResults(max_results=2)
tools = [tool]
llm_with_tools = llm.bind_tools(tools)
chain = prompt | llm_with_tools

graph_builder = StateGraph(State)

def chatbot_node(state: State):
    return {"messages": chain.invoke({"messages": state.messages})}

graph_builder.add_node("chatbot", chatbot_node)
graph_builder.add_node("tools", ToolNode(tools))
graph_builder.add_edge(START, "chatbot")
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")

memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)

app = FastAPI(
    title="My API",
    version="1.0",
    description="my API",
)

add_routes(
    app,
    graph,
    path="/chat",
    config_keys=["configurable"],
    playground_type="chat",
)
```
The issue you're encountering seems to stem from an incompatibility between the output format of the LangGraph Python code and the input format expected by LangChain JS. In your Python code, `chatbot_node` returns `{"messages": chain.invoke(...)}`, where the value is a single message object rather than a list. Here's a potential solution: wrap the chain output in a list so that the serialized state always carries `messages` as an array.

By ensuring that the output from the Python backend matches the expected input format for the JavaScript frontend, you should be able to resolve the incompatibility issue.
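If the single-message return value is indeed the culprit, one way to normalize it is a small helper that always yields a list. This is a minimal sketch: the helper name `ensure_message_list` is hypothetical, and plain strings stand in for LangChain message objects so the snippet stays self-contained.

```python
# Hypothetical fix sketch: coerce the node's return value so the serialized
# on_chain_end payload always carries "messages" as a list.

def ensure_message_list(result):
    """Return `result` unchanged if it is already a list, else wrap it."""
    if result is None:
        return []
    if isinstance(result, list):
        return result
    return [result]

# In the graph node, the chain output would then be wrapped like so:
# def chatbot_node(state: State):
#     return {"messages": ensure_message_list(chain.invoke({"messages": state.messages}))}

print(ensure_message_list("a single AIMessage"))   # wrapped into a one-element list
print(ensure_message_list(["already", "a", "list"]))  # returned unchanged
```

Because `add_messages` reducers accept lists of messages, wrapping should be a no-op for the graph's own state handling while keeping the wire format consistent for the JS client.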
@dosu still the same issue. After checking, it seems the event causing trouble is an on_chain_start event that occurs after an on_chain_end. As you can see in the payload, the messages key is not an array. Another question: why do we get an on_chain_start event after an on_chain_end?
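On the ordering question: as far as I understand LangGraph's event stream, each node (and the graph itself) is traced as its own run, so one run's on_chain_start can legitimately follow another run's on_chain_end. A toy generator (purely illustrative, not the real event stream) shows the interleaving:

```python
# Illustrative only: consecutive node runs each emit their own start/end pair,
# so an on_chain_start naturally follows the previous run's on_chain_end.

def fake_stream_events(nodes):
    for node in nodes:
        yield {"event": "on_chain_start", "name": node}
        yield {"event": "on_chain_end", "name": node}

events = [e["event"] for e in fake_stream_events(["chatbot", "tools", "chatbot"])]
print(events)
# The on_chain_start for "tools" comes right after the on_chain_end for "chatbot".
```

So the ordering itself is expected; the failing part is the shape of the payload, not the sequence of events.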
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
```
Error: TypeError: obj.messages.map is not a function
    at revive (file://my-ai-app/node_modules/.pnpm/@langchain[email protected][email protected][email protected][email protected]_/node_modules/@langchain/core/dist/runnables/remote.js:164:40)
    at innerRevive (file://my-ai-app/node_modules/.pnpm/@langchain[email protected][email protected][email protected][email protected]_/node_modules/@langchain/core/dist/runnables/remote.js:175:13)
```
Description
I exposed a Python LangGraph agent through LangServe, and I'm calling this LangServe API with the JS script above. When calling the streamEvents method, processing of the LLM tokens ends up failing, because the last event, of type on_chain_end, returns a dict whose messages key is not an array, while the LangChain core JS method expects a dict whose messages key is an array.
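The shape mismatch can be reproduced in isolation. The payloads below are assumptions inferred from the stack trace (plain dicts stand in for serialized messages), and `revive` is a Python analogue of the failing `obj.messages.map(...)` call in remote.js:

```python
# Assumed payload shapes, inferred from the error -- not captured verbatim.
payload_expected = {"messages": [{"type": "ai", "content": "hi"}]}  # list: mappable
payload_received = {"messages": {"type": "ai", "content": "hi"}}    # dict: not mappable

def revive(obj):
    # Mirrors the JS `obj.messages.map((msg) => ...)` in remote.js
    return [msg["content"] for msg in obj["messages"]]

print(revive(payload_expected))  # works: ['hi']
try:
    revive(payload_received)     # iterating a dict yields its string keys
except TypeError as exc:
    print("fails like the JS client:", exc)
```

This suggests the fix belongs on the serialization side: either the backend should emit `messages` as a list in the on_chain_end payload, or the JS revive logic should tolerate a single message object.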
System Info
JavaScript:
- LangChain core 0.3.40

Python:
- LangGraph 0.2.74
- LangServe 0.3.1