AI Agent failing to handle multi-paragraph response from Tools #14202
Comments
Hey @YuzuruT, we have created an internal ticket to look into this, which we will be tracking as "GHC-1356".
Hey @YuzuruT, can you provide a workflow that reproduces this? It seems odd that it would be an issue with multi-paragraph responses, as I have seen them work with tools recently. Have you tried a different model or tweaking your prompt? If you can share a workflow that reproduces this issue and some screenshots of the AI log, we can take a proper look at this.
To follow up on a few of the unanswered questions: yes, I have tried various framings of the prompt. Another anecdotal observation is that the probability of failing to answer seems to increase as the interface moves away from the node. I.e., "Test step" works better than the n8n built-in chat widget, and the n8n built-in chat widget seems to perform better than prompts sent from the web-deployed UI. Here, "perform better" means the probability of falling back to the "I don't know" messaging is lower. Since the behavior is very nondeterministic, please take this with a grain of salt; but if this tendency continues, I may look into pulling some stats and open a different bug ticket.
Update: although I am not completely convinced that this workaround should be the resolution, please feel free to close this case if the issue will not be prioritized. Thank you so much for all your work!! A huge fan of n8n!!
I have exactly the same issue: the Tool Agent (1.85.4) returns the following JSON instead of calling any tools:

```json
{
  "output": [
    {
      "type": "text",
      "text": "Okay, I will look up the details for set heads 1 and 2 in offer 161000 (ID: 101060).\n\n"
    },
    {
      "functionCall": {
        "name": "get_offer_positions",
        "args": {
          "OfferPosititionNO": "1",
          "ProductNumber": "",
          "OfferID": "101060"
        }
      }
    },
    {
      "functionCall": {
        "name": "get_offer_positions",
        "args": {
          "ProductNumber": "",
          "OfferID": "101060",
          "OfferPosititionNO": "2"
        }
      }
    }
  ]
}
```

This causes an error when using the chat trigger. In my case, switching to Claude fixed the issue.
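When debugging this, it may help to detect whether the agent's "output" is such a payload of serialized, unexecuted function calls rather than final text. Below is a minimal sketch of such a check; `hasUnexecutedToolCalls` is a hypothetical helper, not an n8n API:

```javascript
// Hypothetical helper: returns true when a model's "output" field is an
// array containing serialized functionCall parts (as in the JSON above)
// instead of a plain final answer.
function hasUnexecutedToolCalls(output) {
  return (
    Array.isArray(output) &&
    output.some(
      (part) =>
        part !== null &&
        typeof part === "object" &&
        typeof part.functionCall === "object" &&
        part.functionCall !== null
    )
  );
}
```

Such a check could run in a Code node after the agent to branch into a retry or fallback path instead of surfacing the raw JSON to the chat trigger.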
Bug Description
I am creating a RAG AI Agent and instruct it to respond with "I don't know" when the connected Vector Store Tool cannot provide a good answer. I noticed that some information I know for certain is available in the Vector Store is not reflected in the Agent's response, and the Agent is prone to responding "I don't know".
Below is my suspicion as to WHY this is happening (hence the title).
On examining the logs, the Vector Store Tool provides good responses. However, the Tool's responses are occasionally multi-paragraph, containing many line breaks. This seems to produce an awkwardly formatted prompt for the final "Chat Model" step, as the AI Agent appends "Human: xxxx, AI: [xxxx], Tool: xxxx." to the bottom of the prompt to fetch the final answer as a RAG agent.
In my case, the Vector Store Tool responds with extremely rich multi-paragraph text, which is inserted directly after the "Tool:" and sent to the Chat Model. Even for a human, it is difficult to tell from this prompt where exactly the Tool's response ends. Sadly, in such cases the chat model resorts to responding "I don't know", completely discarding the Tool's findings.
The issue does not seem to occur when the text supplied by the Vector Store Tool is short.
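If the suspicion above is right, one possible mitigation is to normalize the tool's output before it is appended to the prompt: collapse runs of blank lines and wrap the text in explicit delimiters so the chat model can see where the Tool's answer ends. The sketch below shows this idea; the function name, the `<tool_result>` delimiter, and the Code-node field name are all assumptions, not n8n internals:

```javascript
// Sketch for an n8n Code node (names hypothetical): normalize a
// multi-paragraph tool response so the downstream chat model can tell
// where the tool output ends.
function wrapToolOutput(raw) {
  // Collapse runs of two or more newlines into one and trim whitespace.
  const normalized = raw.replace(/\n{2,}/g, "\n").trim();
  // Wrap in explicit delimiters to mark the boundaries of the tool output.
  return `<tool_result>\n${normalized}\n</tool_result>`;
}

// In a Code node placed between the tool and the agent, this might be
// applied as (field name "response" is an assumption):
// return [{ json: { response: wrapToolOutput($input.first().json.response) } }];
```

This does not change what the tool found; it only makes the boundary of the tool's text unambiguous in the assembled prompt.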
Below is my current setup.
[email protected]
Chat Model - Gemini 2.0 Flash 001
To Reproduce
It is difficult to reproduce without the right prompt and setup. Below are two.
To reproduce the Observed Issue
To reproduce the Suspected root cause.
Expected behavior
If the Tool provided a multi-paragraph answer with rich information, I would expect the chat model to respect that and either relay the exact message, or summarize or contextualize it.
Operating System
n/a
n8n Version
1.84.1
Node.js Version
n/a
Database
SQLite (default)
Execution mode
main (default)