[Agent Builder] Fix truncated inline attachments #249799
viduni94 merged 5 commits into elastic:main
Conversation
Pinging @elastic/obs-ai-team (Team:obs-ai)
@viduni94 perhaps another bug with bounded tools: in this trace the system prompt contains this
But no
Also, as mentioned on Slack, if the attachment content is truncated, we should make sure to put the important content first. For the AI Insights we should do:

```
Summary:
${JSON.stringify(summary)}

Context:
${JSON.stringify(context)}
```

This way we ensure that, even if the attachment is truncated, at least the summary is shown.
@sorenlouv
The tool is getting called as expected.
Do you see the tool in the trace? Maybe I'm just overlooking it https://oblt-apps.elastic.dev/phoenix-ai/projects/UHJvamVjdDoz/traces/6c4c42999fe209c838145cc315a90bac?selected=&selectedSpanNodeId=U3BhbjozMjkzOTQ%3D
I see what you mean. When we start the conversation from AI Insights, since we have the data inline, this tool is not called. Asking on the agent-builder channel.
💛 Build succeeded, but was flaky
cc @viduni94
```diff
-  context: z.string(),
   summary: z.string(),
+  context: z.string(),
```
I'm old enough to remember when key order in objects was not guaranteed. These days it's probably not a problem in modern browsers, but it still feels fragile.
That being said, it would be cleaner, and less noisy for the LLM, if we did not stringify `{ summary, context }` but dropped the outer object and sent this string to the LLM:
```
Summary:
${summary}

Context:
${context}
```

I believe that would yield better results. If that's impossible, or very hard, so be it. But it would be better imo.
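As a sketch of the suggestion above (the helper name and the `AiInsight` shape are illustrative, not from the PR), the attachment content could be built as a labelled plain string instead of a stringified object, so the summary always leads:

```typescript
// Illustrative shape and helper name; not the PR's actual code.
interface AiInsight {
  summary: string;
  context: string;
}

// Emit a plain labelled string rather than JSON.stringify({ summary, context }),
// so the summary survives even if the tail of the attachment is truncated.
function formatInsightAttachment({ summary, context }: AiInsight): string {
  return `Summary:\n${summary}\n\nContext:\n${context}`;
}
```

Because the summary is emitted first, any length-based truncation removes context before it ever touches the summary.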
We can send the data as a string, but we'll have to remove the custom attachment label and use the default one (Summary). Is that okay?
(Or I'll have to send the content and the attachment label as a single string and parse the string to extract them.)
Let's merge this PR as-is and then discuss this separately




Closes #249908
Summary
Problem
When attachment content exceeds 10,000 characters in inline mode, the content is truncated with the message:

[content truncated, use attachment_read for full content]

However, the system prompt says:

"Since the content is shown inline, you don't need to read it - just reference it directly."

This creates conflicting instructions: the truncation message directs the LLM to use the attachment_read tool, while the system prompt tells it to reference the content directly. As a result, the LLM never calls attachment_read for truncated attachments, and only a portion of the data reaches the LLM.
Solution
Track whether any attachment content was truncated and provide a different system prompt based on that:

Not truncated: "Since the content is shown inline, you don't need to read it - just reference it directly."
Truncated: "If you see '[content truncated, use attachment_read for full content]', you MUST call attachment_read(id) to get the complete content before analyzing or referencing that attachment."

This PR also updates the order of the content shown in the AI Insight attachment so that the summary appears first, followed by the context data. Therefore, even if the data is truncated, the summary will always be shown.
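A minimal TypeScript sketch of the fix described above (all names are assumptions, not the actual Kibana code): render each attachment inline with the 10,000-character cut-off, flag whether it was truncated, and pick the system-prompt instruction accordingly.

```typescript
// Assumed names throughout; illustrative only, not the PR's actual code.
const INLINE_CONTENT_LIMIT = 10_000;
const TRUNCATION_MARKER =
  '[content truncated, use attachment_read for full content]';

// Render one attachment inline, flagging whether it was cut off.
function renderInline(content: string): { text: string; truncated: boolean } {
  if (content.length <= INLINE_CONTENT_LIMIT) {
    return { text: content, truncated: false };
  }
  return {
    text: content.slice(0, INLINE_CONTENT_LIMIT) + TRUNCATION_MARKER,
    truncated: true,
  };
}

// Choose the system-prompt instruction based on whether any attachment
// in the conversation was truncated.
function attachmentInstruction(anyTruncated: boolean): string {
  return anyTruncated
    ? "If you see '[content truncated, use attachment_read for full content]', " +
        'you MUST call attachment_read(id) to get the complete content before ' +
        'analyzing or referencing that attachment.'
    : "Since the content is shown inline, you don't need to read it - " +
        'just reference it directly.';
}
```

With this split, the inline marker and the system prompt always agree: the "reference it directly" wording is only ever emitted when nothing was cut off.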
Checklist
- release_note:* label is applied per the guidelines
- Appropriate backport:* labels are applied