Fix for issue [18775](https://github.com/run-llama/llama_index/issues/18775#issuecomment-2890774335) #18781
Conversation
Review thread on `...ntegrations/llms/llama-index-llms-bedrock-converse/llama_index/llms/bedrock_converse/base.py`:
```diff
  # If no region prefix, return the original model name
- if not any(model_name.startswith(prefix) for prefix in REGION_PREFIXES):
+ if not any(prefix in model_name for prefix in REGION_PREFIXES):
```
Sneaking in this fix: when the model name is prefixed with `arn:...`, the `startswith` assumption breaks down.
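For illustration, a minimal sketch of why containment replaces `startswith` here; the `REGION_PREFIXES` values and the ARN below are assumptions for the example, not taken from the PR:

```python
# Hypothetical prefix list; the real REGION_PREFIXES lives in base.py.
REGION_PREFIXES = ["us.", "eu.", "apac."]

plain = "us.anthropic.claude-3-5-sonnet-20241022-v2:0"
arn = (
    "arn:aws:bedrock:us-east-1:123456789012:"
    "inference-profile/us.anthropic.claude-3-5-sonnet-20241022-v2:0"
)

for model_name in (plain, arn):
    by_startswith = any(model_name.startswith(p) for p in REGION_PREFIXES)
    by_contains = any(p in model_name for p in REGION_PREFIXES)
    print(f"{model_name!r}: startswith={by_startswith}, contains={by_contains}")
# The ARN form fails the startswith check but passes the containment check,
# so the region prefix would otherwise never be stripped from ARN-style ids.
```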
```diff
  # handle empty inputs
  argument_dict = {}
- if tool_call["input"] and isinstance(tool_call["input"], str):
+ if tool_call.get("input", False) and isinstance(tool_call["input"], str):
```
This fails when the tool has no inputs, because `tool_call.get("input", False)` returns an empty string (`""`), which evaluates to `False`. As a result, we fall into the `else` branch and skip the tool call.

To fix this, we should use `if "input" in tool_call` instead of `if tool_call.get("input", False)`. I'll open a new PR with this change.
Hey, the change you propose sounds good, but isn't it pretty much the same as what we already do? If `"input"` is not in `tool_call`, we get `False` from both conditional statements (the one you propose and the one we are already using).
Unless you are proposing a higher-level conditional change, like:

```python
if "input" in tool_call:
    if isinstance(tool_call["input"], str):
        # do something
        ...
    elif isinstance(tool_call["input"], dict):
        # do something else
        ...
else:
    continue
```
Hi! Thanks for the quick reply.

> isn't it pretty much the same as what we already do?

Not really. The issue happens when `"input"` exists in `tool_call` but `tool_call["input"] == ""`. In this case, `tool_call.get("input", False)` returns `""`, which evaluates to `False`. E.g., try running:

```python
tool_call = {"input": ""}
if tool_call.get("input", False):
    print("This will NOT be printed")
if "input" in tool_call:
    print("This will be printed")
```
I see, sorry for the oversight!

Yeah, then please go ahead with the PR, or I could also do that :)
Already did: PR #18786 :)
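For context, a rough sketch of the shape that follow-up fix takes, based on the discussion above (the actual diff in #18786 may differ; the sample tool calls here are hypothetical):

```python
import json

# Hypothetical tool calls covering the two cases discussed above.
no_args = {"toolUseId": "t1", "name": "ping", "input": ""}
with_args = {"toolUseId": "t2", "name": "add", "input": '{"a": 1, "b": 2}'}

for tool_call in (no_args, with_args):
    argument_dict = {}
    # Check key presence, not truthiness: an empty string still means
    # "this tool call has an input field", just with no arguments.
    if "input" in tool_call and isinstance(tool_call["input"], str):
        argument_dict = json.loads(tool_call["input"]) if tool_call["input"] else {}
    elif "input" in tool_call and isinstance(tool_call["input"], dict):
        argument_dict = tool_call["input"]
    print(tool_call["name"], argument_dict)
# ping {}   <- the empty-input tool call is no longer skipped
# add {'a': 1, 'b': 2}
```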
Merge commit:

* Fix yielding incomplete tool calls (fix for issue #18775)
* Small fix for getting model name when an ARN is prefixed
* Apply Clelia's fix
* vbump
* Revert original PR changes
* Bug fix: calling tool with empty arguments (see this comment on #18781)
* Bumped to v0.5.5

Co-authored-by: Logan Markewich <[email protected]>
Description
In the `astream_chat` and `stream_chat` functions in `llama_index/llms/bedrock_converse/base.py`, a `ChatResponse` was yielded before the LLM finished streaming, resulting in a missing input field in the generated tool call. Now, when the LLM starts a tool call, we check if it is complete before yielding the `ChatResponse`.

Fixes #18775.
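As a rough illustration of the buffering pattern the description refers to (not the PR's actual diff), here is a minimal sketch that accumulates a tool call's partial input across streaming events and only yields it once its content block closes. The event shapes follow the AWS Bedrock ConverseStream API; the function name and output format are assumptions for the example:

```python
import json

def iter_complete_tool_calls(stream_events):
    """Yield a tool call only after its content block has finished streaming."""
    current = None
    for event in stream_events:
        if "contentBlockStart" in event:
            start = event["contentBlockStart"]["start"]
            if "toolUse" in start:
                current = {
                    "toolUseId": start["toolUse"]["toolUseId"],
                    "name": start["toolUse"]["name"],
                    "input": "",
                }
        elif "contentBlockDelta" in event and current is not None:
            delta = event["contentBlockDelta"]["delta"]
            if "toolUse" in delta:
                # Input arrives as partial JSON strings; buffer instead of yielding.
                current["input"] += delta["toolUse"].get("input", "")
        elif "contentBlockStop" in event and current is not None:
            # The block is complete, so the input is now safe to parse and emit.
            arguments = json.loads(current["input"]) if current["input"] else {}
            yield {**current, "input": arguments}
            current = None
```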
New Package?
Did I fill in the `tool.llamahub` section in the `pyproject.toml` and provide a detailed README.md for my new integration or package?

Version Bump?

Did I bump the version in the `pyproject.toml` file of the package I am updating? (Except for the `llama-index-core` package.)

Type of Change
Please delete options that are not relevant.
How Has This Been Tested?
Your pull request will likely not be merged unless it is covered by some form of impactful unit testing.
Suggested Checklist:
- `uv run make format; uv run make lint` to appease the lint gods