
Conversation


@adrianlyjak adrianlyjak commented May 7, 2025

Description

Adds a parameter, tool_required, that adjusts the LLM call to require a tool call when true. This is useful for features like structured LLMs, which frequently rely on tool calling. There is currently an ad hoc tool_choice parameter that gets passed around, but its handling is inconsistent between APIs. This change implements the provider-specific parameter for each LLM (usually called tool_choice, though providers expect varied values).

This intentionally keeps the API simple and small: just require a tool or not. If you don't want a tool, don't pass tools.
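As a sketch of that intended semantics, the boolean flag could translate into an OpenAI-style tool_choice value roughly like this (resolve_tool_choice is a hypothetical helper for illustration, not the PR's actual implementation):

```python
# Hypothetical helper illustrating the semantics of tool_required; the real
# integrations each translate the flag into their own provider parameter.
def resolve_tool_choice(tools, tool_required):
    if not tools:
        return None  # no tools passed: nothing to require
    # "required" forces a tool call; "auto" lets the model decide
    return "required" if tool_required else "auto"
```

The point is that the public surface stays a single boolean; everything else is derived from whether tools were provided at all.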

New Package?

Not a new package

Version Bump?

Not yet. Should I do that now? Each package's minimum llama-index-core version presumably needs to be updated to the latest release.

  • Yes
  • No

Type of Change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

I added some integration-style tests and ran them with API keys. I am a bit concerned about accidentally breaking some of the harder-to-test integrations; I could back those changes out for now.

  • I added new unit tests to cover this change
  • I believe this change is already covered by existing unit tests

Suggested Checklist:

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added Google Colab support for the newly added notebooks.
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I ran make format; make lint to appease the lint gods

@adrianlyjak adrianlyjak force-pushed the tool_required branch 3 times, most recently from b2383fb to ed6e0b1 on May 8, 2025 15:08
@adrianlyjak adrianlyjak force-pushed the tool_required branch 5 times, most recently from 6ec8783 to 3cb762c on May 28, 2025 15:34
@adrianlyjak adrianlyjak marked this pull request as ready for review May 28, 2025 17:00
@dosubot dosubot bot added the size:XXL This PR changes 1000+ lines, ignoring generated files. label May 28, 2025
@adrianlyjak adrianlyjak changed the title from "adding functionality to require tool calling" to "Add require_tool param to function calling LLMs" May 28, 2025
Contributor Author

Hmm, I didn't read this quite right. We might want to retain the predefined tool-choice values to maintain existing behavior, but honestly I just want to delete all of the existing tool_choice code. The only real use cases I can think of are "use a tool" and "use a tool or not." Everything else seems much more intuitive to control via the provided tools list.

Contributor Author

That being said, this is a deprecated class with a warning

Collaborator

Yea at this point, it might not be worth debugging lol

The point though is that with structured_predict, there is only one choice, so it's trying to force/ensure a tool call

Collaborator

Should we raise an error if it's set to True?

Contributor Author

@logan-markewich My thinking is that this will frequently get set by internal library code more than by user code. For example, we'll want to migrate StructuredLLM to set tool_required=True (it currently sets tool_choice="function_name" or something similar and hopes for the best from the LLM implementation). It seemed better to possibly get a tool response anyway rather than blow up, which matches the current behavior.

Contributor Author

Otherwise, the alternative is to somehow advertise whether each LLM supports tool_required and check that before providing it, which seems like a lot of gymnastics for what is mostly just a few underused LLMs.
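For illustration, here is roughly how the single boolean might fan out into provider-specific parameters. The mapping values below are assumptions based on the public OpenAI and Anthropic chat APIs, not this PR's actual code:

```python
# Illustrative mapping only: the values mirror the public OpenAI and
# Anthropic APIs as commonly documented, not the PR's real implementation.
PROVIDER_TOOL_CHOICE = {
    "openai":    {True: "required",      False: "auto"},
    "anthropic": {True: {"type": "any"}, False: {"type": "auto"}},
}

def provider_kwargs(provider, tool_required):
    # Translate the single boolean into the provider's native parameter.
    return {"tool_choice": PROVIDER_TOOL_CHOICE[provider][tool_required]}
```

A provider that has no native way to force a tool call would simply ignore the flag under this design, rather than raising, which is the trade-off discussed above.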

Collaborator

That's fair!

@dosubot dosubot bot added size:XL This PR changes 500-999 lines, ignoring generated files. and removed size:XXL This PR changes 1000+ lines, ignoring generated files. labels May 29, 2025
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label May 29, 2025
@logan-markewich logan-markewich merged commit 26f2370 into run-llama:main May 30, 2025
10 checks passed

rosspurdonubertasconsulting commented Jun 1, 2025

This release breaks the llama-index-llms-anthropic package. I have had to revert back to release 0.12.37 for things to work again.

Traceback (most recent call last):
  File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
  File "/usr/local/lib/python3.11/site-packages/llama_index/core/instrumentation/dispatcher.py", line 287, in handle_future_result
    raise exception
  File "/usr/local/lib/python3.11/site-packages/llama_index/core/workflow/workflow.py", line 408, in _run_workflow
    raise exception_raised
  File "/usr/local/lib/python3.11/site-packages/llama_index/core/workflow/context.py", line 637, in _step_worker
    raise WorkflowRuntimeError(
llama_index.core.workflow.errors.WorkflowRuntimeError: Error in step 'run_agent_step': AsyncMessages.create() got an unexpected keyword argument 'tool_required'
INFO:root:stream:false

Posting in case anyone else runs into the same issue.

Contributor Author

@rosspurdonubertasconsulting thanks for the report.
It seems there's a version-compatibility issue that I didn't foresee: when the llama-index-core package has the tool_required parameter but runs against an old version of the anthropic integration, the integration passes the extra argument through to the internal Anthropic message constructor.

Rather than rolling back, llama-index 0.12.39 should work if you also upgrade to the latest llama-index-llms-anthropic (that's 0.7.0). Let me know if that works for you.
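The incompatibility above can be reproduced in isolation. The function names below are hypothetical stand-ins for the old client and integration, not the actual Anthropic code:

```python
# Minimal reproduction of the failure mode (names are hypothetical).
def old_client_create(*, model, messages):
    # Stands in for the old client's message constructor, whose keyword-only
    # signature predates tool_required.
    return {"model": model, "messages": messages}

def call_old_integration(**llm_kwargs):
    # Stands in for an old integration that forwards every kwarg unchanged.
    return old_client_create(**llm_kwargs)

# call_old_integration(model="claude", messages=[], tool_required=True)
# raises TypeError: unexpected keyword argument 'tool_required'
```

Upgrading the integration alongside core fixes this because the new integration accepts (and consumes) the new keyword instead of forwarding it blindly.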
