
Allow arbitrary arguments order for Q3C, Q3CN, and Qwen3.5#1352

Merged
ikawrakow merged 1 commit into ikawrakow:main from sayap:qwen3_tool_call_relax_arg
Mar 3, 2026
Conversation

sayap (Contributor) commented Mar 2, 2026

This should fix the read-file offset/limit issue, where the tool definition lists offset before limit while the model emits limit before offset.

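To illustrate the failure mode, here is a minimal Python sketch (not the PR's actual C++ code; the helper name is hypothetical): a positional parser breaks when the model emits arguments in a different order than the tool definition, while matching arguments by name accepts any order.

```python
# Illustrative sketch: match emitted tool-call arguments by NAME against
# the tool definition, so emission order no longer matters.
# (match_args is a hypothetical helper, not the actual parser API.)

def match_args(defined_params, emitted_args):
    """Map emitted (name, value) pairs onto defined parameter names,
    regardless of the order the model emitted them in."""
    defined = set(defined_params)
    result = {}
    for name, value in emitted_args:
        if name not in defined:
            raise ValueError(f"unknown argument: {name}")
        result[name] = value
    return result

# The tool definition lists offset before limit...
params = ["offset", "limit"]
# ...but the model emits limit first; name-based matching still works.
args = match_args(params, [("limit", 100), ("offset", 0)])
```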
sayap (Contributor, Author) commented Mar 2, 2026

ubergarm (Contributor) commented Mar 2, 2026

Ahh, I had trouble with various qwen35moe models looping when experimenting/testing with opencode as well.

I have not tried the work-around mentioned in the Q3CN thread, ~/.opencode/plugins/read-limit.ts.

In a single test this seems to fix the issue, but I'll have to try some more tests. I also didn't regression-test other models.


Wait, @sayap, do I need to run llama-server --relax_arg true or something to use this? I did not do that in my test, so it might be moot.

sayap (Contributor, Author) commented Mar 3, 2026

Thanks for testing. I just checked the mainline autoparser branch as well, and I think it still lacks the parallel tool calls fix from #1300 and the arbitrary arguments order fix from this PR.

sayap (Contributor, Author) commented Mar 3, 2026

do i need to run llama-server --relax_arg true or something to use this?

No: form.relax_arg = true is set in common_chat_params_init_qwen3_coder_xml, so no CLI change is needed.
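As a sketch of what such a flag might control (Python for illustration; function names mimic but are not the actual ik_llama.cpp C++ API): the flag is enabled during chat-format initialization, and the parser only enforces definition order when it is off.

```python
# Hypothetical sketch of a relax_arg flag set at init time, so no CLI
# option is needed. Names are illustrative, not the real API.

def init_qwen3_coder_params():
    """Mimics common_chat_params_init_qwen3_coder_xml enabling relax_arg."""
    return {"relax_arg": True}

def parse_tool_args(emitted, defined, relax_arg):
    """Strict mode requires definition order; relaxed accepts any order."""
    if not relax_arg:
        names = [name for name, _ in emitted]
        if names != defined[: len(names)]:
            raise ValueError("arguments out of definition order")
    return dict(emitted)

params = init_qwen3_coder_params()
# limit emitted before offset: rejected in strict mode, accepted when relaxed.
args = parse_tool_args([("limit", 100), ("offset", 0)],
                       ["offset", "limit"],
                       relax_arg=params["relax_arg"])
```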

ikawrakow merged commit ea3e8e3 into ikawrakow:main on Mar 3, 2026.
