server: remove the verbose_prompt parameter#21059

Merged
ggerganov merged 4 commits into ggml-org:master from aisk:server-verbose-prompt on Mar 27, 2026
Conversation

@aisk
Contributor

@aisk aisk commented Mar 27, 2026

Overview

When the verbose_prompt command line argument is passed, print the prompt in llama-server. This should close, or partially close, #19653.

Requirements

  • I have read and agree with the contributing guidelines
  • AI usage disclosure: YES, used Codex to review the changes.

@aisk aisk requested a review from a team as a code owner March 27, 2026 08:03
@ggml-gh-bot

ggml-gh-bot Bot commented Mar 27, 2026

Hi @aisk, thanks for your contribution!

Per our contribution guidelines, the automated PR checker found the following issue(s) that need your attention:

  • Multiple open PRs from a new contributor: We limit new contributors (those without a previously merged PR) to 1 open PR at a time. You currently have 2 open PRs.

Please note that maintainers reserve the right to make final decisions on PRs. If you believe there is a mistake, please comment below.

Contributor

@ServeurpersoCom ServeurpersoCom left a comment


LGTM. The TODO literally asked for this: verbose_prompt was already parsed but never wired up server-side. Clean fix; SRV_INF is the right level since the user explicitly opted in.

@aisk
Contributor Author

aisk commented Mar 27, 2026

> Hi @aisk, thanks for your contribution!
>
> Per our contribution guidelines, the automated PR checker found the following issue(s) that need your attention:
>
> * **Multiple open PRs from a new contributor**: We limit new contributors (those without a previously merged PR) to **1 open PR at a time**. You currently have 2 open PRs.
>
> Please note that maintainers reserve the right to make final decisions on PRs. If you believe there is a mistake, please comment below.

I think this is not accurate; I already have two merged PRs: https://github.com/ggml-org/llama.cpp/pulls?q=is%3Apr+is%3Amerged+author%3Aaisk+

@CISC
Member

CISC commented Mar 27, 2026

Sorry, but we can't accept this PR, see #19655 and #19752

Regarding the bot notice: there's also a time factor in play. Don't worry too much about it; they are just reminders.

@CISC CISC closed this Mar 27, 2026
@aisk
Contributor Author

aisk commented Mar 27, 2026

@CISC Hi, thank you for your review. If we don't plan to support it, can we just remove this parameter from llama-server instead of silencing it and confusing users?

@aisk aisk deleted the server-verbose-prompt branch March 27, 2026 09:42
@CISC
Member

CISC commented Mar 27, 2026

> @CISC Hi, thank you for your review. If we don't plan to support it, can we just remove this parameter from llama-server instead of silencing it and confusing users?

Yes, as suggested here #19752 (comment) :)

@aisk
Contributor Author

aisk commented Mar 27, 2026

Hi @CISC, I updated this PR to just remove the verbose_prompt parameter; can you re-open it?

@aisk aisk changed the title server: respect the verbose_prompt parameter server: remove the verbose_prompt parameter Mar 27, 2026
@CISC CISC reopened this Mar 27, 2026
@CISC CISC requested a review from a team as a code owner March 27, 2026 10:27
@ggerganov ggerganov merged commit 48cda24 into ggml-org:master Mar 27, 2026
44 of 45 checks passed
slartibardfast pushed a commit to slartibardfast/llama.cpp that referenced this pull request Apr 12, 2026
* server: respect the verbose_prompt parameter

* Revert "server: respect the verbose_prompt parameter"

This reverts commit 8ed885c.

* Remove --verbose-prompt parameter from llama-server

* Using set_examples instead of set_excludes
Seunghhon pushed a commit to Seunghhon/llama.cpp that referenced this pull request Apr 26, 2026
rsenthilkumar6 pushed a commit to rsenthilkumar6/llama.cpp that referenced this pull request May 1, 2026
ljubomirj pushed a commit to ljubomirj/llama.cpp that referenced this pull request May 6, 2026
Development

Successfully merging this pull request may close these issues.

Misc. bug: verbose logging doesn't work for llama server

5 participants