Hi everyone 👋
I noticed something small but useful that could improve CLI usability.
When using llama-mtmd-cli.exe, initialization messages go to StandardError, and model replies go to StandardOutput — perfect.
But when I run the /image [ImagePath] command, all the image-processing logs (like “encoding image slice…” and “decoding image batch…”) are also printed to StandardOutput, mixed with the assistant’s reply.
Example in terminal:
User: Analyze the image and describe what you see
Assistant: D:\dev\Apps\IRIS\Debug\net10.0-windows\Temp\img_prompt.png image loaded
encoding image slice...
image slice encoded in 1242 ms
decoding image batch 1/1, n_tokens_batch = 256
image decoded (batch 1/1) in 15 ms
The image shows the side of a cat’s face, with a brown and gray fur pattern and bright blue eyes. The background is black, creating a dramatic lighting effect.
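To make the request concrete, here is a minimal sketch of the separation I have in mind. It is not the actual llama.cpp logging code (the project has its own logging facilities); the `log_status` helper and the messages are purely illustrative. Diagnostics would go to StandardError, while only the assistant's reply lands on StandardOutput:

```cpp
#include <cstdarg>
#include <cstdio>

// Illustrative only -- not the real llama-mtmd-cli code.
// Progress/diagnostic messages go to stderr so that stdout
// carries nothing but the assistant's reply.
static void log_status(const char * fmt, ...) {
    va_list args;
    va_start(args, fmt);
    vfprintf(stderr, fmt, args);   // diagnostics -> StandardError
    va_end(args);
}

int main() {
    log_status("encoding image slice...\n");
    log_status("image slice encoded in %d ms\n", 1242);

    // only the model's actual response goes to StandardOutput
    printf("The image shows the side of a cat's face...\n");
    return 0;
}
```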
Would it be possible to redirect those internal image-processing logs to StandardError (or another stream)?
That would keep StandardOutput clean and make it easier to parse or display only the model’s actual response in chat-based UIs.
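To show what that would enable on the integration side, here is a rough sketch of a host program that reads only the CLI's StandardOutput. The command line below is a placeholder (I'm using the one-shot --image/-p form rather than the interactive /image command, and the model/mmproj paths are made up); the point is simply that popen/_popen captures stdout only, so with clean streams the text read here would be exactly the reply:

```cpp
#include <cstdio>
#include <string>

int main() {
    // Placeholder command line -- adjust paths and arguments as needed.
#ifdef _WIN32
    FILE * pipe = _popen("llama-mtmd-cli.exe -m model.gguf --mmproj mmproj.gguf --image cat.png -p \"describe the image\"", "r");
#else
    FILE * pipe =  popen("./llama-mtmd-cli -m model.gguf --mmproj mmproj.gguf --image cat.png -p \"describe the image\"", "r");
#endif
    if (!pipe) {
        return 1;
    }

    // Everything the child writes to stderr (init + image-processing logs)
    // stays on the console; only its stdout ends up in `reply`.
    std::string reply;
    char buf[4096];
    while (fgets(buf, sizeof(buf), pipe)) {
        reply += buf;
    }

#ifdef _WIN32
    _pclose(pipe);
#else
    pclose(pipe);
#endif

    printf("assistant reply:\n%s", reply.c_str());
    return 0;
}
```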
Small tweak — big quality-of-life improvement for integrations.
Thanks for all your amazing work on llama.cpp! 🙏