Replies: 1 comment
-
My guess would be that it is still running the large prompt processing operators on the GPU, so the CPU doesn't need all of its threads. I wouldn't worry too much about these usage indicators as long as the performance you are getting is in line with what you expect.
-
Something I'm observing, unsure if this is expected behavior or not. Using b6719, built for Vulkan, presently with a Radeon RX 6600 XT. I don't know if this is worthy of creating an issue or not.
When I'm using partial GPU offloading (such as for MoE, or with `-nkvo 1`), I'm noticing that the CPU usage during prompt processing is often lower than during token generation. It seems like it's using 3-4 threads for PP, but it respects my `-t 9` setting during generation. I'm observing this simply by watching `amdgpu_top` while llama.cpp is running. An example of what I'm seeing:
This is while it's running the 2048 prompt size. I'd expect that CPU usage to be in the 800-900%, but it's hovering right now between 200-300%.
It also doesn't seem to be consistent. Sometimes it will use the full CPU threads for PP.
Is there a better way to test/observe what's happening?
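One way to observe this more directly than eyeballing `amdgpu_top` is to count how many threads of the llama.cpp process are actually burning CPU at a given moment. A minimal sketch below, assuming Linux (it reads `/proc/<pid>/task/<tid>/stat`); the PID you pass in would be whatever `pgrep llama` returns — none of this is part of llama.cpp itself, it's just a generic per-thread CPU sampler:

```python
import os
import time

def busy_threads(pid, interval=1.0, threshold_ticks=10):
    """Count threads of `pid` that consumed CPU over `interval` seconds.

    Samples utime+stime per thread from /proc/<pid>/task/<tid>/stat twice,
    and counts threads whose tick delta exceeds `threshold_ticks`.
    Linux-only; ticks are typically 100/s (os.sysconf("SC_CLK_TCK")).
    """
    def sample():
        ticks = {}
        task_dir = f"/proc/{pid}/task"
        for tid in os.listdir(task_dir):
            try:
                with open(f"{task_dir}/{tid}/stat") as f:
                    # Split after the "(comm)" field so spaces in the
                    # thread name can't shift the field indices.
                    fields = f.read().rsplit(")", 1)[1].split()
                # fields[11] = utime, fields[12] = stime
                ticks[tid] = int(fields[11]) + int(fields[12])
            except (FileNotFoundError, IndexError):
                pass  # thread exited between listdir() and open()
        return ticks

    before = sample()
    time.sleep(interval)
    after = sample()
    return sum(1 for tid, t in after.items()
               if t - before.get(tid, t) > threshold_ticks)
```

Calling `busy_threads(pid)` once per second while a long prompt is processing, and again during generation, would show whether PP really is running on only 3-4 threads while generation uses all 9.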