Conversation
Yep, seems something is not quite right yet.
- Add new kimi-k2.5 keys to mtmd convert
- Update V_MMPROJ tensor mapping for new mm_projector.proj keys
- Update V_M_IMP_NORM for new mm_projector.pre_norm key
Vision is working now for images, uploaded MMPROJ files to my repo. @ngxson I left comments about the places that confused me the most.
I think the rest of the changes are pretty sane.
Great work AesSedai! I just downloaded the BF16 for mmproj. Is there any reason to get anything higher than Q8_0? What about ctk/ctv? Is there any good reason to run them in f16 instead of lower, since the model is INT4?
@segmond Thanks. For the MMPROJ, some cards are more or less compatible with different versions; the BF16s don't work very well on my 3090s IIRC. The Q8_0 should be fine to use quality-wise. Regarding CTK / CTV, you do not want to quantize the cache on this model at all. The model weight quantization is different from the cache quantization: MLA / GQA already applies some pretty severe compression to the cache, so further quantizing it degrades it more. Besides, the context is very lightweight anyway. Something like 165k context in FP16 is ballpark 10GB or so.
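That ballpark can be checked with a quick back-of-the-envelope calculation. The figures below are assumptions pulled from the llama.cpp load log later in this thread (61 layers, a single f16 K entry of width 576 per token, and an empty V cache under MLA + flash attention):

```python
# Back-of-the-envelope MLA KV cache size. Assumed figures come from the
# load log later in this thread: 61 layers, one latent K entry of width
# 576 per token, f16 (2 bytes/element), and no V cache (MLA + FA store K only).
def mla_kv_cache_mib(n_ctx: int, n_layer: int = 61, k_width: int = 576,
                     bytes_per_elem: int = 2) -> float:
    return n_ctx * n_layer * k_width * bytes_per_elem / 2**20

# Full 262144-token context: reproduces the "17568.00 MiB" KV buffer in the log.
print(mla_kv_cache_mib(262144))          # 17568.0
# 165k context: roughly 11 GiB, i.e. the "ballpark 10GB" above.
print(round(mla_kv_cache_mib(165_000)))  # 11058
```

At the full 262144-token training context this matches the llama_kv_cache buffer size reported in the CPU-only log further down.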
@tempgidam Huh, interesting. I haven't seen any double-vision issues on my setup. I did update the first shard yesterday evening to adjust the chat template since MoonshotAI updated the media token. There's a minimal prompt on my ST setup ("A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions."). There was also a fix for the thinking block issue in the chat template that required an update a couple of days ago. Could you download a fresh copy of the first shard of the quant and re-test? I'd be surprised if there wasn't a bug lurking in there somewhere, which is why I'm waiting on ngxson to review.
I used AesSedai's model (IQ2_XXS + Q8_0 mmproj) and tempgidam's image of a circle. Then prompted this:
And it generated me this: <svg viewBox="0 0 400 300" xmlns="http://www.w3.org/2000/svg">
<circle cx="100" cy="150" r="80" fill="none" stroke="black" stroke-width="15"/>
<circle cx="320" cy="150" r="80" fill="none" stroke="black" stroke-width="15"/>
</svg>

I had to disable the thinking, because with thinking enabled it just rambled on and on for 3000 tokens, endlessly correcting itself, and then gave me an unusable SVG in the end. However, in its ramblings it said this:
...which doesn't correspond to the image above, but still mentions that one of the circles is cut off. Only in the image, it is the right circle that is cut off slightly. So maybe there is something to it, or maybe it's just a hallucination because the quant is too small.

The whole thinking process of the failed run (which was NOT used to generate the above image):

The user wants me to create an approximate SVG file of the image shown. The image displays two black outlined circles (or ring shapes) on a white background. They appear to be positioned side by side, with the left one partially cut off on the left side and the right one fully visible or partially cut off on the right. Actually, looking more carefully, it seems like two circles with thick black strokes, positioned horizontally next to each other. Let me analyze the image:
To recreate this in SVG:
Looking at the proportions:
Actually, looking again, it seems the left circle is positioned so we see mostly its right side, and the right circle is positioned so we see its left side? No, wait. Let me reconsider. The image shows:
Wait, looking carefully at the original image description and the visual content:
Actually, looking at the image again, it seems like:
But wait, the left shape looks like it could be a full circle with the left side cut off by the image edge, and the right shape is a full circle. Let me create an SVG with:
Actually, looking at the aspect ratio, the image is wider than it is tall? No, the provided image seems to have a certain aspect ratio. Let me assume the viewBox is something like 0 0 200 100. Better approximation:
Looking at the image once more:
Actually, looking at the bottom alignment:
Let me go with:
But to make it look like the image where the left one is cut off:
Wait, looking at the image provided, the left circle appears to be positioned so we see the right side of it (like a "D" shape), and the right circle is a full "O" shape. Actually, no - the left one looks like it might be a full circle with the left side cut off by the frame. Let me create an SVG that captures the essence:
<svg width="200" height="100" viewBox="0 0 200 100" xmlns="http://www.w3.org/2000/svg">
<!-- Left circle, partially cut off -->
<circle cx="40" cy="50" r="35" fill="none" stroke="black" stroke-width="8" />
<!-- Right circle -->
<circle cx="140" cy="50" r="35" fill="none" stroke="black" stroke-width="8" />
</svg>

But looking at the image, the left circle might be positioned at around x=30 (if width is 100), and the right at x=70. Actually, the image looks like it might be showing:
Let me adjust:
Stroke width should be roughly 1/5 to 1/4 of the radius? Looking at the image, the stroke is quite thick. If radius is 40, stroke might be 8-10. Let me provide a responsive SVG that approximates the visual: <svg viewBox="0 0 160 100" xmlns="http://www.w3.org/2000/svg">
<!-- Left circle (partially visible) -->
<circle cx="35" cy="50" r="30" fill="none" stroke="black" stroke-width="8" />
<!-- Right circle -->
<circle cx="115" cy="50" r="30" fill="none" stroke="black" stroke-width="8" />
</svg>

This should create two thick-outlined circles, with the left one partially cut off on the left side if the viewBox starts at 0 and the circle extends to x=5 (35-30). Actually, looking at the image one more time, I think the left circle is positioned such that its left side is cut off, showing a "C" shape (or rather, the right side of a circle), and the right circle is fully visible. The vertical position seems roughly centered. Let me refine:
Wait, the spacing between them looks like roughly one radius or so. Final check:
So my SVG should be: <svg width="300" height="150" viewBox="0 0 300 150" xmlns="http://www.w3.org/2000/svg">
<circle cx="75" cy="75" r="60" fill="none" stroke="black" stroke-width="12" />
<circle cx="225" cy="75" r="60" fill="none" stroke="black" stroke-width="12" />
</svg>

But with the left one positioned so its left side is cut off. So maybe cx="60" so left edge is at 0. Actually, looking at the original image, the left circle seems to be positioned with roughly 1/4 to 1/3 of it visible? No, it looks like more than half is visible. Maybe 2/3 or 3/4 of the circle is visible on the left. Let me go with:
Wait, looking at the provided image crops:
So the left circle is indeed partially cut off on the left side of the image. I'll provide an SVG with viewBox="0 0 200 100", a circle at (50,50) with r=40 and another at (150,50) with r=40, stroke-width around 8. Actually, to make the left one cut off: cx=40, r=40 (so x from 0 to 80). Final SVG:

<svg viewBox="0 0 200 100" xmlns="http://www.w3.org/2000/svg">
<circle cx="···" cy="50" r="40" fill="none" stroke="black" stroke-width="8" />
<circle cx="160" cy="50" r="40" fill="none" stroke="black" stroke-width="8" />
</svg>

With cx="40" for the left one (so it extends from 0 to 80), and cx="160" for the right one (80 to 160? No, 120 to 200). Wait, if cx=160 and r=40, it goes from 120 to 200. Spacing: from 80 (right edge of left) to 120 (left edge of right) is 40 units gap. That seems like a big gap. In the image, the gap looks smaller than the radius. Maybe:
This looks better. Or simply: <svg viewBox="0 0 180 100" xmlns="http://www.w3.org/2000/svg">
<circle cx···" cy="50" r="35" fill="none" stroke="black" stroke-width="8"/>
<circle cx="130" cy="50" r="35" fill="none" stroke="black" stroke-width="8"/>
</svg>

Left circle at 45: range 10-80. Actually, looking at the image, the gap seems to be about one stroke-width or so, or maybe 1/4 of the diameter. I'll provide a clean, simple approximation that captures the essence.

<svg viewBox="0 0 200 100" xmlns="http://www.w3.org/2000/svg">
<circle cx="45" cy="50" r="35" fill="none" stroke="black" stroke-width="8"/>
<circle cx="140" cy="50" r="35" fill="none" stroke="black" stroke-width="8"/>
</svg>
Okay, I replaced the first shard with the updated one (and removed the --chat-template-file parameter, which was loading the modified template from the HF discussion thread to fix the thinking tags with the old version). Sadly, this doesn't seem to change anything for me: I'm getting the same results across the board. I also tried some other things, but nothing changed. The model still sees artifacts and seams across all of them.
Thanks both for the feedback, I'll continue to tinker with this and see if I can figure out the issue :)
@tempgidam / @z80maniac I've got something that might be worth testing on your setups. In the following file, try swapping the numbers: I might have goofed that permute, and it may be swapping w/h incorrectly before the
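To illustrate what a goofed permute would do, here is a toy sketch (plain Python, not the actual mtmd/clip code): flattening a grid of image patches with width and height confused scrambles the patch order the encoder sees, which would plausibly produce the seams and doubled shapes reported above.

```python
# Toy illustration (not the actual mtmd code): a 2x3 grid of image patches
# flattened row-major vs. with w/h swapped (column-major).
w, h = 3, 2                       # patches per row, patches per column
grid = [[r * w + c for c in range(w)] for r in range(h)]

row_major = [grid[r][c] for r in range(h) for c in range(w)]   # correct order
swapped   = [grid[r][c] for c in range(w) for r in range(h)]   # w/h confused

print(row_major)  # [0, 1, 2, 3, 4, 5]
print(swapped)    # [0, 3, 1, 4, 2, 5] -> patches scrambled
```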
@AesSedai Yes, with this patch my test creates only one circle:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 200">
<circle cx="100" cy="100" r="80" fill="none" stroke="black" stroke-width="16"/>
</svg>

This SVG is an almost exact representation of the original picture. And in the reasoning it doesn't say that there are two circles:
@vaulter Honestly, I'm not sure this can be easily ported to ik_llama.cpp because of the mtmd changes. I haven't reviewed the ik_llama codebase, so I can't really estimate how much work that would be.
The newest set of changes addresses PR feedback and requires downloading updated mmproj files to be compatible. I've uploaded new mmprojs to my repo: https://huggingface.co/AesSedai/Kimi-K2.5-GGUF
https://huggingface.co/gghfez said he managed to "vibe merge" it into ik_llama.cpp
@jukofyork Has https://huggingface.co/gghfez shared the patch version for ik_llama.cpp somewhere? And did he make it based on the version here before or after today's update by AesSedai? |
I don't think so - he just mentioned it here: https://huggingface.co/jukofyork/creative-writing-control-vectors-v3.0/discussions/15#698044e0f6c43685d5426b03 I don't know his GitHub handle, or I would ping him to ask.
Uploaded a new set of mmproj files after removing the V/O permutes (it should have canceled out though, and there aren't any changes to the cpp code here so the old files should still work, just their format is a tad different). |
tools/mtmd/clip.cpp
Outdated
// Ensure input is contiguous (needed when using merged QKV with ggml_view)
if (!ggml_is_contiguous(cur)) {
    cur = ggml_cont(ctx0, cur);
}
Since #19338 only merged a few hours ago, I didn't have that one merged into this branch. I'll merge master and retry without that ggml_cont. Thanks for the callout!
@CISC I merged master locally, recompiled, and tested w/o the ggml_cont and the embeddings are different on my raccoon image test.
./build/bin/llama-mtmd-cli -m /mnt/srv/snowdrift/gguf/Kimi-K2.5-GGUF/aes_sedai/Kimi-K2.5-Q4_X.gguf --chat-template /mnt/srv/snowdrift/fp16/Kimi-K2.5/chat_template.jinja --jinja --mmproj /mnt/srv/snowdrift/gguf/Kimi-K2.5-GGUF/mmproj-Kimi-K2.5-F16.gguf --n-gpu-layers 999 --threads 54 --override-tensor "blk\..*_exps\.=CPU" --flash-attn on --image Kimi-K2.5/raccoon.png -p "Describe the contents of this image" --seed 42 --no-warmup
Before, working:
Token 0 (first 16 values): -0.022020 -0.102026 -0.162646 0.098094 -0.076318 -0.260308 -0.064937 0.102250 0.065213 0.213322 -0.105310 -0.137638 -0.101015 -0.098639 -0.135186 0.053588
Token 0 (last 16 values): -0.017177 0.046693 0.107003 0.026340 -0.026663 -0.013917 0.131319 -0.144277 -0.192712 0.093361 -0.096736 0.163151 0.216722 0.159008 -0.039639 0.178694
Stats: mean=0.000457, std=0.078247, min=-4.923960, max=8.580477, sum=6362.352051
After, broken:
Token 0 (first 16 values): -0.004227 0.017700 0.045288 0.014412 0.019241 -0.000450 -0.011505 0.052048 -0.007355 -0.014572 0.018112 -0.027519 -0.005363 -0.020798 0.010681 -0.027069
Token 0 (last 16 values): -0.025635 0.009247 -0.005234 0.031509 -0.006050 -0.023453 -0.008224 -0.012344 -0.011505 -0.009567 -0.009521 -0.001862 0.034729 -0.016602 -0.026917 -0.002815
Stats: mean=0.000353, std=0.068569, min=-1.949219, max=1.311523, sum=4924.728027
And in the text output the model now sees the image as garbled:
<think>The user wants me to describe the contents of the image. Looking at the image, I see a repeating pattern of what appears to be a stylized logo or emblem.
The pattern consists of:
1. A circular gear-like or cog-like shape with teeth around the perimeter
2. Inside the circle, there appears to be a stylized design that looks like a DNA double helix or some kind of molecular structure in the center
3. The colors seem to be shades of blue, purple, and possibly some green or teal, with a gradient effect
4. The pattern is tiled across the entire image in a grid formation
5. The background appears to be dark (black or very dark gray)
The design looks like it could be related to:
- Bioinformatics
- DNA/genetics
- Engineering/mechanical gears
- A specific software logo or institutional logo
The image is essentially a wallpaper or background pattern made up of this repeating emblem. The emblem has a technical, scientific appearance combining mechanical (gear) and biological (DNA helix) elements.
I should describe this clearly and objectively, noting the repeating pattern, the gear-like shapes, the central helix design, and the color scheme.</think>The image displays a **repeating tiled pattern** consisting of a stylized emblem arranged in a grid across a dark background.
Each individual emblem features:
- **An outer gear or cog-like ring** with evenly spaced teeth around the circumference, rendered in shades of blue and cyan
- **A central design** resembling a DNA double helix or molecular structure, depicted in purple, pink, and light blue tones
- **A gradient color scheme** transitioning from cyan/blue on the outer ring to purple/pink in the center
The pattern creates a wallpaper-like effect with rows of these circular logos aligned both horizontally and vertically, filling the entire frame. The overall aesthetic suggests themes of **bioinformatics, biotechnology, genetics, or synthetic biology**, combining mechanical (gear) and biological (DNA) imagery. The dark background makes the cool-colored emblems stand out prominently.
Ok, so we have another bug. @ORippler @JohannesGaessler
@jeffbolznv Mind testing on Vulkan?
Tested one more time with e06088da0fa86aa444409f38dff274904931c507 (the non-contig rope fix) with the ggml_cont re-enabled and the embedding values look right again:
Token 0 (first 16 values): -0.029358 -0.105713 -0.161133 0.100708 -0.075317 -0.264465 -0.062134 0.100281 0.069458 0.216919 -0.102356 -0.134155 -0.098579 -0.098022 -0.130737 0.051514
Token 0 (last 16 values): -0.015167 0.048523 0.111225 0.024429 -0.022324 -0.014832 0.134521 -0.149582 -0.187012 0.097748 -0.097107 0.166382 0.219757 0.160767 -0.041840 0.179504
Stats: mean=0.000452, std=0.078249, min=-4.953125, max=8.621094, sum=6304.381348
Correct text output
<think>The user wants me to describe the contents of the image. Let me analyze what I see:
1. **Main subject**: A raccoon is the central focus of the image.
2. **Position**: The raccoon is lying on a tree branch, draped over it in a relaxed or lazy manner.
3. **Physical appearance**:
- Gray fur with the characteristic black "mask" around its eyes
- White fur on its snout/muzzle area
- Ringed tail (though partially obscured)
- Small rounded ears with white fur on the edges
- Front paws hanging down from the branch
4. **Setting**:
- Outdoor, natural environment
- Tree branch is thick and brown with some moss or lichen
- Background is blurred (bokeh effect) showing green foliage and trees
- Appears to be a forest or wooded area
5. **Lighting**: Natural daylight, soft lighting
6. **Mood**: The raccoon looks sleepy, relaxed, or resting. Its posture suggests it's comfortable and possibly napping.
I should provide a clear, detailed description covering these elements.</think>The image shows a **raccoon** draped lazily over a thick, horizontal tree branch in what appears to be a forest or wooded setting.
The raccoon has the species' characteristic features: grayish-brown fur, a distinctive black "mask" marking around its eyes, and white fur on its snout and the edges of its ears. It is lying with its belly on the branch, its head resting forward, and its front paws hanging down limply on either side, giving it a very relaxed, sleepy, or exhausted appearance.
The tree branch is sturdy and covered with patches of moss or lichen. The background is softly blurred (bokeh effect), showing various shades of green from foliage and the darker brown of tree trunks, suggesting a lush, natural environment with dappled daylight filtering through the leaves. The overall mood of the image is peaceful and endearing, capturing a moment of rest in the wild.
So it's definitely the ggml_cont there making the difference. I've pushed the updated merge from master to this branch so this should be reproducible by just commenting that line out and testing.
Can you try this patch:
diff --git a/tools/mtmd/clip.cpp b/tools/mtmd/clip.cpp
index dae17c6fb..422a0e410 100644
--- a/tools/mtmd/clip.cpp
+++ b/tools/mtmd/clip.cpp
@@ -655,11 +655,6 @@ ggml_tensor * clip_graph::build_rope_2d(
const int64_t n_head = cur->ne[1];
const int64_t n_pos = cur->ne[2];
- // Ensure input is contiguous (needed when using merged QKV with ggml_view)
- if (!ggml_is_contiguous(cur)) {
- cur = ggml_cont(ctx0, cur);
- }
-
// for example, if we have cur tensor of shape (n_dim=8, n_head, n_pos)
// we will have a list of 4 inv_freq: 1e-0, 1e-1, 1e-2, 1e-3
// first half of cur will use 1e-0, 1e-2 (even)
@@ -677,8 +672,8 @@ ggml_tensor * clip_graph::build_rope_2d(
{
first = ggml_view_3d(ctx0, cur,
n_dim/2, n_head, n_pos,
- ggml_row_size(cur->type, n_dim),
- ggml_row_size(cur->type, n_dim*n_head),
+ cur->nb[1],
+ cur->nb[2],
0);
first = ggml_rope_ext(
ctx0,
@@ -696,8 +691,8 @@ ggml_tensor * clip_graph::build_rope_2d(
{
second = ggml_view_3d(ctx0, cur,
n_dim/2, n_head, n_pos,
- ggml_row_size(cur->type, n_dim),
- ggml_row_size(cur->type, n_dim*n_head),
+ cur->nb[1],
+ cur->nb[2],
n_dim/2 * ggml_element_size(cur));
second = ggml_rope_ext(
ctx0,
Sure, I'll give it a shot in a few hours once I'm back home from the office.
Fine by me, not many models using this anyway I think?
@CISC Just repeating myself earlier, but this is the first model to use the build_rope_2d + merged QKV combo.
Other models seem to use the combo ggml_rope_ext + merged QKV so they're fine
Sure, I meant build_rope_2d in general.
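For anyone following along, the stride issue behind the patch can be sketched outside ggml. This is a toy model with made-up sizes, not the real clip.cpp code: a view into a larger tensor keeps the parent's row stride (nb[1]), so deriving the stride from the view's own row length, which is what assuming contiguity amounts to, reads the wrong elements after the first row.

```python
# Toy stride model (made-up sizes, not ggml code). A view into a larger
# tensor has a row stride bigger than the view's row length; computing the
# stride from the row length (i.e. assuming contiguity) reads garbage
# past the first row.
data = list(range(24))   # parent buffer: 4 rows x 6 columns
parent_stride = 6        # the real row stride (nb[1]), in elements
view_cols = 4            # the view only covers 4 of the 6 columns

true_view  = [data[r * parent_stride + c] for r in range(4) for c in range(view_cols)]
wrong_view = [data[r * view_cols + c]     for r in range(4) for c in range(view_cols)]

print(true_view)   # [0, 1, 2, 3, 6, 7, 8, 9, 12, 13, 14, 15, 18, 19, 20, 21]
print(wrong_view)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
```

Passing cur->nb[1] / cur->nb[2] to ggml_view_3d corresponds to using parent_stride here; the recomputed ggml_row_size(...) strides correspond to the contiguity assumption, and the two only coincide when cur is contiguous, which is why the ggml_cont also masked the bug.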
With no ggml_cont and CUDA_VISIBLE_DEVICES= (CPU only), the embedding and text output are still wrong:
CPU only, no ggml_cont
$ CUDA_VISIBLE_DEVICES= MTMD_DEBUG_EMBEDDINGS=1 ./build/bin/llama-mtmd-cli -m /mnt/srv/snowdrift/gguf/Kimi-K2.5-GGUF/aes_sedai/Kimi-K2.5-Q4_X.gguf --chat-template /mnt/srv/snowdrift/fp16/Kimi-K2.5/chat_template.jinja --jinja --mmproj /mnt/srv/snowdrift/gguf/Kimi-K2.5-GGUF/mmproj-Kimi-K2.5-F16.gguf --threads 54 --flash-attn on --image Kimi-K2.5/raccoon.png -p "Describe the contents of this image" --seed 42 --no-warmup 2>&1 | tee ggml-cpu-only.log
ggml_cuda_init: failed to initialize CUDA: no CUDA-capable device is detected
build: 7986 (16010cba6) with GNU 14.2.1 for Linux x86_64
common_init_result: fitting params to device memory, for bugs during this step try to reproduce them with -fit off, or provide --verbose logs if the bug only occurs with -fit on
llama_params_fit_impl: no devices with dedicated memory found
llama_params_fit: successfully fit params to free device memory
llama_params_fit: fitting params to free memory took 19.65 seconds
llama_model_loader: loaded meta data with 49 key-value pairs and 1096 tensors from /mnt/srv/snowdrift/gguf/Kimi-K2.5-GGUF/aes_sedai/Kimi-K2.5-Q4_X.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = deepseek2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.size_label str = 384x14B
llama_model_loader: - kv 3: general.license str = other
llama_model_loader: - kv 4: general.license.name str = modified-mit
llama_model_loader: - kv 5: general.tags arr[str,1] = ["image-text-to-text"]
llama_model_loader: - kv 6: deepseek2.block_count u32 = 61
llama_model_loader: - kv 7: deepseek2.context_length u32 = 262144
llama_model_loader: - kv 8: deepseek2.embedding_length u32 = 7168
llama_model_loader: - kv 9: deepseek2.feed_forward_length u32 = 18432
llama_model_loader: - kv 10: deepseek2.attention.head_count u32 = 64
llama_model_loader: - kv 11: deepseek2.attention.head_count_kv u32 = 1
llama_model_loader: - kv 12: deepseek2.rope.scaling.type str = yarn
llama_model_loader: - kv 13: deepseek2.rope.scaling.factor f32 = 64.000000
llama_model_loader: - kv 14: deepseek2.rope.scaling.original_context_length u32 = 4096
llama_model_loader: - kv 15: deepseek2.rope.scaling.yarn_beta_fast f32 = 32.000000
llama_model_loader: - kv 16: deepseek2.rope.scaling.yarn_beta_slow f32 = 1.000000
llama_model_loader: - kv 17: deepseek2.rope.freq_base f32 = 50000.000000
llama_model_loader: - kv 18: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 19: deepseek2.expert_used_count u32 = 8
llama_model_loader: - kv 20: deepseek2.expert_group_count u32 = 1
llama_model_loader: - kv 21: deepseek2.expert_group_used_count u32 = 1
llama_model_loader: - kv 22: deepseek2.expert_gating_func u32 = 2
llama_model_loader: - kv 23: deepseek2.leading_dense_block_count u32 = 1
llama_model_loader: - kv 24: deepseek2.vocab_size u32 = 163840
llama_model_loader: - kv 25: deepseek2.attention.q_lora_rank u32 = 1536
llama_model_loader: - kv 26: deepseek2.attention.kv_lora_rank u32 = 512
llama_model_loader: - kv 27: deepseek2.attention.key_length u32 = 576
llama_model_loader: - kv 28: deepseek2.attention.value_length u32 = 512
llama_model_loader: - kv 29: deepseek2.attention.key_length_mla u32 = 192
llama_model_loader: - kv 30: deepseek2.attention.value_length_mla u32 = 128
llama_model_loader: - kv 31: deepseek2.expert_feed_forward_length u32 = 2048
llama_model_loader: - kv 32: deepseek2.expert_count u32 = 384
llama_model_loader: - kv 33: deepseek2.expert_shared_count u32 = 1
llama_model_loader: - kv 34: deepseek2.expert_weights_scale f32 = 2.827000
llama_model_loader: - kv 35: deepseek2.expert_weights_norm bool = true
llama_model_loader: - kv 36: deepseek2.rope.dimension_count u32 = 64
llama_model_loader: - kv 37: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.100000
llama_model_loader: - kv 38: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 39: tokenizer.ggml.pre str = kimi-k2
llama_model_loader: - kv 40: tokenizer.ggml.tokens arr[str,163840] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 41: tokenizer.ggml.token_type arr[i32,163840] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 42: tokenizer.ggml.merges arr[str,163328] = ["Ġ Ġ", "ĠĠ ĠĠ", "Ġ t", "i n",...
llama_model_loader: - kv 43: tokenizer.ggml.bos_token_id u32 = 163584
llama_model_loader: - kv 44: tokenizer.ggml.eos_token_id u32 = 163585
llama_model_loader: - kv 45: tokenizer.ggml.padding_token_id u32 = 163839
llama_model_loader: - kv 46: tokenizer.chat_template str = {%- macro render_content(msg) -%}\n ...
llama_model_loader: - kv 47: general.quantization_version u32 = 2
llama_model_loader: - kv 48: general.file_type u32 = 7
llama_model_loader: - type f32: 365 tensors
llama_model_loader: - type q4_0: 180 tensors
llama_model_loader: - type q8_0: 551 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q8_0
print_info: file size = 543.62 GiB (4.55 BPW)
load: 0 unused tokens
load: printing all EOG tokens:
load: - 163585 ('[EOS]')
load: - 163586 ('<|im_end|>')
load: - 163593 ('[EOT]')
load: - 163839 ('[PAD]')
load: special tokens cache size = 256
load: token to piece cache size = 1.0606 MB
print_info: arch = deepseek2
print_info: vocab_only = 0
print_info: no_alloc = 0
print_info: n_ctx_train = 262144
print_info: n_embd = 7168
print_info: n_embd_inp = 7168
print_info: n_layer = 61
print_info: n_head = 64
print_info: n_head_kv = 1
print_info: n_rot = 64
print_info: n_swa = 0
print_info: is_swa_any = 0
print_info: n_embd_head_k = 576
print_info: n_embd_head_v = 512
print_info: n_gqa = 64
print_info: n_embd_k_gqa = 576
print_info: n_embd_v_gqa = 512
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 18432
print_info: n_expert = 384
print_info: n_expert_used = 8
print_info: n_expert_groups = 1
print_info: n_group_used = 1
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = yarn
print_info: freq_base_train = 50000.0
print_info: freq_scale_train = 0.015625
print_info: n_ctx_orig_yarn = 4096
print_info: rope_yarn_log_mul = 1.0000
print_info: rope_finetuned = unknown
print_info: model type = 671B
print_info: model params = 1.03 T
print_info: general.name = n/a
print_info: n_layer_dense_lead = 1
print_info: n_lora_q = 1536
print_info: n_lora_kv = 512
print_info: n_embd_head_k_mla = 192
print_info: n_embd_head_v_mla = 128
print_info: n_ff_exp = 2048
print_info: n_expert_shared = 1
print_info: expert_weights_scale = 2.8
print_info: expert_weights_norm = 1
print_info: expert_gating_func = sigmoid
print_info: vocab type = BPE
print_info: n_vocab = 163840
print_info: n_merges = 163328
print_info: BOS token = 163584 '[BOS]'
print_info: EOS token = 163585 '[EOS]'
print_info: EOT token = 163586 '<|im_end|>'
print_info: PAD token = 163839 '[PAD]'
print_info: LF token = 198 'Ċ'
print_info: FIM PAD token = 163839 '[PAD]'
print_info: EOG token = 163585 '[EOS]'
print_info: EOG token = 163586 '<|im_end|>'
print_info: EOG token = 163593 '[EOT]'
print_info: EOG token = 163839 '[PAD]'
print_info: max token length = 512
load_tensors: loading model tensors, this can take a while... (mmap = true, direct_io = false)
load_tensors: CPU_Mapped model buffer size = 556663.41 MiB
load_tensors: CPU_REPACK model buffer size = 544320.00 MiB
....................................................................................................
common_init_result: added [EOS] logit bias = -inf
common_init_result: added <|im_end|> logit bias = -inf
common_init_result: added [EOT] logit bias = -inf
common_init_result: added [PAD] logit bias = -inf
llama_context: constructing llama_context
llama_context: setting new yarn_attn_factor = 1.0000 (mscale == 1.0, mscale_all_dim = 1.0)
llama_context: n_seq_max = 1
llama_context: n_ctx = 262144
llama_context: n_ctx_seq = 262144
llama_context: n_batch = 2048
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = enabled
llama_context: kv_unified = false
llama_context: freq_base = 50000.0
llama_context: freq_scale = 0.015625
llama_context: CPU output buffer size = 0.62 MiB
llama_kv_cache: CPU KV buffer size = 17568.00 MiB
llama_kv_cache: size = 17568.00 MiB (262144 cells, 61 layers, 1/1 seqs), K (f16): 17568.00 MiB, V (f16): 0.00 MiB
sched_reserve: reserving ...
sched_reserve: CPU compute buffer size = 981.01 MiB
sched_reserve: graph nodes = 4791
sched_reserve: graph splits = 1
sched_reserve: reserve took 8.48 ms, sched copies = 1
mtmd_cli_context: chat template example:
<|im_system|>system<|im_middle|>You are a helpful assistant<|im_end|><|im_user|>user<|im_middle|>Hello<|im_end|><|im_assistant|>assistant<|im_middle|><think></think>Hi there<|im_end|><|im_user|>user<|im_middle|>How are you?<|im_end|><|im_assistant|>assistant<|im_middle|>
clip_model_loader: model name: Kimi K2.5
clip_model_loader: description:
clip_model_loader: GGUF version: 3
clip_model_loader: alignment: 32
clip_model_loader: n_tensors: 335
clip_model_loader: n_kv: 28
clip_model_loader: has vision encoder
clip_ctx: CLIP using CPU backend
load_hparams: projector: kimik25
load_hparams: n_embd: 1152
load_hparams: n_head: 16
load_hparams: n_ff: 4304
load_hparams: n_layer: 27
load_hparams: ffn_op: gelu
load_hparams: projection_dim: 7168
--- vision hparams ---
load_hparams: image_size: 896
load_hparams: patch_size: 14
load_hparams: has_llava_proj: 0
load_hparams: minicpmv_version: 0
load_hparams: n_merge: 2
load_hparams: n_wa_pattern: 0
load_hparams: image_min_pixels: 1568
load_hparams: image_max_pixels: 3211264
load_hparams: model size: 908.42 MiB
load_hparams: metadata size: 0.12 MiB
main: loading model: /mnt/srv/snowdrift/gguf/Kimi-K2.5-GGUF/aes_sedai/Kimi-K2.5-Q4_X.gguf
WARN: This is an experimental CLI for testing multimodal capability.
For normal use cases, please use the standard llama-cli
encoding image slice...
alloc_compute_meta: CPU compute buffer size = 264.77 MiB
alloc_compute_meta: graph splits = 1, nodes = 1034
warmup: flash attention is enabled
=== MTMD_DEBUG_EMBEDDINGS ===
Shape: [7168, 1944]
Token 0 (first 16 values): -0.031418 0.018485 0.076531 0.027357 0.050587 -0.002738 -0.014581 0.131898 -0.018651 -0.064568 0.038107 -0.068829 -0.001136 -0.065361 0.036655 -0.092244
Token 0 (last 16 values): -0.046038 0.025009 -0.036792 0.040030 0.005790 -0.034914 -0.026276 -0.044675 0.006448 0.010157 -0.003982 -0.000043 0.091428 -0.022575 -0.079341 0.009261
Stats: mean=0.000517, std=0.063481, min=-1.903979, max=1.135730, sum=7206.027344
=== END MTMD_DEBUG_EMBEDDINGS ===
image slice encoded in 15268 ms
decoding image batch 1/1, n_tokens_batch = 1944
image decoded (batch 1/1) in 64037 ms
<think>The user wants me to describe the contents of the image. Looking at the image, I see a repeating pattern of what appears to be a stylized icon or logo.
The pattern consists of:
1. A purple/lavender colored circular gear-like shape with multiple teeth or cogs around the perimeter
2. Inside the circle, there's a white silhouette of what looks like a classical building or structure with columns (perhaps resembling a temple, courthouse, or government building)
3. The background appears to be a dark green or olive color
4. There's a wavy, translucent overlay in shades of orange, yellow, and red that runs horizontally across the image, creating a distorted or glitch-like effect over the repeating pattern
The pattern is tiled repeatedly across the entire image, creating a wallpaper-like effect. The wavy distortion overlay gives it a somewhat psychedelic or digital art aesthetic.
Let me provide a detailed description of these elements.</think>The image displays a **repeating tiled pattern** consisting of stylized icons set against a dark olive-green background. Each tile features:
**Main Icon Elements:**
- A **purple/lavender gear or cogwheel** with rounded teeth forming a circular border
- Inside each gear is a **white silhouette of a classical building** resembling a Greek or Roman temple with columns and a triangular pediment
**Overlay Effect:**
- A **horizontal, wavy, translucent distortion band** runs across the entire image in shades of orange, yellow, and amber
- This creates a "glitch" or heat-wave effect that slightly distorts the underlying pattern, making the icons appear to shimmer or ripple like a mirage
**Overall Composition:**
The pattern is arranged in a tight grid, with the gear icons touching or nearly touching each other, creating a wallpaper-like texture. The color palette is limited primarily to dark green, purple, and white, with the warm-toned wavy overlay adding contrast and visual movement to the otherwise static geometric repetition. The aesthetic suggests a blend of industrial/technical imagery (gears) with classical architecture, filtered through a digital or psychedelic visual effect.
llama_perf_context_print: load time = 731834.48 ms
llama_perf_context_print: prompt eval time = 93832.96 ms / 1957 tokens ( 47.95 ms per token, 20.86 tokens per second)
llama_perf_context_print: eval time = 55847.56 ms / 428 runs ( 130.48 ms per token, 7.66 tokens per second)
llama_perf_context_print: total time = 787807.93 ms / 2385 tokens
llama_perf_context_print: graphs reused = 425
The patch using cur->nb[1] and cur->nb[2] fixed it: without the ggml_cont, the output is correct:
gpu w/ patch, no ggml_cont
$ MTMD_DEBUG_EMBEDDINGS=1 ./build/bin/llama-mtmd-cli -m /mnt/srv/snowdrift/gguf/Kimi-K2.5-GGUF/aes_sedai/Kimi-K2.5-Q4_X.gguf --chat-template /mnt/srv/snowdrift/fp16/Kimi-K2.5/chat_template.jinja --jinja --mmproj /mnt/srv/snowdrift/gguf/Kimi-K2.5-GGUF/mmproj-Kimi-K2.5-F16.gguf --threads 54 --flash-attn on --image Kimi-K2.5/raccoon.png -p "Describe the contents of this image" --seed 42 --no-warmup 2>&1 | tee ggml-rope-cur.log
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
build: 7986 (16010cba6) with GNU 14.2.1 for Linux x86_64
common_init_result: fitting params to device memory, for bugs during this step try to reproduce them with -fit off, or provide --verbose logs if the bug only occurs with -fit on
llama_params_fit_impl: projected memory use with initial parameters [MiB]:
llama_params_fit_impl: - CUDA0 (NVIDIA GeForce RTX 3090): 24135 total, 288978 used, -265106 free vs. target of 1024
llama_params_fit_impl: - CUDA1 (NVIDIA GeForce RTX 3090): 24135 total, 288217 used, -264345 free vs. target of 1024
llama_params_fit_impl: projected to use 577196 MiB of device memory vs. 47743 MiB of free device memory
llama_params_fit_impl: cannot meet free memory targets on all devices, need to use 531500 MiB less in total
llama_params_fit_impl: context size reduced from 262144 to 4096 -> need 20614 MiB less memory in total
llama_params_fit_impl: with only dense weights in device memory there is a total surplus of 27843 MiB
llama_params_fit_impl: filling dense-only layers back-to-front:
llama_params_fit_impl: - CUDA1 (NVIDIA GeForce RTX 3090): 62 layers, 11789 MiB used, 12081 MiB free
llama_params_fit_impl: - CUDA0 (NVIDIA GeForce RTX 3090): 0 layers, 6062 MiB used, 17809 MiB free
llama_params_fit_impl: converting dense-only layers to full layers and filling them front-to-back with overflow to next device/system memory:
llama_params_fit_impl: - CUDA0 (NVIDIA GeForce RTX 3090): 2 layers ( 0 overflowing), 15805 MiB used, 8066 MiB free
llama_params_fit_impl: - CUDA1 (NVIDIA GeForce RTX 3090): 60 layers (59 overflowing), 20136 MiB used, 3735 MiB free
llama_params_fit: successfully fit params to free device memory
llama_params_fit: fitting params to free memory took 11.35 seconds
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3090) (0000:06:10.0) - 23871 MiB free
llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 3090) (0000:06:11.0) - 23871 MiB free
llama_model_loader: loaded meta data with 49 key-value pairs and 1096 tensors from /mnt/srv/snowdrift/gguf/Kimi-K2.5-GGUF/aes_sedai/Kimi-K2.5-Q4_X.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = deepseek2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.size_label str = 384x14B
llama_model_loader: - kv 3: general.license str = other
llama_model_loader: - kv 4: general.license.name str = modified-mit
llama_model_loader: - kv 5: general.tags arr[str,1] = ["image-text-to-text"]
llama_model_loader: - kv 6: deepseek2.block_count u32 = 61
llama_model_loader: - kv 7: deepseek2.context_length u32 = 262144
llama_model_loader: - kv 8: deepseek2.embedding_length u32 = 7168
llama_model_loader: - kv 9: deepseek2.feed_forward_length u32 = 18432
llama_model_loader: - kv 10: deepseek2.attention.head_count u32 = 64
llama_model_loader: - kv 11: deepseek2.attention.head_count_kv u32 = 1
llama_model_loader: - kv 12: deepseek2.rope.scaling.type str = yarn
llama_model_loader: - kv 13: deepseek2.rope.scaling.factor f32 = 64.000000
llama_model_loader: - kv 14: deepseek2.rope.scaling.original_context_length u32 = 4096
llama_model_loader: - kv 15: deepseek2.rope.scaling.yarn_beta_fast f32 = 32.000000
llama_model_loader: - kv 16: deepseek2.rope.scaling.yarn_beta_slow f32 = 1.000000
llama_model_loader: - kv 17: deepseek2.rope.freq_base f32 = 50000.000000
llama_model_loader: - kv 18: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 19: deepseek2.expert_used_count u32 = 8
llama_model_loader: - kv 20: deepseek2.expert_group_count u32 = 1
llama_model_loader: - kv 21: deepseek2.expert_group_used_count u32 = 1
llama_model_loader: - kv 22: deepseek2.expert_gating_func u32 = 2
llama_model_loader: - kv 23: deepseek2.leading_dense_block_count u32 = 1
llama_model_loader: - kv 24: deepseek2.vocab_size u32 = 163840
llama_model_loader: - kv 25: deepseek2.attention.q_lora_rank u32 = 1536
llama_model_loader: - kv 26: deepseek2.attention.kv_lora_rank u32 = 512
llama_model_loader: - kv 27: deepseek2.attention.key_length u32 = 576
llama_model_loader: - kv 28: deepseek2.attention.value_length u32 = 512
llama_model_loader: - kv 29: deepseek2.attention.key_length_mla u32 = 192
llama_model_loader: - kv 30: deepseek2.attention.value_length_mla u32 = 128
llama_model_loader: - kv 31: deepseek2.expert_feed_forward_length u32 = 2048
llama_model_loader: - kv 32: deepseek2.expert_count u32 = 384
llama_model_loader: - kv 33: deepseek2.expert_shared_count u32 = 1
llama_model_loader: - kv 34: deepseek2.expert_weights_scale f32 = 2.827000
llama_model_loader: - kv 35: deepseek2.expert_weights_norm bool = true
llama_model_loader: - kv 36: deepseek2.rope.dimension_count u32 = 64
llama_model_loader: - kv 37: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.100000
llama_model_loader: - kv 38: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 39: tokenizer.ggml.pre str = kimi-k2
llama_model_loader: - kv 40: tokenizer.ggml.tokens arr[str,163840] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 41: tokenizer.ggml.token_type arr[i32,163840] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 42: tokenizer.ggml.merges arr[str,163328] = ["Ġ Ġ", "ĠĠ ĠĠ", "Ġ t", "i n",...
llama_model_loader: - kv 43: tokenizer.ggml.bos_token_id u32 = 163584
llama_model_loader: - kv 44: tokenizer.ggml.eos_token_id u32 = 163585
llama_model_loader: - kv 45: tokenizer.ggml.padding_token_id u32 = 163839
llama_model_loader: - kv 46: tokenizer.chat_template str = {%- macro render_content(msg) -%}\n ...
llama_model_loader: - kv 47: general.quantization_version u32 = 2
llama_model_loader: - kv 48: general.file_type u32 = 7
llama_model_loader: - type f32: 365 tensors
llama_model_loader: - type q4_0: 180 tensors
llama_model_loader: - type q8_0: 551 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q8_0
print_info: file size = 543.62 GiB (4.55 BPW)
load: 0 unused tokens
load: printing all EOG tokens:
load: - 163585 ('[EOS]')
load: - 163586 ('<|im_end|>')
load: - 163593 ('[EOT]')
load: - 163839 ('[PAD]')
load: special tokens cache size = 256
load: token to piece cache size = 1.0606 MB
print_info: arch = deepseek2
print_info: vocab_only = 0
print_info: no_alloc = 0
print_info: n_ctx_train = 262144
print_info: n_embd = 7168
print_info: n_embd_inp = 7168
print_info: n_layer = 61
print_info: n_head = 64
print_info: n_head_kv = 1
print_info: n_rot = 64
print_info: n_swa = 0
print_info: is_swa_any = 0
print_info: n_embd_head_k = 576
print_info: n_embd_head_v = 512
print_info: n_gqa = 64
print_info: n_embd_k_gqa = 576
print_info: n_embd_v_gqa = 512
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 18432
print_info: n_expert = 384
print_info: n_expert_used = 8
print_info: n_expert_groups = 1
print_info: n_group_used = 1
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = yarn
print_info: freq_base_train = 50000.0
print_info: freq_scale_train = 0.015625
print_info: n_ctx_orig_yarn = 4096
print_info: rope_yarn_log_mul = 1.0000
print_info: rope_finetuned = unknown
print_info: model type = 671B
print_info: model params = 1.03 T
print_info: general.name = n/a
print_info: n_layer_dense_lead = 1
print_info: n_lora_q = 1536
print_info: n_lora_kv = 512
print_info: n_embd_head_k_mla = 192
print_info: n_embd_head_v_mla = 128
print_info: n_ff_exp = 2048
print_info: n_expert_shared = 1
print_info: expert_weights_scale = 2.8
print_info: expert_weights_norm = 1
print_info: expert_gating_func = sigmoid
print_info: vocab type = BPE
print_info: n_vocab = 163840
print_info: n_merges = 163328
print_info: BOS token = 163584 '[BOS]'
print_info: EOS token = 163585 '[EOS]'
print_info: EOT token = 163586 '<|im_end|>'
print_info: PAD token = 163839 '[PAD]'
print_info: LF token = 198 'Ċ'
print_info: FIM PAD token = 163839 '[PAD]'
print_info: EOG token = 163585 '[EOS]'
print_info: EOG token = 163586 '<|im_end|>'
print_info: EOG token = 163593 '[EOT]'
print_info: EOG token = 163839 '[PAD]'
print_info: max token length = 512
load_tensors: loading model tensors, this can take a while... (mmap = true, direct_io = false)
load_tensors: offloading output layer to GPU
load_tensors: offloading 60 repeating layers to GPU
load_tensors: offloaded 62/62 layers to GPU
load_tensors: CPU_Mapped model buffer size = 555458.51 MiB
load_tensors: CUDA0 model buffer size = 9733.81 MiB
load_tensors: CUDA1 model buffer size = 19508.51 MiB
....................................................................................................
common_init_result: added [EOS] logit bias = -inf
common_init_result: added <|im_end|> logit bias = -inf
common_init_result: added [EOT] logit bias = -inf
common_init_result: added [PAD] logit bias = -inf
llama_context: constructing llama_context
llama_context: setting new yarn_attn_factor = 1.0000 (mscale == 1.0, mscale_all_dim = 1.0)
llama_context: n_seq_max = 1
llama_context: n_ctx = 4096
llama_context: n_ctx_seq = 4096
llama_context: n_batch = 2048
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = enabled
llama_context: kv_unified = false
llama_context: freq_base = 50000.0
llama_context: freq_scale = 0.015625
llama_context: n_ctx_seq (4096) < n_ctx_train (262144) -- the full capacity of the model will not be utilized
llama_context: CUDA_Host output buffer size = 0.62 MiB
llama_kv_cache: CUDA0 KV buffer size = 9.00 MiB
llama_kv_cache: CUDA1 KV buffer size = 265.50 MiB
llama_kv_cache: size = 274.50 MiB ( 4096 cells, 61 layers, 1/1 seqs), K (f16): 274.50 MiB, V (f16): 0.00 MiB
sched_reserve: reserving ...
sched_reserve: CUDA0 compute buffer size = 6062.75 MiB
sched_reserve: CUDA1 compute buffer size = 362.00 MiB
sched_reserve: CUDA_Host compute buffer size = 36.01 MiB
sched_reserve: graph nodes = 4791
sched_reserve: graph splits = 240 (with bs=512), 121 (with bs=1)
sched_reserve: reserve took 12.79 ms, sched copies = 1
mtmd_cli_context: chat template example:
<|im_system|>system<|im_middle|>You are a helpful assistant<|im_end|><|im_user|>user<|im_middle|>Hello<|im_end|><|im_assistant|>assistant<|im_middle|><think></think>Hi there<|im_end|><|im_user|>user<|im_middle|>How are you?<|im_end|><|im_assistant|>assistant<|im_middle|>
clip_model_loader: model name: Kimi K2.5
clip_model_loader: description:
clip_model_loader: GGUF version: 3
clip_model_loader: alignment: 32
clip_model_loader: n_tensors: 335
clip_model_loader: n_kv: 28
clip_model_loader: has vision encoder
clip_ctx: CLIP using CUDA0 backend
load_hparams: projector: kimik25
load_hparams: n_embd: 1152
load_hparams: n_head: 16
load_hparams: n_ff: 4304
load_hparams: n_layer: 27
load_hparams: ffn_op: gelu
load_hparams: projection_dim: 7168
--- vision hparams ---
load_hparams: image_size: 896
load_hparams: patch_size: 14
load_hparams: has_llava_proj: 0
load_hparams: minicpmv_version: 0
load_hparams: n_merge: 2
load_hparams: n_wa_pattern: 0
load_hparams: image_min_pixels: 1568
load_hparams: image_max_pixels: 3211264
load_hparams: model size: 908.42 MiB
load_hparams: metadata size: 0.12 MiB
main: loading model: /mnt/srv/snowdrift/gguf/Kimi-K2.5-GGUF/aes_sedai/Kimi-K2.5-Q4_X.gguf
WARN: This is an experimental CLI for testing multimodal capability.
For normal use cases, please use the standard llama-cli
encoding image slice...
alloc_compute_meta: CUDA0 compute buffer size = 264.77 MiB
alloc_compute_meta: CPU compute buffer size = 17.50 MiB
alloc_compute_meta: graph splits = 1, nodes = 1034
warmup: flash attention is enabled
=== MTMD_DEBUG_EMBEDDINGS ===
Shape: [7168, 1944]
Token 0 (first 16 values): -0.029358 -0.105713 -0.161133 0.100708 -0.075317 -0.264465 -0.062134 0.100281 0.069458 0.216919 -0.102356 -0.134155 -0.098579 -0.098022 -0.130737 0.051514
Token 0 (last 16 values): -0.015167 0.048523 0.111225 0.024429 -0.022324 -0.014832 0.134521 -0.149582 -0.187012 0.097748 -0.097107 0.166382 0.219757 0.160767 -0.041840 0.179504
Stats: mean=0.000452, std=0.078249, min=-4.953125, max=8.621094, sum=6304.381348
=== END MTMD_DEBUG_EMBEDDINGS ===
image slice encoded in 1017 ms
decoding image batch 1/1, n_tokens_batch = 1944
image decoded (batch 1/1) in 83232 ms
<think>The user wants me to describe the contents of the image. Let me analyze what I see:
1. **Main subject**: A raccoon is the central focus of the image.
2. **Position**: The raccoon is lying on a tree branch, draped over it in a relaxed or lazy manner.
3. **Physical appearance**:
- Gray fur with the characteristic black "mask" around its eyes
- White markings on its face, particularly around the nose and mouth area
- Ringed tail (though partially visible)
- Small, rounded ears with white fur on the edges
- Front paws hanging down from the branch
4. **Setting**:
- Outdoor, natural environment
- Tree branch is thick and brown with some moss or lichen
- Background is blurred (bokeh effect) showing green foliage and trees
- Appears to be a forest or wooded area
5. **Lighting**: Natural daylight, soft lighting
6. **Mood**: The raccoon looks relaxed, possibly sleepy or resting. Its posture suggests it's comfortable and at ease.
The image captures a candid moment of wildlife in a natural habitat. The raccoon's body language suggests it's taking a break or observing its surroundings from a safe perch.
I should provide a clear, detailed description covering the subject, its appearance, its position/action, and the surrounding environment.</think>The image shows a **raccoon** draped lazily over a thick, horizontal tree branch in what appears to be a forest or wooded environment.
**The raccoon:**
- Has the species' characteristic grayish-brown fur with darker and lighter variations
- Displays the distinctive black "mask" markings around its eyes, contrasted with white fur on its snout and cheeks
- Is lying with its belly on the branch, front paws hanging down loosely, and head resting forward
- Has small, rounded ears with white trim and a slightly pointed snout
- Appears relaxed or sleepy, with a calm, somewhat melancholic expression
**The setting:**
- The tree branch is sturdy and brown with patches of moss or lichen
- Background features soft, out-of-focus greenery (bokeh effect) including leaves and tree trunks
- Natural daylight illuminates the scene, highlighting the texture of the raccoon's fur
- The overall atmosphere is peaceful and natural, suggesting a wildlife habitat or nature reserve
The composition captures an intimate, candid moment of the animal at rest in its natural arboreal environment.
llama_perf_context_print: load time = 317102.77 ms
llama_perf_context_print: prompt eval time = 84821.87 ms / 1957 tokens ( 43.34 ms per token, 23.07 tokens per second)
llama_perf_context_print: eval time = 43858.18 ms / 523 runs ( 83.86 ms per token, 11.92 tokens per second)
llama_perf_context_print: total time = 361103.84 ms / 2480 tokens
llama_perf_context_print: graphs reused = 520
I can update this PR with that fix applied @ggerganov
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
ggml_row_size(cur->type, n_dim),
ggml_row_size(cur->type, n_dim*n_head),
cur->nb[1],
cur->nb[2],
@ngxson making sure you see this change and the one below in the second view too. Adjusting this removed the need for the ggml_cont above.
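For anyone following the stride change above, here is a ggml-free sketch of why it matters. NumPy's `as_strided` stands in for `ggml_view_3d`: when the source tensor is itself a non-contiguous view, the view must reuse the source's own byte strides (ggml's `cur->nb[1]`/`cur->nb[2]`) rather than strides recomputed from the logical shape (the role `ggml_row_size(...)` played before). The tensor shapes and values here are purely illustrative, not taken from the model.

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# Contiguous parent buffer, values 0..23, shape (2, 3, 4), strides (48, 16, 4) bytes.
parent = np.arange(24, dtype=np.float32).reshape(2, 3, 4)

# Non-contiguous sub-view: first 2 elements of the last dim.
# Shape is (2, 3, 2) but the strides are still the parent's (48, 16, 4).
sub = parent[:, :, :2]

# Correct: keep the source view's own strides (the cur->nb[1]/nb[2] approach).
ok = as_strided(sub, shape=sub.shape, strides=sub.strides)

# Wrong: recompute strides as if the view were contiguous, i.e. from its
# logical dims only -> (24, 8, 4) bytes for shape (2, 3, 2).
itemsize = sub.itemsize
bad = as_strided(sub, shape=sub.shape,
                 strides=(3 * 2 * itemsize, 2 * itemsize, itemsize))

# The recomputed strides silently read the wrong elements:
assert np.array_equal(ok, sub)
assert float(ok[1, 2, 1]) == 21.0   # element (1, 2, 1) of the parent
assert float(bad[1, 2, 1]) == 11.0  # a different element entirely
print("ok:", float(ok[1, 2, 1]), "bad:", float(bad[1, 2, 1]))
```

In ggml the alternative to carrying the real strides through is `ggml_cont`, which materializes a contiguous copy first; reusing `cur->nb[*]` in the view avoids that copy, which is exactly what this PR does.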
* Move dequant_model to after the text_config merge
Add new kimi-k2.5 keys to mtmd convert
Update V_MMPROJ tensor mapping for new mm_projector.proj keys
Update V_M_IMP_NORM for new mm_projector.pre_norm key
* Fix a couple of oversights
* Add image support for Kimi-K2.5
* Revert changes to KimiVLForConditionalGeneration
* Fix an assert crash
* Fix permute swapping w / h on accident
* Kimi-K2.5: Use merged QKV for vision
* Kimi-K2.5: pre-convert vision QK to use build_rope_2d
* Kimi-K2.5: support non-interleaved rope for vision
* Kimi-K2.5: fix min / max pixel
* Kimi-K2.5: remove v/o permutes, unnecessary
* Kimi-K2.5: update permute name to match
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Kimi-K2.5: replace build_rope_2d ggml_cont with ggml_view_3d pointers

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Adding support for https://huggingface.co/moonshotai/Kimi-K2.5
Since this model includes compressed-tensors (INT4 for the conditional experts), I moved the `dequant_model` to the `prepare_tensors` call at @compilade's suggestion. The model conversion fails otherwise because the `quantization_config` is nested under the `text_config` in the config.json.

Additionally, this model adds some new keys for the vision tower, prefixed as `vt_`, and the preprocessor_config.json has the expected fields nested in the `media_proc_cfg` key.

This PR does not include the "hacked" Q4_0 changes by @jukofyork, referred to in this comment.
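As a rough sketch of the config.json issue described above: a converter that only checks the top level of the config misses a `quantization_config` nested under `text_config`. The helper and the JSON snippet below are illustrative only (key names beyond `quantization_config`/`text_config` are assumptions), not the actual convert_hf_to_gguf.py code.

```python
import json

# Hypothetical, minimal config.json shaped like the problem case: the
# quantization_config lives under text_config rather than at the top level.
config = json.loads("""
{
  "model_type": "kimi_k25",
  "text_config": {
    "quantization_config": { "quant_method": "compressed-tensors" }
  }
}
""")

def find_quant_config(cfg):
    # Check the top level first (the common case), then fall back to the
    # nested text_config (the Kimi-K2.5 layout).
    if "quantization_config" in cfg:
        return cfg["quantization_config"]
    return cfg.get("text_config", {}).get("quantization_config")

qc = find_quant_config(config)
print(qc["quant_method"])  # compressed-tensors
```

Merging `text_config` up into the main config before dequantizing (as this PR does by moving `dequant_model` after the merge) removes the need for this kind of two-place lookup.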
I have added a first pass at vision support, heavily aided by LLM assistance. I entirely expect @ngxson to tear it to shreds or call me a dummy and show me an easier way to add that vision support :)