
Commit 33a004e

llama : more metal-friendly KV cache PAD
1 parent b1f8af1 commit 33a004e

File tree

1 file changed: +1 −1


llama.cpp

Lines changed: 1 addition & 1 deletion
@@ -11508,7 +11508,7 @@ static int llama_decode_internal(
         // a heuristic, to avoid attending the full cache if it is not yet utilized
         // after enough generations, the benefit from this heuristic disappears
         // if we start defragmenting the cache, the benefit from this will be more important
-        kv_self.n = std::min(kv_self.size, std::max(256u, GGML_PAD(llama_kv_cache_cell_max(kv_self), 256)));
+        kv_self.n = std::min(kv_self.size, std::max(128u, GGML_PAD(llama_kv_cache_cell_max(kv_self), 128)));
         //kv_self.n = llama_kv_cache_cell_max(kv_self);
     }
 }

0 commit comments
