Merge vulkan code from mainline up to commit of 6/28/2025 #563
Conversation
Vulkan Optimizations and Fixes (#8959)
* Optimize Vulkan REPEAT performance
* Use the Vulkan GLSL fused multiply-add instruction where possible
* Add GGML_VULKAN_PERF option to output performance data per operator
* Rework and fix Vulkan descriptor set and descriptor pool handling
* Fix float32 concat f16 shader validation error
* Add Vulkan GROUP_NORM eps parameter
* Fix validation error with transfer queue memory barrier flags
* Remove trailing whitespace

vulkan : do not use tensor->extra (#9407)
This patch allows using the Vulkan backend with the RPC backend, as tensor->extra is no longer used. Ref: #8536
* Adapt GGML_VULKAN_CHECK_RESULTS to the extra removal (ikawrakow#2)
Co-authored-by: 0cc4m <[email protected]>
# Conflicts: ggml/src/ggml-vulkan.cpp

vulkan : fix build (#0)
ggml-ci

Improve Vulkan shader build system (#9239)
* Add a dependency on vulkan-shaders-gen so shaders are rebuilt when the shader compilation utility changes
* Add an option to generate debug info for Vulkan shaders, to provide shader source to Vulkan shader profiling tools
* Remove the self dependency that is not required

ggml : fix build break for vulkan-debug (#9265)
- windows build: OK
- linux build: OK
Signed-off-by: Changyeon Kim <[email protected]>

vulkan: correctly report support for OP_CONT (ggml/946)
test-backend-ops fails because ggml_cont aborts when invoked with an unsupported type. This commit makes the ggml_cont tests pass.
Signed-off-by: Salvatore Mesoraca <[email protected]>

vulkan: add dryrun support to sin and cos ops (ggml/947)
sin and cos failed test-backend-ops because they tried to dereference a context pointer that is null on dry runs. This commit prevents that segfault.
Signed-off-by: Salvatore Mesoraca <[email protected]>
# Conflicts: ggml/src/ggml-vulkan.cpp

Overlap cmdbuffer creation and cmdbuffer execution in the Vulkan backend by submitting smaller cmdbuffers early (#9118)
* Fix compile issues
* Fix issues where the last submit wasn't executed or handled properly
* Remove trailing whitespace
* Repair GGML_VULKAN_CHECK_RESULTS
* Increase the submit counter only if actual work has been submitted, and increase the submit count to 100
* Fix nodes that were not checked with GGML_VULKAN_CHECK_RESULTS enabled
# Conflicts: ggml/src/ggml-vulkan.cpp

Enable use of the rebar feature to upload buffers to the device (#9251)

vulkan : argsort barriers must be under uniform control flow (ggml/951)
A return before a barrier (that happens only in some threads in a workgroup) leads to UB. While the old code actually works on some devices, it fails on others (i.e. "smaller" GPUs). BTW, I think it would be better to set specialization constants when the graph is built; that way the local workgroup could be sized appropriately. But it would take a lot of work.
Signed-off-by: Salvatore Mesoraca <[email protected]>

vulkan : fix build for GGML_VULKAN_RUN_TESTS, add TFLOPS to log (ggml/961)

vulkan : multithread pipeline creation (ggml/963)

vulkan : mul_mat: fix UB with small warps (ggml/952) (see the sketch after this group of commits)
When the device's warp size is less than 16, it is possible for loadstride_a (mul_mm.comp:114) and loadstride_b (mul_mm.comp:115) to be set to 0, because they are calculated as the workgroup size, multiplied by LOAD_VEC_* (which can be 1), divided by 16, and the workgroup size is set to be the same as the warp/subgroup size. The loadstride_* variables are used as increments in the loops that populate the buffers used for the multiplication. When they are 0 they cause an infinite loop. But infinite loops without side effects are UB, and the values of loadstride_* are known at compile time, so the compiler quietly optimizes all the loops away. As a consequence, the buffers are not populated and the multiplication result is just a matrix with all elements set to 0. We prevent the UB by making sure that the workgroup size will never be less than 16, even if the device has a smaller warp size (e.g. 8).
Signed-off-by: Salvatore Mesoraca <[email protected]>

vulkan : retry allocation with fallback flags (whisper/2451)
Co-authored-by: Samuel Morris <[email protected]>

vulkan : improve ggml_vk_create_buffer error handling (#9898)

vulkan: Fix newly added tests for permuted mul_mat and 1D im2col (#10226)

vulkan: Throttle the number of shader compiles during the build step (#10222)
Fixes #9582. Spawning too many concurrent copies of glslc leads to "Failed to create pipes" errors on Linux. This change applies the same throttling we use for multithreaded pipeline creation.
# Conflicts: ggml/src/vulkan-shaders/vulkan-shaders-gen.cpp

vulkan: Optimize contiguous copies (#10254)
* tests: Fix the memory bandwidth calculation for perf tests; add a flops calculation for flash attention; add one GGML_OP_CPY perf test
* vulkan: Add a variant of the copy shader for when the tensors are contiguous; avoid the complex addressing calculations and do four elements per invocation to hide some other overhead; apply similar changes to the scale shader, since scale is always contiguous; add a "progress bar" for shader compiles
# Conflicts: tests/test-backend-ops.cpp

vulkan: Use macros to make the mat mul pipeline creation more concise (#10259)
Also add vk_matmul_pipeline2 to hold f16/f32 accumulator versions of a pipeline. This isn't really used yet.

vulkan: Optimize binary ops (#10270)
Reuse the index calculations across all of src0/src1/dst. Add a shader variant for when src0/src1 are the same dimensions and the additional modulus for src1 isn't needed. Div/mod are slow, so add "fast" div/mod that have a fast path when the calculation isn't needed or can be done more cheaply.
# Conflicts: ggml/src/ggml-vulkan.cpp, ggml/src/vulkan-shaders/acc.comp

ggml : vulkan logs (whisper/2547)

vulkan: Optimize some mat-vec mul quant shaders (#10296)
Compute two result elements per workgroup (for Q{4,5}_{0,1}). This reuses the B loads across the rows and also reuses some addressing calculations. This required manually partially unrolling the loop, since the compiler is less willing to unroll outer loops. Add bounds checking on the last iteration of the loop; I think this was at least partly broken before. Optimize the Q4_K shader to vectorize most loads and reduce the number of bit-twiddling instructions.

Vulkan: Fix device info output format specifiers (#10366)
* Use the zu printf specifier for size_t instead of ld

vulkan: remove use of null initializer (#10372)
Seems like this isn't working for vulkan-over-metal when the array is sized by a spec constant. Maybe a spirv-cross limitation?

vulkan: Optimize soft_max (#10301)
* Large soft_max could already saturate memory, but small/medium sizes were pretty slow. The bulk of the gains for them comes from using a smaller workgroup size, and making the workgroup size match the subgroup size also makes the barriers much cheaper. Cache some values in locals to avoid refetching/recomputing, and stamp out a few "template instantiations" so smaller cases will fully unroll. Add a missing early return for OOB rows; this happens when there are more than 512 rows and the dispatch is 512 x H.
* Further soft_max optimizations: restore the workgroup size of the 512 case and use it for >1024; use unrollable loops for more iteration counts.

vulkan: further optimize mul_mat_vec using larger loads (#10387)
* Use pipeline_robustness to disable robustness in mul_mat_vec. Add some early returns for nonexistent rows in mul_mat_vec shaders; these can only be hit when dispatching a 2D grid of workgroups. Fix the logic for the 2D grid of workgroups to round up. Enable the pipeline robustness extension if it's available, and use it to disable robustness for these pipelines. The instructions to do the bounds checking contend for the same ALU resources as the bit-twiddling dequant instructions.
* Add GLSL structure aliases for quant types to allow larger loads. In Vulkan it's not possible to cast pointer types, so instead you have to declare an aliased binding for the memory with a different type. This commit adds aliases for the quant formats using 16b ints, and in a few places where the struct size is a multiple of 4, also using 32b ints. Currently only q4_k's aliases are used, but others will be used in subsequent commits.
* Use larger loads in the q5_k and q6_k shaders. Similar to the optimization done in q4_k recently, this vectorizes some loads and reduces the number of bit-twiddling instructions.
* Use a larger K step per iteration in mul_mat_vec. Add vec4 dequantization functions, and use them to do K=8 per iteration in mul_mat_vec. This uses 16b loads for the quant values and 128b loads for B, which helps reduce the load on the memory system. The K_PER_ITER==2 logic is still there, just for F16/F32, and really only because they support unaligned sizes. Tweak the num_iters/unrolling logic to be simpler and catch a couple of missed unrolling opportunities.

vulkan: copy the iq4_nl LUT into shared memory (#10409)

vulkan: predicate the max operation in soft_max shaders (#10437)
Fixes #10434.

vulkan: Fix a vulkan-shaders-gen argument parsing error (#10484)
vulkan-shaders-gen was not parsing the --no-clean argument correctly: the previous code only parsed arguments that have a value, and --no-clean does not take one. This commit can now correctly parse arguments that don't have values.

vulkan: fix group_norm (#10496)
Fix a bad calculation of the end of the range. Add a backend test that covers the bad case (taken from stable diffusion). Fixes leejet/stable-diffusion.cpp#439.
# Conflicts: ggml/src/ggml-vulkan.cpp

vulkan: optimize Q2_K and Q3_K mul_mat_vec (#10459)

vulkan: skip integer div/mod in get_offsets for batch_idx==0 (#10506)

vulkan: further optimize q5_k mul_mat_vec (#10479)

vulkan: Handle GPUs with less shared memory (#10468)
There have been reports of failure to compile on systems with <= 32KB of shared memory (e.g. #10037). This change makes the large tile size fall back to a smaller size if necessary, and makes mul_mat_id fall back to the CPU if there's only 16KB of shared memory.
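The ggml/952 fix above is easy to sanity-check on the host. A minimal C++ sketch of the spec-constant arithmetic (names follow the commit message; this is an illustration, not the actual GLSL from mul_mm.comp):

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    const unsigned LOAD_VEC = 1; // elements loaded per step; can be 1
    for (unsigned subgroup_size : {8u, 16u, 32u}) {
        // Before the fix: workgroup size == warp/subgroup size, so for a
        // subgroup size of 8 the stride is 8 * 1 / 16 == 0, the copy loops
        // never advance, and the compiler deletes them as UB.
        unsigned loadstride_before = subgroup_size * LOAD_VEC / 16;
        // After the fix: clamp the workgroup size to at least 16, so the
        // stride can never be 0.
        unsigned loadstride_after = std::max(subgroup_size, 16u) * LOAD_VEC / 16;
        printf("subgroup %2u: loadstride before=%u after=%u\n",
               subgroup_size, loadstride_before, loadstride_after);
    }
    return 0;
}
```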
vulkan: define all quant data structures in types.comp (#10440)

vulkan: get the first command buffer submitted sooner (#10499)
This is an incremental improvement over #9118 to get work to the GPU a bit sooner. The first part is to start with a smaller number of nodes before the first submit, and ramp it up to the current 100 nodes/submit. The second part is to reduce the dryrun overhead for all the nodes that just need to request descriptor space. With these changes I get around 1-2% speedup on an RTX 4070 combined with my old Haswell-era CPU.

vulkan: Dynamic subgroup size support for Q6_K mat_vec (#10536)
* Subgroup 64 version with subgroup add, 15% faster; scalable version tested for subgroup sizes 16-128
* Check for a subgroup size that is a multiple of 16 and greater than 16
* Subgroup sizes are always a power of 2 (KhronosGroup/GLSL#45)
* Force 16 sequential threads per block
* Make the 16 subgroup size a constant

vulkan: optimize and reenable split_k (#10637)
Use vector loads when possible in mul_mat_split_k_reduce. Use split_k when there aren't enough workgroups to fill the shaders.

vulkan: Implement "fast divide" (mul+shift) for unary ops like copy (#10642) (see the sketch below)

vulkan: Add VK_NV_cooperative_matrix2 support for mul_mat and flash attention (#10206)
# Conflicts: ggml/src/vulkan-shaders/dequant_funcs_cm2.comp, ggml/src/vulkan-shaders/flash_attn_cm2.comp, ggml/src/vulkan-shaders/mul_mm_cm2.comp

Vulkan: VK_KHR_cooperative_matrix support to speed up prompt processing (#10597)
* Implement VK_KHR_cooperative_matrix support in the matrix-matrix multiplication shader
* Improve performance with better q4_k and q5_k dequant and store unrolling
* Add Vulkan MUL_MAT and MUL_MAT_ID accumulator precision selection
* Rework mulmat shader selection and compilation logic; avoid compiling shaders that won't get used by the device
* Implement accumulator switch for specific mul mat mat shaders
* Unroll more loops for more mul mat mat performance
* Add VK_AMD_shader_core_properties2 support to read the Compute Unit count for split_k logic
* Disable coopmat support on the AMD proprietary driver
* Remove redundant checks
* Add environment variable GGML_VK_DISABLE_COOPMAT to disable VK_KHR_cooperative_matrix support
* Fix rebase typo
* Fix coopmat2 MUL_MAT_ID pipeline selection
# Conflicts: ggml/src/ggml-vulkan.cpp

vulkan: compile a test shader in cmake to check for coopmat2 support (#10713)
# Conflicts: ggml/src/ggml-vulkan.cpp, ggml/src/ggml-vulkan/CMakeLists.txt, ggml/src/vulkan-shaders/test_coopmat2_support.comp

Vulkan: fix NaN in tanh.comp with the AMD proprietary driver on Windows (#10723)
* Fix NaN in tanh.comp
* Faster NaN-free tanh

vulkan: fix compile warnings (#10731)

vulkan: disable spirv-opt for coopmat shaders (#10763)
There are some bugs in the 1.3.296 SDK, so disable this; it isn't strictly necessary anyway. Add a missing dependency on vulkan-shaders-gen, so shaders get recompiled when it changes. Fix coopmat support reporting when glslc doesn't support NV_coopmat2.

vulkan: dynamic subgroup size for the remaining k quants (#10745)
* q5_k q4_k q3_k q2_k q6_k multi-row example
* Revert, as multi-row isn't faster for k quants

vulkan: request round-to-even for fp16 in im2col/rope_head (#10767)
Vulkan doesn't mandate a specific rounding mode, but the shader_float_controls feature allows a rounding mode to be requested if the implementation supports it.
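For reference, the "fast divide" in #10642 (and the fast div/mod in #10270) is the standard replacement of integer division by a fixed divisor with a multiply and a shift, with the constants precomputed on the host. A hedged C++ sketch of the idea, not the shader's actual code (in particular, the shaders stay in 32-bit math):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>

// Round-up multiply+shift division (Granlund-Montgomery style).
// make_fastdiv precomputes (mult, shift) so that n / d == (n * mult) >> shift
// for every 32-bit n. Illustrative only; uses 128-bit host math for clarity.
struct FastDiv { uint64_t mult; unsigned shift; };

FastDiv make_fastdiv(uint32_t d) {          // requires d >= 1
    unsigned L = 0;
    while ((1ull << L) < d) L++;            // L = ceil(log2(d))
    unsigned s = 32 + L;
    uint64_t m = (uint64_t)(((__uint128_t)1 << s) / d) + 1;
    return {m, s};
}

uint32_t fastdiv(uint32_t n, FastDiv f) {
    return (uint32_t)(((__uint128_t)n * f.mult) >> f.shift);
}

int main() {
    for (uint32_t d : {3u, 7u, 13u, 4096u}) {
        FastDiv f = make_fastdiv(d);
        for (uint32_t n = 0; n < 1000000; n++) {
            assert(fastdiv(n, f) == n / d);          // fast div
            assert(n - fastdiv(n, f) * d == n % d);  // fast mod falls out
        }
    }
    puts("multiply+shift matches '/' and '%' on the sampled range");
    return 0;
}
```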
Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats (#10721)
* Fix the subgroup size control extension support check; add accf32 and accf16 checks for coopmats
* Also disable coopmats on amdvlk

Vulkan: Use improved q4_k and q5_k dequant code in dequant shaders (#10798)

vulkan: small mul_mat_vec optimizations (#10665)
* Double the number of rows per workgroup
* Only increase the number of rows for AMD and subgroup size 64
* Fix missing NUM_ROWS for mul_mat_vec_iq4_nl_f16_f32 (untested)
* Use subgroup min and max to check for GCN (requires ggml-org/llama.cpp#10721)
* Set min and max subgroup size in any case
* Also double the number of rows for Intel GPUs

Change Debug print name

add GGML_ROPE_TYPE_MROPE

rwkv6: add wkv6 support for the Vulkan backend (#10829)
* rwkv_wkv6 vulkan shader; RWKV_WKV6 Vulkan op tests passed
* Apply code format changes
* Add [[unroll]] and remove unnecessary conditions
* Add UMA support
* Fix errors flagged by the EditorConfig checker
Signed-off-by: Molly Sophia <[email protected]>
# Conflicts: ggml/src/ggml-vulkan.cpp, ggml/src/vulkan-shaders/wkv6.comp

vulkan: bugfixes for small subgroup size systems + llvmpipe test (#10809)
* Ensure mul mat shaders work on systems with subgroup size less than 32; more fixes; add a test
* Only s_warptile_mmq needs to be run with 32 threads or more
# Conflicts: .github/workflows/build.yml

vulkan : fix soft_max.comp division by zero (whisper/2633)
This change prevents a division-by-zero error when p.KY is 0.

vulkan: optimize coopmat2 dequant functions (#10855)
Change the code to do 16b loads when possible and extract the appropriate component late, so the code is effectively decoding a pair of elements and then selecting one. This allows more commoning to happen in the compiler when neighboring elements are loaded.

vulkan: build fixes for 32b (#10927)
* Should fix #10923
* Initialize some buffer/offset variables

examples, ggml : fix GCC compiler warnings (#10983)
Warning types fixed (observed under MSYS2 GCC 14.2.0):
* format '%ld' expects argument of type 'long int', but argument has type 'size_t'
* llama.cpp/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp:81:46: warning: missing initializer for member '_STARTUPINFOA::lpDesktop' [-Wmissing-field-initializers] (emitted for all struct fields except the first)
# Conflicts: examples/export-lora/export-lora.cpp

vulkan: multi-row k quants (#10846)
* Multi-row k quant shaders
* Better row selection; more row choices; readjusted row selection
* rm_kq=2 by default

vulkan: Use push constant offset to handle misaligned descriptors (#10987)

vulkan: im2col and matmul optimizations for stable diffusion (#10942)
* tests: Add im2col perf tests
* Optimize im2col: more elements per thread
* Increase the small tile size for NV_coopmat2
* Change im2col to 512 elements per workgroup

vulkan: optimize mul_mat for small values of N (#10991)
Make the mul_mat_vec shaders support N>1 (as a spec constant, NUM_COLS), where the batch_strides are overloaded to hold the row strides. Put the loads from the B matrix in the innermost loop, because it should cache better. Share some code for reducing the result values to memory in mul_mat_vec_base.
# Conflicts: tests/test-backend-ops.cpp

fix: Vulkan shader gen binary path (#11037)

Vulkan: Add a device-specific blacklist for coopmat for the AMD proprietary driver (#11074)
* Add (TM) to the AMD name check

fix lora print

Disable the GL_KHR_cooperative_matrix Vulkan extension if not available (#11117)
* Perform Vulkan extension checks in a more sensible order
* Remove an unnecessary #ifdef directive
# Conflicts: ggml/src/vulkan-shaders/test_coopmat_support.comp

llama: add support for the QRWKV6 model architecture (#11001)

Vulkan: Fix float16 use on devices without float16 support + fix subgroup_size_control validation error (#11161)
* Remove float16 use in shaders
* Fix the validation error about the subgroup_size_control extension

fix: ggml: fix vulkan-shaders-gen build (#10448)
* The vulkan-shaders-gen target was not being built correctly in case of cross-compilation: other outputs need to be built for the cross-compile target, but vulkan-shaders-gen needs to be built for the host.
* Improve the vulkan-shaders-gen toolchain setup: add a GGML_SHADERS_GEN_TOOLCHAIN CMake option, auto-detect the host toolchain if not set, and use configure_file to generate host_toolchain.cmake from a template.
* Fix a compile error where vulkan-shaders-gen was not found.
* Fix vulkan-shaders-gen build and path handling: add a target dependency for correct build order, use CMAKE_HOST_SYSTEM_NAME for the executable suffix, fix the MSVC output directory in the host toolchain, and normalize path handling for cross-compilation.
* Improve host compiler detection for vulkan shader generation: add NO_CMAKE_FIND_ROOT_PATH to all compiler searches, consolidate compiler detection logic, fix Windows-specific MSVC detection, and ensure the correct compiler search in cross-compilation.
* Simplify the CMake function for detecting the host compiler.
* Remove unnecessary Vulkan library linkage in CMakeLists.txt: since vulkan-shader-gen.cpp only requires the glslc executable and not the Vulkan headers or libraries, CMakeLists.txt needs to be corrected. (See: ecc93d0558fc3ecb8a5af69d2ece02fae4710ade)
* Rename host_toolchain.cmake.in to cmake/host-toolchain.cmake.in.
* Rename the macro GGML_SHADERS_GEN_TOOLCHAIN to GGML_VULKAN_SHADERS_GEN_TOOLCHAIN.
# Conflicts: ggml/src/ggml-vulkan/CMakeLists.txt

vulkan: scale caching for k quants + misc fixes (#11081)
* q6_k scale caching
* 16-bit unpack
* q4_k test (slow); revert it
* q3_k, q2_k, little stuff
* Try precalculating products of a and q2_k scales, then revert it (this reverts commit 65110b81f23f66331a50c6e889a7c1ab9470a86b)
* Unpack should be u16; add vim swap to gitignore (about time)
* Better q4_k scales
* q5_k
* Better q6_k with separate paths for all threads and partial threads in use, plus some more optimizations
* q2_k better dequant
* q3_k optimizations; q3_k uses the hmask simd from the cpu avx version
* Make the caches happy
* q3_k separate out calculation; q2_k separate out
* Use calc_superblock everywhere
* q2_k optimize scale calculation
* More barriers

vulkan: optimize coopmat2 q2_k dequant function (#11130)

vulkan: optimize coopmat2 q4_k/q5_k dequant functions (#11206)
Do masking on whole dwords, fetch all scales at once.

vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl (#11166)
* Shaders are based on cpy.cu
* Also support copy from q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl to f32
* ggml: copy q->f32 assumes some contiguity in the destination
# Conflicts: ggml/src/ggml-cpu/ggml-cpu.c, ggml/src/vulkan-shaders/copy_from_quant.comp, ggml/src/vulkan-shaders/copy_to_quant.comp

vulkan: fix coopmat2 flash attention for non-contiguous inputs (#11281)
Add code similar to mul_mm_cm2 to force alignment of strides, to avoid a performance regression. Add noncontiguous FA tests in test-backend-ops. Fixes #11268.
# Conflicts: tests/test-backend-ops.cpp

vulkan: fix coopmat2 validation failures (#11284)
The mul mat and flash attention shaders were loading f32 types directly into A/B matrices, which happens to work but is technically invalid usage. For FA, we can load into an Accumulator matrix and convert; this is not in the inner loop and is cheap enough. For mul mat, it's more efficient to do this conversion in a separate pass and have the input(s) be f16. coopmat2 requires SPIR-V 1.6 (related to using LocalSizeId); LocalSizeId requires maintenance4 to be enabled, and SPIR-V 1.6 requires Vulkan 1.3.

vulkan: fix diag_mask_inf (#11323)
With robustBufferAccess disabled, this shader was showing OOB stores. There is a bounds check in the code, but the workgroup dimensions were reversed vs CUDA and it was running the wrong number of threads. So fix the workgroup dimensions and disable robustness for this pipeline.

vulkan: sort shaders for a more deterministic binary (#11315)
Fixes #11306.

Vulkan-run-test: fix mmq_wg_denoms (#11343)
There seems to be a copy-and-paste error here: *mmq_wg_denoms should be used together with *warptile_mmq, instead of wg_denoms.

vulkan: compile shaders on demand (#11406)
Reduce first-run startup time and memory consumption. Should fix #11339.

vulkan: Catch pipeline creation failure and print an error message (#11436)
* Also fix some warnings from my on-demand compile change
* Fix pipeline creation logging

vulkan: implement initial support for IQ2 and IQ3 quantizations (#11360)
* Initial support for IQ3_S, IQ3_XXS, IQ2_XXS, IQ2_XS, IQ2_S
* Optimize Q3_K by removing branches
* Implement dequantize variants for coopmat2
* Vertically realign code
* Port failing dequant callbacks from mul_mm
* Fix array length mismatches
* Avoid using the workgroup size before it is referenced
* tests: increase the timeout for the Vulkan llvmpipe backend
Co-authored-by: Jeff Bolz <[email protected]>
# Conflicts: ggml/src/vulkan-shaders/dequant_iq2_s.comp, dequant_iq2_xs.comp, dequant_iq2_xxs.comp, dequant_iq3_s.comp, dequant_iq3_xxs.comp

CUDA: non-contiguous (RMS) norm support (#11659)

vulkan: use smaller combined allocations to avoid fragmentation (#11551)
# Conflicts: ggml/src/ggml-alloc.c

vulkan: initial support for IQ4_XS quantization (#11501)
# Conflicts: ggml/src/vulkan-shaders/dequant_iq4_xs.comp

vulkan: optimize coopmat2 iq2/iq3 callbacks (#11521)
* build: trigger CI on GLSL compute shader changes

vulkan: print shared memory size (#11719)
# Conflicts: ggml/src/ggml-vulkan.cpp

vulkan: account for lookup tables when checking shared memory size (#11502)
# Conflicts: ggml/src/ggml-vulkan.cpp

vulkan: add environment variable GGML_VK_PREFER_HOST_MEMORY to avoid VRAM allocation (#11592)

vulkan: linux builds + small subgroup size fixes (#11767)
* mm subgroup size
* Upload Vulkan x86 builds

vulkan: initial support for IQ1_S and IQ1_M quantizations (#11528)
* Define MMV kernels for IQ1 quantizations
* devops: increase the timeout of Vulkan tests again
* Simplify the ifdef for init_iq_shmem
# Conflicts: ggml/src/vulkan-shaders/dequant_iq1_m.comp, dequant_iq1_s.comp, mul_mat_vec_iq1_m.comp, mul_mat_vec_iq1_s.comp

vulkan: support multi/vision rope, and noncontiguous rope (#11902)
# Conflicts: ggml/src/ggml-vulkan.cpp, ggml/src/vulkan-shaders/rope_multi.comp, ggml/src/vulkan-shaders/rope_vision.comp

vulkan: implement several ops relevant for ggml_opt (#11769)
* Support memset_tensor and GGML_OP_SUM
* Implement GGML_OP_ARGMAX, GGML_OP_SUB, GGML_OP_COUNT_EQUAL, GGML_OP_OPT_STEP_ADAMW, GGML_OP_REPEAT_BACK
* Fix the check_results RWKV_WKV6 crash and memory leaks
* tests: remove invalid test-backend-ops REPEAT_BACK tests
* Fix COUNT_EQUAL memset using a fillBuffer command
# Conflicts: ggml/src/ggml-vulkan.cpp, ggml/src/vulkan-shaders/argmax.comp, count_equal.comp, opt_step_adamw.comp, repeat_back.comp, sub.comp, tests/test-backend-ops.cpp

vulkan: implement more backpropagation operators (#11914)
* Implement GGML_OP_ROPE_BACK, GGML_OP_RMS_NORM_BACK, GGML_OP_SILU_BACK, GGML_OP_SOFTMAX_BACK
# Conflicts: ggml/src/vulkan-shaders/rms_norm_back.comp, silu_back.comp, soft_max_back.comp

Add memset tensor in all backend interfaces

SYCL: implement memset ggml backend buffer interface (#12580)
* Use the GGML_ABORT macro
* Do not wait for all queues to finish for the memset operation
# Conflicts: ggml/src/ggml-sycl.cpp

add OP sigmoid (#12056)
Co-authored-by: Judd <[email protected]>
# Conflicts: ggml/src/vulkan-shaders/sigmoid.comp

vulkan: fix assertion when qy_needs_dequant (#12068)
Looks like a copy/paste bug from qx_needs_dequant.

vulkan: improve im2col performance (#11826)

vulkan: matmul dequantization improvements (#12015)
* Faster dequant for old quants
* Don't use unpack for iq4_nl
* vec2 unpack for q8

vulkan: add specific MMV kernels for IQ2 and IQ3 quants + optimizations (#11595)
* Implement specialized MMV kernels for IQ2 quantizations
* Add MMV kernels for IQ3 quants
* Increase the MMV batch size and unroll the IQ LUT setup
* Fix init_iq_shmem for WG sizes larger than the tables
* Common batch size for all I-quants
# Conflicts: ggml/src/vulkan-shaders/mul_mat_vec_iq2_s.comp, mul_mat_vec_iq2_xs.comp, mul_mat_vec_iq2_xxs.comp, mul_mat_vec_iq3_s.comp, mul_mat_vec_iq3_xxs.comp

cuda/vulkan: specify fp32-only support for some operations in supports_op (ggml/1129)
ggml-ci
# Conflicts: ggml/src/ggml-cuda.cu, tests/test-backend-ops.cpp

mat vec double buffer (#12188)

vulkan: fix bug in coopmat1 mul_mat_id (#12316)
* tests: run mul_mat_id with a larger N

Update build.yml for the Windows Vulkan builder to use the Vulkan 1.4.304 SDK for VK_NV_cooperative_matrix2 support (#12301)

vulkan: Adjust coopmat2 tile sizes and selection heuristic (#12258)

vulkan: Pad the N dimension of the B matrix for coopmat2 perf, to avoid bounds checking (#12273)

vulkan: use fp32 in the coopmat2 q4_k dequant function (#12309)

vulkan: subgroup size tuning (#12087)
* Subgroup size test
* Add a device architecture enum and logic to recognize AMD generations
* Use the new architecture logic to specify subgroup size
* Initial Vulkan subgroup size tuning for RDNA3; commonize RDNA subgroup tuning
* Override the subgroup size if required_subgroup_size = 0
* Disable warp 32 for RDNA3
* Fine-tuned RDNA1 subgroup sizes; adjusted subgroup size map; fixed the RDNA2 subgroup map
Co-authored-by: 0cc4m <[email protected]>

vulkan: Add N/2 and N/4 optimized paths in the coopmat2 shader (#12312)

ggml-vulkan: remove unused find_program(glslc) (#12416)
It's already found by FindVulkan.cmake in the parent CMakeLists.

Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation and driver issues (#12434)

vulkan: Submit once enough matmul work has been recorded (#12406) (see the sketch below)
I've been seeing significantly worse performance for tg with flash attention enabled vs disabled, and it seems to be related to the submit heuristic. Change the heuristic to check how many bytes' worth of weight matrix are used and flush every 100MB, ramping up after the first few submits. This seems to resolve the issue, and also increases perf for non-FA a bit.
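To make the #12406 heuristic concrete, here is a rough host-side sketch of byte-based submit throttling. The thresholds and ramp shape are assumptions in the spirit of the commit message, not the actual ggml-vulkan logic:

```cpp
#include <cstddef>

// Decide when to flush the command buffer while recording graph nodes:
// count bytes of weight matrices consumed, flush roughly every 100 MB,
// and use smaller thresholds for the first few submits so the GPU can
// start executing while the CPU is still recording.
struct SubmitThrottle {
    size_t bytes_since_submit = 0;
    int    submits_done       = 0;

    bool should_submit(size_t matmul_weight_bytes) {
        bytes_since_submit += matmul_weight_bytes;
        const size_t threshold = (submits_done < 3)
            ? (size_t(16) << 20) << submits_done   // ramp: 16, 32, 64 MB
            : size_t(100) << 20;                   // steady state: ~100 MB
        if (bytes_since_submit < threshold) return false;
        bytes_since_submit = 0;
        submits_done++;
        return true;  // caller ends and submits the current command buffer
    }
};
```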
vulkan: optimize iq1 coopmat2 dequant functions (#12427)

vulkan: workaround for the AMD Windows driver 16-bit unpack8 bug (#12472)

Vulkan: RTE rounding for cpy to quant (#12480)
* Remove trailing whitespace
* Avoid duplicating pipeline_cpy_f32_quant
* Fix a copy-pasting issue; remove duplicated code
Co-authored-by: Jeff Bolz <[email protected]>

vulkan: Optimize mul_mat_vec p021 and nc shaders (#12505)
* tests: add mul_mat perf/functional tests for the p021/nc vulkan shaders
* These shaders are used in attention calculations, and when the KV cache grows large they start to dominate the run time. For the nc shader (which is called with a large 'k' dimension), use unrolling and vector loads. For the p021 shader (which is called with a large 'm' and small 'k' dimension), take advantage of grouped query attention to reuse loads from the A matrix for the whole group, and reduce the number of workgroups (there is too much overhead from tiny dispatches). Using subgroupAdd in the p021 shader also helps, so use that conditionally.
# Conflicts: tests/test-backend-ops.cpp

vulkan: fix mul_mat_vec failure in backend tests (#12529)
The OOB calculation could be wrong if the last iteration happened during one of the unrolled loops. Adjust the unrolling counts to avoid this. Add a couple of new backend tests that hit this failure on NVIDIA GPUs.

vulkan: fix coopmat shader generation when cross-compiling (#12272)
* Previously the status of coopmat{,2} support wasn't passed to the vulkan-shaders-gen project building on the host, which led to a build failure because the cross-compiling code expects coopmat{,2} shaders that didn't get generated. Fix this by passing the coopmat{,2} support status to the vulkan-shaders subproject.
* Only call coopmat shaders once; fix whitespace
Signed-off-by: Icenowy Zheng <[email protected]>
Co-authored-by: bandoti <[email protected]>

cmake: improve Vulkan cooperative matrix support checks (whisper/2966)
Co-authored-by: Sandro Hanea <[email protected]>

cmake : fix whitespace (#0)

Vulkan: Add DP4A MMQ and Q8_1 quantization shader (#12135)
* Add q4_0 x q8_1 matrix-matrix multiplication support
* Add int8 coopmat MMQ support
* Add q4_1, q5_0 and q5_1 quants; improve the integer dot code
* Add a GL_EXT_integer_dot_product check
* Remove ggml changes, fix the mmq pipeline picker, restore Intel coopmat behaviour
* Fix the glsl compile attempt when integer vec dot is not supported
* Remove redundant code, use a non-saturating integer dot, enable all matmul sizes for mmq
* Fix the integer dot check; fix a compile issue with glslc that doesn't support int dot
* Update the Windows build Vulkan SDK version
# Conflicts: ggml/src/ggml-vulkan.cpp, ggml/src/vulkan-shaders/mul_mmq.comp, mul_mmq_funcs.comp, quantize_q8_1.comp, test_integer_dot_support.comp

vulkan: fix build when glslc doesn't support coopmat (#12683)

Vulkan: Fix mmq int dot float cache size (#12722)

vulkan: Implement grouped query attention in the coopmat2 FA shader (#12559) (see the worked example after this group of commits)
When adjacent batches of Q share the same batches of K/V, batch them into the same workgroup. For example, when dst(128,32,1,1) = FA(q(128,1,32,1), k(128,16640,8,1), v(128,16640,8,1)), previously we would run 32 workgroups computing 1 result each; now we run 8 workgroups computing 4 results each. This doesn't directly translate to better performance (at least when you have >=32 SMs), but in a subsequent change I'll enable split_k, which will scale much better with 4x fewer workgroups.

cmake: remove caching from vulkan coopmat checks (#12719)

vulkan: Implement split_k for coopmat2 flash attention (#12627)
When using grouped query attention, we have one workgroup per KV batch, and this can be very few workgroups (e.g. just 8 in some models). Enable split_k to spread the work across SMs. This helps a lot when the KV cache is large.
# Conflicts: ggml/src/vulkan-shaders/flash_attn_split_k_reduce.comp

vulkan: Fix missing cmake logic for the dot product extension (#12721)

vulkan: set cmake minimum and project name in vulkan-shaders (#12744)

vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency (#12630)
There seems to be a bubble waking up from waitForFences, which costs a few percent performance and also increases variance in performance. This change inserts an "almost_ready" fence when the graph is about 80% complete; we waitForFences for the almost_ready fence and then spin (with _mm_pauses) waiting for the final fence to be signaled.
# Conflicts: ggml/src/ggml-vulkan.cpp

cmake: fix ggml-shaders-gen compiler paths containing spaces (#12747)
Fixes the error for compiler paths with spaces.

Vulkan: Tune the Vulkan mmq int dot shader for performance (#12767)

vulkan: Use unclamped loads for the flash attention mask (#12720)
nem1 must be a multiple of GGML_KQ_MASK_PAD, and GGML_KQ_MASK_PAD is a multiple of the number of rows in the matrix. The KV dim is a multiple of the number of columns for the aligned shader.

vulkan: fix NaN issue in the flash attention shader (#12776)
Use -FLT_MAX/2 rather than -inf as the initial value for computing the maximum.

vulkan: Use fp16 for the flash attention P*V multiplication (#12783)
This is consistent with the ggml-cuda behavior and the mul_mat fallback.

vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory (#12833)
q4_k and q5_k had a lot of redundant global loads where the same 16B of scale information is repeatedly loaded and decoded during each loop iteration. This change restructures the loops to more explicitly iterate over whole blocks in the outer loop (with an unrolled inner loop) and to copy/decode the scale data into shared memory once at the start of each outer loop. The copy is pipelined, so the scale load from global memory is relatively cheap. This improves q4_k/q5_k model prompt processing performance by around 5-7%. I briefly tried applying this to q6_k and q4_0; it didn't help for q6_k and hurt for q4_0. The big "else" path in mul_mm_cm2.comp that had all the clamped/unclamped variants isn't used as often as it originally was (e.g. due to the padded_N change), so I trimmed it down to offset some of the new complexity of the semi-manual loop unrolling.

vulkan: use aligned loads for the flash attention mask (#12853)
Rewrite the stride logic for the mask tensor in the FA shader to force the stride to be aligned, to allow using more efficient loads.

vulkan: enable coopmat2 FA gqa and split_k optimizations more often (#12931)
The grouped query attention optimization doesn't require a power-of-two ratio; the only thing relying on it was the modulo operation written as a bitwise &. split_k need not depend on gqa_ratio; enable it any time there's only one workgroup in the X dimension. The shader gets the split index from the x coord, and multiple workgroups in the X dimension (pre-split) indicate a larger FA operation that wouldn't need splitting.

vulkan: support noncontiguous rms_norm (#13031)
# Conflicts: ggml/src/ggml-vulkan.cpp

vulkan: matmul gcn tuning (#13016)
* Tune matmul for GCN (this one is more power efficient)
* Disable this tune for the proprietary driver
Co-authored-by: 0cc4m <[email protected]>

vulkan: use a uint array index to avoid a glslang bug (#13193)

vulkan: Handle the src1 batch dimension in the non-contiguous mat-vec-mul shader (#13191)

vulkan: Add bfloat16 support (#12554)
* This adds bfloat16 matrix multiply support based on VK_KHR_shader_bfloat16. The extension is required for coopmat multiply support, but matrix-vector multiply trivially promotes bf16 to fp32 and doesn't require the extension. The copy/get_rows shaders also don't require the extension. It's probably possible to fall back to non-coopmat and promote to fp32 when the extension isn't supported, but this change doesn't do that. The coopmat support also requires a glslc that supports the extension, which currently requires a custom build.
* Support bf16 tensors without the bf16 extension or coopmat support: compile a variant of the scalar mul_mm shader that promotes the bf16 values to float, and use that when either the bf16 extension or the coopmat extensions aren't available.
* bfloat16 fixes (really works without bfloat16 support now)
* Fix a spirv-val failure and reenable -O
# Conflicts: ggml/src/vulkan-shaders/test_bfloat16_support.comp

vulkan: Additional type support for unary, binary, and copy (#13266)
Support f16->f32 copy, f16->f16 and f32->f32 unary ops, and all combinations of f16/f32 for src0/src1/dst for add/sub/mul/div.
# Conflicts: ggml/src/ggml-vulkan.cpp

vulkan: Allow up to 4096 elements for mul_mat_id row_ids (#13326)
This assert fired running Qwen_Qwen3-30B-A3B-Q2_K.gguf: GGML_ASSERT(nei0 * nei1 <= 3072). The tensor is 8 x 512; increase the array size to accommodate it.

vulkan: scalar flash attention implementation (#13324)
* Always use fp32 for scalar flash attention
* Use vector loads in the scalar flash attention shader
* Remove the PV matrix; helps with register usage
* Reduce register usage in scalar FA, though perf may be slightly worse
* Load each Q value once; optimize the O reduction; more tuning
* Support q4_0/q8_0 KV in scalar FA
* CI: increase the timeout to accommodate newly supported tests
* For scalar FA, select between 1 and 8 rows
* Avoid using the Float16 capability in scalar FA
# Conflicts: ggml/src/ggml-vulkan.cpp, ggml/src/vulkan-shaders/flash_attn.comp

vulkan: workaround FA compile failures on macOS (#13517)

vulkan: KHR_coopmat flash attention (#13506)
This shader uses coopmat1 to do the Q*K^T multiply. The P*V multiply is more difficult for various reasons, so I haven't done it. Performance for this shader is around 2.5x better than for the scalar shader when doing prompt processing. Some of the benefit may be from other optimizations like staging through shared memory, or splitting by rows.
# Conflicts: ggml/src/vulkan-shaders/flash_attn_cm1.comp

cmake: simplify vulkan shader test logic (#13263)

vulkan: use scalar FA rather than coopmat2 when N==1 (#13554)

Add pipeline_acc_f32

vulkan: move common FA code to flash_attn_base.comp (#13556)
* Move common FA index/stride setup code to flash_attn_base.comp
* Build fix
# Conflicts: ggml/src/vulkan-shaders/flash_attn_base.comp

cmake: use the current build config for vulkan-shaders-gen (#13595)
* Only pass a valid build type to --config

Vulkan: Add f32 accumulator support to quantized mul mat to fix GLM4 32B incoherence (#13607)
# Conflicts: ggml/src/ggml-vulkan.cpp

vulkan: fix warnings (#13626)
* Small fixes; remove an ifdef

use LOG_WARN to replace `std::cerr` (#13657)

vulkan: Disable the coopmat/coopmat2/bfloat extensions if glslc doesn't support them (#13696)

vulkan: support CPY from any type to itself (#13695)
Reuse the f16/f32 copy shaders, and just scale the number of elements according to the type size.

add GGML_LOG_WARN

vulkan: mark IM2COL as supporting non-contig (#13783)
# Conflicts: ggml/src/ggml-vulkan.cpp

vulkan: use timestamp queries for GGML_VULKAN_PERF (#13817)
Also change it to be controlled by an env var rather than a cmake flag.

vulkan : Remove unexpected ; (ggml/1253)

vulkan: fix warnings in the perf logger querypool code (#13937)

ggml-vulkan: add support for op CONV_TRANSPOSE_1D (#13813)
* test-backend-ops: add more sophisticated tests for CONV_TRANSPOSE_1D
* Add a missing barrier to the shader; reduce the number of additional tests to 108
* Fix a typo in a variable name; remove extra whitespace; add int64->int32 casts to prevent possible warnings
* Reduce the problem size in tests to pass with llvmpipe
* Move the supports_op condition from its unintended position
# Conflicts: ggml/src/ggml-vulkan.cpp, ggml/src/vulkan-shaders/conv_transpose_1d.comp

vulkan: Enable the VK_KHR_cooperative_matrix extension for Intel Xe2 GPUs (#14001)
* Allow B580 and U9-288V; experiment with code to detect Xe2; allow coopmat only for Xe2 GPUs
* Fix comment wording; remove an unnecessary driver check

Vulkan: Don't default to a CPU device (like llvmpipe), even if no other device is available, to allow fallback to the CPU backend (#14099)
# Conflicts: ggml/src/ggml-vulkan.cpp

vulkan: force device 0 in CI (#14106)

Add GGML_LOG_INFO

vulkan: Track descriptor pools/sets per-context (#14109)
Use the same descriptor set layout for all pipelines (MAX_PARAMETER_COUNT == 8) and move it to the vk_device. Move all the descriptor pool and set tracking to the context; none of it is specific to pipelines anymore. The context has a single vector of pools, a single vector of sets, a single counter to track requests, and a single counter to track use.

vulkan: Better thread safety for command pools/buffers (#14116)
This change moves the command pool/buffer tracking into a vk_command_pool structure. There are two instances per context (for compute and transfer) and two instances per device for operations that don't go through a context. This should prevent separate contexts from stomping on each other.
# Conflicts: ggml/src/ggml-vulkan.cpp

vulkan: mutex around vkQueueSubmit (#14127) (see the sketch at the end of this log)
This fixes the remaining crash in test-thread-safety on my system.
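The workgroup arithmetic in the #12559 example above works out as follows (a worked sketch; the variable names are mine, not the shader's):

```cpp
#include <cstdio>

// Worked example from #12559:
// dst(128,32,1,1) = FA(q(128,1,32,1), k(128,16640,8,1), v(128,16640,8,1))
int main() {
    const int q_heads   = 32;                   // Q has 32 heads
    const int kv_heads  = 8;                    // K/V have 8 heads (GQA)
    const int gqa_ratio = q_heads / kv_heads;   // 4 Q heads share each KV head

    const int wg_before = q_heads;              // 32 workgroups, 1 result each
    const int wg_after  = q_heads / gqa_ratio;  //  8 workgroups, 4 results each
    printf("before: %d workgroups x 1 result\n", wg_before);
    printf("after:  %d workgroups x %d results\n", wg_after, gqa_ratio);
    return 0;
}
```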
cmake: clean up external project logic for vulkan-shaders-gen (#14179)
* Remove the install step for vulkan-shaders-gen
* Add an install step to normalize msvc with make
* Regenerate modified shaders at build time
# Conflicts: .github/workflows/build.yml

cmake: remove shader-gen step-targets from ggml-vulkan (#14226)
* Remove step-targets from vulkan-shaders-gen
* Unset DESTDIR when building vulkan-shaders-gen

Vulkan: Set the device max size for host memory to avoid an OOM warning and fall back to a CPU buffer (#14249)

Add support for VK_EXT_debug_utils to add labels to Vulkan objects (#13792)
* In step 1, compute pipelines are labeled
* Remove the #ifdef for debug utils and add a queue marker
# Conflicts: ggml/src/ggml-vulkan.cpp

vulkan: update the Windows SDK in CI (#14334)

vulkan: update the Windows SDK in release.yml (#14344)
# Conflicts: .github/workflows/release.yml

cmake: regenerate vulkan shaders when shaders-gen sources change (#14398)
* Add shaders-gen sources as target deps

vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO (#14427)
This setting needs to be passed through to vulkan-shaders-gen.

vulkan: lock accesses of the pinned_memory vector (#14333)

vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (#14378)

Fix cuda build error

test
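The #14127 fix referenced above comes down to serializing submission on a shared queue. A minimal sketch, assuming one mutex per VkQueue (the actual ggml-vulkan data structures differ):

```cpp
#include <mutex>
#include <vulkan/vulkan.h>

// Vulkan forbids concurrent vkQueueSubmit calls on the same VkQueue from
// multiple threads, so every context sharing the queue must take the same
// lock around submission.
struct GuardedQueue {
    VkQueue    queue = VK_NULL_HANDLE;
    std::mutex mtx;                    // hypothetical per-queue lock

    VkResult submit(uint32_t count, const VkSubmitInfo *infos, VkFence fence) {
        std::lock_guard<std::mutex> lock(mtx);
        return vkQueueSubmit(queue, count, infos, fence);
    }
};
```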
Tested Qwen 2.5 7B Q4_K_S and it runs fine, but for the DeepSeek model I was getting "GGGGGGG" output with -mla 1 -amb 512. Probably related to a DeepSeek-specific optimization.
For deepseek often one wants to compile with … I just tried to compile but got an error, might be because I just updated my rig and now seem to have … Details inside:

👈 build command and logs

# attempt to build clean
$ rm -rf ./build
$ cmake -B build -DGGML_VULKAN=ON -DGGML_CUDA=OFF -DGGML_RPC=OFF -DGGML_BLAS=OFF -DGGML_CCACHE=OFF
$ cmake --build build --config Release -j $(nproc)
-- The C compiler identification is GNU 15.1.1
-- The CXX compiler identification is GNU 15.1.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.50.0")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- OpenMP found
-- Using optimized iqk matrix multiplications
-- Enabling IQK Flash Attention kernels
-- Using llamafile
-- Found Vulkan: /lib/libvulkan.so (found version "1.4.313") found components: glslc glslangValidator
-- Vulkan found
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- ARCH_FLAGS = -march=native
-- Configuring done (0.5s)
-- Generating done (0.0s)
-- Build files have been written to: /mnt/astrodata/llm/ik_llama.cpp/build
[ 0%] Generating build details from Git
[ 0%] Building CXX object ggml/src/vulkan-shaders/CMakeFiles/vulkan-shaders-gen.dir/vulkan-shaders-gen.cpp.o
[ 1%] Building C object examples/gguf-hash/CMakeFiles/sha256.dir/deps/sha256/sha256.c.o
[ 3%] Building C object examples/gguf-hash/CMakeFiles/xxhash.dir/deps/xxhash/xxhash.c.o
[ 3%] Building C object examples/gguf-hash/CMakeFiles/sha1.dir/deps/sha1/sha1.c.o
-- Found Git: /usr/bin/git (found version "2.50.0")
In function ‘SHA1Update’,
inlined from ‘SHA1Final’ at /mnt/astrodata/llm/ik_llama.cpp/examples/gguf-hash/deps/sha1/sha1.c:265:5:
/mnt/astrodata/llm/ik_llama.cpp/examples/gguf-hash/deps/sha1/sha1.c:219:13: warning: ‘SHA1Transform’ reading 64 bytes from a region of size 0 [-Wstringop-overread]
219 | SHA1Transform(context->state, &data[i]);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/mnt/astrodata/llm/ik_llama.cpp/examples/gguf-hash/deps/sha1/sha1.c:219:13: note: referencing argument 2 of type ‘const unsigned char[64]’
/mnt/astrodata/llm/ik_llama.cpp/examples/gguf-hash/deps/sha1/sha1.c: In function ‘SHA1Final’:
/mnt/astrodata/llm/ik_llama.cpp/examples/gguf-hash/deps/sha1/sha1.c:54:6: note: in a call to function ‘SHA1Transform’
54 | void SHA1Transform(
| ^~~~~~~~~~~~~
In function ‘SHA1Update’,
inlined from ‘SHA1Final’ at /mnt/astrodata/llm/ik_llama.cpp/examples/gguf-hash/deps/sha1/sha1.c:269:9:
/mnt/astrodata/llm/ik_llama.cpp/examples/gguf-hash/deps/sha1/sha1.c:219:13: warning: ‘SHA1Transform’ reading 64 bytes from a region of size 0 [-Wstringop-overread]
219 | SHA1Transform(context->state, &data[i]);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/mnt/astrodata/llm/ik_llama.cpp/examples/gguf-hash/deps/sha1/sha1.c:219:13: note: referencing argument 2 of type ‘const unsigned char[64]’
/mnt/astrodata/llm/ik_llama.cpp/examples/gguf-hash/deps/sha1/sha1.c: In function ‘SHA1Final’:
/mnt/astrodata/llm/ik_llama.cpp/examples/gguf-hash/deps/sha1/sha1.c:54:6: note: in a call to function ‘SHA1Transform’
54 | void SHA1Transform(
| ^~~~~~~~~~~~~
[ 3%] Built target sha256
[ 3%] Built target sha1
[ 3%] Built target xxhash
[ 3%] Generating build details from Git
-- Found Git: /usr/bin/git (found version "2.50.0")
[ 4%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o
[ 5%] Linking CXX executable ../../../bin/vulkan-shaders-gen
[ 5%] Built target build_info
[ 5%] Built target vulkan-shaders-gen
[ 6%] Generate vulkan shaders
ggml_vulkan: Generating and compiling shaders to SPIR-V
[ 6%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml.c.o
[ 7%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o
[ 8%] Building CXX object ggml/src/CMakeFiles/ggml.dir/ggml-vulkan.cpp.o
[ 8%] Building CXX object ggml/src/CMakeFiles/ggml.dir/ggml-vulkan-shaders.cpp.o
[ 9%] Building CXX object ggml/src/CMakeFiles/ggml.dir/llamafile/sgemm.cpp.o
[ 9%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/iqk_gemm_kquants.cpp.o
[ 10%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o
[ 10%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/iqk_mul_mat.cpp.o
[ 11%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/iqk_flash_attn.cpp.o
[ 11%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/fa/iqk_fa_576_512.cpp.o
[ 11%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/iqk_gemm_iquants.cpp.o
[ 11%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/fa/iqk_fa_256_256.cpp.o
[ 12%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/fa/iqk_fa_192_128.cpp.o
[ 12%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o
[ 14%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/iqk_gemm_ktquants.cpp.o
[ 14%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/fa/iqk_fa_128_128.cpp.o
[ 15%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/fa/iqk_fa_64_64.cpp.o
[ 16%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/iqk_gemm_legacy_quants.cpp.o
[ 16%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/fa/iqk_fa_96_96.cpp.o
[ 17%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/iqk_gemm_floats.cpp.o
[ 17%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/iqk_gemm_1bit.cpp.o
[ 18%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/iqk_gemm_iqk_quants.cpp.o
[ 18%] Building CXX object ggml/src/CMakeFiles/ggml.dir/iqk/iqk_quantize.cpp.o
[ 19%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-aarch64.c.o
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml.c: In function ‘ggml_compute_forward’:
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml.c:19814:5: warning: enumeration value ‘GGML_OP_SIN’ not handled in switch [-Wswitch]
19814 | switch (tensor->op) {
| ^~~~~~
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml.c:19814:5: warning: enumeration value ‘GGML_OP_COS’ not handled in switch [-Wswitch]
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml.c:19814:5: warning: enumeration value ‘GGML_OP_COUNT_EQUAL’ not handled in switch [-Wswitch]
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml.c:19814:5: warning: enumeration value ‘GGML_OP_CONV_2D_DW’ not handled in switch [-Wswitch]
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml.c:19814:5: warning: enumeration value ‘GGML_OP_RWKV_WKV6’ not handled in switch [-Wswitch]
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml.c:19814:5: warning: enumeration value ‘GGML_OP_OPT_STEP_ADAMW’ not handled in switch [-Wswitch]
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml.c: In function ‘ggml_compute_backward’:
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml.c:20395:5: warning: enumeration value ‘GGML_OP_SIN’ not handled in switch [-Wswitch]
20395 | switch (tensor->op) {
| ^~~~~~
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml.c:20395:5: warning: enumeration value ‘GGML_OP_COS’ not handled in switch [-Wswitch]
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml.c:20395:5: warning: enumeration value ‘GGML_OP_COUNT_EQUAL’ not handled in switch [-Wswitch]
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml.c:20395:5: warning: enumeration value ‘GGML_OP_CONV_2D_DW’ not handled in switch [-Wswitch]
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml.c:20395:5: warning: enumeration value ‘GGML_OP_RWKV_WKV6’ not handled in switch [-Wswitch]
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml.c:20395:5: warning: enumeration value ‘GGML_OP_OPT_STEP_ADAMW’ not handled in switch [-Wswitch]
In file included from /usr/include/vulkan/vulkan_hpp_macros.hpp:35,
from /usr/include/vulkan/vulkan.hpp:11,
from /mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml-vulkan.cpp:8:
/usr/include/c++/15.1.1/ciso646:46:4: warning: #warning "<ciso646> is deprecated in C++17, use <version> to detect implementation-specific macros" [-Wcpp]
46 | # warning "<ciso646> is deprecated in C++17, use <version> to detect implementation-specific macros"
| ^~~~~~~
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml-vulkan.cpp: In function ‘void ggml_vk_print_gpu_info(size_t)’:
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml-vulkan.cpp:3541:18: warning: unused variable ‘subgroup_size’ [-Wunused-variable]
3541 | const size_t subgroup_size = (default_subgroup_size != 0) ? default_subgroup_size : subgroup_props.subgroupSize;
| ^~~~~~~~~~~~~
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml-vulkan.cpp:3542:16: warning: unused variable ‘uma’ [-Wunused-variable]
3542 | const bool uma = props2.properties.deviceType == vk::PhysicalDeviceType::eIntegratedGpu;
| ^~~
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml-vulkan.cpp: In function ‘void ggml_vk_instance_init()’:
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml-vulkan.cpp:3644:12: warning: unused variable ‘num_available_devices’ [-Wunused-variable]
3644 | size_t num_available_devices = vk_instance.instance.enumeratePhysicalDevices().size();
| ^~~~~~~~~~~~~~~~~~~~~
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml-backend.c:269:16: warning: no previous prototype for ‘ggml_backend_tensor_memset’ [-Wmissing-prototypes]
269 | GGML_CALL void ggml_backend_tensor_memset(struct ggml_tensor* tensor, uint8_t value, size_t offset, size_t size) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml-backend.c: In function ‘ggml_backend_multi_buffer_context_interface’:
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml-backend.c:1022:34: error: initialization of ‘_Bool (*)(struct ggml_backend_buffer *, const struct ggml_tensor *, struct ggml_tensor *)’ from incompatible pointer type ‘void (*)(struct ggml_backend_buffer *, uint8_t)’ {aka ‘void (*)(struct ggml_backend_buffer *, unsigned char)’} [-Wincompatible-pointer-types]
1022 | /* .clear = */ ggml_backend_multi_buffer_clear,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml-backend.c:1022:34: note: (near initialization for ‘multi_backend_buffer_i.cpy_tensor’)
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml-backend.c:1006:23: note: ‘ggml_backend_multi_buffer_clear’ declared here
1006 | GGML_CALL static void ggml_backend_multi_buffer_clear(ggml_backend_buffer_t buffer, uint8_t value) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml-backend.c:1024:5: warning: missing initializer for field ‘reset’ of ‘struct ggml_backend_buffer_i’ [-Wmissing-field-initializers]
1024 | };
| ^
In file included from /mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml-backend.c:1:
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml-backend-impl.h:50:34: note: ‘reset’ declared here
50 | void (*GGML_CALL reset) (ggml_backend_buffer_t buffer); // reset any internal state due to tensor initialization, such as tensor extras
| ^~~~~
make[2]: *** [ggml/src/CMakeFiles/ggml.dir/build.make:222: ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:2044: ggml/src/CMakeFiles/ggml.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
EDIT: FWIW I just forced that explicitly to
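The hard error above is the classic failure mode of a positionally initialized function-pointer table: the fields of ggml_backend_buffer_i changed upstream, so the /* .clear = */ entry now lands on the cpy_tensor slot and the pointer types no longer match. A reduced sketch with hypothetical types, not the real interface:

```cpp
#include <cstdint>

struct buffer;  // opaque stand-in for ggml_backend_buffer

// Reduced stand-in for a backend buffer vtable whose field order changed.
struct buffer_iface {
    bool (*cpy_tensor)(buffer *, const void *, void *);  // slot added upstream
    void (*clear)(buffer *, uint8_t);
};

static void my_clear(buffer *, uint8_t) {}

// Positional init: my_clear would land on cpy_tensor, giving exactly the
// "incompatible pointer type" diagnostic seen in the log.
// static buffer_iface iface_bad = { my_clear };

// Fix: keep the initializer aligned with the new field order.
static buffer_iface iface_ok = { /* .cpy_tensor = */ nullptr,
                                 /* .clear      = */ my_clear };

int main() { return iface_ok.clear ? 0 : 1; }
```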
ikawrakow left a comment:
Please: no new ops, no new enum values, and no refactoring of the CPU backend. I think the Vulkan back-end can be updated to the latest without using the new back-end formalism in mainline.
Btw, I'm currently working on my M2-Max laptop, and Safari disintegrates into pieces when trying to view the changes in this PR.
Pull again. Fixed it.
Heya, thanks again for digging into this! I have two different rigs on which I'm testing. It now builds on the AMD RX 7900 XTX Ubuntu 24.04 box! So, good news: I was able to compile and run. Something still seems off, though, as the speeds are below mainline. Could be the AMDVLK driver I'm using, as installed from
👈 sweep-bench comparisons
Qwen3-14B-Q4_0 dense no FA
# checkout Merge_mainline_vulkan
$ git rev-parse --short HEAD
495103bd
# build
cmake -B build -DGGML_HIP=OFF -DGGML_HIPBLAS=OFF -DGGML_VULKAN=ON -DGGML_RPC=OFF -DGGML_CCACHE=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j $(nproc)
# test
model=/home/w/projects/models/ubergarm/Qwen3-14B-GGUF/Qwen3-14B-Q4_0.gguf
sudo ./build/bin/llama-sweep-bench \
--model "$model" \
-ctk f16 -ctv f16 \
-c 16896 \
-ngl 99 \
--warmup-batch \
--threads 1
ik_llama.cpp firecoperana/Merge_mainline_vulkan@495103bd FA=0
llama.cpp@27208bf6 FA=0
On my local rig with a CUDA GPU and Arch Linux, installing
$ cmake -B build -DGGML_VULKAN=ON -DGGML_CUDA=OFF -DGGML_RPC=OFF -DGGML_BLAS=OFF -DGGML_CCACHE=ON -DCMAKE_BUILD_TYPE=Debug
$ cmake --build build --config Debug -j $(nproc)
[ 24%] Building CXX object src/CMakeFiles/llama.dir/llama.cpp.o
[ 24%] Building CXX object src/CMakeFiles/llama.dir/unicode.cpp.o
[ 25%] Linking CXX executable ../../bin/llama-gguf
/mnt/astrodata/llm/ik_llama.cpp/src/unicode.cpp: In function ‘std::wstring unicode_wstring_from_utf8(const std::string&)’:
/mnt/astrodata/llm/ik_llama.cpp/src/unicode.cpp:232:10: warning: ‘template<class _Codecvt, class _Elem, class _Wide_alloc, class _Byte_alloc> class std::__cxx11::wstring_convert’ is deprecated [-Wdeprecated-declarations]
232 | std::wstring_convert<std::codecvt_utf8<wchar_t>> conv;
| ^~~~~~~~~~~~~~~
In file included from /usr/include/c++/15.1.1/locale:47,
from /usr/include/c++/15.1.1/regex:43,
from /mnt/astrodata/llm/ik_llama.cpp/src/unicode.cpp:12:
/usr/include/c++/15.1.1/bits/locale_conv.h:262:33: note: declared here
262 | class _GLIBCXX17_DEPRECATED wstring_convert
| ^~~~~~~~~~~~~~~
[ 25%] Linking CXX executable ../../bin/llama-gguf-hash
[ 26%] Linking CXX shared library libllama.so
/usr/bin/ld: ../../ggml/src/libggml.so: undefined reference to `ggml_backend_rpc_init'
collect2: error: ld returned 1 exit status
make[2]: *** [examples/gguf/CMakeFiles/llama-gguf.dir/build.make:102: bin/llama-gguf] Error 1
make[1]: *** [CMakeFiles/Makefile2:3314: examples/gguf/CMakeFiles/llama-gguf.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
/usr/bin/ld: ../../ggml/src/libggml.so: undefined reference to `ggml_backend_rpc_init'
collect2: error: ld returned 1 exit status
make[2]: *** [examples/gguf-hash/CMakeFiles/llama-gguf-hash.dir/build.make:108: bin/llama-gguf-hash] Error 1
make[1]: *** [CMakeFiles/Makefile2:3151: examples/gguf-hash/CMakeFiles/llama-gguf-hash.dir/all] Error 2
[ 26%] Built target llama
make: *** [Makefile:146: all] Error 2
However, if I enable the RPC backend with
model=/mnt/astrodata/llm/models/ubergarm/Qwen3-14B-GGUF/Qwen3-14B-Q4_0.gguf
./build/bin/llama-sweep-bench \
--model "$model" \
-c 16896 \
-ngl 99 \
--warmup-batch \
--threads 1
llm_load_tensors: ggml ctx size = 0.40 MiB
llm_load_tensors: offloading 40 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 41/41 layers to GPU
llm_load_tensors: Vulkan0 buffer size = 7697.69 MiB
llm_load_tensors: CPU buffer size = 417.30 MiB
.........................................................................................
llama_new_context_with_model: n_ctx = 16896
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: mla_attn = 0
llama_new_context_with_model: attn_max_b = 0
llama_new_context_with_model: fused_moe = 0
llama_new_context_with_model: ser = -1, 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: Vulkan0 KV buffer size = 2640.00 MiB
llama_new_context_with_model: KV self size = 2640.00 MiB, K (f16): 1320.00 MiB, V (f16): 1320.00 MiB
llama_new_context_with_model: Vulkan_Host output buffer size = 0.58 MiB
ggml_backend_sched_backend_from_buffer: warning: no backend supports op NONE with a weight with buffer type Vulkan0 used in tensor blk.0.attn_norm.weight, the weight will need to be copied
ggml_backend_sched_backend_from_buffer: warning: no backend supports op NONE with a weight with buffer type Vulkan0 used in tensor blk.0.attn_q_norm.weight, the weight will need to be copied
ggml_backend_sched_backend_from_buffer: warning: no backend supports op NONE with a weight with buffer type Vulkan0 used in tensor blk.0.attn_k_norm.weight, the weight will need to be copied
ggml_backend_sched_backend_from_buffer: warning: no backend supports op NONE with a weight with buffer type Vulkan0 used in tensor blk.0.ffn_norm.weight, the weight will need to be copied
ggml_backend_sched_backend_from_buffer: warning: no backend supports op NONE with a weight with buffer type Vulkan0 used in tensor blk.1.attn_norm.weight, the weight will need to be copied
Lemme know if there is a certain version of the vulkan backend that might work better, or I'm happy to try more iterations! Thanks!
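On the undefined reference to ggml_backend_rpc_init earlier in this comment: that is what the link stage looks like when some call site still references the RPC backend while -DGGML_RPC=OFF leaves it uncompiled. A hedged sketch of the usual guard pattern, with stand-in names since the exact macro spelling in ik_llama.cpp may differ:

```cpp
// Hypothetical stand-ins so the sketch compiles on its own.
struct rpc_backend;
#ifdef GGML_USE_RPC
rpc_backend * ggml_backend_rpc_init_stub(const char * endpoint);  // provided only by RPC builds
#endif

rpc_backend * init_rpc_if_built(const char * endpoint) {
#ifdef GGML_USE_RPC
    // Symbol exists only when the RPC backend was compiled in.
    return ggml_backend_rpc_init_stub(endpoint);
#else
    (void) endpoint;
    return nullptr;  // RPC compiled out: never reference the symbol, no link error
#endif
}

int main() { return init_rpc_if_built("tcp://localhost:50052") == nullptr ? 0 : 1; }
```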
I noticed something odd too and suspect it's related to the Vulkan shaders. When I run llama-server from Visual Studio I can match the performance of mainline, but when I run from the command line I only get 1/3 to 1/2 of the token generation speed. If you have time, you can do some troubleshooting, as I'm not familiar with Vulkan at all. The "warning: no backend supports op NONE with a weight with buffer type Vulkan0 used in tensor blk.0.attn_norm.weight" happens because Vulkan does not support the fused rms norm. It only shows in the debug version.
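For context on that warning path: the scheduler asks each backend whether it supports a given op on the weight's buffer type, and when none does it warns and schedules a copy instead. A generic sketch of that gate, not ggml's actual scheduler code:

```cpp
#include <cstdio>
#include <vector>

// Generic sketch of a scheduler's "who can run this op on this weight?" gate.
struct backend {
    const char * name;
    bool (*supports_op)(int op);
};

static backend * pick_backend(std::vector<backend> & backends, int op, const char * tensor_name) {
    for (backend & b : backends) {
        if (b.supports_op(op)) {
            return &b;  // run in place, no copy needed
        }
    }
    // No backend supports the op on this buffer: warn and fall back to a copy.
    std::fprintf(stderr, "warning: no backend supports this op for %s, the weight will be copied\n",
                 tensor_name);
    return nullptr;
}

static bool vulkan_supports(int op) { return op != 42; }  // 42: stand-in for the fused RMS norm

int main() {
    std::vector<backend> backends = { { "Vulkan0", vulkan_supports } };
    pick_backend(backends, 42, "blk.0.attn_norm.weight");
}
```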
We will worry about the missing fused ops after we get the PR in. There is quite a bit left to do to have the
Tested on my RTX-4080. If I remove the fused ops (
Flash attention seems to be running on the CPU, so performance drops further with that. TG is on par with mainline for short context, but PP is ~3X lower.
If I change the
On mainline I see this:
So, for some reason int dot products and cooperative matrix are not enabled. I guess this may explain the lower performance.
ggml/src/ggml-vulkan.cpp (outdated)
case GGML_OP_FLASH_ATTN_EXT:
    {
        ggml_backend_vk_context * ctx = (ggml_backend_vk_context *)backend->context;
        if (!ctx->device->coopmat2) {
Why do we need this check? I don't have coopmat2 available, but if I comment out this check I get FA enabled, and it gives me a nice boost in performance.
Removed.
OK, I'm learning. Need to build using
Then I need to comment out the check for coopmat2 on line 9476 in
PP is on par with mainline, TG is on par (or even slightly better) for short context, but performance somehow decreases faster with context length, so we end up with ~70% of mainline TG performance at 16k tokens. I'm told in this comment that I need to update my Nvidia driver to 575, which will give me coopmat2 and almost a factor of 2 speedup.
The new commit should remove the need to add these in the cmake command. It also disables the fused ops for now.
Thanks, I was having trouble getting it set up before; the recent commit fixed it right up. First the amazing news: check this out on the AMD RX 7900 XTX, it is up to snuff in early testing: Very nice! I want to try some more models tomorrow, but this is getting exciting! I also got it to build and detect things properly on my local Arch Linux NVIDIA 3090 Ti FE rig; however, when it starts up it throws an error:
ggml_vulkan: 0 = NVIDIA GeForce RTX 3090 Ti (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: NV_coopmat2
/mnt/astrodata/llm/ik_llama.cpp/ggml/src/ggml-vulkan.cpp:2031: GGML_ASSERT((GGML_KQ_MASK_PAD % rows_cols[0]) == 0) failed
Amazing progress in a short time! I tried a couple of small R1-0528 quants, but they're not quite there yet:
👈 AMD 7900 XTX DeepSeek-R1-0528 Log
cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_HIPBLAS=0 -DGGML_VULKAN=1 -DGGML_RPC=0 -DGGML_CCACHE=1 -DGGML_SCHED_MAX_COPIES=1 -DGGML_CUDA_IQK_FORCE_BF16=1
cmake --build build --config Release -j $(nproc)
model=/home/w/projects/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ1_S.gguf
#model=/home/w/projects/models/ubergarm/DeepSeek-R1-0528-GGUF/DeepSeek-R1-0528-IQ1_S_R4-00001-of-00003.gguf
./build/bin/llama-sweep-bench \
--model "$model" \
--ctx-size 4608 \
-fa \
-mla 3 -amb 512 \
-ctk q8_0 \
-ngl 99 \
-ot "blk\.(3|4|5|6|7|8)\.ffn_.*=Vulkan0" \
-ot exps=CPU \
--threads 16 \
--no-mmap
# -fmoe # leave this off for now
ggml_vulkan: 0 = Radeon RX 7900 XTX (AMD open-source driver) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat
llama_new_context_with_model: n_ctx = 4608
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 1
llama_new_context_with_model: mla_attn = 3
llama_new_context_with_model: attn_max_b = 512
llama_new_context_with_model: fused_moe = 0
llama_new_context_with_model: ser = -1, 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 0.025
llama_kv_cache_init: Vulkan0 KV buffer size = 164.06 MiB
llama_new_context_with_model: KV self size = 164.06 MiB, c^KV (q8_0): 164.06 MiB, kv^T: not used
llama_new_context_with_model: Vulkan_Host output buffer size = 0.49 MiB
llama_new_context_with_model: Vulkan0 compute buffer size = 982.00 MiB
llama_new_context_with_model: Vulkan_Host compute buffer size = 480.91 MiB
llama_new_context_with_model: graph nodes = 4641
llama_new_context_with_model: graph splits = 826
main: n_kv_max = 4608, n_batch = 2048, n_ubatch = 512, flash_attn = 1, n_gpu_layers = 99, n_threads = 16, n_threads_batch = 16
| PP | TG | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s |
|-------|--------|--------|----------|----------|----------|----------|
ggml_compute_forward_dup_q: CPU#cache_k_l0 (view)#0 -> cache_k_l0 (view) (copy) is of type f16
ggml_compute_forward_dup_q: CPU#cache_k_l0 (view)#0 -> cache_k_l0 (view) (copy) is of type f16
/home/w/projects/ik_llama.cpp/ggml/src/ggml.c:10783: fatal error
ggml_compute_forward_dup_q: CPU#cache_k_l0 (view)#0 -> cache_k_l0 (view) (copy) is of type f16
fatal error
I don't quite understand why
Of course not. The Vulkan backend does not support DeepSeek flash attention, so no, no |
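On the GGML_ASSERT((GGML_KQ_MASK_PAD % rows_cols[0]) == 0) failure reported above: flash-attention pipelines walk the KQ mask in fixed-height row tiles, so the mask padding has to divide evenly by the tile height. A toy illustration of the invariant, with hypothetical numbers:

```cpp
#include <cstdio>

int main() {
    const int mask_pad  = 32;  // hypothetical stand-in for GGML_KQ_MASK_PAD
    const int tile_rows = 64;  // hypothetical tile height chosen by the FA pipeline

    if (mask_pad % tile_rows != 0) {
        // This is the shape of the GGML_ASSERT that fired in the report.
        std::fprintf(stderr, "mask padding %d is not a multiple of tile rows %d\n",
                     mask_pad, tile_rows);
        return 1;
    }
    return 0;
}
```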


Vulkan Optimizations and Fixes (#8959)
Optimize Vulkan REPEAT performance
.....................................................................................
vulkan: lock accesses of pinned_memory vector (#14333)
vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (#14378)
Fix cuda build error
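On the "lock accesses of pinned_memory vector" commit above: the fix pattern is a mutex serializing every read and write of a vector shared across threads, so concurrent submissions cannot race pinned-buffer registration. A generic sketch under that assumption, with hypothetical names:

```cpp
#include <cstddef>
#include <mutex>
#include <utility>
#include <vector>

// Hypothetical registry of pinned host allocations shared between threads.
static std::mutex pinned_mutex;
static std::vector<std::pair<void *, size_t>> pinned_memory;

void register_pinned(void * ptr, size_t size) {
    std::lock_guard<std::mutex> guard(pinned_mutex);  // serialize writers
    pinned_memory.emplace_back(ptr, size);
}

bool is_pinned(const void * ptr) {
    std::lock_guard<std::mutex> guard(pinned_mutex);  // and readers
    for (const auto & entry : pinned_memory) {
        const char * base = static_cast<const char *>(entry.first);
        if (ptr >= base && ptr < base + entry.second) {
            return true;
        }
    }
    return false;
}

int main() {
    static char buf[256];
    register_pinned(buf, sizeof buf);
    return is_pinned(buf + 16) ? 0 : 1;
}
```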