
Conversation

@am17an (Collaborator) commented on Oct 24, 2025

Use CUB for argsort when ncols > 1024 or when not enough shared memory is available.
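
For context, this is roughly how CUB removes the row-length limit: pair each value with its column index and run a segmented radix sort with one segment per row, instead of a single-block shared-memory sort. The sketch below is illustrative only and is not the code added in this PR; the helper names (`init_indices`, `argsort_f32_i32_cub`) and the plain `cudaMalloc` buffer management are assumptions, and it shows the ascending variant with the usual two-call CUB pattern for sizing temporary storage.

```cuda
// Minimal sketch (not the PR's actual kernel): row-wise argsort of an
// [nrows x ncols] float matrix via cub::DeviceSegmentedRadixSort.
#include <cub/cub.cuh>
#include <vector>

// Each output element starts as its column index; sorting the (value, index)
// pairs within each row then yields that row's argsort permutation.
static __global__ void init_indices(int * idx, const int ncols, const int total) {
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < total) {
        idx[i] = i % ncols;
    }
}

static void argsort_f32_i32_cub(const float * keys_src, int * idx_dst,
                                const int ncols, const int nrows, cudaStream_t stream) {
    const int total = ncols * nrows;

    // scratch buffers (a real implementation would use a pool allocator)
    float * keys_tmp = nullptr;
    int   * idx_tmp  = nullptr;
    int   * offsets  = nullptr;  // segment boundaries: row r spans [r*ncols, (r+1)*ncols)
    cudaMalloc(&keys_tmp, total * sizeof(float));
    cudaMalloc(&idx_tmp,  total * sizeof(int));
    cudaMalloc(&offsets, (nrows + 1) * sizeof(int));

    std::vector<int> h_offsets(nrows + 1);
    for (int r = 0; r <= nrows; ++r) {
        h_offsets[r] = r * ncols;
    }
    cudaMemcpy(offsets, h_offsets.data(), (nrows + 1) * sizeof(int), cudaMemcpyHostToDevice);

    init_indices<<<(total + 255) / 256, 256, 0, stream>>>(idx_tmp, ncols, total);

    // first call only queries the required temporary storage size
    void * tmp = nullptr;
    size_t tmp_size = 0;
    cub::DeviceSegmentedRadixSort::SortPairs(tmp, tmp_size,
        keys_src, keys_tmp, idx_tmp, idx_dst,
        total, nrows, offsets, offsets + 1, 0, sizeof(float) * 8, stream);
    cudaMalloc(&tmp, tmp_size);

    // second call performs the ascending sort per row
    cub::DeviceSegmentedRadixSort::SortPairs(tmp, tmp_size,
        keys_src, keys_tmp, idx_tmp, idx_dst,
        total, nrows, offsets, offsets + 1, 0, sizeof(float) * 8, stream);

    cudaFree(tmp);
    cudaFree(keys_tmp);
    cudaFree(idx_tmp);
    cudaFree(offsets);
}
```

The same two calls with cub::DeviceSegmentedRadixSort::SortPairsDescending would give the descending order that top_k-style selection needs.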

@am17an requested a review from slaren as a code owner on October 24, 2025 at 10:26
@github-actions bot added the Nvidia GPU (Issues specific to Nvidia GPUs) and ggml (changes relating to the ggml tensor library for machine learning) labels on Oct 24, 2025
@am17an (Collaborator, Author) commented on Oct 24, 2025

cc @CISC: I did try BailingMoeV2 with this PR, but since it doesn't use argsort at the moment it didn't show any speed-up; it might be worth trying again.

@am17an merged commit 0bcb40b into ggml-org:master on Oct 24, 2025
72 checks passed
@am17an deleted the cuda_cub_argsort branch on October 24, 2025 at 12:46
@CISC (Collaborator) commented on Oct 25, 2025

> I did try BailingMoeV2 with this PR, but since it doesn't use argsort at the moment it didn't show any speed-up; it might be worth trying again.

It does use argsort (through top_k); however, the first implementation was incorrect. The implementation that was merged has much smaller tensors that fit just fine within the 1024 limit.

Edit: See ggml-org/ggml#1367, though; this will be useful for others.

wqerrewetw added a commit to wqerrewetw/llama.cpp that referenced this pull request on Oct 25, 2025
* model-conversion : add trust_remote_code for orig model run [no ci] (ggml-org#16751)

This commit adds the trust_remote_code=True argument when loading models
using AutoConfig, AutoTokenizer, and AutoModelForCausalLM in the
run-original-model script.

The motivation for this is that some models require custom code to be
loaded properly, and setting trust_remote_code=True avoids a prompt
asking for user confirmation:
```console
(venv) $ make causal-run-original-model
The repository /path/to/model contains custom code which must be
executed to correctly load the model. You can inspect the repository
content at /path/to/model.

Do you wish to run the custom code? [y/N] N
```

Having this as the default seems like a safe choice, as we have to clone
or download the models we convert and would expect to run any custom
code they include.

* webui: support q URL parameter (ggml-org#16728)

* webui: support q URL parameter

Fixes ggml-org#16722
I’ve checked that it works with Firefox’s AI tools

* webui: apply suggestions from code review

Co-authored-by: Aleksander Grygier <[email protected]>

* chore: update webui static build

---------

Co-authored-by: Aleksander Grygier <[email protected]>

* CUDA: use CUB for arbitrary size argsort (ggml-org#16754)

* ggml: fix CUDA grid launch condition for large block_nums.y in binbcast (ggml-org#16742)

* Fix CUDA grid launch condition for large block_nums.y

* add backend ops test

* reduce test repetitions

* convert : avoid dequantizing mxfp4 for GPT-OSS (ggml-org#16756)

* vulkan: Optimize SSM_SCAN (ggml-org#16645)

* vulkan: delete dead code (ggml-org#16732)

ggml_vk_create_buffer_temp is not used anywhere, and it is the only
caller for ggml_vk_pool_malloc.

Signed-off-by: Giuseppe Scrivano <[email protected]>

* model : set res->t_embd in PLaMo2 models (ggml-org#16766)

---------

Signed-off-by: Giuseppe Scrivano <[email protected]>
Co-authored-by: Daniel Bevenius <[email protected]>
Co-authored-by: Florian Badie <[email protected]>
Co-authored-by: Aleksander Grygier <[email protected]>
Co-authored-by: Aman Gupta <[email protected]>
Co-authored-by: leejet <[email protected]>
Co-authored-by: compilade <[email protected]>
Co-authored-by: Jeff Bolz <[email protected]>
Co-authored-by: Giuseppe Scrivano <[email protected]>
Co-authored-by: Shunta Saito <[email protected]>
pwilkin pushed a commit to pwilkin/llama.cpp that referenced this pull request on Oct 25, 2025