CUDA: use CUB for arbitrary size argsort #16754
Conversation
cc: @CISC, I did try BailingMoeV2 with this PR, but since it doesn't use argsort at the moment it didn't show any speed-up; might be worth trying again.
~~It does use it.~~ Edit: See ggml-org/ggml#1367 though; this will be useful for others.
* model-conversion : add trust_remote_code for orig model run [no ci] (ggml-org#16751)

  This commit adds the `trust_remote_code=True` argument when loading models using AutoConfig, AutoTokenizer, and AutoModelForCausalLM in the run-original-model script. The motivation is that some models require custom code to be loaded properly, and setting `trust_remote_code=True` avoids a prompt asking for user confirmation:

  ```console
  (venv) $ make causal-run-original-model
  The repository /path/to/model contains custom code which must be executed to correctly load the model. You can inspect the repository content at /path/to/model. Do you wish to run the custom code? [y/N] N
  ```

  Having this as the default seems like a safe choice, as we have to clone or download the models we convert and would be expecting to run any custom code they have.

* webui: support q URL parameter (ggml-org#16728)
  * webui: support q URL parameter. Fixes ggml-org#16722. I've checked that it works with Firefox's AI tools.
  * webui: apply suggestions from code review. Co-authored-by: Aleksander Grygier <[email protected]>
  * chore: update webui static build
* CUDA: use CUB for arbitrary size argsort (ggml-org#16754)
* ggml: fix CUDA grid launch condition for large block_nums.y in binbcast (ggml-org#16742)
  * Fix CUDA grid launch condition for large block_nums.y
  * add backend ops test
  * reduce test repetitions
* convert : avoid dequantizing mxfp4 for GPT-OSS (ggml-org#16756)
* vulkan: Optimize SSM_SCAN (ggml-org#16645)
* vulkan: delete dead code (ggml-org#16732)

  ggml_vk_create_buffer_temp is not used anywhere, and it is the only caller of ggml_vk_pool_malloc.

  Signed-off-by: Giuseppe Scrivano <[email protected]>

* model : set res->t_embd in PLaMo2 models (ggml-org#16766)

---------

Signed-off-by: Giuseppe Scrivano <[email protected]>
Co-authored-by: Daniel Bevenius <[email protected]>
Co-authored-by: Florian Badie <[email protected]>
Co-authored-by: Aleksander Grygier <[email protected]>
Co-authored-by: Aman Gupta <[email protected]>
Co-authored-by: leejet <[email protected]>
Co-authored-by: compilade <[email protected]>
Co-authored-by: Jeff Bolz <[email protected]>
Co-authored-by: Giuseppe Scrivano <[email protected]>
Co-authored-by: Shunta Saito <[email protected]>
Use CUB for argsort when ncols > 1024 or when not enough shared memory is available.