
UPSTREAM PR #20230: ggml-webgpu: Add supports for GGML_OP_REPEAT #1240

Open

loci-dev wants to merge 3 commits into main from loci/pr-20230-repeat-webgpu

Conversation

@loci-dev

Note

Source pull request: ggml-org/llama.cpp#20230

This PR adds support for GGML_OP_REPEAT to the WebGPU backend. The status of REPEAT for WebGPU in docs/ops.md is changed to "partially supported" because WebGPU does not appear to support i16.

This PR also includes formatting changes (clang-format) for the modified files. Since ggml-org/llama.cpp#20173 touches the same code, this PR may need to be merged after that one.

@loci-review

loci-review bot commented Mar 11, 2026

No summary available at this time. Visit Loci Inspector to review detailed analysis.

loci-dev force-pushed the main branch 10 times, most recently from 5ac00d6 to 998dd7a on March 18, 2026 02:17
loci-dev force-pushed the main branch 4 times, most recently from 945fa3a to 0e8e1d6 on March 20, 2026 02:16