
Conversation

@sirus20x6 (Contributor)

Add a SIMD path to ggml_vec_set_f32: leverage the existing GGML_F32_VEC helpers to broadcast the fill value across SIMD registers and store it in vector-sized chunks, while retaining the scalar tail for leftover elements and non-SIMD builds.
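
A sketch of the shape of the change (not the exact diff; it reuses the existing GGML_F32_VEC helper macros, following the same pattern as the add1 snippet quoted in the review below):

inline static void ggml_vec_set_f32(const int n, float * x, const float v) {
#if defined(GGML_SIMD)
    // round n down to a whole number of SIMD steps
    const int np = (n & ~(GGML_F32_STEP - 1));

    // broadcast the fill value across a SIMD register once
    GGML_F32_VEC vx = GGML_F32_VEC_SET1(v);

    // store in vector-sized chunks
    for (int i = 0; i < np; i += GGML_F32_STEP) {
        for (int j = 0; j < GGML_F32_ARR; ++j) {
            GGML_F32_VEC_STORE(x + i + j*GGML_F32_EPR, vx);
        }
    }

    // scalar tail for the leftover elements
    for (int i = np; i < n; ++i) {
        x[i] = v;
    }
#else
    // non-SIMD builds keep the plain scalar loop
    for (int i = 0; i < n; ++i) {
        x[i] = v;
    }
#endif
}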
github-actions bot added the ggml label (changes relating to the ggml tensor library for machine learning) on Oct 11, 2025
@sirus20x6 sirus20x6 marked this pull request as draft October 11, 2025 17:18
@sirus20x6 sirus20x6 marked this pull request as ready for review October 11, 2025 17:24
@sirus20x6 (Contributor, Author)

Microbenchmarks sometimes show very little change, but sometimes a nice bump:

Baseline (pre-SIMD helpers)

add1
  n=128     throughput=90.09 GB/s
  n=1024    throughput=117.64 GB/s
  n=8192    throughput=55.15 GB/s
  n=65536   throughput=57.87 GB/s
  n=524288  throughput=51.29 GB/s
acc
  n=128     throughput=78.97 GB/s
  n=1024    throughput=118.10 GB/s
  n=8192    throughput=57.20 GB/s
  n=65536   throughput=57.96 GB/s
  n=524288  throughput=48.29 GB/s
acc1
  n=128     throughput=88.28 GB/s
  n=1024    throughput=114.89 GB/s
  n=8192    throughput=131.41 GB/s
  n=65536   throughput=114.31 GB/s
  n=524288  throughput=87.01 GB/s
mul
  n=128     throughput=87.03 GB/s
  n=1024    throughput=61.81 GB/s
  n=8192    throughput=40.49 GB/s
  n=65536   throughput=34.54 GB/s
  n=524288  throughput=31.66 GB/s

Current branch (with SIMD helpers)

add1
  n=128     throughput=100.72 GB/s
  n=1024    throughput=141.98 GB/s
  n=8192    throughput=55.42 GB/s
  n=65536   throughput=59.20 GB/s
  n=524288  throughput=51.91 GB/s
acc
  n=128     throughput=80.21 GB/s
  n=1024    throughput=134.74 GB/s
  n=8192    throughput=68.63 GB/s
  n=65536   throughput=56.20 GB/s
  n=524288  throughput=48.49 GB/s
acc1
  n=128     throughput=89.30 GB/s
  n=1024    throughput=142.30 GB/s
  n=8192    throughput=142.24 GB/s
  n=65536   throughput=118.68 GB/s
  n=524288  throughput=90.02 GB/s
mul
  n=128     throughput=86.29 GB/s
  n=1024    throughput=95.78 GB/s
  n=8192    throughput=42.58 GB/s
  n=65536   throughput=32.26 GB/s
  n=524288  throughput=31.22 GB/s

Comment on lines 80 to 102
inline static void ggml_vec_add1_f32(const int n, float * z, const float * x, const float v) {
#if defined(GGML_SIMD)
    const int np = (n & ~(GGML_F32_STEP - 1));

    // broadcast v across a SIMD register once
    GGML_F32_VEC vv = GGML_F32_VEC_SET1(v);

    for (int i = 0; i < np; i += GGML_F32_STEP) {
        for (int j = 0; j < GGML_F32_ARR; ++j) {
            GGML_F32_VEC ax = GGML_F32_VEC_LOAD(x + i + j*GGML_F32_EPR);
            GGML_F32_VEC az = GGML_F32_VEC_ADD(ax, vv);
            GGML_F32_VEC_STORE(z + i + j*GGML_F32_EPR, az);
        }
    }

    // scalar tail for the leftover elements
    for (int i = np; i < n; ++i) {
        z[i] = x[i] + v;
    }
#else
    for (int i = 0; i < n; ++i) {
        z[i] = x[i] + v;
    }
#endif
}
Member

We should make the code consistent about how it handles the leftovers. Here we duplicate the scalar code, while in ggml_vec_add_f32 above we use a common loop iterator. I think we should do the same as in ggml_vec_add_f32.
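
Something like this common-iterator shape (a sketch of the ggml_vec_add_f32 style, not the exact upstream code): declaring the index outside the #if lets one scalar loop cover both the SIMD leftovers and the non-SIMD build, with no duplicated tail:

inline static void ggml_vec_add1_f32(const int n, float * z, const float * x, const float v) {
    int i = 0;
#if defined(GGML_SIMD)
    const int np = (n & ~(GGML_F32_STEP - 1));

    GGML_F32_VEC vv = GGML_F32_VEC_SET1(v);

    for (; i < np; i += GGML_F32_STEP) {
        for (int j = 0; j < GGML_F32_ARR; ++j) {
            GGML_F32_VEC ax = GGML_F32_VEC_LOAD(x + i + j*GGML_F32_EPR);
            GGML_F32_VEC az = GGML_F32_VEC_ADD(ax, vv);
            GGML_F32_VEC_STORE(z + i + j*GGML_F32_EPR, az);
        }
    }
#endif
    // one scalar loop handles both the SIMD leftovers and non-SIMD builds
    for (; i < n; ++i) {
        z[i] = x[i] + v;
    }
}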

@sirus20x6 (Contributor, Author)

Sure thing. The latest push brings the SIMD/scalar functions in line with each other.

@CISC (Collaborator) commented Oct 19, 2025

Looks ready to merge, forgotten?

@slaren merged commit 19a5a3e into ggml-org:master on Oct 22, 2025
64 of 70 checks passed
@CISC (Collaborator) commented Oct 22, 2025

Looks like we got some issues due to this PR being somewhat out of sync with recent CI additions:
https://github.com/ggml-org/llama.cpp/actions/runs/18712897472/job/53365350050

Edit: this one looks like it was already failing before merge:
https://github.com/ggml-org/llama.cpp/actions/runs/18712897511/job/53365351356

@slaren (Member) commented Oct 22, 2025

I am not sure what the issue is with the failing systems, so I have reverted this. Feel free to resubmit this change once the issue is fixed.

FMayran pushed a commit to FMayran/llama.cpp that referenced this pull request Oct 23, 2025
…ec_set_f32 for faster fills (ggml-org#16522)

* Leverage the existing GGML_F32_VEC helpers to broadcast the fill value across SIMD registers and store in vector-sized chunks, while retaining the scalar tail for leftover elements and non-SIMD builds.

* Vectorize additional f32 helper loops

* Normalize f32 helper tails for ggml vec ops

---------

Co-authored-by: Aaron <[email protected]>
pwilkin pushed a commit to pwilkin/llama.cpp that referenced this pull request Oct 23, 2025
…ec_set_f32 for faster fills (ggml-org#16522)

* Leverage the existing GGML_F32_VEC helpers to broadcast the fill value across SIMD registers and store in vector-sized chunks, while retaining the scalar tail for leftover elements and non-SIMD builds.

* Vectorize additional f32 helper loops

* Normalize f32 helper tails for ggml vec ops

---------

Co-authored-by: Aaron <[email protected]>