
Conversation


@hariharans29 hariharans29 commented Dec 8, 2025

Description

The Silu activation is the same as QuickGelu with the scaling factor (alpha) set to 1. In customer models containing Silu, the graph optimizer suite correctly fuses the relevant nodes into a QuickGelu with alpha = 1. This change optimizes the QuickGelu implementation for alpha = 1 by skipping the scaling pass and vectorizing the subsequent element-wise multiplication.
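The identity the fusion relies on can be sketched in Python (a minimal numeric illustration, not the actual C++ kernel): QuickGelu computes x * sigmoid(alpha * x), which reduces to Silu, x * sigmoid(x), when alpha = 1.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def quick_gelu(x, alpha):
    # QuickGelu: x * sigmoid(alpha * x)
    return x * sigmoid(alpha * x)

def silu(x):
    # Silu (a.k.a. Swish-1): x * sigmoid(x)
    return x * sigmoid(x)

# The two agree for every x when alpha == 1.
for x in [-3.0, -0.5, 0.0, 0.5, 3.0]:
    assert abs(quick_gelu(x, 1.0) - silu(x)) < 1e-12
```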

Tests:
There are already tests for QuickGelu with alpha = 1, so no new tests are necessary (see the existing test case annotated `// Silu = x*sigmoid(x), i.e., alpha = 1.0f.`).

Performance improvements measured:
Gives about a 2.5% throughput boost for a customer model that contains many Silu activations.

Motivation and Context

Some low-hanging-fruit performance improvements that deliver instant, easy wins.

@hariharans29 hariharans29 changed the title WIP: [MLAS] [DO NOT REVIEW] Implement vectorized Silu operation WIP: [MLAS] [DO NOT REVIEW] Implement vectorized fused Silu operation Dec 9, 2025
@hariharans29 hariharans29 changed the title WIP: [MLAS] [DO NOT REVIEW] Implement vectorized fused Silu operation WIP: [MLAS] Improve performance of Silu activation path within the QuickGelu CPU kernel Dec 10, 2025

@github-actions github-actions bot left a comment


You can commit the suggested changes from lintrunner.

@hariharans29 hariharans29 changed the title WIP: [MLAS] Improve performance of Silu activation path within the QuickGelu CPU kernel WIP: [MLAS/CPU EP] Improve performance of Silu activation path within the QuickGelu CPU kernel Dec 10, 2025
Add comment for potential future work
Copilot AI left a comment


Pull request overview

This PR optimizes the CPU implementation of the QuickGelu activation function for the special case where alpha=1.0 (equivalent to the Silu activation). The optimization avoids unnecessary scaling operations and adds a vectorized element-wise multiplication function to improve performance.

  • Adds vectorized MlasEltwiseMul function for efficient element-wise multiplication
  • Optimizes QuickGelu computation by skipping scaling when alpha=1.0
  • Replaces scalar multiplication loop with vectorized MlasEltwiseMul call

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

| File | Description |
|---|---|
| onnxruntime/core/mlas/lib/eltwise.cpp | Implements the vectorized element-wise multiplication function MlasEltwiseMul<float>, following the pattern of the existing MlasEltwiseAdd |
| onnxruntime/core/mlas/inc/mlas.h | Adds the template declaration for the MlasEltwiseMul function |
| onnxruntime/contrib_ops/cpu/activations.h | Modifies the QuickGelu kernel to branch on the alpha value, avoiding scaling for alpha=1.0 and using vectorized multiplication for the final step |


@hariharans29 hariharans29 changed the title WIP: [MLAS/CPU EP] Improve performance of Silu activation path within the QuickGelu CPU kernel [MLAS/CPU EP] Improve performance of Silu activation path within the QuickGelu CPU kernel Dec 11, 2025
hariharans29 and others added 3 commits December 10, 2025 18:26
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Copilot AI left a comment


Pull request overview

Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.



Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@tianleiwu
Contributor

The PR optimizes the QuickGelu activation in the CPU Execution Provider, specifically improving the performance for the SiLU (Sigmoid Linear Unit) case where alpha_ is 1.0. It also introduces a new MLAS function MlasEltwiseMul for vectorized element-wise multiplication.

Key Changes

1. QuickGelu Optimization (activations.h)

  • Condition for alpha_: Added a check for alpha_ == 1.0f.
    • Previous behavior: Always multiplied input by alpha_ in a loop, then computed Logistic, then element-wise multiplied input and output.
    • New behavior:
      • If alpha_ != 1.0f: Maintains the original logic (scaling input before Logistic).
      • If alpha_ == 1.0f (SiLU): Skips the scaling loop and directly computes Logistic on the input.
  • Vectorization: Replaced the final scalar loop p_output[i] = p_input[i] * p_output[i] with a call to the new MlasEltwiseMul, enabling SIMD acceleration.
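The branch described above can be modeled with a small Python sketch of the kernel's data flow (the function names here are illustrative; the real code operates on float buffers via MLAS helpers). The alpha == 1.0 path drops one full read/write pass over the input buffer.

```python
import math

def logistic(xs):
    # Stands in for the Logistic computation applied over a buffer.
    return [1.0 / (1.0 + math.exp(-v)) for v in xs]

def quick_gelu_old(xs, alpha):
    scaled = [alpha * v for v in xs]          # extra pass over the buffer
    sig = logistic(scaled)
    return [x * s for x, s in zip(xs, sig)]   # scalar multiply loop

def quick_gelu_new(xs, alpha):
    if alpha == 1.0:                          # Silu fast path: no scaling pass
        sig = logistic(xs)
    else:
        sig = logistic([alpha * v for v in xs])
    return [x * s for x, s in zip(xs, sig)]   # vectorized in C++ via MlasEltwiseMul
```

Both paths are mathematically identical; the win comes from the removed memory traffic in the alpha == 1.0 case plus SIMD in the final multiply.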

2. New MLAS Function: MlasEltwiseMul (mlas.h, eltwise.cpp)

  • Interface: void MlasEltwiseMul(const T* left, const T* right, T* output, size_t N)
  • Implementation (Float):
    • Uses MLAS_FLOAT32X4 intrinsics to process 4 elements at a time.
    • Includes a scalar fallback for remaining elements.
    • This allows for efficient vectorization (SSE/AVX depending on the platform) of the final multiplication step in QuickGelu.
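The vectorization pattern described here (a 4-lane main loop plus a scalar tail) can be sketched in Python; the lane width of 4 mirrors MLAS_FLOAT32X4, but the names are illustrative rather than the actual MLAS code.

```python
def eltwise_mul(left, right):
    n = len(left)
    out = [0.0] * n
    i = 0
    while n - i >= 4:            # main loop: 4 lanes at a time (MLAS_FLOAT32X4)
        for lane in range(4):
            out[i + lane] = left[i + lane] * right[i + lane]
        i += 4
    while i < n:                 # scalar fallback for the remaining tail
        out[i] = left[i] * right[i]
        i += 1
    return out

# Length deliberately not a multiple of 4, so the tail path is exercised.
import random
random.seed(0)
left = [random.uniform(-2.0, 2.0) for _ in range(37)]
right = [random.uniform(-2.0, 2.0) for _ in range(37)]
assert eltwise_mul(left, right) == [l * r for l, r in zip(left, right)]
```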

3. Testing (test_eltwise.cpp)

  • Added MlasEltwiseMulTest to verify the correctness of the new MlasEltwiseMul function.
  • Tests with random inputs and compares against a reference scalar implementation.
  • Covers both the QuickGelu use case (implicitly) and general element-wise multiplication.

Code Review Comments

  • Performance: The changes correctly identify and optimize the SiLU case by removing one pass of memory access (reading/writing for alpha_ scaling). The introduction of MlasEltwiseMul further improves performance by vectorizing the final multiplication.
  • Correctness: The logic for splitting the alpha_ cases preserves the mathematical equivalence. The vectorization logic in eltwise.cpp correctly handles the vector-width stride and the remaining tail elements.
  • Maintainability: The changes are localized and clean. The new MLAS function is reusable for other operators that might need element-wise multiplication.

Conclusion

The PR looks solid. It effectively optimizes a common activation function (SiLU) and adds a useful primitive to MLAS. The added tests ensure correctness.

Status: Approved (Pending CI results)

@hariharans29 hariharans29 enabled auto-merge (squash) January 15, 2026 00:41
auto-merge was automatically disabled January 15, 2026 17:32

Pull request was closed

@hariharans29 hariharans29 reopened this Jan 15, 2026
@hariharans29 hariharans29 enabled auto-merge (squash) January 15, 2026 18:17
@hariharans29 hariharans29 merged commit 2d2ba6b into main Jan 15, 2026
159 of 166 checks passed
@hariharans29 hariharans29 deleted the hari/mlas_silu branch January 15, 2026 20:16
alex-spacemit pushed a commit to spacemit-com/onnxruntime that referenced this pull request Jan 20, 2026
…QuickGelu CPU kernel (microsoft#26753)

tianleiwu pushed a commit that referenced this pull request Jan 21, 2026
…QuickGelu CPU kernel (#26753)

(cherry picked from commit 2d2ba6b)
tianleiwu added a commit that referenced this pull request Jan 23, 2026
### Description
This PR cherry-picks the following changes for the 1.24.0 release.

### Cherry-picked Commits
| Commit | Commit Title | Author |
|---|---|---|
| 744e7fe | Add type definitions, registration, utilities for
INT2/UINT2 support (#26824) | vraspar |
| 530a1fb | [QNN EP] Add BFloat16 dtype support in QNN EP (#26987) |
tirupath-qti |
| 8e050d1 | Implement new experimental lookup-based matrix
multiplication method(TMAC) (#26695) | vraspar |
| 2d2ba6b | [MLAS/CPU EP] Improve performance of Silu activation path
within the QuickGelu CPU kernel (#26753) | Hariharan Seshadri |
| 1c02b79 | [QNN EP] Add support for handling 0-dimension for Concat
Op (#27000) | Ashwath Shankarnarayan |
| cc2b01b | Fix ClipQuantFusion crash when Clip has multiple input
edges (#27016) | Edward Chen |
| bbd3850 | [QNN EP] Support quantized BatchNorm with per-channel DQ
params on QNN HTP (#26959) | qti-yuduo |
| d8f0318 | Add API to get ep graph partitioning info (#26781) |
Adrian Lizarraga |
| b912b18 | [OVEP] OpenVINO EP Features and bug-fixes for ORT-1.24 -
Follow up (#27007) | Preetha Veeramalai |
| ba11af4 | [QNN-EP] Add MatMulNBits translation for GPU (#26340) |
quic-tirupath |
| c03c419 | [MLAS/NEON] Add dedicated kernel for depthwise
convolution for ARM64 using NEON intrinsics (#26688) | Hariharan
Seshadri |
| e7dfd69 | [QNN-EP] Support alternate Layernorm fusion pattern in
QNN preprocess (#26060) | qti-mattsinc |
| 4013dc1 | Implement multithreading in qgemm_kleidi (#26301) |
Melike Kaptan |
| 9f06181 | [CXX] Enable users to specify custom OrtSyncStream via
RunOptions (#26988) | Dmitri Smirnov |
| cfccd64 | Added support for QMX kernels in MLAS (#26849) |
qti-vaiskv |
| 29d9b2f | Tweak external resource importer handle structs (#27040)
| Scott McKay |
| 9d108d0 | [QNN EP] Add QuickGELU operator support for QNN provider
(#27034) | tirupath-qti |
| b35688f | Add INT2 and UINT2 support for QDQ, transpose and cast
ops (#27022) | vraspar |
| 6d34aba | Introducing BF16 Pointwise NCHWc Convolution for Arm64
(#26838) | Rohanjames1997 |
| 36017ad | [EP ABI] Add CreateCustomOpDomains() API for plugin EP to
register custom ops (#27050) | Chi Lo |
| 50a03e4 | Add a new pipeline for CUDA 13 nuget builds (#27023) |
eserscor |
| a0d4439 | [EP ABI] Update Graph_GetGraphView() implementation
(#26711) | Chi Lo |
| 34bb209 | [webgpu] Fix a bug for im2col (#27069) | Wenqin Yang |
| 46e8d45 | [QNN EP] Add FusedMatMul operator support (#27044) |
tirupath-qti |
| 5e7e7a3 | Disable Float32_2Bits_Asymmetric_256x256 test (#27046) |
vraspar |
| 39f966e | Fix Doxygen documentation build error in
onnxruntime_c_api.h (#27083) | Nick Eubanks |
| 8a7a797 | Print tensor for new packed type of 2 bits (#27064) |
Tianlei Wu |
| 01f40e6 | Fix GPU JAR testing on Linux (#27011) | eserscor |
| b6ed7f3 | Fix warning around ununsed code in QNN Android Emulator
builds by clang (#27026) | Hariharan Seshadri |
| d7daa45 | Raise the timeout for the ios simulator job (#27045) |
Hariharan Seshadri |
| 7e1d818 | upgrade emsdk to 4.0.23 (#27029) | Yulong Wang |
| 347b990 | Fix failing mainline build on Arm64 linux (#27101) |
Rohanjames1997 |
| f481b17 | Add dedicated API to support extracting compatibility
string from model metadata (#27015) | adrastogi |

---------

Signed-off-by: Liqun Fu <liqun.fu@microsoft.com>
Signed-off-by: bfilipek <bartlomiej.filipek@intel.com>
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Jonathan Clohessy <jonathan.clohessy@arm.com>
Signed-off-by: Christian Bourjau <christian.bourjau@quantco.com>
Signed-off-by: melkap01 <melike.kaptan@arm.com>
Co-authored-by: vraspar <vrajang@outlook.com>
Co-authored-by: tirupath-qti <tirupath@qti.qualcomm.com>
Co-authored-by: Ashwath Shankarnarayan <ashwshan@qti.qualcomm.com>
Co-authored-by: Liqun Fu <liqun.fu@microsoft.com>
Co-authored-by: carzh <wolfivyaura@gmail.com>
Co-authored-by: Hector Li <hecli@microsoft.com>
Co-authored-by: carzh <carolinezhu@microsoft.com>
Co-authored-by: Vrajang Parikh <vrparikh@microsoft.com>
Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Yuduo Wu <yuduow@qti.qualcomm.com>
Co-authored-by: Adrian Lizarraga <adlizarraga@microsoft.com>
Co-authored-by: Preetha Veeramalai <preetha.veeramalai@intel.com>
Co-authored-by: jatinwadhwa921 <110383850+jatinwadhwa921@users.noreply.github.com>
Co-authored-by: jatinwadhwa921 <jatin.wadhwa@intel.com>
Co-authored-by: saurabh <saurabh1.kale@intel.com>
Co-authored-by: Ankit Maheshkar <ankit.maheshkar@intel.com>
Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: Javier Martinez <javier.e.martinez@intel.com>
Co-authored-by: Bartlomiej Filipek <bartlomiej.filipek@intel.com>
Co-authored-by: bopeng1234 <bo.peng@intel.com>
Co-authored-by: Eric Crawford <eric.r.crawford@intel.com>
Co-authored-by: MayureshV1 <47039074+MayureshV1@users.noreply.github.com>
Co-authored-by: TejalKhade28 <tejal.khade@intel.com>
Co-authored-by: Vishnudas Thaniel S <vishnudas.thaniel.s@intel.com>
Co-authored-by: Yaru Du <yaru.du@intel.com>
Co-authored-by: Ryan Metcalfe <107415876+RyanMetcalfeInt8@users.noreply.github.com>
Co-authored-by: Dvoretckii, Mikhail <mikhail.dvoretckii@intel.com>
Co-authored-by: Pallavi Gupta <pallavi.gupta@intel.com>
Co-authored-by: Jianhui Dai <jianhui.j.dai@intel.com>
Co-authored-by: Jiajia Qin <jiajiaqin@microsoft.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
Co-authored-by: Fei Chen <feich@microsoft.com>
Co-authored-by: Yulong Wang <7679871+fs-eire@users.noreply.github.com>
Co-authored-by: Akupadhye <aupadhye@qti.qualcomm.com>
Co-authored-by: Wang Ning <ning4.wang@intel.com>
Co-authored-by: Maximilian Müller <44298237+gedoensmax@users.noreply.github.com>
Co-authored-by: Chi Lo <54722500+chilo-ms@users.noreply.github.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Wanming Lin <wanming.lin@intel.com>
Co-authored-by: quic-calvnguy <quic_calvnguy@quicinc.com>
Co-authored-by: Jie Chen <jie.a.chen@intel.com>
Co-authored-by: xhcao <xinghua.cao@intel.com>
Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
Co-authored-by: quic-hungjuiw <quic_hungjuiw@quicinc.com>
Co-authored-by: Ian Hunter <ianfhunter@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: kunal-vaishnavi <115581922+kunal-vaishnavi@users.noreply.github.com>
Co-authored-by: Jeff Kilpatrick <jkilpatrick@qti.qualcomm.com>
Co-authored-by: Jeff Kilpatrick <jkilpat@qti.qualcomm.com>
Co-authored-by: Scott McKay <skottmckay@gmail.com>
Co-authored-by: Nenad Banfic <46795300+nenad1002@users.noreply.github.com>
Co-authored-by: derdeljan-msft <derdeljan@microsoft.com>
Co-authored-by: n1harika <niharika.sathish@intel.com>
Co-authored-by: Ryan Metcalfe <ryan.metcalfe@intel.com>
Co-authored-by: Jaswanth Gannamaneni <jaswanth.gannamaneni@intel.com>
Co-authored-by: Klimenko, Mikhail <mikhail.klimenko@intel.com>
Co-authored-by: liang <gxgaoliang@126.com>
Co-authored-by: Garth Long <garth.long@intel.com>
Co-authored-by: Jonathan Clohessy <jonathan.clohessy@arm.com>
Co-authored-by: Akshay Sonawane <111780983+apsonawane@users.noreply.github.com>
Co-authored-by: Christopher Warrington <chwarr@microsoft.com>
Co-authored-by: Ishwar Raut <iraut@nvidia.com>
Co-authored-by: Gaurav Garg <gaugarg@nvidia.com>
Co-authored-by: Xinpeng Dou <15529241576@163.com>
Co-authored-by: adrastogi <aditya.rastogi@microsoft.com>
Co-authored-by: Aditya Rastogi <adityar@ntdev.microsoft.com>
Co-authored-by: qti-hungjuiw <hungjuiw@qti.qualcomm.com>
Co-authored-by: Pradeep Sakhamoori <psakhamoori@microsoft.com>
Co-authored-by: Adam Pocock <adam.pocock@oracle.com>
Co-authored-by: mingyue <131847423+mingyueliuh@users.noreply.github.com>
Co-authored-by: Susanta Bhattacharjee <susanta.bhattacharjee@intel.com>
Co-authored-by: Jozef Wludzik <jozef.wludzik@intel.com>
Co-authored-by: Rajeev Sekar <rajeevsekar21@gmail.com>
Co-authored-by: Mayuresh M Varerkar <mayuresh.m.varerkar@intel.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Wenqin Yang <wenqin.yang@intel.com>
Co-authored-by: xieofxie <xieofxie@126.com>
Co-authored-by: hualxie <hualxie@microsoft.com>
Co-authored-by: Joshua Lochner <admin@xenova.com>
Co-authored-by: Christian Bourjau <cbourjau@users.noreply.github.com>
Co-authored-by: Xiaofei Han <xiaofeihan@microsoft.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: chunghow-qti <chunghow@qti.qualcomm.com>
Co-authored-by: Guenther Schmuelling <guschmue@microsoft.com>
Co-authored-by: Jiawei Shao <jiawei.shao@intel.com>
Co-authored-by: czekun <chen.zekun@intel.com>
Co-authored-by: Jaskaran Singh Nagi <jaskaran.singh.nagi@intel.com>
Co-authored-by: quic-tirupath <quic_tirupath@quicinc.com>
Co-authored-by: qti-mattsinc <mattsinc@qti.qualcomm.com>
Co-authored-by: Melike Kaptan <melike.kaptan@arm.com>
Co-authored-by: Damien Dooley <damien.dooley@arm.com>
Co-authored-by: qti-vaiskv <vaiskv@qti.qualcomm.com>
Co-authored-by: Rohanjames1997 <rohan.james4@gmail.com>
Co-authored-by: eserscor <erscor@microsoft.com>
Co-authored-by: eserscor <247253654+eserscor@users.noreply.github.com>
Co-authored-by: Nick Eubanks <nieubank@microsoft.com>
Co-authored-by: adrastogi <8368026+adrastogi@users.noreply.github.com>
Co-authored-by: Rohanjames1997 <rohanjms@amazon.com>
@tianleiwu tianleiwu added cherry-picked Cherry-picked for a cherrypicks branch and removed release:1.24.0 labels Jan 23, 2026

Labels

cherry-picked Cherry-picked for a cherrypicks branch


3 participants