
Conversation

@tirupath-qti
Contributor

Description

  • The QNN NPU backend supports the BFloat16 dtype for many operators.
  • The QNN EP adds a new session option, "htp_bf16_enable", that lets users request that a Float32 graph be processed in BFloat16 precision (see the usage sketch below).
  • When the user specifies "htp_bf16_enable", the QNN EP lowers the incoming Float32 ORT graph into a BFloat16 QNN graph.
  • The ORT CPU fallback still receives Float32 partitions.
  • The lowered QNN graph still accepts float32 inputs, outputs, and constant initializers; the QNN EP inserts Cast operators to perform the necessary precision switches.
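As a usage sketch, the option can be enabled through the ONNX Runtime C++ API. The "htp_bf16_enable" name comes from this PR; the surrounding QNN provider options (e.g. "backend_path") and their values are illustrative assumptions and should be checked against the QNN EP documentation for your ORT version.

```cpp
// Minimal sketch: opt a Float32 model into BF16 lowering on the QNN
// HTP (NPU) backend. Option names other than "htp_bf16_enable" are
// assumptions for illustration.
#include <onnxruntime_cxx_api.h>
#include <string>
#include <unordered_map>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "qnn_bf16_demo");
  Ort::SessionOptions so;

  std::unordered_map<std::string, std::string> qnn_options{
      {"backend_path", "QnnHtp.dll"},  // HTP backend library
      {"htp_bf16_enable", "1"},        // lower the Float32 graph to BF16
  };
  so.AppendExecutionProvider("QNN", qnn_options);

  // Inputs, outputs, and initializers remain float32 at the session
  // boundary; the EP inserts the boundary Cast ops internally.
  Ort::Session session(env, ORT_TSTR("model.onnx"), so);
  return 0;
}
```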

Motivation and Context

  • This enables running accuracy-sensitive float32 models in bfloat16 precision on the Qualcomm NPU accelerator, improving inference time relative to float32 execution.

@tianleiwu
Contributor

From AI

Summary

This PR introduces BFloat16 (BF16) support to the QNN Execution Provider, specifically for the HTP backend. Currently, this is opt-in via a new session option htp_bf16_enable. When enabled, the EP "lowers" a Float32 graph to use BF16 precision on the NPU, effectively treating Float32 operations as BF16.

Key Changes

  • Session Option: htp_bf16_enable added to control the feature.
  • Graph Translation: QnnModelWrapper updated to convert Float32 tensors/operations to BF16 when constructing the QNN graph (see the conversion sketch after this list).
  • Boundary Handling: Helper logic inserts necessary casts so that inputs/outputs remain Float32 compatible (for CPU fallback or interface consistency).
  • Testing: Comprehensive bf16_handling_test.cc added.
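For intuition about what treating Float32 as BF16 costs: BF16 keeps FP32's sign bit and all 8 exponent bits but truncates the mantissa from 23 bits to 7, so dynamic range is preserved while precision drops. A generic round-to-nearest-even conversion sketch (not the PR's code; NaN handling omitted):

```cpp
#include <cstdint>
#include <cstring>

// FP32 -> BF16: keep the top 16 bits, rounding to nearest even.
static uint16_t Fp32ToBf16(float f) {
  uint32_t bits;
  std::memcpy(&bits, &f, sizeof(bits));
  bits += 0x7FFFu + ((bits >> 16) & 1u);  // round-to-nearest-even bias
  return static_cast<uint16_t>(bits >> 16);
}

// BF16 -> FP32: zero-fill the discarded low mantissa bits.
static float Bf16ToFp32(uint16_t h) {
  uint32_t bits = static_cast<uint32_t>(h) << 16;
  float f;
  std::memcpy(&f, &bits, sizeof(f));
  return f;
}
```

Round-tripping a value such as 3.14159f through these two functions shows the precision the NPU computes with (roughly 2-3 decimal digits), which is the accuracy trade the session option opts into.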

Review Analysis

Correctness

  • Approach: The "lowering" approach (running FP32 graph as BF16) is a common technique for accelerators. The implementation correctly handles this by intercepting the graph construction rather than rewriting the ONNX model file.
  • Boundary Safety: It's crucial that CPU fallback nodes receive FP32. The PR mentions this is handled ("ORT CPU fallback still receives Float32 partitions").
  • Constants: Handling constant initializers (converting them to BF16) is critical for memory saving and is included.

Performance

  • Throughput: BF16 on NPU should offer significant speedups over FP32 (if FP32 was even supported efficiently) and arguably better accuracy/ease-of-use than INT8 quantization for some models.
  • Memory: Storing weights in BF16 cuts memory usage by half compared to FP32.

Conclusion

This feature bridges a gap for users needing higher precision than INT8 but better performance than FP32. The implementation via a session option is safe and non-intrusive. LGTM.

@yuslepukhin yuslepukhin requested a review from Copilot January 13, 2026 20:28
@yuslepukhin
Member

/azp run Linux QNN CI Pipeline, Win_TRT_Minimal_CUDA_Test_CI, Windows ARM64 QNN CI Pipeline, Windows GPU CUDA CI Pipeline, Windows GPU DML CI Pipeline, Windows GPU Doc Gen CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows OpenVINO CI Pipeline, Windows x64 QNN CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 4 pipeline(s).

@yuslepukhin yuslepukhin added the ep:QNN issues related to QNN execution provider label Jan 13, 2026
@yuslepukhin
Member

/mnt/vss/_work/1/s/onnxruntime/test/providers/qnn/bf16_handling_test.cc: In function ‘void onnxruntime::test::RunBF16ModelTest(const GetTestModelFn&, const std::vector<int64_t>&, onnxruntime::test::ExpectedEPNodeAssignment, int, float)’:
/mnt/vss/_work/1/s/onnxruntime/test/providers/qnn/bf16_handling_test.cc:51:58: error: unused parameter ‘input_shape’ [-Werror=unused-parameter]
51 | const std::vector<int64_t>& input_shape,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~
/mnt/vss/_work/1/s/onnxruntime/test/providers/qnn/bf16_handling_test.cc: At global scope:
/mnt/vss/_work/1/s/onnxruntime/test/providers/qnn/bf16_handling_test.cc:50:13: error: ‘void onnxruntime::test::RunBF16ModelTest(const GetTestModelFn&, const std::vector<int64_t>&, onnxruntime::test::ExpectedEPNodeAssignment, int, float)’ defined but not used [-Werror=unused-function]
50 | static void RunBF16ModelTest(const GetTestModelFn& build_test_case,
| ^~~~~~~~~~~~~~~~
/mnt/vss/_work/1/s/onnxruntime/test/providers/qnn/bf16_handling_test.cc:39:23: error: ‘onnxruntime::test::GetTestModelFn onnxruntime::test::BuildBF16ConvTestCase(const onnxruntime::test::TestInputDef&, const onnxruntime::test::TestInputDef&)’ defined but not used [-Werror=unused-function]
39 | static GetTestModelFn BuildBF16ConvTestCase(const TestInputDef& input_def,
| ^~~~~~~~~~~~~~~~~~~~~
/mnt/vss/_work/1/s/onnxruntime/test/providers/qnn/bf16_handling_test.cc:28:23: error: ‘onnxruntime::test::GetTestModelFn onnxruntime::test::BuildBF16MatMulTestCase(const onnxruntime::test::TestInputDef&, const onnxruntime::test::TestInputDef&)’ defined but not used [-Werror=unused-function]
28 | static GetTestModelFn BuildBF16MatMulTestCase(const TestInputDef& input1_def,
| ^~~~~~~~~~~~~~~~~~~~~~~
/mnt/vss/_work/1/s/onnxruntime/test/providers/qnn/bf16_handling_test.cc:17:23: error: ‘onnxruntime::test::GetTestModelFn onnxruntime::test::BuildBF16AddTestCase(const onnxruntime::test::TestInputDef&, const onnxruntime::test::TestInputDef&)’ defined but not used [-Werror=unused-function]
17 | static GetTestModelFn BuildBF16AddTestCase(const TestInputDef& input1_def,
| ^~~~~~~~~~~~~~~~~~~~
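These are diagnostics promoted to errors (-Werror) for test helpers that are defined but unreferenced in this build configuration. A conventional C++17 remedy, whether or not it is what the follow-up patch actually used, is [[maybe_unused]]:

```cpp
// Generic illustration, not the actual test code: silence
// -Werror=unused-parameter and -Werror=unused-function for entities
// that are intentionally kept around.
[[maybe_unused]] static int HelperKeptForLater(int x) {
  return x * 2;  // defined but not (yet) called anywhere
}

static void Run(int used, [[maybe_unused]] int reserved_param) {
  (void)used;  // reserved_param is kept for a future code path
}

int main() {
  Run(1, 2);
  return 0;
}
```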

Contributor

Copilot AI left a comment


Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

- Ensure that BF16 is only enabled when the SoC model is 88 or above
- Handle QNN initializers that are not part of the ONNX graph by
  converting them from float32 to bfloat16
- Implement an RAII guard, RestoreFp32AfterValidation (sketched below)
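A hypothetical shape for such a guard; the type and field names here are stand-ins, not the PR's actual code:

```cpp
#include <cstdint>

enum class DataType : uint8_t { kFloat32, kBFloat16 };

struct TensorInfo {
  DataType data_type = DataType::kFloat32;
};

// RAII guard in the spirit of RestoreFp32AfterValidation: switch a
// tensor to BF16 for backend validation and guarantee Float32 is
// restored on every exit path, including early returns and exceptions.
class RestoreFp32AfterValidation {
 public:
  explicit RestoreFp32AfterValidation(TensorInfo& tensor)
      : tensor_(tensor), saved_(tensor.data_type) {
    tensor_.data_type = DataType::kBFloat16;  // validate as BF16
  }
  ~RestoreFp32AfterValidation() { tensor_.data_type = saved_; }

  RestoreFp32AfterValidation(const RestoreFp32AfterValidation&) = delete;
  RestoreFp32AfterValidation& operator=(const RestoreFp32AfterValidation&) = delete;

 private:
  TensorInfo& tensor_;
  DataType saved_;
};
```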
@adrianlizarraga
Contributor

/azp run Linux QNN CI Pipeline, Win_TRT_Minimal_CUDA_Test_CI, Windows ARM64 QNN CI Pipeline, Windows GPU Doc Gen CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 4 pipeline(s).

@qti-ashwshan
Contributor

Looks like there were a couple of warnings in the test file and a failing Linux pipeline. Added a patch for them.

@adrianlizarraga
Contributor

/azp run Linux QNN CI Pipeline, Win_TRT_Minimal_CUDA_Test_CI, Windows ARM64 QNN CI Pipeline, Windows GPU Doc Gen CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 4 pipeline(s).

@qti-ashwshan
Contributor

Rebased onto main to get the latest fix for the test failures.

@adrianlizarraga
Contributor

/azp run Linux QNN CI Pipeline, Win_TRT_Minimal_CUDA_Test_CI, Windows ARM64 QNN CI Pipeline, Windows GPU Doc Gen CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 4 pipeline(s).

@adrianlizarraga
Contributor

/azp run Linux QNN CI Pipeline, Win_TRT_Minimal_CUDA_Test_CI, Windows ARM64 QNN CI Pipeline, Windows GPU Doc Gen CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 4 pipeline(s).

@tianleiwu tianleiwu enabled auto-merge (squash) January 15, 2026 18:27
@tianleiwu tianleiwu merged commit 530a1fb into microsoft:main Jan 15, 2026
88 checks passed
alex-spacemit pushed a commit to spacemit-com/onnxruntime that referenced this pull request Jan 20, 2026
tianleiwu pushed a commit that referenced this pull request Jan 21, 2026
(cherry picked from commit 530a1fb)
tianleiwu added a commit that referenced this pull request Jan 23, 2026
### Description
This PR cherry-picks the following changes for the 1.24.0 release.

### Cherry-picked Commits
| Commit | Commit Title | Author |
|---|---|---|
| 744e7fe | Add type definitions, registration, utilities for INT2/UINT2 support (#26824) | vraspar |
| 530a1fb | [QNN EP] Add BFloat16 dtype support in QNN EP (#26987) | tirupath-qti |
| 8e050d1 | Implement new experimental lookup-based matrix multiplication method(TMAC) (#26695) | vraspar |
| 2d2ba6b | [MLAS/CPU EP] Improve performance of Silu activation path within the QuickGelu CPU kernel (#26753) | Hariharan Seshadri |
| 1c02b79 | [QNN EP] Add support for handling 0-dimension for Concat Op (#27000) | Ashwath Shankarnarayan |
| cc2b01b | Fix ClipQuantFusion crash when Clip has multiple input edges (#27016) | Edward Chen |
| bbd3850 | [QNN EP] Support quantized BatchNorm with per-channel DQ params on QNN HTP (#26959) | qti-yuduo |
| d8f0318 | Add API to get ep graph partitioning info (#26781) | Adrian Lizarraga |
| b912b18 | [OVEP] OpenVINO EP Features and bug-fixes for ORT-1.24 - Follow up (#27007) | Preetha Veeramalai |
| ba11af4 | [QNN-EP] Add MatMulNBits translation for GPU (#26340) | quic-tirupath |
| c03c419 | [MLAS/NEON] Add dedicated kernel for depthwise convolution for ARM64 using NEON intrinsics (#26688) | Hariharan Seshadri |
| e7dfd69 | [QNN-EP] Support alternate Layernorm fusion pattern in QNN preprocess (#26060) | qti-mattsinc |
| 4013dc1 | Implement multithreading in qgemm_kleidi (#26301) | Melike Kaptan |
| 9f06181 | [CXX] Enable users to specify custom OrtSyncStream via RunOptions (#26988) | Dmitri Smirnov |
| cfccd64 | Added support for QMX kernels in MLAS (#26849) | qti-vaiskv |
| 29d9b2f | Tweak external resource importer handle structs (#27040) | Scott McKay |
| 9d108d0 | [QNN EP] Add QuickGELU operator support for QNN provider (#27034) | tirupath-qti |
| b35688f | Add INT2 and UINT2 support for QDQ, transpose and cast ops (#27022) | vraspar |
| 6d34aba | Introducing BF16 Pointwise NCHWc Convolution for Arm64 (#26838) | Rohanjames1997 |
| 36017ad | [EP ABI] Add CreateCustomOpDomains() API for plugin EP to register custom ops (#27050) | Chi Lo |
| 50a03e4 | Add a new pipeline for CUDA 13 nuget builds (#27023) | eserscor |
| a0d4439 | [EP ABI] Update Graph_GetGraphView() implementation (#26711) | Chi Lo |
| 34bb209 | [webgpu] Fix a bug for im2col (#27069) | Wenqin Yang |
| 46e8d45 | [QNN EP] Add FusedMatMul operator support (#27044) | tirupath-qti |
| 5e7e7a3 | Disable Float32_2Bits_Asymmetric_256x256 test (#27046) | vraspar |
| 39f966e | Fix Doxygen documentation build error in onnxruntime_c_api.h (#27083) | Nick Eubanks |
| 8a7a797 | Print tensor for new packed type of 2 bits (#27064) | Tianlei Wu |
| 01f40e6 | Fix GPU JAR testing on Linux (#27011) | eserscor |
| b6ed7f3 | Fix warning around unused code in QNN Android Emulator builds by clang (#27026) | Hariharan Seshadri |
| d7daa45 | Raise the timeout for the ios simulator job (#27045) | Hariharan Seshadri |
| 7e1d818 | upgrade emsdk to 4.0.23 (#27029) | Yulong Wang |
| 347b990 | Fix failing mainline build on Arm64 linux (#27101) | Rohanjames1997 |
| f481b17 | Add dedicated API to support extracting compatibility string from model metadata (#27015) | adrastogi |

---------

Signed-off-by: Liqun Fu <liqun.fu@microsoft.com>
Signed-off-by: bfilipek <bartlomiej.filipek@intel.com>
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Jonathan Clohessy <jonathan.clohessy@arm.com>
Signed-off-by: Christian Bourjau <christian.bourjau@quantco.com>
Signed-off-by: melkap01 <melike.kaptan@arm.com>
Co-authored-by: vraspar <vrajang@outlook.com>
Co-authored-by: tirupath-qti <tirupath@qti.qualcomm.com>
Co-authored-by: Ashwath Shankarnarayan <ashwshan@qti.qualcomm.com>
Co-authored-by: Liqun Fu <liqun.fu@microsoft.com>
Co-authored-by: carzh <wolfivyaura@gmail.com>
Co-authored-by: Hector Li <hecli@microsoft.com>
Co-authored-by: carzh <carolinezhu@microsoft.com>
Co-authored-by: Vrajang Parikh <vrparikh@microsoft.com>
Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Yuduo Wu <yuduow@qti.qualcomm.com>
Co-authored-by: Adrian Lizarraga <adlizarraga@microsoft.com>
Co-authored-by: Preetha Veeramalai <preetha.veeramalai@intel.com>
Co-authored-by: jatinwadhwa921 <110383850+jatinwadhwa921@users.noreply.github.com>
Co-authored-by: jatinwadhwa921 <jatin.wadhwa@intel.com>
Co-authored-by: saurabh <saurabh1.kale@intel.com>
Co-authored-by: Ankit Maheshkar <ankit.maheshkar@intel.com>
Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: Javier Martinez <javier.e.martinez@intel.com>
Co-authored-by: Bartlomiej Filipek <bartlomiej.filipek@intel.com>
Co-authored-by: bopeng1234 <bo.peng@intel.com>
Co-authored-by: Eric Crawford <eric.r.crawford@intel.com>
Co-authored-by: MayureshV1 <47039074+MayureshV1@users.noreply.github.com>
Co-authored-by: TejalKhade28 <tejal.khade@intel.com>
Co-authored-by: Vishnudas Thaniel S <vishnudas.thaniel.s@intel.com>
Co-authored-by: Yaru Du <yaru.du@intel.com>
Co-authored-by: Ryan Metcalfe <107415876+RyanMetcalfeInt8@users.noreply.github.com>
Co-authored-by: Dvoretckii, Mikhail <mikhail.dvoretckii@intel.com>
Co-authored-by: Pallavi Gupta <pallavi.gupta@intel.com>
Co-authored-by: Jianhui Dai <jianhui.j.dai@intel.com>
Co-authored-by: Jiajia Qin <jiajiaqin@microsoft.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
Co-authored-by: Fei Chen <feich@microsoft.com>
Co-authored-by: Yulong Wang <7679871+fs-eire@users.noreply.github.com>
Co-authored-by: Akupadhye <aupadhye@qti.qualcomm.com>
Co-authored-by: Wang Ning <ning4.wang@intel.com>
Co-authored-by: Maximilian Müller <44298237+gedoensmax@users.noreply.github.com>
Co-authored-by: Chi Lo <54722500+chilo-ms@users.noreply.github.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Wanming Lin <wanming.lin@intel.com>
Co-authored-by: quic-calvnguy <quic_calvnguy@quicinc.com>
Co-authored-by: Jie Chen <jie.a.chen@intel.com>
Co-authored-by: xhcao <xinghua.cao@intel.com>
Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
Co-authored-by: quic-hungjuiw <quic_hungjuiw@quicinc.com>
Co-authored-by: Ian Hunter <ianfhunter@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: kunal-vaishnavi <115581922+kunal-vaishnavi@users.noreply.github.com>
Co-authored-by: Jeff Kilpatrick <jkilpatrick@qti.qualcomm.com>
Co-authored-by: Jeff Kilpatrick <jkilpat@qti.qualcomm.com>
Co-authored-by: Scott McKay <skottmckay@gmail.com>
Co-authored-by: Nenad Banfic <46795300+nenad1002@users.noreply.github.com>
Co-authored-by: derdeljan-msft <derdeljan@microsoft.com>
Co-authored-by: n1harika <niharika.sathish@intel.com>
Co-authored-by: Ryan Metcalfe <ryan.metcalfe@intel.com>
Co-authored-by: Jaswanth Gannamaneni <jaswanth.gannamaneni@intel.com>
Co-authored-by: Klimenko, Mikhail <mikhail.klimenko@intel.com>
Co-authored-by: liang <gxgaoliang@126.com>
Co-authored-by: Garth Long <garth.long@intel.com>
Co-authored-by: Jonathan Clohessy <jonathan.clohessy@arm.com>
Co-authored-by: Akshay Sonawane <111780983+apsonawane@users.noreply.github.com>
Co-authored-by: Christopher Warrington <chwarr@microsoft.com>
Co-authored-by: Ishwar Raut <iraut@nvidia.com>
Co-authored-by: Gaurav Garg <gaugarg@nvidia.com>
Co-authored-by: Xinpeng Dou <15529241576@163.com>
Co-authored-by: adrastogi <aditya.rastogi@microsoft.com>
Co-authored-by: Aditya Rastogi <adityar@ntdev.microsoft.com>
Co-authored-by: qti-hungjuiw <hungjuiw@qti.qualcomm.com>
Co-authored-by: Pradeep Sakhamoori <psakhamoori@microsoft.com>
Co-authored-by: Adam Pocock <adam.pocock@oracle.com>
Co-authored-by: mingyue <131847423+mingyueliuh@users.noreply.github.com>
Co-authored-by: Susanta Bhattacharjee <susanta.bhattacharjee@intel.com>
Co-authored-by: Jozef Wludzik <jozef.wludzik@intel.com>
Co-authored-by: Rajeev Sekar <rajeevsekar21@gmail.com>
Co-authored-by: Mayuresh M Varerkar <mayuresh.m.varerkar@intel.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Wenqin Yang <wenqin.yang@intel.com>
Co-authored-by: xieofxie <xieofxie@126.com>
Co-authored-by: hualxie <hualxie@microsoft.com>
Co-authored-by: Joshua Lochner <admin@xenova.com>
Co-authored-by: Christian Bourjau <cbourjau@users.noreply.github.com>
Co-authored-by: Xiaofei Han <xiaofeihan@microsoft.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: chunghow-qti <chunghow@qti.qualcomm.com>
Co-authored-by: Guenther Schmuelling <guschmue@microsoft.com>
Co-authored-by: Jiawei Shao <jiawei.shao@intel.com>
Co-authored-by: czekun <chen.zekun@intel.com>
Co-authored-by: Jaskaran Singh Nagi <jaskaran.singh.nagi@intel.com>
Co-authored-by: quic-tirupath <quic_tirupath@quicinc.com>
Co-authored-by: qti-mattsinc <mattsinc@qti.qualcomm.com>
Co-authored-by: Melike Kaptan <melike.kaptan@arm.com>
Co-authored-by: Damien Dooley <damien.dooley@arm.com>
Co-authored-by: qti-vaiskv <vaiskv@qti.qualcomm.com>
Co-authored-by: Rohanjames1997 <rohan.james4@gmail.com>
Co-authored-by: eserscor <erscor@microsoft.com>
Co-authored-by: eserscor <247253654+eserscor@users.noreply.github.com>
Co-authored-by: Nick Eubanks <nieubank@microsoft.com>
Co-authored-by: adrastogi <8368026+adrastogi@users.noreply.github.com>
Co-authored-by: Rohanjames1997 <rohanjms@amazon.com>
alex-spacemit pushed a commit to spacemit-com/onnxruntime that referenced this pull request Jan 27, 2026