@adrianlizarraga commented Sep 24, 2025

Description

Cherry-pick the following PRs into the ORT 1.23.1 branch:

Motivation and Context

adrianlizarraga and others added 8 commits September 23, 2025 15:38
CPU MoE Kernel
```
name: SwigluMoEBlock, quant_bits: 0, dtype: FP32, batch: 1, seq_len: 16, max_diff: 2.682209014892578e-07
.name: SwigluMoEBlock, quant_bits: 0, dtype: FP32, batch: 1, seq_len: 32, max_diff: 2.980232238769531e-07
.name: SwigluMoEBlock, quant_bits: 0, dtype: FP32, batch: 2, seq_len: 16, max_diff: 2.980232238769531e-07
.name: SwigluMoEBlock, quant_bits: 0, dtype: FP32, batch: 2, seq_len: 32, max_diff: 4.172325134277344e-07
.MoE CPU kernel time: 15.721677541732786 ms
.
----------------------------------------------------------------------
Ran 5 tests in 30.217s
```
This PR adds a block-wise quantization kernel for QMoE on CPU.
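As background, block-wise quantization stores one scale per fixed-size block of values, and dequantization multiplies each quantized value by its block's scale. A minimal pure-Python sketch of the idea (illustrative only, not the actual kernel; `dequantize_blockwise` and its flat layout are assumptions):

```python
def dequantize_blockwise(qdata, scales, block_size):
    """Dequantize a flat row: values in block i share scale scales[i].

    Illustrative sketch only; a real kernel would also handle zero-points,
    bit packing, and multiple quantization bit widths.
    """
    return [q * scales[i // block_size] for i, q in enumerate(qdata)]
```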
### Description
This fixes somewhat contrived edge cases that are present in our tests:
  - an input propagates directly to an output
  - an output is produced by an initializer

### Motivation and Context
The upcoming Python API PR does not pass its tests without this fix.
### Description
This pull request introduces several enhancements to ONNX Runtime's
Python and C++ APIs, focusing on improved device and memory information
handling, synchronization stream support, and tensor copy functionality.
It adds new Python bindings for device/memory types, exposes more
detailed session input/output metadata, and provides a Python-accessible
tensor copy API. The changes also refactor and extend the C++ API for
better stream and memory info management.

Key changes include:

### Device and Memory Information Enhancements

* Added Python bindings for `OrtMemoryInfoDeviceType`,
`OrtDeviceMemoryType`, and expanded `OrtDevice` to expose the memory
type via a new `mem_type` method. The `OrtMemoryInfo` Python class now
supports both legacy and new V2 constructors and exposes additional
properties such as device memory type and vendor ID.
[[1]](diffhunk://#diff-c46fc0e05521f706449c04aed599ac0229012c007a78b584519e71a57601d63eR1801-R1810)
[[2]](diffhunk://#diff-c46fc0e05521f706449c04aed599ac0229012c007a78b584519e71a57601d63eR1839)
[[3]](diffhunk://#diff-c46fc0e05521f706449c04aed599ac0229012c007a78b584519e71a57601d63eL1941-R2005)
* Extended the Python `InferenceSession` object to provide access to
input/output `OrtMemoryInfo` and `OrtEpDevice` objects through new
properties and methods.
[[1]](diffhunk://#diff-c46fc0e05521f706449c04aed599ac0229012c007a78b584519e71a57601d63eR2702-R2729)
[[2]](diffhunk://#diff-f0e8ba8cb8cb07b51b3be675bf62cec07e2eae1461341ce5801d33a57c8f57fdR202-R213)
[[3]](diffhunk://#diff-f0e8ba8cb8cb07b51b3be675bf62cec07e2eae1461341ce5801d33a57c8f57fdR591-R593)
[[4]](diffhunk://#diff-f0e8ba8cb8cb07b51b3be675bf62cec07e2eae1461341ce5801d33a57c8f57fdR607-R609)

### Synchronization Stream and Execution Provider Device Support

* Introduced Python bindings for `OrtSyncStream`, including creation via
`OrtEpDevice.create_sync_stream()` and retrieval of device-specific
`OrtMemoryInfo` via `OrtEpDevice.memory_info()`.
[[1]](diffhunk://#diff-c46fc0e05521f706449c04aed599ac0229012c007a78b584519e71a57601d63eR1890-R1938)
[[2]](diffhunk://#diff-44e70fbe60cba71c94f1a46ec2b1facaa8e9475232dad6df5ecbea301e76d475R34-R44)
* Refactored the C++ API to generalize `SyncStream` handling, allowing
for unowned streams and improved type safety.
[[1]](diffhunk://#diff-17f64e8b38fcdcd25e90abcabeec4b420956b15fe63868a5d0b270c376bde209L1066-R1084)
[[2]](diffhunk://#diff-cc93f5f9d8078d3d3af14c9bb4c0c59e25a99f3ec75d7772ea20111ed7eb6ddeL672-R677)

### Tensor Copy Functionality

* Added a new Python-level `copy_tensors` function and corresponding C++
binding, enabling efficient copying of tensor data between `OrtValue`
objects, optionally using a synchronization stream.
[[1]](diffhunk://#diff-c46fc0e05521f706449c04aed599ac0229012c007a78b584519e71a57601d63eR1588-R1599)
[[2]](diffhunk://#diff-f0e8ba8cb8cb07b51b3be675bf62cec07e2eae1461341ce5801d33a57c8f57fdR1155-R1163)
[[3]](diffhunk://#diff-44e70fbe60cba71c94f1a46ec2b1facaa8e9475232dad6df5ecbea301e76d475R84)

### Miscellaneous Improvements and Fixes

* Changed the return type of the `OrtValue.data_ptr` method in the
Python binding from `int64_t` to `uintptr_t` for better cross-platform
compatibility.
[[1]](diffhunk://#diff-666c9002698d1bbd4215237231e5be98d7b33e5054f018dce952407027bd0473L336-R336)
[[2]](diffhunk://#diff-666c9002698d1bbd4215237231e5be98d7b33e5054f018dce952407027bd0473L347-R347)
* Minor improvements to error messages and device type handling in the
Python API (e.g., for `OrtDevice`).
[[1]](diffhunk://#diff-f0e8ba8cb8cb07b51b3be675bf62cec07e2eae1461341ce5801d33a57c8f57fdR1176)
[[2]](diffhunk://#diff-f0e8ba8cb8cb07b51b3be675bf62cec07e2eae1461341ce5801d33a57c8f57fdR1219-R1221)
* Added the C++ includes needed for plugin stream support.
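The `data_ptr` change above (returning `uintptr_t` rather than `int64_t`) matters because a raw address is an unsigned, pointer-sized quantity; treating it as a signed value risks misinterpretation across platforms. A small Python sketch of the unsigned-address view, using `ctypes` purely for illustration:

```python
import ctypes

# Allocate a small float buffer and take its address -- roughly the kind
# of value a data_ptr-style accessor hands back to Python.
buf = (ctypes.c_float * 4)()
addr = ctypes.addressof(buf)

# uintptr_t semantics: an address is a non-negative integer that fits
# in a pointer-sized unsigned word.
assert addr >= 0
assert addr < 2 ** (8 * ctypes.sizeof(ctypes.c_void_p))
```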

These changes collectively improve the flexibility and introspection
capabilities of ONNX Runtime's device, memory, and execution provider
interfaces, and make advanced features available to Python users.


### Motivation and Context

Depends on: #26021

Add support for `MemcpyFromHost` and `MemcpyToHost` ops with plugin EPs.

- Add CPU EP fallback kernels for the memcpy ops. These are generic
implementations using a data transfer manager.
- Update `SessionState::PopulateKernelCreateInfo()` to fall back to CPU
memcpy kernels if a node's assigned provider doesn't have them.
- Update `MemcpyTransformer` to determine whether providers are
CPU-based or compatible with other providers by looking at the device
type instead of matching against a hardcoded list of provider types.
This accommodates plugin EPs, where the provider type can't be
hardcoded.
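The device-type-based check described above can be sketched as follows (hypothetical helper; `DeviceType` and `needs_memcpy` are illustrative names, not ORT's actual `MemcpyTransformer` code):

```python
from enum import Enum

class DeviceType(Enum):
    CPU = 0
    GPU = 1
    NPU = 2

def needs_memcpy(producer: DeviceType, consumer: DeviceType) -> bool:
    # A copy node is needed only when a tensor crosses a device boundary.
    # Comparing device types (rather than matching provider-type strings)
    # also covers plugin EPs, whose names cannot be hardcoded.
    return producer is not consumer
```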


Allow plugin EPs to work with models where memcpy ops are required
(i.e., models where connected nodes are not fully assigned to the plugin
EP).
@adrianlizarraga adrianlizarraga changed the base branch from main to rel-1.23.1 September 24, 2025 00:19
yuslepukhin previously approved these changes Sep 24, 2025
@jywu-msft (Member) commented:

"4: C++ exception with description "Load model from testdata/input_propagated_to_output.onnx failed:/mnt/vss/_work/1/s/onnxruntime/core/graph/model.cc:181 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 12, max supported IR version: 11"

seems to indicate ORT can't support IR 12 (ONNX 1.19) yet

chilo-ms and others added 2 commits September 23, 2025 21:42
…lity (#26132)

### Description
In the current TRT RTX EP / TRT EP implementation, when constructing the
`IndexedSubGraph`, some cases include a node's unused output as the
subgraph's output, so an incorrect `IndexedSubGraph` is returned from
`GetCapability` to ORT. This change adds logic to prevent adding unused
node outputs.

With this fix, we avoid generating an incorrect EPContext model in which
the EPContext node has an unused output.
```
                         int64_t cols,
                         float* dequantized_data,
                         MLAS_THREADPOOL* thread_pool) {
  ORT_UNUSED_PARAMETER(thread_pool);
```
@apsonawane I had to add this because thread_pool is unused. I don't know why it doesn't fail on main.


There are also other unused variables and redeclaration of variables. I've taken a look, and this is all present on main, but the CI does not fail on main like it does on this branch:

```
E:\_work\onnxruntime\onnxruntime\onnxruntime\contrib_ops\cpu\moe\moe_quantization_cpu.cc(53,52): error C2220: the following warning is treated as an error [E:\_work\_temp\build\RelWithDebInfo\onnxruntime_providers.vcxproj]
E:\_work\onnxruntime\onnxruntime\onnxruntime\contrib_ops\cpu\moe\moe_quantization_cpu.cc(53,52): warning C4100: 'block_size': unreferenced formal parameter [E:\_work\_temp\build\RelWithDebInfo\onnxruntime_providers.vcxproj]
E:\_work\onnxruntime\onnxruntime\onnxruntime\contrib_ops\cpu\moe\moe_cpu.cc(442,38): error C2220: the following warning is treated as an error [E:\_work\_temp\build\RelWithDebInfo\onnxruntime_providers.vcxproj]
E:\_work\onnxruntime\onnxruntime\onnxruntime\contrib_ops\cpu\moe\moe_cpu.cc(442,38): warning C4100: 'activation_output_buffer': unreferenced formal parameter [E:\_work\_temp\build\RelWithDebInfo\onnxruntime_providers.vcxproj]
E:\_work\onnxruntime\onnxruntime\onnxruntime\contrib_ops\cpu\moe\moe_cpu.cc(441,38): warning C4100: 'fc1_output_buffer': unreferenced formal parameter [E:\_work\_temp\build\RelWithDebInfo\onnxruntime_providers.vcxproj]
E:\_work\onnxruntime\onnxruntime\onnxruntime\contrib_ops\cpu\moe\moe_cpu.cc(433,43): warning C4100: 'expert_id': unreferenced formal parameter [E:\_work\_temp\build\RelWithDebInfo\onnxruntime_providers.vcxproj]
E:\_work\onnxruntime\onnxruntime\onnxruntime\contrib_ops\cpu\moe\moe_cpu.cc(431,48): warning C4100: 'token_weights': unreferenced formal parameter [E:\_work\_temp\build\RelWithDebInfo\onnxruntime_providers.vcxproj]
E:\_work\onnxruntime\onnxruntime\onnxruntime\contrib_ops\cpu\moe\moe_cpu.cc(430,50): warning C4100: 'token_expert_ids': unreferenced formal parameter [E:\_work\_temp\build\RelWithDebInfo\onnxruntime_providers.vcxproj]
E:\_work\onnxruntime\onnxruntime\onnxruntime\contrib_ops\cpu\moe\moe_quantization_cpu.cc(756,28): warning C4456: declaration of 'k' hides previous local declaration [E:\_work\_temp\build\RelWithDebInfo\onnxruntime_providers.vcxproj]
E:\_work\onnxruntime\onnxruntime\onnxruntime\contrib_ops\cpu\moe\moe_quantization_cpu.cc(772,26): warning C4456: declaration of 'k' hides previous local declaration [E:\_work\_temp\build\RelWithDebInfo\onnxruntime_providers.vcxproj]
E:\_work\onnxruntime\onnxruntime\onnxruntime\contrib_ops\cpu\moe\moe_quantization_cpu.cc(922,28): warning C4456: declaration of 'k' hides previous local declaration [E:\_work\_temp\build\RelWithDebInfo\onnxruntime_providers.vcxproj]
E:\_work\onnxruntime\onnxruntime\onnxruntime\contrib_ops\cpu\moe\moe_quantization_cpu.cc(950,26): warning C4456: declaration of 'k' hides previous local declaration [E:\_work\_temp\build\RelWithDebInfo\onnxruntime_providers.vcxproj]
E:\_work\onnxruntime\onnxruntime\onnxruntime\contrib_ops\cpu\moe\moe_quantization_cpu.cc(961,26): warning C4456: declaration of 'k' hides previous local declaration [E:\_work\_temp\build\RelWithDebInfo\onnxruntime_providers.vcxproj]
```


adrianlizarraga commented Sep 24, 2025

> "4: C++ exception with description "Load model from testdata/input_propagated_to_output.onnx failed:/mnt/vss/_work/1/s/onnxruntime/core/graph/model.cc:181 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) Unsupported model IR version: 12, max supported IR version: 11"
>
> seems to indicate ORT can't support IR 12 (ONNX 1.19) yet

It's in a unit test added by this PR: #26021 (fyi @yuslepukhin ). The test model was probably created with the newer version of ONNX. I'll disable the test here.
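The failure mode is a simple version gate: ORT rejects any model whose IR version exceeds the maximum it supports (11 here, per the error message; a model exported with ONNX 1.19 carries IR version 12). A sketch of the check (illustrative function, not ORT's actual code):

```python
MAX_SUPPORTED_IR_VERSION = 11  # from the error message above

def ir_version_supported(model_ir_version: int,
                         max_supported: int = MAX_SUPPORTED_IR_VERSION) -> bool:
    # Mirrors the model-load check that produced the error above: a model
    # with a newer IR version than the runtime understands is rejected.
    return model_ir_version <= max_supported
```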

```
                        ::testing::Values(0, 1, 2, 3, 4));

// Disabled for ORT 1.23.1: Test model created with newer ONNX IR version.
TEST(CApiTest, DISABLED_TestInputPassThroughToOutput) {
```
@yuslepukhin I disabled these tests because the test models were created with a newer ONNX IR version.

If this test does not pass, you cannot merge the Python API.

```
void make_copy<float, bool>(float* mask_data, const bool* mask_index, size_t size) {
  for (size_t i = 0; i < size; ++i) {
    mask_data[i] = mask_index[i] ? 0.0f : std::numeric_limits<float>::lowest();
    mask_data[i] = mask_index[i] ? 0.0f : negative_infinity<float>();
```
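For context on the snippet above: `std::numeric_limits<float>::lowest()` is the most negative *finite* float, while a negative-infinity fill is a true -inf, which `exp()` (as in a softmax) maps to exactly zero. A pure-Python analogue (illustrative; `make_mask` and `FLOAT_LOWEST` are hypothetical names, and the double-precision bound stands in for the float one):

```python
import math
import sys

FLOAT_LOWEST = -sys.float_info.max  # double-precision analogue of lowest()

def make_mask(mask_index, use_infinity=True):
    # Masked-out positions receive a very negative value so that a
    # subsequent softmax assigns them (near-)zero probability.
    fill = -math.inf if use_infinity else FLOAT_LOWEST
    return [0.0 if keep else fill for keep in mask_index]
```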
Hi @xadupre, I cherry-picked these changes from #26057

@adrianlizarraga

Replaced by #26140
