
Address Pad Reflect vulnerability #27652

Merged

yuslepukhin merged 4 commits into main from yuslepukhin/pad_reflect on Mar 18, 2026


Conversation

@yuslepukhin
Member

This pull request addresses a critical validation gap in the "reflect" mode of the Pad operator for both CPU and CUDA backends, ensuring compliance with the ONNX specification and preventing out-of-bounds memory access. The main change is the addition of checks that prevent the pad size from exceeding the maximum allowed value (extent - 1) for each axis, and the introduction of comprehensive regression tests to verify the new behavior.

Validation fixes for reflect-mode padding:

  • Added explicit checks in onnxruntime/core/providers/cpu/tensor/pad.cc and onnxruntime/core/providers/cuda/tensor/pad.cc to ensure that, for reflect mode, both pre-pad and post-pad values do not exceed extent - 1 for each axis, as required by the ONNX spec. This prevents heap out-of-bounds errors and aligns with numpy behavior.
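For context on why extent - 1 is the limit: reflect mode mirrors the interior values around each edge without repeating the edge element itself, so an axis of extent n has only n - 1 interior elements available to mirror, and any larger pad would index out of bounds. A minimal Python sketch of the constraint (hypothetical helper, not the actual onnxruntime code):

```python
def reflect_pad_1d(data, pad_before, pad_after):
    """Reflect-pad a 1-D list, mirroring interior values around each edge.

    Per the ONNX spec, each pad must satisfy pad <= len(data) - 1,
    because reflect mode mirrors without repeating the edge element.
    """
    n = len(data)
    if pad_before > n - 1 or pad_after > n - 1:
        raise ValueError(
            f"reflect pads ({pad_before}, {pad_after}) exceed extent - 1 = {n - 1}"
        )
    # Mirror indices 1..pad around the first/last element (edge excluded).
    before = [data[i] for i in range(pad_before, 0, -1)]
    after = [data[n - 2 - i] for i in range(pad_after)]
    return before + data + after


# Valid: pad of 2 on extent 3 is the maximum allowed.
print(reflect_pad_1d([1, 2, 3], 2, 2))  # [3, 2, 1, 2, 3, 2, 1]
```

A pad of 3 on the same input would raise, which mirrors what the new checks enforce at the operator level instead of reading out of bounds.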

Testing and regression coverage:

  • Added a suite of regression tests in onnxruntime/test/providers/cpu/tensor/pad_test.cc to verify that invalid pad sizes in reflect mode are correctly rejected, including edge cases for 1D and 2D inputs, boundary conditions, and scenarios with slicing. These tests ensure that the operator fails gracefully when pad sizes exceed the allowed limit and succeeds when within bounds.

Other changes:

  • Minor file encoding update in onnxruntime/test/providers/cpu/tensor/pad_test.cc.

Contributor

Copilot AI left a comment

Pull request overview

This PR fixes a security vulnerability (heap OOB) in the Pad operator's reflect mode by validating that pad sizes don't exceed extent - 1 per axis, per the ONNX spec. The fix applies to both CPU and CUDA backends.

Changes:

  • Added reflect-mode pad size validation in both CPU and CUDA Pad implementations
  • Added comprehensive regression tests covering 1D/2D, pre/post pad, boundary conditions, and slicing scenarios

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

Changed files:

  • onnxruntime/core/providers/cpu/tensor/pad.cc: Added reflect pad size validation checks
  • onnxruntime/core/providers/cuda/tensor/pad.cc: Added reflect pad size validation checks (has duplicate comment)
  • onnxruntime/test/providers/cpu/tensor/pad_test.cc: Added 8 regression tests for reflect-mode validation; introduced BOM


Comment thread onnxruntime/core/providers/cuda/tensor/pad.cc Outdated
Comment thread onnxruntime/test/providers/cpu/tensor/pad_test.cc Outdated
yuslepukhin and others added 2 commits March 13, 2026 17:58
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
Contributor

@tianleiwu tianleiwu left a comment

The fix correctly hardens the reflect padding operation against OOB vulnerabilities. The checks are minimal, explicitly compliant with the ONNX specification, and cleanly integrated. Excellent regression testing.

@yuslepukhin yuslepukhin enabled auto-merge (squash) March 16, 2026 22:48
@yuslepukhin yuslepukhin merged commit 5e00772 into main Mar 18, 2026
92 of 93 checks passed
@yuslepukhin yuslepukhin deleted the yuslepukhin/pad_reflect branch March 18, 2026 06:05
