
Fix heap-buffer-overflow in constant_pad_nd #18018

Open
psiddh wants to merge 1 commit into pytorch:main from psiddh:export-D95762335

Conversation

@psiddh
Contributor

@psiddh psiddh commented Mar 9, 2026

Summary:
Fix write-heap-buffer-overflow in set_all_to_value triggered via apply_padding_to_dim, reported by fuzzer (T258811544).

Root causes:

  1. Negative padding values silently cast to huge size_t, causing massive out-of-bounds writes.
  2. When out_data advances past out_data_end, the remaining computation (out_data_end - out_data) wraps around to a huge size_t, causing bounds checks to incorrectly pass.
  3. No error propagation after recursive apply_padding_to_dim calls, allowing the loop to continue writing after a child call has failed.

Fixes:

  • Validate all padding values are non-negative in check_constant_pad_args.
  • Read padding as int64_t and explicitly check >= 0 before casting to size_t.
  • Guard remaining computation with out_data <= out_data_end check at all three bounds-check sites to prevent size_t wraparound.
  • Check ctx.failure_state() after recursive calls and bail out early.
  • Remove dead pad_i >= 0 check (always true for size_t).
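The first three fixes can be sketched roughly as follows; the helper names `pad_is_valid` and `remaining_bytes` are illustrative stand-ins, not the actual ExecuTorch functions:

```cpp
#include <cstddef>
#include <cstdint>

// Fixes 1/2 (sketch): read padding as int64_t and reject negatives
// before any cast to size_t, so no value can silently wrap.
bool pad_is_valid(int64_t pad) {
  return pad >= 0;
}

// Fix 3 (sketch): compute the remaining space only when the cursor has
// not already passed the end; otherwise report zero instead of letting
// the unsigned subtraction wrap to a huge value.
size_t remaining_bytes(const char* cursor, const char* end) {
  if (cursor > end) {
    return 0;  // would wrap to ~SIZE_MAX without this guard
  }
  return static_cast<size_t>(end - cursor);
}
```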

Differential Revision: D95762335

Closes T258811544
Copilot AI review requested due to automatic review settings March 9, 2026 16:37
@psiddh psiddh requested a review from manuelcandales as a code owner March 9, 2026 16:37
@pytorch-bot

pytorch-bot bot commented Mar 9, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18018

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 1 Pending, 2 Unrelated Failures

As of commit 19b6aef with merge base 08c3a72:

NEW FAILURES - The following jobs have failed:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label on Mar 9, 2026
@meta-codesync
Contributor

meta-codesync bot commented Mar 9, 2026

@psiddh has exported this pull request. If you are a Meta employee, you can view the originating Diff in D95762335.

@github-actions

github-actions bot commented Mar 9, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Contributor

Copilot AI left a comment


Pull request overview

Fixes a heap-buffer-overflow in the portable CPU implementation of aten.constant_pad_nd by hardening padding validation and preventing size/pointer arithmetic wraparound during padding application.

Changes:

  • Validate that all padding values are non-negative in check_constant_pad_args.
  • In the padding implementation, read padding as int64_t, validate before casting to size_t, and add out_data <= out_data_end guards before computing remaining space.
  • Propagate failures from recursive padding calls by bailing out early when ctx enters a failure state.
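The failure-propagation change can be sketched as below; `Ctx` and `apply_dim` are hypothetical stand-ins for the runtime context and apply_padding_to_dim, not the actual ExecuTorch signatures:

```cpp
// Minimal sketch of early bail-out from a recursive padding loop.
struct Ctx {
  bool failed = false;
  bool failure_state() const { return failed; }
};

void apply_dim(Ctx& ctx, int depth) {
  if (depth == 0) {
    ctx.failed = true;  // simulate a child call failing
    return;
  }
  for (int i = 0; i < 4; ++i) {
    apply_dim(ctx, depth - 1);
    if (ctx.failure_state()) {
      return;  // bail out instead of continuing to write after failure
    }
  }
}
```

Without the `failure_state()` check, the loop would keep issuing writes after a child call had already failed, which is the third root cause described above.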

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

File: kernels/portable/cpu/util/kernel_ops_util.cpp
Description: Adds argument validation rejecting negative padding values.

File: kernels/portable/cpu/op_constant_pad_nd.cpp
Description: Hardens runtime padding application against wraparound and ensures failure propagation during recursion.


Labels

CLA Signed, fb-exported, meta-exported
