Conversation

GregoryComer (Member) commented Jan 10, 2026

Summary: Add a new pass, DecomposeBatchNorm, which converts standalone (non-fused) batch norm operators into a 1x1 depthwise convolution. This prevents delegation graph breaks when batch norm operators can't be fused.

Differential Revision: D90422630

cc @digantdesai @cbilgin
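
For reference, here is a minimal PyTorch sketch of the math behind the pass: how an inference-mode batch norm folds into an equivalent 1x1 depthwise convolution. This is an illustration only, not the actual pass implementation, and `fold_bn_to_dw_conv` is a hypothetical helper name.

```python
import torch

def fold_bn_to_dw_conv(bn: torch.nn.BatchNorm2d) -> torch.nn.Conv2d:
    """Fold eval-mode BN statistics into an equivalent 1x1 depthwise conv."""
    C = bn.num_features
    inv_std = torch.rsqrt(bn.running_var + bn.eps)
    gamma = bn.weight if bn.affine else torch.ones(C)
    beta = bn.bias if bn.affine else torch.zeros(C)

    scale = gamma * inv_std                 # per-channel multiplier
    shift = beta - bn.running_mean * scale  # per-channel additive term

    # Depthwise 1x1 conv: groups == channels, weight shape (C, 1, 1, 1).
    conv = torch.nn.Conv2d(C, C, kernel_size=1, groups=C, bias=True)
    with torch.no_grad():
        conv.weight.copy_(scale.reshape(C, 1, 1, 1))
        conv.bias.copy_(shift)
    return conv

# Quick numerical check against the original batch norm.
bn = torch.nn.BatchNorm2d(8).eval()
with torch.no_grad():
    bn.running_mean.uniform_(-1, 1)
    bn.running_var.uniform_(0.5, 2.0)
x = torch.randn(2, 8, 16, 16)
torch.testing.assert_close(fold_bn_to_dw_conv(bn)(x), bn(x), rtol=1e-5, atol=1e-5)
```

Because the result is an ordinary depthwise convolution, the delegate can lower it like any other conv node rather than splitting the partition around the batch norm.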

pytorch-bot bot commented Jan 10, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16533

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit e81cddd with merge base 4f8dbde:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label Jan 10, 2026
meta-codesync bot (Contributor) commented Jan 10, 2026

@GregoryComer has exported this pull request. If you are a Meta employee, you can view the originating Diff in D90422630.

GregoryComer added the module: xnnpack and release notes: xnnpack labels Jan 10, 2026
GregoryComer added a commit to GregoryComer/executorch that referenced this pull request Jan 11, 2026
…#16533)

Summary:

Add a new pass - DecomposeBatchNorm - which converts standalone (non-fused) batch norm operators to 1x1 depthwise convolution. This prevents delegation graph breaks when batch norm operators can't be fused.

Differential Revision: D90422630
GregoryComer (Member, Author) commented Jan 20, 2026

Updated to add test coverage around dtype, affine=False, memory_format/dim_order, and more comprehensive checks on the conv node created by the pass. Will re-open and request review once CI finishes.
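
As a rough illustration of the kind of coverage described above (not the actual test code), the same fold can be checked numerically across affine and memory-format variations; dtype coverage would follow the same pattern.

```python
import itertools
import torch

for affine, channels_last in itertools.product((True, False), (True, False)):
    C = 4
    bn = torch.nn.BatchNorm2d(C, affine=affine).eval()
    with torch.no_grad():
        bn.running_mean.uniform_(-1, 1)
        bn.running_var.uniform_(0.5, 2.0)

    x = torch.randn(1, C, 8, 8)
    if channels_last:
        x = x.to(memory_format=torch.channels_last)

    # Fold the BN statistics into a 1x1 depthwise conv, as in the sketch above.
    inv_std = torch.rsqrt(bn.running_var + bn.eps)
    gamma = bn.weight if affine else torch.ones(C)
    beta = bn.bias if affine else torch.zeros(C)
    conv = torch.nn.Conv2d(C, C, kernel_size=1, groups=C, bias=True)
    with torch.no_grad():
        conv.weight.copy_((gamma * inv_std).reshape(C, 1, 1, 1))
        conv.bias.copy_(beta - bn.running_mean * gamma * inv_std)

    torch.testing.assert_close(conv(x), bn(x), rtol=1e-5, atol=1e-5)
```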

meta-codesync bot (Contributor) commented Jan 21, 2026

@GregoryComer has imported this pull request. If you are a Meta employee, you can view this in D90422630.

@GregoryComer GregoryComer marked this pull request as ready for review January 21, 2026 01:25
@GregoryComer GregoryComer requested a review from cccclai as a code owner January 21, 2026 01:25
GregoryComer (Member, Author) commented

The test failure is in a Core ML test, so it appears to be a flake.

@GregoryComer GregoryComer merged commit c92f03f into pytorch:main Jan 21, 2026
145 of 146 checks passed
