Conversation
@Max191 Max191 commented Jan 13, 2026

Adds support for propagating tensor.collapse_shape and tensor.expand_shape
operations through iree_codegen.inner_tiled ops. This enables reshape fusion
to work with GPU MMA operations that use the inner_tiled abstraction.

Two patterns are introduced:

  • FoldProducerCollapseShapeWithInnerTiled: Propagates collapse_shape through inner_tiled by expanding the operation and inserting a collapse on the result.
  • FoldConsumerExpandShapeWithInnerTiled: Propagates expand_shape back through inner_tiled by expanding all operands.

Only outer (iteration) dimensions can be reshaped; inner dimensions that depend
on the MMA layout are preserved. Dynamic shapes are handled correctly.

Also adds the patterns to the BlockDynamicDimensions pass and the PropagateReshapesByExpansion pass.
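The outer/inner dimension split described above can be sketched in plain Python. This is a schematic illustration only: the function name and signature are hypothetical, not IREE's actual API; the `reassoc` argument follows `tensor.expand_shape`-style reassociation indices.

```python
from math import prod

# Schematic sketch of the "only outer dims are reshaped" rule: expand the
# leading iteration dims of `shape` while the trailing `num_inner` dims
# (fixed by the MMA intrinsic layout) pass through untouched.
# Names and encoding are illustrative, not IREE's implementation.
def expand_outer_dims(shape, reassoc, expanded_sizes, num_inner):
    num_outer = len(shape) - num_inner
    outer, inner = shape[:num_outer], shape[num_outer:]
    assert len(reassoc) == len(outer), "one reassociation group per outer dim"
    new_outer = []
    for group, size in zip(reassoc, outer):
        sizes = [expanded_sizes[i] for i in group]
        assert prod(sizes) == size  # static-shape sanity check
        new_outer.extend(sizes)
    return new_outer + inner

# e.g. an 8x16 iteration space over 4x4 intrinsic tiles, where the first
# outer dim was collapsed from 2x4:
print(expand_outer_dims([8, 16, 4, 4], [[0, 1], [2]], [2, 4, 16], 2))
```

The inner dims `[4, 4]` are carried through unchanged; only the iteration dims participate in the reshape, mirroring the restriction stated above.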

@Max191 Max191 requested a review from nirvedhmeshram January 13, 2026 20:32
@Max191 Max191 force-pushed the inner-tiled-reshape-propagation-by-expansion branch 2 times, most recently from eee8783 to cee8f73, January 14, 2026 14:39

@nirvedhmeshram nirvedhmeshram left a comment


LGTM


Signed-off-by: Max Dawkins <[email protected]>
@Max191 Max191 force-pushed the inner-tiled-reshape-propagation-by-expansion branch from cee8f73 to 4826434, January 14, 2026 20:15
@Max191 Max191 merged commit 81adf56 into iree-org:main Jan 15, 2026
73 of 79 checks passed
@Max191 Max191 deleted the inner-tiled-reshape-propagation-by-expansion branch January 15, 2026 16:03
Max191 added a commit that referenced this pull request Jan 15, 2026
…ops (#22860)" (#23137)

Reapply #22723 now that the torch
model failures were fixed by
#23118.

ci-extra: test_torch

Signed-off-by: Max Dawkins <[email protected]>
keshavvinayak01 pushed a commit that referenced this pull request Jan 27, 2026
…23118)

Adds support for propagating tensor.collapse_shape and tensor.expand_shape
operations through iree_codegen.inner_tiled ops. This enables reshape fusion
to work with GPU MMA operations that use the inner_tiled abstraction.

Two patterns are introduced:
- FoldProducerCollapseShapeWithInnerTiled: Propagates collapse_shape through
  inner_tiled by expanding the operation and inserting a collapse on the result.
- FoldConsumerExpandShapeWithInnerTiled: Propagates expand_shape back through
  inner_tiled by expanding all operands.

Only outer (iteration) dimensions can be reshaped; inner dimensions that depend
on the MMA layout are preserved.

Also adds the patterns to the BlockDynamicDimensions pass and the
PropagateReshapesByExpansion pass.

---------

Signed-off-by: Max Dawkins <[email protected]>
Signed-off-by: Keshav Vinayak Jha <[email protected]>
keshavvinayak01 pushed a commit that referenced this pull request Jan 27, 2026
…ops (#22860)" (#23137)

Reapply #22723 now that the torch
model failures were fixed by
#23118.

ci-extra: test_torch

Signed-off-by: Max Dawkins <[email protected]>
Signed-off-by: Keshav Vinayak Jha <[email protected]>