
Conversation


@tlopex tlopex commented Nov 2, 2025

This PR fixes ops like prod, var, std, take, flip, unflatten, and so on.

@gemini-code-assist

Summary of Changes

Hello @tlopex, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request significantly advances the PyTorch frontend's capability to handle and verify decomposed operators within its intermediate representation (IR). By enabling decomposition testing and refining the expected IR for numerous operations such as prod, var, std, take, flip, and unflatten, it ensures that the system accurately processes and optimizes these operations when broken down into their fundamental components. This enhancement contributes to a more robust and precise IR generation pipeline, crucial for downstream compilation and execution.

Highlights

  • Decomposed Operator Support: Enabled run_ep_decomposition=True for a wide array of operator tests, including EmptyLike, OneHot, OnesLike, ZeroInplace, Zeros, ZerosLike, TypeAs, Select, Unflatten, Gather, IndexPut, Flip, Take, Std, Var, and Prod. This ensures that these operators are tested in their decomposed forms.
  • IR Refinements for Core Operations: The Expected Intermediate Representation (IR) for several operators has been updated to reflect their decomposed structure. For instance, R.one_hot is now represented as a sequence of R.arange, R.expand_dims, R.equal, and R.astype, while R.std is decomposed into R.variance and R.sqrt.
  • Standardized IR Parameter Naming: Input parameter names within the Expected IR functions have been made more descriptive (e.g., inp_0 changed to data or x), improving readability and consistency across the tests.
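The one_hot decomposition listed above can be sketched in plain Python. This is illustrative only, not the actual Relax IR: R.arange, R.expand_dims, R.equal, and R.astype are mimicked with list operations to show the arange → broadcast-compare → cast sequence the expected IR encodes.

```python
# Plain-Python analogue of the R.one_hot decomposition described above:
# build an arange over the class axis (R.arange), compare it against
# each index (R.expand_dims + R.equal), then cast the boolean mask to
# float (R.astype). Illustrative sketch only, not the TVM code.

def one_hot(indices, num_classes):
    classes = list(range(num_classes))  # R.arange over the class axis
    # R.expand_dims + R.equal: each index vs. every class id;
    # R.astype: bool -> float
    return [[float(c == i) for c in classes] for i in indices]

print(one_hot([0, 2, 1], 3))
# [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
```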

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates several tests in test_frontend_from_exported_program.py to use decomposed PyTorch operators by enabling run_ep_decomposition. The expected Relax IR is updated to reflect the decomposed operator implementations. The changes for operators like std, take, one_hot, and various zeros/ones ops are correct and align with the goal of the PR. I have one suggestion to improve the consistency of the expected IR for zero-creation operators in the tests.

) -> R.Tuple(R.Tensor((5,), dtype="float32")):
    with R.dataflow():
-       lv: R.Tensor((5,), dtype="float32") = R.zeros_like(inp_0, dtype="void")
+       lv: R.Tensor((5,), dtype="float32") = R.zeros(R.shape([5]), dtype="float32")
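The std changes the review calls out rest on the identity std(x) = sqrt(var(x)), which is what decomposing R.std into R.variance followed by R.sqrt relies on. A minimal stdlib check of that identity (using Python's statistics module, whose sample formulas match PyTorch's default Bessel-corrected var/std; this is not the TVM test itself):

```python
import math
import statistics

# std(x) == sqrt(var(x)): the identity behind decomposing R.std into
# R.variance followed by R.sqrt. statistics.stdev/variance use the
# sample (Bessel-corrected, ddof=1) formulas, matching the defaults
# of torch.std/torch.var.
data = [1.0, 2.0, 4.0, 8.0]
assert math.isclose(statistics.stdev(data), math.sqrt(statistics.variance(data)))
```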


Severity: medium

For consistency with other tests for zero-creation operators like test_zeros, it would be better to use R.full here. torch.empty_like is decomposed to aten.zeros, and in other tests torch.zeros is decomposed to aten.full which is then translated to R.full. Using R.full directly would make the expected IR more canonical and consistent across these tests.

Suggested change:
-lv: R.Tensor((5,), dtype="float32") = R.zeros(R.shape([5]), dtype="float32")
+lv: R.Tensor((5,), dtype="float32") = R.full(R.shape([5]), R.const(0.0, "float32"), dtype="float32")
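The equivalence behind this suggestion, that a full with fill value 0.0 materializes the same tensor as zeros, can be sketched with a list-based analogue. This is purely illustrative; the real ops are Relax's R.full and R.zeros, and only the 1-D case is mimicked here.

```python
# List-based sketch of why R.full(shape, 0.0) and R.zeros(shape) are
# interchangeable in the expected IR: both materialize a constant-
# filled buffer. 1-D only, mirroring R.shape([5]) in the test.

def full(shape, fill_value):
    (n,) = shape                 # unpack the 1-D shape, e.g. (5,)
    return [fill_value] * n

def zeros(shape):
    return full(shape, 0.0)      # zeros is just full with fill 0.0

assert zeros((5,)) == full((5,), 0.0) == [0.0, 0.0, 0.0, 0.0, 0.0]
```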


tlopex commented Nov 2, 2025

cc @mshr-h

@mshr-h mshr-h merged commit 5ca61bb into apache:main Nov 3, 2025
13 checks passed