
Conversation

@MasterJH5574
Contributor

  • Add support for `exp` in the FX translator.
  • Previously, the FX translator mishandled `None` in a torch tensor slice (e.g., `x[:, None, None]`). A `None` entry means dimension expansion, but the old implementation mistakenly advanced the input-dimension counter when it saw `None`, eventually driving the counter out of range. This PR fixes that.
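As an illustration of the semantics being fixed, NumPy shares PyTorch's `None`-in-slice behavior: `None` inserts a new axis rather than consuming an input dimension. A minimal sketch:

```python
import numpy as np

x = np.zeros((3, 4))
# `None` in a slice inserts a new axis (dim expansion); it does NOT
# consume a dimension of the input. A translator walking this index
# tuple must therefore not advance its input-dim counter on `None`.
y = x[:, None, None]
print(y.shape)  # (3, 1, 1, 4)
```

Only the leading `:` consumes an input dimension here; the two `None` entries each add a size-1 output axis.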

@tvm-bot
Collaborator

tvm-bot commented Mar 20, 2023

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

1 similar comment from tvm-bot on Mar 20, 2023.


graph_model = fx.symbolic_trace(torch_model)
mod = from_fx(graph_model, input_info)
print(mod.script())
Contributor

Leftover debug statement?

Contributor Author

Yes, sorry for that. Just removed it.

@psrivas2
Contributor

LGTM! Just one leftover print statement.

* Add support for `exp` in the FX translator.
* Previously, the FX translator mishandled `None` in a torch tensor
slice (e.g., `x[:, None, None]`). A `None` entry means dimension
expansion, but the old implementation mistakenly advanced the
input-dimension counter when it saw `None`, eventually driving the
counter out of range. This PR fixes that.
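The corrected counter logic can be sketched as follows. This is a hypothetical illustration under assumed names (`translate_index`, `expand_dims`, `strided_slice` are illustrative labels, not TVM's actual implementation): the key point is that only entries indexing a real input dimension advance the counter.

```python
def translate_index(index, ndim):
    # Hypothetical sketch of translating a torch-style index tuple.
    # `dim` counts input dimensions consumed so far; a `None` entry
    # inserts a new output axis and must NOT advance the counter --
    # advancing it there was the out-of-range bug this PR fixes.
    ops = []
    dim = 0
    for item in index:
        if item is None:
            ops.append(("expand_dims", dim))
        else:
            ops.append(("strided_slice", dim, item))
            dim += 1  # only real input dims advance the counter
    if dim > ndim:
        raise IndexError("too many indices for tensor")
    return ops

ops = translate_index((slice(None), None, None), 2)
print([op[0] for op in ops])  # ['strided_slice', 'expand_dims', 'expand_dims']
```

With the buggy version (incrementing `dim` on `None` as well), the same `x[:, None, None]` index on a 2-D tensor would push the counter to 3 and trip the bounds check.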
@MasterJH5574 MasterJH5574 force-pushed the unity-dev/2023-03-19-fx-exp-strided-slice branch from fae4a65 to 05ea131 Compare March 20, 2023 13:58
@tqchen tqchen merged commit 2a9709c into apache:unity Mar 20, 2023
tqchen pushed a commit that referenced this pull request Mar 20, 2023
tqchen pushed a commit that referenced this pull request Apr 1, 2023
tqchen pushed a commit that referenced this pull request Apr 1, 2023
tqchen pushed a commit that referenced this pull request Apr 1, 2023
4 participants