
[RELAY] impose a max op limit to the op fusion pass #4002

Merged: tqchen merged 2 commits into apache:master from yidawang:fuse_max on Sep 25, 2019

Conversation

yidawang (Contributor)

Thanks for contributing to TVM! Please refer to guideline https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from Reviewers.

ATT (as the title states). Compilation may overflow the stack during lowering when a fused op contains a huge statement. @icemelon9
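For context, here is a minimal standalone sketch of the idea behind the PR (not the actual code in TVM's fuse_ops pass; the Group, TryMerge, and kMaxFusedOps names and the 256 cap are illustrative assumptions): stop merging operators into a fusion group once the combined node count would exceed a fixed limit, so lowering never has to emit a single enormous fused statement.

#include <cstdint>

struct Group {
  Group* parent{nullptr};   // union-find parent; nullptr means this is a root
  uint32_t num_nodes{1};    // number of nodes belonging to this group

  Group* FindRoot() {
    Group* root = this;
    while (root->parent != nullptr) root = root->parent;
    return root;
  }
};

constexpr uint32_t kMaxFusedOps = 256;  // illustrative cap, not necessarily TVM's value

// Merge the group containing `child` into the group containing `parent`,
// but only if the combined group stays within the op limit.
bool TryMerge(Group* child, Group* parent) {
  Group* c = child->FindRoot();
  Group* p = parent->FindRoot();
  if (c == p) return true;                                       // already fused together
  if (c->num_nodes + p->num_nodes > kMaxFusedOps) return false;  // refuse: group too large
  c->parent = p;
  p->num_nodes += c->num_nodes;
  return true;
}

When the merge is refused, the operators simply remain in separate fusion groups, trading a little fusion opportunity for bounded statement size during lowering.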

/*!
* \brief The number of nodes belonging to this group
*/
uint num_nodes{1};
A reviewer (Member) commented on the num_nodes declaration quoted above:

uint is not really a cross-platform data type; use uint32_t or uint64_t.
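The second commit in this PR ("use cross platform data type") presumably applies this suggestion; mirroring the quoted snippet, and assuming uint32_t was the width chosen, the declaration becomes:

/*!
 * \brief The number of nodes belonging to this group
 */
uint32_t num_nodes{1};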

@jroesch (Member) commented Sep 25, 2019

LGTM. We should consider a more intelligent strategy for controlling how much fusion happens; right now the Relay fusion algorithm is greedy with respect to fusion. cc @jwfromm

@yidawang (Contributor, Author)

@tqchen

@tqchen tqchen merged commit d21f0ad into apache:master Sep 25, 2019
@yidawang yidawang deleted the fuse_max branch September 27, 2019 00:48
wweic pushed a commit to wweic/tvm that referenced this pull request Sep 30, 2019
* impose a max op limit to op fusion

* use cross platform data type
wweic pushed a commit to neo-ai/tvm that referenced this pull request Oct 1, 2019
* impose a max op limit to op fusion

* use cross platform data type