
[Relay] add ShapeFunc for tanh #6898

Merged: 3 commits merged into apache:main on Mar 8, 2021

Conversation

@monklof (Contributor) commented Nov 11, 2020

Enable running a static LSTM on the Relay VM with a dynamic batch dimension:

  1. Add a ShapeFunc for tanh.
  2. Get in_dim from the weights in _schedule_dense_small_batch instead of from the input, since the second dim of the input's shape may be unknown.
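The first change is conceptually simple: for an elementwise op such as tanh, the output shape is identical to the input shape, so shape inference can pass a symbolic batch dim straight through. A minimal pure-Python sketch of that idea (illustrative only, not TVM's actual register_shape_func API):

```python
# Conceptual sketch of an elementwise shape function (not TVM's real API).
# tanh maps each element independently, so the output shape is exactly the
# input shape, even when some dims are only known at runtime.

def elemwise_shape_func(input_shape):
    # shape passes through unchanged for any elementwise op
    return list(input_shape)

# "Any" stands in for a symbolic (unknown-at-compile-time) batch dim
out_shape = elemwise_shape_func(("Any", 128))
```

With such a shape function registered, the Relay VM can compute output buffer sizes for tanh at runtime instead of failing at compile time.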

@wweic (Contributor) commented Nov 14, 2020

lgtm. pls fix the test failure. cc @icemelon9 @kevinthesun

@icemelon (Member) commented:

I have a question. What do you mean "input's shape's second dim is unknown"? TVM is able to compile the kernel even though the dim is a sym var.

@lixiaoquan (Contributor) commented:

> I have a question. What do you mean "input's shape's second dim is unknown"? TVM is able to compile the kernel even though the dim is a sym var.

This in_dim will be used by the following AutoTVM template code, which requires an integer.

@monklof (Contributor, Author) commented Nov 17, 2020

@wweic @lixiaoquan thanks for the comments.

@icemelon9

> I have a question. What do you mean "input's shape's second dim is unknown"? TVM is able to compile the kernel even though the dim is a sym var.

Hi, this is the case when I import a static LSTM model from TensorFlow with the batch dim set to relay.Any: TVM fails to create a schedule for the dense op even after I add a ShapeFunc for tanh.

Crash stack: [screenshot of the crash stack omitted]

It fails because the data's second dim is inferred as Any (the TypeRelation cannot infer the shape from the input's data), so my workaround is to extract in_dim from the constant weight:

Before:

    A, _ = C.op.input_tensors
    _, in_dim = get_const_tuple(A.shape)

After:

    A, weights = C.op.input_tensors
    _, in_dim = get_const_tuple(weights.shape)
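The workaround relies on the weight of a static LSTM being a compile-time constant, so its shape is fully known even when the data's dims are inferred as Any. A pure-Python sketch of the failure mode and the fix (the get_const_tuple here mimics the real helper's behavior; the shapes are assumed for illustration):

```python
# Why reading in_dim from the weight succeeds: the data's shape contains a
# symbolic dim, but the constant weight's shape is fully known.

def get_const_tuple(shape):
    """Return the shape as a tuple of ints, failing on symbolic dims
    (mirrors the crash in the PR when given a shape containing Any)."""
    for dim in shape:
        if not isinstance(dim, int):
            raise ValueError("symbolic dim in shape: %r" % (dim,))
    return tuple(shape)

data_shape = ("Any", "Any")  # second dim inferred as Any -> old code crashed
weight_shape = (512, 256)    # constant weight: (out_dim, in_dim) fully known

_, in_dim = get_const_tuple(weight_shape)  # succeeds with in_dim = 256
```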
A reviewer (Contributor) commented:
Probably we can extend the logic a bit here to check in_dim from both the data and the weight. If both of them are symbolic, we can skip the cfg define for in_dim, since AutoTVM doesn't support dynamic-shape kernels in general.
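The suggested fallback could look roughly like this (a hedged sketch; the helper names are illustrative stand-ins, not TVM internals): prefer a static in_dim from the data, fall back to the constant weight, and signal the caller to skip defining the AutoTVM knob when both are symbolic.

```python
# Sketch of the reviewer's suggestion: check both sources for in_dim and
# only define the AutoTVM config knob when a constant value is available.

def is_const_int(dim):
    # symbolic dims are modeled here as non-int placeholders like "Any"
    return isinstance(dim, int)

def resolve_in_dim(data_shape, weight_shape):
    if is_const_int(data_shape[1]):
        return data_shape[1]       # static data shape: use it directly
    if is_const_int(weight_shape[1]):
        return weight_shape[1]     # fall back to the constant weight
    return None                    # both symbolic: caller skips cfg define

in_dim = resolve_in_dim(("Any", "Any"), (512, 256))
```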

@monklof (Author) replied:
Okay, I will update later.

@monklof (Author) replied:
@kevinthesun updated

@junrushao (Member) commented:

@monklof any updates? Thanks in advance!

@monklof (Contributor, Author) commented Jan 22, 2021

> @monklof any updates? Thanks in advance!

Updated, check the latest commit.

@monklof monklof changed the title add ShapeFunc for tanh [Relay] add ShapeFunc for tanh Feb 23, 2021
@monklof (Contributor, Author) commented Mar 8, 2021

@kevinthesun could you merge this pr? thanks.

@icemelon (Member) left a comment:

LGTM

@icemelon icemelon merged commit ca303aa into apache:main Mar 8, 2021
@icemelon (Member) commented Mar 8, 2021

trevor-m pushed a commit to trevor-m/tvm that referenced this pull request May 6, 2021
* add ShapeFunc for tanh

* _schedule_dense_small_batch turn autotvm off when dense's inner dim is unknown

* fix CI pylint
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request May 11, 2021