
Add More Shape Functions #4179

Merged: 11 commits merged into apache:master on Nov 11, 2019

Conversation

@kevinthesun (Contributor) commented on Oct 23, 2019:

Add shape functions for:
* nn: conv2d_NCHWc, pool2d, global_pool2d, batch_flatten, dense, pad, relu, bias_add, softmax
* transform: layout_transform, expand_dim, transpose, squeeze, reshape_like
* tensor: elemwise
* reduce

Also fixes some type inference rules to support dynamic shapes.

Related RFC: #4118

@icemelon9 @wweic @yongwww
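
As context for what these registrations look like, here is a minimal sketch of an elementwise shape function in the style of python/tvm/relay/op/_tensor.py; the names and exact signatures are illustrative assumptions about the API of this era, not lines quoted from the diff:

    # Minimal sketch (assumed hybrid-script API, not quoted from the PR):
    # a shape function computes an op's output shape at runtime.
    from tvm.hybrid import script
    from tvm.relay.op import op as _reg

    @script
    def _elemwise_shape_func(x):
        # An elementwise op's output shape equals its input shape;
        # x is the input shape packed as a 1-D int64 tensor.
        out = output_tensor((x.shape[0],), "int64")
        for i in const_range(x.shape[0]):
            out[i] = x[i]
        return out

    def elemwise_shape_func(attrs, inputs, _):
        return [_elemwise_shape_func(inputs[0])]

    # False: the output shape depends only on input shapes, not input data.
    _reg.register_shape_func("relu", False, elemwise_shape_func)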

Review comment on the conv2d output-shape computation:

      // Static case: compute the output height from the input height,
      // padding, dilated kernel size, and stride.
      oshape.Set(2, (dshape_nchw[2] + param->padding[0] * 2
                     - dilated_ksize_y) / param->strides[0] + 1);
    } else {
      // Dynamic case: the input height is Any, so it is passed through.
      oshape.Set(2, dshape_nchw[2]);
A project member commented:

@jroesch @tqchen Should we simplify an expression containing Any to just Any, so that we can avoid writing special rules for Any every time?
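
(To spell out the suggestion: if arithmetic over Any folded to Any automatically, the quoted else branch would be unnecessary. A hypothetical Python sketch of the folding rule, not TVM API:)

    # Hypothetical sketch of the proposed simplification (not TVM API):
    # any arithmetic expression containing Any folds to Any, so type
    # relations would not need an explicit dynamic-shape branch per op.
    ANY = "Any"  # stand-in for Relay's Any dimension

    def conv_out_dim(in_dim, pad, ksize, stride):
        if in_dim == ANY or ksize == ANY:
            return ANY  # one dynamic operand makes the result dynamic
        return (in_dim + 2 * pad - ksize) // stride + 1

    assert conv_out_dim(32, 1, 3, 1) == 32
    assert conv_out_dim(ANY, 1, 3, 1) == ANY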

    def reduce_shape_func(attrs, inputs, _):
        axis_record = _create_axis_record(attrs, inputs)
        return [_reduce_shape_func(inputs[0], convert(axis_record))]

    _reg.register_shape_func("argmax", False, reduce_shape_func)
A contributor commented:

If I understand correctly, every function in this file registers reduce_schedule and reduce_shape_func. Can we use a for loop instead?
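
(A sketch of the suggested loop; the op list and the _schedule_reduce name are illustrative stand-ins. Per-op registration is what the PR keeps, for the reason given in the reply below:)

    # Hypothetical sketch of the suggested loop (names are illustrative):
    for op_name in ["argmax", "argmin", "sum", "max", "min", "prod"]:
        _reg.register_schedule(op_name, _schedule_reduce)
        _reg.register_shape_func(op_name, False, reduce_shape_func)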

@kevinthesun (author) replied:

Registering the schedule/shape func line by line should be cleaner when we later introduce ops that require different schedule or shape-func implementations. This style is also used across all the Relay op files, so I think it's good to keep them consistent.

@icemelon (Member) left a review comment:

LGTM

@icemelon merged commit 6252145 into apache:master on Nov 11, 2019
@icemelon (Member) commented:

Thanks @kevinthesun

zxy844288792 pushed a commit to neo-ai/tvm that referenced this pull request on Nov 13, 2019. The commit message:
* Add shape functions

* Fix get_const_tuple

* Fix cpplint

* Fix pylint

* Fix pylint

* rebase and fix

* Check Any for infer type

* Fix expand_dim shape func for zero rank input

* Fix pooling infer type

* Address comment

* Register layout transform attr
@yzhliu (Member) commented on Nov 14, 2019:

Minor: would it be better to call it register_jit_shape_func, to differentiate it from the compile-time shape inference? The first time I saw it, I thought it was an alternative to type_rel.

@icemelon (Member) commented:

JIT is not accurate here, since these shape functions are compiled at compile time rather than at runtime. We could probably use register_rt_shape_func, as these functions are used to compute shapes at runtime.

@jroesch @zhiics @tqchen What are your thoughts?

@zhiics (Member) commented on Nov 14, 2019:

I would prefer register_rt_shape_func, or just keeping register_shape_func. Is it bad to think of it as similar to type_rel?
